Blockchain may help address some fears about AI as Geoffrey Hinton leaves Google


Amidst all the excitement surrounding the wave of new AI applications taking the world by storm in 2023, there has been a steady chorus of high-profile AI experts warning us of the dangers.

The Godfather of AI, Geoffrey Hinton, recently made headlines when he stepped down from Google (NASDAQ: GOOGL) so he could openly express his fears about the potential for AI to cause significant damage to the world.

This tells us two things: Google is not a place where dissenting voices can express themselves freely, and the dangers ahead are worrisome enough to make top experts resign from high-profile jobs to warn us about them.

What are the potential harms of AI?

The first and most obvious damage AI can cause is the widespread displacement of workers. This is especially worrying for artists, writers and creators of all kinds, who have watched the first wave of AI tools easily do what took them years or decades to learn. Call center workers, customer service representatives and even doctors and lawyers fear what AI could mean for their professions.

Still, the automation of jobs is nothing new, and previous technological breakthroughs were met with the same expressions of concern. While Hinton mentions it as a concern, it's not enough in itself to make a heavy hitter like him tell the New York Times that he now partially regrets his life's work.

Hinton's concerns are more serious; he fears the mass spread of fake images, videos and text online – fake news on steroids. As if truth were not already difficult enough to distinguish from fiction, a mass influx of images and video that are almost indistinguishable from reality will amplify the problem a hundredfold. He also fears that future AI systems will learn to manipulate people and pick up new behaviors from the massive volumes of data they are trained on.

Hinton is just one of many experts who have warned us about AI’s potentially harmful side effects in recent months. “Look at how it was five years ago and how it is now. Take that difference and spread it further,” he told the NYT.

How can blockchain technology help reduce some of the dangers?

As I said in my article about the most common blockchain myths, this technology cannot solve every problem, and it certainly will not prevent all of AI's potentially negative consequences. Blockchain technology can't stop the automation of jobs, and it can't stop some rogue AI developed in a black box from becoming Skynet and destroying humanity.

However, blockchain can help in two critical areas: verifying data as authentic, including images, text and video, and creating accountability for AI developers and researchers.

Verification of information from official sources

First, blockchain can make it possible to verify that a piece of information comes from a legitimate source and that it is valid. Imagine a video appearing on the internet purporting to be an official White House statement on a given subject. It starts to spread like wildfire, and there's no way to tell whether it's fake just by looking.

Blockchain technology can help verify or debunk such videos by enabling governments, companies and others to cryptographically sign official documents and media, allowing immediate verification of authenticity. I'm not saying most people will bother to do it, but journalists and anyone else who wants to can immediately look for, say, an official White House signature to confirm that a given piece of information is genuine.

Given the sheer volume of misinformation AI is likely to generate, it will be impossible to address each piece on a case-by-case basis. Some sort of system will be needed to let interested parties verify authenticity quickly and easily, and blockchains are an ideal tool for this.
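The verification flow described above can be sketched in a few lines of Python. This is a toy model under stated assumptions, not a real implementation: a plain dict stands in for the on-chain registry, and "whitehouse.gov" is just an illustrative publisher label. In practice each entry would be a signed, timestamped transaction on an actual blockchain.

```python
import hashlib

# Hypothetical on-chain registry: media fingerprints anchored by verified
# publishers. A dict stands in for the blockchain here.
REGISTRY = {}

def publish(media: bytes, publisher: str) -> str:
    """Anchor a SHA-256 fingerprint of the media under the publisher's name."""
    digest = hashlib.sha256(media).hexdigest()
    REGISTRY[digest] = publisher
    return digest

def verify(media: bytes):
    """Return the publisher who anchored this exact media, or None if unknown."""
    return REGISTRY.get(hashlib.sha256(media).hexdigest())

video = b"official statement footage"
publish(video, "whitehouse.gov")

print(verify(video))                       # "whitehouse.gov"
print(verify(b"doctored deepfake bytes"))  # None
```

Because any change to the media changes its hash, even a single altered frame would fail the lookup, which is what makes this kind of check fast enough to apply at scale.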

Companies can also use blockchain technology to verify the authenticity of songs, movies and other media that claim to be produced by them. Recently, an AI-generated song imitating Drake made the rounds, prompting Universal Music Group executives to clarify that it was not an official release and to issue legal threats.

Creating accountability in the AI world

At least two parties must be held accountable in the age of AI: those who create and disseminate false information, and the AI engineers and developers who build systems that could destroy humanity and/or who openly steal data from creators to train their systems. Blockchain technology can help in both cases.

First, one of the biggest problems with the internet and the spread of false information today is that there is no way to tell who created or uploaded it. This is partly due to anonymous accounts on platforms such as Twitter.

Blockchain can change that, making it possible to know who is really behind pseudonyms on social media platforms and who is responsible for spreading false information. The public does not necessarily need to know who is behind a given account, but it should be possible for the authorities to find out in cases where the law is broken.

If you object to this on political grounds, consider a scenario in which someone generates fake porn of a family member or another loved one and uploads it to social media. Wouldn't you want it to be possible to track down who uploaded it first and hold them accountable?

Blockchain can also make it possible to track and trace who is doing what in a given system. What if AI models were open source, or if governments passed laws requiring that all development of private AI systems be tracked on a blockchain? It would then be possible to tell who made the changes that led to catastrophic consequences, so that they could be held to account. Timestamp servers such as blockchains are ideal for this, and tools such as Sentinel Node show how it is possible to keep track of what is happening inside systems that use blockchain.
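A minimal sketch of what such a tracked development log could look like, assuming each change is chained to the previous entry by its hash – the same construction a blockchain timestamp server uses. The author names and changes here are purely illustrative.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Canonical hash of a log entry (sorted keys for a stable serialization)."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_change(log: list, author: str, change: str) -> None:
    """Record a change, linking it to the hash of the previous entry."""
    prev = entry_hash(log[-1]) if log else "0" * 64
    log.append({"author": author, "change": change, "prev": prev})

def verify_log(log: list) -> bool:
    """True only if no earlier entry has been altered after the fact."""
    return all(log[i]["prev"] == entry_hash(log[i - 1])
               for i in range(1, len(log)))

log = []
append_change(log, "alice", "raised model autonomy threshold")
append_change(log, "bob", "disabled safety filter in eval mode")
print(verify_log(log))        # True: the chain is intact

log[0]["author"] = "mallory"  # attempt to rewrite history
print(verify_log(log))        # False: the next entry's link no longer matches
```

Rewriting any past entry breaks every hash link after it, which is why a shared, append-only log makes it hard for a developer to quietly shift blame for a change.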

Likewise, blockchain technology allows artists and creators to own their own data. While AI systems such as ChatGPT and Midjourney have been able to train on web pages and images without their owners' permission, using blockchains such as BSV would give data owners control, enabling them to grant or deny permission to AI developers who want to train their models on their work. Creators could also receive payment if they decide to allow it. Learn more about micro- and nano-payments to understand how this might work.
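One way to picture this permission-and-payment model is as a simple ledger: a use of the work either fails for lack of permission, or succeeds and accrues a micropayment. Everything in this sketch is hypothetical – the class, the price and the developer names are illustrative, not any real BSV API.

```python
PRICE_PER_USE = 0.0001  # illustrative nanopayment, in whatever unit applies

class Creator:
    """A data owner who controls who may train on their work."""
    def __init__(self, name: str):
        self.name = name
        self.allowed = set()  # AI developers granted permission
        self.balance = 0.0    # accrued micropayments

    def grant(self, developer: str) -> None:
        self.allowed.add(developer)

def train_on(creator: Creator, developer: str) -> bool:
    """Use the creator's work only if permitted, paying per use."""
    if developer not in creator.allowed:
        return False
    creator.balance += PRICE_PER_USE
    return True

artist = Creator("artist")
artist.grant("friendly-ai-lab")
print(train_on(artist, "friendly-ai-lab"))  # True: permitted, payment accrues
print(train_on(artist, "scraper-bot"))      # False: permission denied
```

The key design point is that permission and payment are checked in the same step, so a developer cannot consume the data without the creator being compensated.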

Of course, before any of this is possible, blockchain technology needs to scale to billions of transactions per second, systems and applications need to be built, and relevant laws and regulations need to be passed to define what is and isn’t legal about AI. That’s another matter for another day, and we’ll see how it all turns out.

For now, blockchain can be a positive thing that helps solve some of these very real problems that are coming our way. While it can’t stop an AI overlord from sending an army of robots into your city and wiping everyone out, it can make it possible to tell what’s real from what’s fake. It can create some much-needed accountability for everyone involved. At least it’s a start!

CoinGeek Weekly Livestream: The future of AI Generated Art at Aym


New to Bitcoin? Check out CoinGeek's Bitcoin for beginners section, the ultimate resource guide for learning more about Bitcoin – as originally envisioned by Satoshi Nakamoto – and blockchain.

