Web2’s lesson for AI: Decentralize to protect humanity
This is going to sound presumptuous coming from a guy who doesn’t write code, let alone have any direct experience with machine learning or AI research.
But I have to say this: The recent alarmist calls for a six-month pause, or even a military-enforced shutdown, of AI research – from people with experience, money and influence in the AI industry – rest on fundamentally flawed thinking that would encourage the very destructive outcome for humanity they seek to avoid. That the US government is simultaneously orchestrating a crackdown on the crypto industry, a field of open-source innovation that is developing the kind of cryptography and network coordination technologies needed to deal with AI threats, makes this a particularly dangerous moment for all of us.
These doomsayers are computer scientists, not students of economic history. The problem is not, in itself, that an out-of-control AI could evolve to kill us all. (We know that already; Hollywood has been teaching us so for decades.) No, the task is to ensure that the economics of AI do not inherently encourage that terrible outcome. We must prevent concentrated control over the inputs and outputs of AI machines from impeding our capacity to act together in the common interest. We need collective, collaborative software development that creates a computational antidote to these dystopian nightmares.
The answer does not lie in shutting down AI innovation and locking ChatGPT creator OpenAI, the industry leader that has taken the field to its current level of development, into pole position. On the contrary, that is the surest way to make the nightmare come true.
We know this from the failings of Web2, the ad-driven, social platform-based economy in which the decentralized Web1 internet was re-centralized around an oligarchy of all-knowing, data-aggregating behemoths including Google, Facebook and Amazon. They became the beneficiaries of what Shoshana Zuboff has called “surveillance capitalism,” and we humans became its victims, a passive source of personal data extracted from us and recycled into behavior-modifying algorithms.
None of this happened because the platforms were morally inclined to abandon Google’s “Don’t Be Evil” motto; it happened because the logic of the market pushed them into this model. Ads provided the revenue, and an ever-growing pool of users on their platforms provided the data with which the internet titans could shape human behavior to maximize returns on those ads. Shareholders, demanding that the exponential gains continue, pushed them to double down on this model to “make the numbers” every quarter. As network effects kicked in and the platforms attracted ever more users in a self-reinforcing growth loop, the data-mining model became more lucrative and harder to abandon, even as Wall Street’s expectations climbed ever higher.
This exploitative system will go into overdrive if AI development defaults to the same monopolistic structure. The solution is not to stop the research but to incentivize AI developers to find ways to subvert that model.
For centuries, market capitalism encouraged competition among entrepreneurs for market share, generating wealth and productivity gains for all. It also produced wealth inequality, but over the long run, with the help of antitrust, labor union and social safety net laws, it delivered unprecedented welfare gains worldwide.
But that system was built for an analog economy, one that revolved around the production and sale of physical things, a world where the constraints of geography imposed a heavy capital cost on growth opportunities. The internet age is very different. It is one of self-reinforcing network effects, where the efficiency of software production lets market leaders expand market share quickly at very low marginal cost, and where the most valuable commodity is not physical, like iron ore, but intangible: human data.
We need a new model of decentralized ownership and consensus governance, one built on incentives for competitive innovation but with a self-correcting framework that steers that innovation toward the common good.
Inspired by Jacob Steeves, a founder of the decentralized AI development protocol Bittensor, I believe crypto technology can help define what that future looks like, even if we will need guardrails along the way.
“We’re saying let’s build open ownership of AI,” Steeves said of Bittensor’s token-incentivized model for decentralized AI development on this week’s “Money Reimagined” podcast. “If you can contribute, you can own it. And so, let’s let the people decide.”
The philosophical idea is that sufficiently decentralized ownership and control would prevent any single party from dictating AI development, and that the group as a whole would instead choose models that benefit the collective. “If we all have a piece of this, this thing won’t come back and hurt us, because at the end of the day, the base currency of the AI is the stake in your wallet,” Steeves said.
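To make that idea concrete, here is a deliberately simplified sketch, in Python, of stake-weighted governance over competing models. Everything in it is invented for illustration – the participants, the stakes, the scoring scheme – and it is not Bittensor’s actual consensus mechanism, which is far more elaborate. It shows only how stake-weighted aggregation keeps any single wallet from dictating the outcome.

```python
# Hypothetical toy: stake-weighted choice among candidate AI models.
# All names, stakes and scores are invented for illustration; this is NOT
# Bittensor's actual consensus mechanism, which is far more elaborate.
from collections import defaultdict

# Each participant's stake -- their "piece" of the network
stakes = {"alice": 120.0, "bob": 45.0, "carol": 80.0, "dave": 5.0}

# Each participant scores the candidate models between 0 and 1
votes = {
    "alice": {"open_model": 0.9, "closed_model": 0.1},
    "bob":   {"open_model": 0.4, "closed_model": 0.6},
    "carol": {"open_model": 0.7, "closed_model": 0.3},
    "dave":  {"open_model": 0.2, "closed_model": 0.8},
}

# Aggregate scores weighted by stake share, so no single wallet decides
total_stake = sum(stakes.values())
weighted = defaultdict(float)
for voter, scores in votes.items():
    for model, score in scores.items():
        weighted[model] += (stakes[voter] / total_stake) * score

winner = max(weighted, key=weighted.get)
print(f"stake-weighted scores: {dict(weighted)}")
print(f"network-preferred model: {winner}")
```

In a toy like this, sway over which model “wins” is proportional to stake, which is the property Steeves is gesturing at; a real protocol must also contend with collusion, whale concentration and sybil attacks.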
Too utopian? Maybe. The long list of scams in crypto’s history means that many will instinctively imagine a crypto-AI model being hijacked by nefarious actors.
But if we are going to create a common project based on open source innovation and collective governance, the economic phenomena that most resemble what we need are the ecosystems that have emerged around blockchain protocols.
“Ethereum and Bitcoin are the largest supercomputers in the world, measured in hashes,” Steeves said. “These networks – for better or for worse, wherever you stand in the debate over their power consumption – are megastructures. They are the largest data megastructures that humanity has ever created … they are hundreds of times larger than the data warehouses of companies like Google.”
OpenAI, the company behind ChatGPT and the GPT-1 through GPT-4 large language models (LLMs), is structured very differently from those blockchain ecosystems. It is a private company, one that just took in a $10 billion (with a B) investment from tech giant Microsoft.
And while CEO Sam Altman has yet to join Tesla CEO and OpenAI investor Elon Musk among the more than 25,000 signatories of an open letter calling for a six-month pause in AI development, many believe that if the letter’s demands were implemented, the company would be a direct beneficiary: it would become harder for any competitor to challenge OpenAI’s dominance, and Altman’s company would gain control over AI development going forward.
“The letter serves to rally public support for OpenAI and its allies as they consolidate their dominance, build an extended innovation pipeline, and secure their advantage over a technology fundamental to the future,” cryptocurrency pioneer Peter Vessenes wrote in a CoinDesk op-ed this week. “If this happens, it will irreparably harm Americans — our economy and our people.”
Imagine if, Vessenes wrote, “in 1997, Microsoft and Dell had issued a similar ‘pause’ letter, calling for a halt to browser innovation and a ban on new e-commerce sites for six months, citing their own research that the Internet would destroy brick-and-mortar stores and aid terrorist financing. Today, we would recognize this as self-serving alarmism and an attempt at regulatory capture.”
OpenAI is now a closed system, but the LLM approach to machine learning is out in the wild and being replicated in all sorts of ingenious ways. How on earth is an agreement among US scientists, or even an act of Congress, going to stop this technology’s progress – especially among criminal actors backed by rogue states with every reason to ignore America’s pleas?
Couple this with the US government’s recent hostility to crypto, manifested in the Securities and Exchange Commission’s series of actions against industry leaders and in the sanctions against the open-source Tornado Cash protocol, and a worrying convergence emerges. That crypto companies are now leaving US shores is more than a threat to the digital asset industry. It is a blow to the very form of open source innovation that is necessary to avoid AI’s dangerous capture by self-serving centralized interests.
For all the losses that token speculators have suffered recently, the waves of money chasing those riches funded some of the biggest leaps in cryptography of all time. Zero-knowledge proofs, for example, which are likely to play a role in how we protect sensitive information from ubiquitous AI snooping, have advanced orders of magnitude further in the crypto era than they had before it.
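For readers unfamiliar with the idea, here is a minimal Python sketch of a Schnorr-style identification protocol, one of the simplest zero-knowledge constructions. The parameters are toy-sized and insecure by design, and this is a textbook illustration rather than the scheme any production system actually uses; the point is only that the verifier ends up convinced the prover knows a secret without ever seeing it.

```python
# Toy Schnorr-style zero-knowledge identification protocol (illustration only).
# The prover convinces the verifier she knows x such that y = g^x mod p,
# without revealing x. Parameters are far too small for real-world security.
import secrets

p = 2**127 - 1          # a Mersenne prime (toy-sized modulus)
g = 3                   # public base
q = p - 1               # exponents reduce mod p-1 (Fermat's little theorem)

# Prover's secret key and its public counterpart
x = secrets.randbelow(q)    # the secret: never transmitted
y = pow(g, x, p)            # public key: y = g^x mod p

# One round of the protocol
r = secrets.randbelow(q)    # prover picks a random nonce
t = pow(g, r, p)            # prover -> verifier: commitment t = g^r
c = secrets.randbelow(q)    # verifier -> prover: random challenge
s = (r + c * x) % q         # prover -> verifier: response (x stays masked by r)

# Verifier's check: g^s == t * y^c (mod p) holds only if the prover knows x
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verified: prover knows the secret, verifier never saw it")
```

Production systems replace the interactive challenge with a hash (the Fiat-Shamir transform) and use far larger elliptic-curve groups, but the privacy-preserving principle is the same: prove a fact about hidden data without disclosing the data.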
There’s also a wisdom-of-the-crowd advantage that comes with crypto’s permissionless innovation ethos. Non-conforming fringe ideas tend to bubble up more easily than they do under top-down corporate management. OpenAI’s innovation structure is very different. Sure, it figured out how to tap the internet’s vast trove of data and how to train an incredibly effective LLM on it. But having abandoned its open-source, nonprofit origins, it is now a closed, black-box operator beholden to the profit-maximizing demands of its new corporate investor.
We have a choice: Do we want AI to be captured by the same concentrated business models that captured Web2? Or is the decentralized ownership vision of Web3 the safer bet? I know which one I would choose.