AI Boosters Would Sacrifice Humanity for a Simulacrum

Humans have always had a strange relationship with their technology. Whether it’s fire or the written word or artificial intelligence (AI), the things we create to empower and expand ourselves often come to seem like strange outside forces, their transformative impact on our lives and minds as profound and unpredictable as the whims of capricious gods.

In the 21st century, there are many technologies whose impact we still haven’t adapted to, such as social media. Others, notably artificial intelligence and cryptocurrency, are not yet fully developed but will have a major impact on how we live. That includes shaping the way we think about the world: Human groups are often deeply affected by the built-in biases of the technologies they rely on. (This relationship between humanity and its technology was the main focus of my academic career before I switched to journalism.)

The cases of AI and cryptocurrency are particularly fascinating. Their respective technological foundations each map to much broader ideas about society and even individual morality. The two ideas also seem diametrically and fundamentally opposed in many ways. On the one hand you have crypto-ideologues, often some variation of libertarian; on the other, you have the techno-utopianism of the AI boosters.

The core principle of blockchain technology is universality. The whole point is that anyone, anywhere can access these tools and services, whether or not someone in authority approves. A corollary of this principle is that privacy is valued very highly. These values lead to a significant amount of chaos, but the underlying ethos holds that such chaos is productive, or at least a necessary trade-off for human flourishing.

The artificial intelligence community tends toward a very different view of the world, one also closely tied to the structure of the technology. The basic role of an AI is to observe human behavior and then copy it, in all its nuances. This has many implications, but the most important include the end of human labor, technical centralization and a bias toward total data transparency.

These may seem harmless or even positive. But to put it bluntly, the implications of AI’s technical biases seem pretty bleak for humanity. That makes it all the more fascinating to see AI boosters not only embracing the grim implications, but working tirelessly to make them real.

The most obviously malevolent manifestation of the AI worldview may not look like one. Worldcoin is ostensibly a cryptocurrency project designed to distribute a “universal basic income” (UBI) in exchange for harvesting uber-sensitive biometric data from vulnerable populations.

But while it is “crypto” on the surface, Worldcoin is actually an AI project. It’s the brainchild of Sam Altman, also a co-founder of OpenAI. The goal of identifying individuals and giving them UBI is premised on a future in which artificial intelligence takes over all human jobs, requiring a centrally managed redistribution of AI-generated wealth.

An often overlooked implication is that, in this future, people like Sam Altman will be the ones who still have meaningful work and individual freedom, because they will drive the machines. The rest of us, it seems, will have to live on Worldcoin.

If that sounds dehumanizing, try this on for size: human rights for machines. In an essay published in The Hill on March 23, contributor Jacy Reese Anthis declared, “We need an AI rights movement.” As many have pointed out, the idea is philosophically suspect because there is no convincing evidence that artificial intelligence has, or ever could have, any subjective experience, or know suffering in a way that would necessitate protection.

The AI hypesters have gone to great lengths to obscure this issue of subjective consciousness. They have floated a series of maliciously misleading arguments that AIs – including, absurdly, existing language models – have some form of consciousness, or subjective experience of being in the world. Consciousness and its origins are among the great mysteries of existence, yet AI advocates often seem to have only a superficial understanding of the problem, or even of what the word “consciousness” means.

Anthis, one of the founders of something called the Sentience Institute, repeats in his essay the cardinal logical fallacy of much flawed AI research: uncritically accepting the visible output of AI as a direct sign of an internal subjective experience.

No matter how wrong the demand for AI rights may be, we still haven’t touched the furthest fringes of misguided AI hypedom. In late March, no less a figure than aspiring philosopher-king Eliezer Yudkowsky suggested that his followers should be prepared to carry out violent attacks on server farms to slow or stop the progress of AI.

Some may be surprised that I group Yudkowsky with the pro-AI forces. Yudkowsky is the leader of an “AI safety” movement that grew out of his “rationalist” blog LessWrong, and he claims that AI could potentially take over the world and destroy humanity. The validity of these ideas is up for debate, but the important thing here is that Yudkowsky and his so-called rationalist peers are not against artificial intelligence. The Yudkowskyites believe AI can create a utopia (again, a utopia they would be in charge of), but that it carries serious risks. They are essentially AI advocates dressed in skeptics’ clothing.

This contrasts with the more genuine and reasoned skepticism of figures including Timnit Gebru and Cathy O’Neil. These researchers, who have nowhere near Yudkowsky’s following, are not worried about a far-future apocalypse. Their concern is with a clear and present danger: that algorithms trained on human behavior will also reproduce the flaws of that behavior, such as racial and other forms of discrimination. Gebru was notoriously fired from Google for having the audacity to point out problems with AI that threatened Google’s business model.

You may note the similarity here to other ideological movements, especially the related ideas of “effective altruism” and “longtermism.” Émile P. Torres, a philosopher based at Leibniz University in Germany, is among those who describe the deep alignment and connection between Yudkowskyites, “transhumanists,” effective altruists and longtermists under the acronym TESCREAL. As just one small example of this alignment, FTX, the fake crypto exchange staffed largely by effective altruists and led by a man pretending to be one, made large donations (using allegedly stolen funds) to AI safety groups, as well as to other longtermist causes.

This package of worldviews, Torres argues, boils down to “colonize, subjugate, exploit and maximize.” In Worldcoin in particular, we get a preview of the implacable authoritarianism of a world where machines endlessly copy human behavior and replace humans, except for a handful of people with their hands on the tiller.

There is much more to say here, but for those in the crypto world, one statement from the TESCREAL axis deserves special attention. In a recent essay, Yudkowsky declared that the future threat of dangerous artificial intelligence may require “very strong global surveillance, including breaking all encryption, to prevent hostile AGI behavior.”

In other words, for these people, the privacy and autonomy of people in the present must be discarded in favor of creating the conditions for the safe and continuous expansion of artificial intelligence in the future. I leave it to you to decide whether it is a future, or a present, that anyone should desire.
