Fears of artificial intelligence (AI) have dogged humanity since the beginning of the computer age. Until now these fears have focused on machines using physical means to kill, enslave or replace people. But in the last couple of years, new AI tools have emerged that threaten the survival of human civilization from an unexpected direction. Artificial intelligence has acquired some remarkable abilities to manipulate and generate language, whether it be with words, sounds or images. Artificial intelligence has thus hacked the operating system of our civilization.
Language is the substance of which almost all human culture is made. Human rights, for example, are not inscribed in our DNA. Rather, they are cultural artifacts that we have created by telling stories and writing laws. Gods are not physical realities. Rather, they are cultural artifacts that we have created by inventing myths and writing scriptures.
Money is also a cultural artifact. Banknotes are just colored pieces of paper and currently over 90% of money isn’t even banknotes, it’s just digital information on computers. What gives value to money are the stories that bankers, finance ministers and cryptocurrency gurus tell us. Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff weren’t particularly good at creating real value, but they were all extremely capable storytellers.
What would happen once a nonhuman intelligence becomes better than the average human at telling stories, composing tunes, drawing pictures, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like schoolchildren using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget schoolwork. Think about the upcoming US presidential race in 2024 and try to imagine the impact of AI tools that can be made to mass-produce political content, fake news and scriptures for new cults.
In recent years, the QAnon cult has coalesced around anonymous online messages known as "Q drops". Followers collected, venerated and interpreted these Q drops as a sacred text. While, as far as we know, all previous Q drops were composed by humans, and bots merely helped disseminate them, in the future we may see history's first cults whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. That could soon be a reality.
On a more prosaic level, we may soon find ourselves conducting lengthy online discussions about abortion, climate change, or the Russian invasion of Ukraine with entities we think are humans but are actually AIs. The problem is that it is utterly pointless for us to spend our time trying to change an AI bot’s stated opinions, whereas the AI might be honing its messages so precisely that it has a good chance of influencing us.
Through its mastery of language, AI could even form intimate relationships with people and use the power of intimacy to change our opinions and worldviews. While there is no indication that AI has any consciousness or feelings of its own, to foster false intimacy with humans it is enough that the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, an engineer at Google, publicly claimed that the AI chatbot LaMDA, which he was working on, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr. Lemoine's claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?
In a political battle for minds and hearts, intimacy is the most effective weapon, and artificial intelligence has just acquired the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for certain politicians or buy certain products?
Even without creating false intimacy, the new AI tools could have a huge influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should be terrified as well. Why read a newspaper when I can simply ask the oracle to tell me the latest news? And what's the point of advertisements, when I can just ask the oracle to tell me what to buy?
And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated side. History is the interaction of biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process by which laws and religions shape food and sex.
What will happen to the course of history when AI takes over culture and starts producing stories, melodies, laws and religions? Earlier tools such as print and radio helped spread the cultural ideas of human beings, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create a whole new idea, a whole new culture.
At first, the AI will likely mimic the human prototypes it was trained on in its infancy. But as the years go by, the AI culture will boldly go where no human has gone before. For millennia humans have lived inside the dreams of other humans. In the coming decades, we may find ourselves living in the dreams of an alien intelligence.
The fear of artificial intelligence has only been haunting mankind for the last few decades. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and create illusions. As a result, human beings have feared being trapped in a world of illusions since ancient times.
In the 17th century, René Descartes feared that perhaps a malicious demon was trapping him in a world of illusions, creating everything he saw and heard. In ancient Greece, Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall: a screen. On that screen they see various shadows projected. The prisoners mistake the illusions they see there for reality.
In ancient India, Buddhist and Hindu sages pointed out that all humans live trapped inside maya, the world of illusions. What we normally take for reality is often just fiction in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion.
The artificial intelligence revolution is bringing us face to face with Descartes' demon, with Plato's cave, with maya. If we are not careful, we may become trapped behind a curtain of illusions that we cannot tear away, or even realize is there.
Of course, the new power of AI could be used for good purposes too. I won’t dwell on that, because people who develop AI talk about it enough. The task of historians and philosophers like me is to point out the dangers. But certainly AI can help us in countless ways, from finding new treatments for cancer to discovering solutions to the ecological crisis. The question we face is how to ensure that new AI tools are used for good rather than evil. To do this, we must first appreciate the true capabilities of these tools.
We have known since 1945 that nuclear technology could generate cheap energy to benefit humans, but it could also physically destroy human civilization. We have therefore reshaped the entire international order to protect humanity and to ensure that nuclear technology is used primarily for good. Now we have to deal with a new weapon of mass destruction that can annihilate our mental and social world.
We can still regulate new AI tools, but we need to act quickly. While nukes cannot invent more powerful nukes, AI can make AI exponentially more powerful. The crucial first step is to require rigorous security controls before powerful AI tools are released into the public domain. Just as a pharmaceutical company can’t release new drugs before testing them for both short- and long-term side effects, so tech companies shouldn’t release new AI tools before they’re made safe. We need a Food and Drug Administration equivalent for new technologies, and we needed it yesterday.
Won't slowing the public deployment of AI cause democracies to fall behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.
We have just encountered an alien intelligence here on Earth. We don’t know much about it, except that it could destroy our civilization. We should end the irresponsible deployment of AI tools in the public sphere and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for the AI to disclose that it is an AI. If I’m having a conversation with someone and I can’t figure out whether it’s a human being or an artificial intelligence, that’s the end of democracy.
This text was generated by a human.