AI And The Future Of Humanity by Yuval Noah Harari
This is the full text of Yuval Noah Harari’s talk titled “AI And The Future Of Humanity” at Frontiers Forum Live 2023 conference.
In his keynote at Frontiers Forum Live 2023, Yuval discusses the rapid development of artificial intelligence (AI) and its potential implications for society. He spoke on the potential for artificial intelligence to become the first inorganic lifeform on our planet, and how it might change the very make-up or meaning of the world’s ecological system.
Hello, everybody. Thank you for this wonderful introduction. And yes, what I want to talk to you about is AI and the future of humanity.
Now, I know that this conference is focused on the ecological crisis facing humanity. But for better or for worse, AI, too, is part of this crisis. AI can help us in many ways to overcome the ecological crisis, or it can make it far, far worse.
The Emergence Of Inorganic Agents
Actually, AI will probably change the very meaning of the ecological system. Because for four billion years, the ecological system of planet Earth contained only organic life forms. And now, or soon, we might see the emergence of the first inorganic life forms in four billion years, or at the very least, the emergence of inorganic agents.
Now, people have feared AI since the very beginning of the computer age, in the middle of the 20th century. And this fear has inspired many science fiction classics, like The Terminator or The Matrix. Now, while such science fiction scenarios have become cultural landmarks, they haven’t usually been taken seriously in academic and scientific and political debate, and perhaps for a good reason.
Because science fiction scenarios usually assume that before AI can pose a significant threat to humanity, it will have to reach or to pass two important milestones. First, AI will have to become sentient and develop consciousness, feelings, emotions. Otherwise, why would it even want to take over the world?
Secondly, AI will have to become adept at navigating the physical world. Robots will have to be able to move around and operate in houses and cities and mountains and forests, at least as dexterously and efficiently as humans. If they cannot move around the physical world, how can they possibly take it over?
And as of April 2023, AI still seems far from reaching either of these milestones. Despite all the hype around ChatGPT and the other new AI tools, there is no evidence that these tools have even a shred of consciousness, of feelings, of emotions.
As for navigating the physical world, despite the hype around self-driving vehicles, the date at which these vehicles will dominate our roads keeps being postponed.
However, the bad news is that to threaten the survival of human civilization, AI doesn’t really need consciousness and it doesn’t need the ability to move around the physical world.
Over the last few years, new AI tools have been unleashed into the public sphere, which may threaten the survival of human civilization from a very unexpected direction. And it’s difficult for us to even grasp the capabilities of these new AI tools and the speed at which they continue to develop.
Fundamental Abilities Of The New AI Tools
Indeed, because AI is able to learn by itself, to improve itself, even the developers of these tools don’t know the full capabilities of what they have created and they are themselves often surprised by emergent abilities and emergent qualities of these tools.
I guess everybody here is already aware of some of the most fundamental abilities of the new AI tools, abilities like writing text, drawing images, composing music, and writing code. But there are many additional capabilities that are emerging, like deepfaking people’s voices and images, like drafting bills, finding weaknesses both in computer code and also in legal contracts and in legal agreements.
But perhaps most importantly, the new AI tools are gaining the ability to develop deep and intimate relationships with human beings. Each of these abilities deserves an entire discussion, and it is difficult for us to understand their full implications.
But to keep it simple: when we take all of these abilities together as a package, they boil down to one very, very big thing: the ability to manipulate and to generate language, whether with words or images or sounds. The most important aspect of the current state of the ongoing AI revolution is that AI is gaining mastery of language at a level that surpasses the average human ability.
And by gaining mastery of language, AI is seizing the master key, unlocking the doors of all our institutions, from banks to temples. Because language is the tool that we use to give instructions to our bank and also to inspire heavenly visions in our minds.
Another way to think of it is that AI has just hacked the operating system of human civilization. The operating system of every human culture in history has always been language.
In The Beginning Was The Word.
We use language to create mythology and laws, to create gods and money, to create art and science, to create friendships and nations. For example, human rights are not a biological reality. They are not inscribed in our DNA. Human rights is something that we created with language by telling stories and writing laws.
Gods are also not a biological or physical reality. Gods, too, are something that we humans have created with language, by telling legends and writing scriptures. Money is not a biological or physical reality. Banknotes are just worthless pieces of paper, and at present more than 90% of the money in the world is not even banknotes. It’s just electronic information in computers passing from here to there.
What gives money of any kind value is only the story that people like bankers and finance ministers and cryptocurrency gurus tell us about money. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff didn’t create much of real value, but unfortunately they were all extremely capable storytellers.
Now, what would it mean for human beings to live in a world where perhaps most of the stories, melodies, images, laws, policies and tools are shaped by a non-human, alien intelligence, which knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind, and also knows how to form deep and even intimate relationships with human beings? That’s the big question.
Already today, in games like chess, no human can hope to beat a computer. What if the same thing happens in art, in politics, economics and even in religion? When people think about ChatGPT and the other new AI tools, they are often drawn to examples like kids using ChatGPT to write their school essays. What will happen to the school system when kids write essays with ChatGPT? Horrible.
But this kind of question misses the big picture. Forget about the school essays. Instead, think for example about the next US presidential race in 2024, and try to imagine the impact of the new AI tools that can mass-produce political manifestos, fake news stories and even holy scriptures for new cults.
In recent years, the politically influential QAnon cult has formed around anonymous online texts known as QDrops. Followers of this cult, who now number in the millions in the US and the rest of the world, collected, reviewed and interpreted these QDrops as a kind of new scripture, as sacred texts.
To the best of our knowledge, all previous QDrops were composed by human beings, and bots only helped to disseminate these texts online. But in the future, we might see the first cults and religions in history whose revered texts were written by a non-human intelligence.
And of course, religions throughout history claimed that their holy books were written by a non-human intelligence. This was never true before. This could become true very, very quickly, with far-reaching consequences.
On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, or about climate change, or about the Russian invasion of Ukraine, with entities that we think are fellow human beings, but are actually AI bots.
Now, the catch is that it is utterly pointless for us to waste our time trying to convince an AI bot to change its political views. But the longer we spend talking with the bot, the better it gets to know us, and the better it understands how to hone its messages in order to shift our political views, or our economic views, or anything else.
Battlefront Shifting From Attention To Intimacy
Through its mastery of language, AI, as I said, could also form intimate relationships with people, and use the power of intimacy to influence our opinions and worldviews.
Now, there is no indication that AI has, as I said, any consciousness, any feelings of its own, but in order to create fake intimacy with human beings, AI doesn’t need feelings of its own. It only needs to be able to inspire feelings in us, to get us to be attached to it.
Now, in June 2022, there was a famous incident when the Google engineer Blake Lemoine publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. This highly controversial claim cost him his job; he was fired.
Now, the most interesting thing about this episode wasn’t Lemoine’s claim, which was most probably false. The really interesting thing was his willingness to risk and ultimately lose his very lucrative job for the sake of the AI chatbot that he thought he was protecting.
If AI can influence people to risk and lose their jobs, what else can it induce us to do? In every political battle for hearts and minds, intimacy is the most effective weapon of all, and AI has just gained the ability to mass-produce intimacy with millions, even hundreds of millions, of people.
Now, as you probably all know, over the past decade, social media has become a battlefield for controlling human attention. Now, with the new generation of AI, the battlefront is shifting from attention to intimacy, and this is very bad news.
What will happen to human society and to human psychology as AI fights AI in a battle to create intimate relationships with us, relationships that can then be used to convince us to buy particular products or to vote for particular politicians? Even without creating fake intimacy, the new AI tools would have an immense influence on human opinions and on our worldview.
People, for instance, may come to use (and are already coming to use) a single AI advisor as a one-stop oracle and as the source for all the information they need. No wonder Google is terrified. If you’ve been watching the news lately, you know that Google is terrified, and for good reason.
Why bother searching yourself, when you can just ask the oracle to tell you anything you want? The news industry and the advertisement industry should also be terrified. Why read a newspaper when I can just ask the oracle to tell me what’s new?
And what’s the point, what’s the purpose of advertisement when I can just ask the oracle to tell me what to buy? So there is a chance that within a very short time the entire advertisement industry will collapse, while AI, or the people and companies that control the new AI oracles, will become extremely, extremely powerful.
What we are potentially talking about is nothing less than the end of human history. Now, not the end of history, just the end of the human-dominated part of what we call history. History is the interaction between biology and culture. It’s the interaction between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which religions and laws interact with food and sex.
When AI Takes Over Culture
Now, what will happen to the course of history, to this interaction between biology and culture, when AI takes over culture? Within a few years, AI could eat the whole of human culture, everything we’ve produced for thousands and thousands of years, digest it, and start gushing out a flood of new cultural creations, new cultural artifacts.
And remember that we humans, we never really have direct access to reality. We are always cocooned by culture, and we always experience reality through a cultural prism. Our political views are shaped by the stories of journalists and by the anecdotes of friends. Our sexual preferences are tweaked by movies and fairy tales.
Even the way that we walk and breathe is subtly nudged by cultural traditions. Previously, this cultural cocoon was always woven by other human beings. Previous tools, like printing presses, radios, or televisions, helped to spread the cultural ideas and creations of humans, but they could never create something new by themselves.
A printing press cannot create a new book. It’s always done by a human. AI is fundamentally different from printing presses, from radios, from every previous invention in history, because it can create completely new ideas. It can create a new culture.
And the big question is, what will it be like to experience reality through a prism produced by a non-human intelligence, by an alien intelligence?
Now at first, in the first few years, AI will probably largely imitate the human prototype that fed it in its infancy. But with each passing year, AI culture will boldly go where no human has gone before.
So for thousands of years, we humans basically lived inside the dreams and fantasies of other humans. We worshipped gods, we pursued ideals of beauty, we dedicated our lives to causes that originated in the imagination of some human poet or prophet or politician.
Soon, we might find ourselves living inside the dreams and fantasies of an alien intelligence. And the danger that this poses, or the potential danger (it also has positive potential), is fundamentally very, very different from most of the things imagined in science fiction movies and books.
Previously, people have mostly feared the physical threat that intelligent machines pose. So the Terminator depicted robots running in the streets and shooting people. The Matrix assumed that to gain total control of human society, AI would first need to get physical control of our brains and directly connect our brains to the computer network. But this is wrong.
Simply by gaining mastery of human language, AI has all it needs in order to cocoon us in a Matrix-like world of illusion. Contrary to what some conspiracy theories assume, you don’t really need to implant chips in people’s brains in order to control them, or to manipulate them.
For thousands of years, prophets and poets and politicians have used language and storytelling in order to manipulate and to control people and to reshape society. Now, AI is likely to be able to do it. And once it can do that, it doesn’t need to send killer robots to shoot us. It can get humans to pull the trigger if it really needs to.
Fear of AI Revolution
Now, fear of AI has haunted humankind for only the last few generations, let’s say from the middle of the 20th century. If you go back to Frankenstein, maybe it’s 200 years. But for thousands of years, humans have been haunted by a much, much deeper fear. Humans have always appreciated the power of stories and images and language to manipulate our minds and to create illusions.
Consequently, since ancient times, humans feared being trapped in a world of illusions. In the 17th century, René Descartes feared that perhaps a malicious demon was trapping him inside this kind of world of illusions, creating everything that Descartes saw and heard.
In ancient Greece, Plato told the famous allegory of the cave, in which a group of people is chained inside a cave all their lives, facing a blank wall, a screen. On that screen, they see projected various shadows, and the prisoners mistake these illusions, these shadows, for the reality.
In ancient India, Buddhist and Hindu sages pointed out that all humans live trapped inside what they called Maya, the world of illusions. Buddha said that what we normally take to be reality is often just a fiction in our own minds. People may wage entire wars, killing others and being willing to be killed themselves, because of their belief in this fiction.
So the AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, a curtain of illusions could descend over the whole of humankind, and we will never be able to tear that curtain away, or even realize that it is there, because we’ll think this is reality.
Social Media and AI
If this sounds far-fetched, just look at social media over the last few years. Social media has given us a small taste of things to come.
In social media, very primitive AI tools have been used not to create content, but to curate content produced by human beings. The humans produce the stories and videos, and the AI chooses which stories and which videos reach our ears and eyes, selecting those that will get the most attention, those that will be the most viral.
And while very primitive, these AI tools have nevertheless been sufficient to create a kind of curtain of illusions that has increased societal polarization all over the world, undermined our mental health, and destabilized democratic societies. Millions of people have mistaken these illusions for reality.
The USA has the most powerful information technology in the whole of history, and yet American citizens can no longer agree who won the last election, or whether climate change is real, or whether vaccines prevent illnesses or not.
The new AI tools are far, far more powerful than these social media algorithms, and they could cause far more damage. Now, of course, AI has enormous positive potential too. I didn’t talk about it, because the people who develop AI naturally talk about it enough. You don’t need me to add to that chorus.
Positive Potential of AI
The job of historians and philosophers like myself is often to point out the dangers. But certainly, AI can help us in countless ways, from finding new cures to cancer to discovering solutions to the ecological crisis that we are facing.
In order to make sure that the new AI tools are used for good and not for ill, we first need to appreciate their true capabilities, and we need to regulate them very, very carefully.
Since 1945, we have known that nuclear technology could physically destroy human civilization, as well as benefit us by producing cheap and plentiful energy. We therefore reshaped the entire international order to protect ourselves and to make sure that nuclear technology is used primarily for good.
We now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world. One big difference between nukes and AI is that nukes cannot produce more powerful nukes. AI can produce more powerful AI, so we need to act quickly, before AI gets out of our control.
Drug companies cannot sell people new medicines without first subjecting these products to rigorous safety checks. Biotech labs cannot just release a new virus into the public sphere in order to impress their shareholders with their technological wizardry.
Similarly, governments must immediately ban the release into the public domain of any more revolutionary AI tools before they are made safe. Again, I’m not talking about stopping all research in AI. The first step is to stop the release into the public sphere. You can research viruses without releasing them to the public. You can research AI, but don’t release it too quickly into the public domain.
AI Arms Race
If we don’t slow down the AI arms race, we will not have time to even understand what is happening, let alone to regulate effectively this incredibly powerful technology.
You might be wondering or asking, won’t slowing down the public deployment of AI cause democracies to lag behind more ruthless authoritarian regimes? The answer is absolutely no. Exactly the opposite. Unregulated AI deployment is what will cause democracies to lose to dictatorships.
Because if we unleash chaos, authoritarian regimes could more easily contain this chaos than could open societies. Democracy, in essence, is a conversation. Democracy is an open conversation. Dictatorship is a dictate. There is one person dictating everything, no conversation. Democracy is a conversation between many people about what to do. And conversations rely on language.
When AI hacks language, it means it could destroy our ability to conduct meaningful public conversations, thereby destroying democracy. If we wait for the chaos, it will be too late to regulate it in a democratic way. Maybe in an authoritarian or totalitarian way it will still be possible to regulate.
And the first regulation, there are many regulations we could suggest, but the first regulation that I would suggest is to make it mandatory for AI to disclose that it is an AI. If I’m having a conversation with someone and I cannot tell whether this is a human being or an AI, that’s the end of democracy. Because that’s the end of meaningful public conversations.
Now, what do you think about what you just heard over the last 20 or 25 minutes? Some of you, I guess, might be alarmed. Some of you might be angry at the corporations that develop these technologies or at the governments that fail to regulate them.
Some of you may be angry at me, thinking that I’m exaggerating the threat or that I’m misleading the public. But whatever you think, I bet that my words have had some emotional impact on you. Not just intellectual impact, also emotional impact.
I’ve just told you a story, and this story is likely to change your mind about certain things and may even cause you to take certain actions in the world.
Now, who created this story that you just heard and that just changed your mind and your brain?
Now, I promise you that I wrote the text of this presentation myself, with the help of a few other human beings, even though the images were created with the help of AI. I promise you that at least the words you heard are the cultural product of a human mind, or several human minds. But can you be absolutely sure that this is the case?
Now, a year ago, you could. A year ago, there was nothing on Earth, at least not in the public domain, other than a human mind that could produce such a sophisticated and powerful text. But now it’s different.
In theory, the text you just heard could have been generated by a non-human alien intelligence. So take a moment, or more than a moment, to think about it.
Thank you.