Yuval Noah Harari’s guidebook for navigating the age of AI
768 views
Mar 29, 2025
“Why is it that the quality of our information did not improve over thousands of years? Why is it that very sophisticated societies have been as susceptible as Stone Age tribes to mass delusion and the rise of destructive ideologies?”
0:00
Who is supposed to be the arbiter of truth
0:03
Between the human and the computer together, do you think that AI will actually help us
0:09
Let me ask you how you think this is going to impact democracies. What are some arguments for and against a future
0:16
in which humans only have relationships with AI? What is your worst-case fear about all of this
0:25
I'm Yuval Noah Harari. I'm a professor of history at the Hebrew University of Jerusalem
0:30
and the author of Nexus, a history of information networks from the Stone Age to AI
0:37
If you consider, for instance, that we have managed to reach the moon
0:43
to split the atom to decipher DNA, and yet with all our knowledge and wisdom
0:50
we are on the verge of destroying ourselves. And a traditional answer to this question is that there is something wrong in human nature
1:02
But the problem is not in our nature. The problem is in our information
1:08
If you give good people bad information, they make bad decisions. So why is it that the quality of our information did not improve over thousands of years of history
1:22
Why is it that very sophisticated societies in the 20th and 21st century have been as susceptible as Stone Age tribes to mass delusion and psychosis and the rise of destructive ideologies like Stalinism or Nazism
1:52
Good evening, everybody. Thank you for being here. It is a privilege for me to be here with you
2:00
I have been a longtime fan of his, going back to when Sapiens was first published. And there is so
2:09
much to talk about between then and now and how we think about AI and how we think about this new
2:17
reality, something that I think was actually a theme in your earliest books about reality and
2:21
truth and information, and in a world where it's going to become all electronic and in a cloud in
2:28
the sky in a whole new way, what it ultimately means when you think about history. So thank you
2:34
for being here. We're going to talk about so many things, but I'm going to tell you where we want to
2:38
start tonight, if I could. I want to read you something. This is you. You said that humankind
2:44
gains enormous power by building large networks of cooperation. But the way these
2:52
networks are built predisposes them to use power unwisely. Our problem then is a network problem
3:02
you call it. But even more specifically, you say it's an information problem. I want you to try to
3:09
unpack that for the audience this evening, because I think that that, more than anything else in this
3:15
book, explains, or at least sets the table for what this discussion is all about
3:22
So basically, the key question of the book, and of much of human history, is if we are so smart
3:30
why are we so stupid? That's the key question. I mean, we can, you know, we can reach the moon
3:39
We can split the atom. We can create AI. And yet we are on the verge of destroying ourselves
3:46
And not just through one way. Like previously we just had nuclear war to destroy ourselves
3:52
Now we've created an entire menu of ways to destroy ourselves. So what is happening
4:01
And, you know, one basic answer in many mythologies is that there is something wrong with human nature
4:09
that we reach for powers that we don't know how to use wisely
4:14
that there is really something deeply wrong with us. And the answer that I try to give in the book is that the problem is not in our nature
4:22
the problem is in our information. That if you give good people bad information, they will make bad decisions
4:32
self-destructive decisions. And this has been happening again and again in history
4:39
Because information isn't truth and information isn't wisdom. The most basic function of information is connection
4:49
Information connects many people to a network. And unfortunately, the easiest way to connect large numbers of people together
4:58
is not with the truth, but with fictions and fantasies and delusions
5:04
I was going to say, there's a sense, though, that information was supposed to set us free
5:08
Information, access to information across the world was the thing that was supposed to make us a better planet
5:19
Why would it do that? Well, there was a sense that people in places that didn't have access to communication tools and other things
5:29
didn't have access to information and didn't have access, and maybe to put a point on it, to good information
5:35
You make the argument, and now to go back to Sapiens, frankly, you talk about different
5:42
types of realities, and this is what I think is very interesting, because you talk about objective
5:46
reality. You know, we can go outside and we can see that it's not raining and the sky is blue, and
5:52
you'd say that's an objective reality, right? But then there are these other kinds of realities that
5:58
you talk about, which make us, I think, susceptible to what you're describing. When I say susceptible
6:04
I mean this idea that we as a group, as a humanity, that we are storytellers and that we are okay
6:12
oddly enough, with what you've described as a fictional reality. Yeah, I mean, to take an example, suppose we want to build an atom bomb
6:24
So to do that, suppose, just suppose somebody wants to build an atom bomb
6:30
You need to know some facts to build an atom bomb. You need some hold on objective physical reality
6:39
If you don't know that E equals mc squared, if you don't know the facts of physics
6:44
you will not be able to build an atom bomb. But to build an atom bomb, you need something else besides knowing the facts of physics
6:53
You need millions and millions of people to cooperate. Not just the physicists, but also the miners who mine uranium
7:03
and the engineers and builders who build the reactor, and the farmers that grow rice and wheat in order to feed
7:11
all the physicists and engineers and miners and so forth. And how do you get millions of people to cooperate on a project
7:19
like building an atom bomb? If you just tell them the facts of physics
7:23
look, E equals mc squared, now get to it. Nobody would do it. It's not inspiring. It doesn't mean anything in this sense of giving motivation
7:35
You always need to tell them some kind of fiction, fantasy. And the people who invent the fictions are far more powerful
7:46
than the people who know the facts of physics. In Iran today, the nuclear physicists are getting their orders from people who are experts in Shiite theology
7:58
In Israel, increasingly, the physicists get their orders from rabbis. In the Soviet Union, they got them from communist ideologues
8:08
In Nazi Germany, from Hitler and Himmler. It's usually the people who know how to weave a story
8:16
that give orders to the people who merely know the facts of nuclear physics
8:24
And one last point, crucial point, is that when, if you build a bomb
8:29
and ignore the facts of physics, the bomb will not explode. But if you build a story and ignore the facts of history and biology and whatever
8:39
the story still explodes. And usually with a much bigger bang. And so one of the underlying conceits of this book, in my mind as a reader, is that you believe that AI is going to be used for these purposes. Yes
8:56
And that AI ultimately is going to be used as a storyteller to tell untruths and unwise stories
9:04
And that those who have the power behind this AI are going to be able to use it in this way
9:11
Do you believe it's that people are going to use the AI in the wrong way, or that the AI itself is going to use it in the wrong way
9:21
Initially, it will be the people. But if we are not careful, it will get out of our control and it will be the AIs
9:28
I mean, maybe we start by saying something about the definition of AI because it's now kind of this buzzword, which is everywhere
9:36
And there is so much hype around it that it's becoming difficult to understand what it is
9:41
whenever they try to sell you something, especially in the financial markets
9:45
they tell you, oh, it's an AI. This is an AI chair, and this is an AI table
9:49
and this is an AI coffee machine. So buy it. So what is AI
9:55
So if we take a coffee machine as an example, not every automatic machine is an AI
10:01
Even though it knows how to make coffee when you press a button, it's not an AI
10:06
It's simply doing whatever it was pre-programmed by humans to do. For something to be an AI, it needs the ability to learn and change
10:17
by itself and ultimately to make decisions and create new ideas by itself. So if you approach
10:25
the coffee machine and the coffee machine, of its own accord, tells you, hey, good morning, Andrew
10:30
I've been watching you for the last couple of weeks and based on everything I learned about you
10:36
and your facial expression and the hour of the day and whatever
10:40
I predict that you would like an espresso, so here I made you an espresso
10:45
Then it's an AI, and it's really an AI if it says
10:49
and actually I've just invented a new drink called Bespresso, which is even better, I think you would like it better
10:56
so I took the liberty to make some for you. Then it's an AI, when it can make decisions and create new ideas by itself
11:06
And once you create something like that, then obviously the potential for it to get out of control and to start manipulating you is there
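To make that distinction concrete, here is a minimal sketch in Python. Everything in it is illustrative: the class names, the hour-by-hour preference counter, and the drink labels are assumptions for the example, not anything from the conversation. The first machine only executes a rule a human wrote in advance; the second changes its own behavior based on what it observes.

```python
from collections import Counter

class PreProgrammedMachine:
    """Not an AI: it only ever does what a human hard-coded in advance."""
    def press_button(self) -> str:
        return "coffee"  # fixed behavior, decided once by the programmer

class LearningMachine:
    """Closer to the definition above: it changes its own behavior
    from what it observes, without a human re-programming it."""
    def __init__(self) -> None:
        self.history = Counter()  # counts of (hour, drink) choices it has seen

    def observe(self, hour: int, drink: str) -> None:
        self.history[(hour, drink)] += 1  # learn from each interaction

    def recommend(self, hour: int) -> str:
        # Decide by itself: serve the drink most often chosen at this hour,
        # falling back to plain coffee when it has seen nothing yet.
        seen = {d: n for (h, d), n in self.history.items() if h == hour}
        return max(seen, key=seen.get) if seen else "coffee"

machine = LearningMachine()
for _ in range(5):
    machine.observe(hour=8, drink="espresso")
print(machine.recommend(hour=8))  # "espresso": a choice no human pre-programmed
```

The second machine's "decision" is trivial here, but the structural point is the one Harari makes: its behavior at any moment depends on data it gathered itself, not on a rule someone wrote down, and that is also the opening through which unintended behavior enters.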
11:18
And that's the big issue. And so how far away do you think we are from that
11:27
When OpenAI developed GPT-4 like two years ago, they did all kinds of tests
11:34
what can this thing do? So one of the tests they gave
11:39
GPT-4, was to solve CAPTCHA puzzles. That's when you access a web page
11:45
a bank or something, and they want to know if you're a human being or you're a bot
11:50
so you have these twisted letters and symbols. You guys all know this
11:54
This is the nine squares, and it says which one has something with four legs in it
11:58
and you see the animal. All these kinds of things, yeah. And GPT-4 couldn't solve it
12:04
But GPT-4 accessed TaskRabbit, which is an online site where you can hire people to do all kinds of things for you
12:15
And, you know, these CAPTCHAs were developed so humans can solve them, but AIs still struggle with them
12:22
So it wanted to hire a human to solve the problem for it
12:27
Now, and this is where it gets interesting, the human got suspicious. The human asked, why do you need somebody to solve your CAPTCHA puzzles
12:36
Are you a robot? And at that point, GPT-4 said, no, I'm not a robot
12:43
I have a vision impairment, which is why I have difficulty with these CAPTCHAs
12:48
So it knew how to lie. It had basically theory of mind on some level
12:54
It knew what, you know, there's so many different explanations and lies you can tell
13:00
It told a very, very effective lie. It's the kind of lie that, okay, vision impairment, I'm not going to ask any more questions about it
13:09
I will just take the word of GPT-4 for it. And nobody told it to lie, and nobody told it which lie will be most effective
13:21
It learned it the same way that humans and animals learn things, by interacting with the world
13:29
Let me ask you a different question. To the extent that you believe that the computer will be able to do this to us
13:37
will we, with the help of the computer, be able to avoid these problems on the other side
13:43
Meaning there's a cat and mouse game here. And the question is, between the human and the computer together
13:51
will we be able to be more powerful, and maybe power is good or bad
13:56
but be able to get at truth in a better way? Potentially, yes
14:03
I mean, the point of talking about it, of writing a book about it
14:08
is that it's not too late. And that also the AIs are not inherently evil or malevolent
14:15
It's a question of how we shape this technology. We still have the power
14:20
We are still, at the present moment, we are still far more powerful than the AIs
14:25
In very narrow domains, like playing chess or playing Go, they are becoming more intelligent than us
14:32
But we still have a huge edge. Nobody knows for how many more years
14:36
So we need to use it and make sure that we direct the development of this technology in a safe direction
14:45
Okay, so I'm going to make you king for the day. What would you do
14:49
How would you regulate these things? In an environment and world where we don't have global regulators, we don't have people who like to interact and even talk to each other anymore, frankly. Where we've talked about regulating social media, at least in the United States, for the past 15 years now, and have done next to nothing
15:11
How we would get there, what that looks like. You know, there are many regulations we should enact right now, and we can talk about that
15:19
one key regulation is make social media corporations liable for the actions of their
15:26
algorithms, not for the actions of the users, but for the actions of the corporate algorithms
15:33
And another key regulation is that AIs cannot pretend to be human beings. They can interact
15:41
with us only if they identify as AIs. But beyond all these specific regulations, the key thing is
15:49
institutions. Nobody is able to predict how this is going to play out, how this is going to develop
15:57
in the next 5, 10, 50 years. So any thought that we can now kind of regulate in advance
16:05
it's impossible. No rigid regulation will be able to deal with what we are facing
16:12
We need living institutions that have the best human talent and also the best technology
16:19
that can identify problems as they develop and react on the fly
16:24
And we are living in an era in which there is growing hostility towards institutions
16:30
But what we've learned again and again throughout history is that only institutions are able to deal with these things
16:38
not single individuals and not some kind of miracle regulations. Play it out
16:45
What is the, and you play it out in the book a little bit when it comes to dictators and your real fears
16:52
What is your worst-case fear about all of this? One of the problems with AIs is that there isn't just like this single nightmare scenario
17:04
With nuclear weapons, one of the good things about nuclear weapons, it was very easy for everybody to understand the danger
17:13
because there was basically one extremely dangerous scenario: nuclear war. With AIs, it's different, because they are not tools like atom bombs
17:25
They are agents. That's the most important thing to understand about them
17:30
An atom bomb ultimately is a tool in our hand. All decisions about how to use it and how to develop it, humans make these decisions
17:40
AI is not a tool in our hand. It's an autonomous agent that, as I said, can make decisions and can invent ideas by itself
17:50
And therefore, by definition, there are hundreds of dangerous scenarios, many of which we cannot predict because they are coming from a non-human intelligence that thinks differently than us
18:08
You know, the acronym AI traditionally stood for artificial intelligence, but I think it should stand for alien intelligence, not in the sense that it comes from outer space, but in the sense that it's an alien kind of intelligence
18:24
It thinks very differently than a human being. And it's not artificial because an artifact, again, is something that we create and control
18:34
And this now has the potential to go way beyond what we can predict and control
18:41
Still, from all the different scenarios, the one which I think speaks to my deepest fears and also to humanity's deepest fears is being trapped in a world of delusions created by AI
19:00
You know, the fear of robots rebelling against humans and shooting people and whatever, this is very new
19:08
This goes back a couple of decades, maybe to the mid-20th century
19:13
But for thousands of years, humans have been haunted by a much deeper fear
19:19
We know that our societies are ultimately based on stories. And there was always the fear that we will just be trapped inside a world of illusions and delusions and fantasies, which we will mistake for reality and will not be able to escape from
19:39
That we will lose all touch with truth, with reality. Part of that exists today
19:44
Yeah. And, you know, it goes back to the parable of the cave in Plato's philosophy of a group of prisoners chained in a cave facing a blank wall, a screen
19:58
And there are shadows projected on this screen and they mistake the shadows for reality
20:05
And you have the same fears in ancient Hindu and Buddhist philosophy of the world of Maya, the world of illusions
20:12
and AI is kind of the perfect tool to create it. We are already seeing it being created all around us
20:23
If the main metaphor of the early internet age was the web, the worldwide web
20:31
and the web connects everything, so now the main metaphor is the cocoon
20:37
It's the web which closes in on us, and I live inside one information cocoon
20:42
and you live inside maybe a different information cocoon and there is absolutely no way for us to communicate
20:50
because we live in separate, different realities. Let me ask you how you think this is going to impact democracies
20:59
And let me read you something from the book because I think it speaks to maybe even this election
21:04
here in the United States. The definition of democracy as a distributed information network
21:08
with strong self-correcting mechanisms stands in sharp contrast to a common misconception
21:14
that equates democracy only with elections. Elections are a central part of the democratic
21:22
toolkit, but they are not democracy. In the absence of additional self-correcting mechanisms
21:27
elections can easily be rigged. Even if the elections are completely free and fair by itself
21:33
this too doesn't guarantee democracy. For democracy is not the same thing as
21:39
and this one's interesting, majority dictatorship. What do you mean by that
21:46
Many things. But, I mean, on the most basic level, we just saw in Venezuela
21:52
that you can hold elections, and if the dictator has the power to rig the elections
21:58
so it means nothing. The great advantage of democracy is that it corrects itself
22:06
It has this mechanism that you try something, the people try something
22:10
they vote for a party, for a leader, let's try this bunch of policies for a couple of years
22:16
If it doesn't work well, we can now try something else. The basic ability to say we made a mistake
22:23
let's try something else, this is democracy. In Putin's Russia, you cannot say that. We made a mistake, let's try somebody else. In a democracy you can. But there is a problem here. You give so much power for a limited time
22:39
you give so much power to a leader, to a party, to enact certain policies. But then what happens
22:47
if that leader or party utilizes the power that the people gave them
22:53
not to enact a specific time-limited policy, but to gain even more power
22:59
and to make it impossible to get rid of them anymore. That was always one of the key problems of democracy
23:07
that you can use democracy to gain power and then use the power you have to destroy democracy
23:14
Erdogan in Turkey said it best, I think. He said democracy is like a tram, like a train
23:20
You take it until you reach its destination, then you go down. You don't stay on the train
23:26
So that's one key problem that has been haunting democracy for its entire existence, you know, from ancient times until today
23:37
And for most of history, large-scale democracy was really impossible because democracy, in essence, is a conversation
23:48
between a large number of people. And in the ancient world, you could have this conversation
23:55
in a tribe, in a village, in a city-state like Athens or Republican Rome, but you could not have it
24:03
in a large kingdom with millions of people because there was no technical means to hold the conversation
24:10
So we don't know of any example of a large-scale democracy before the rise of modern information technology in the late modern age
24:20
So democracy is built on top of information technology. And when there is a big change in information technology, like what is happening right now in the world, there is an earthquake in the structure built on top of it
24:36
Who is supposed to be the arbiter of truth in these democracies
24:40
And the reason I raise this issue is we have a big debate going on in this country right now about free speech
24:47
Yeah. And how much information we're supposed to know what access to information we're supposed to get
24:54
Mark Zuckerberg just came out a couple of weeks ago and said that, you know, there was information during the pandemic that he was suppressing that, in retrospect, he wishes now he hadn't suppressed
25:05
This gets to the idea of truth, and it also gets to the idea of what information is
25:11
You said information is not truth. Yeah. Are we supposed to get access to all of it and decipher what's real ourselves
25:19
Is somebody else supposed to do it for us? And will AI ultimately be that arbiter
25:26
So there are a couple of different questions here. One about free speech and the other about who is supposed to
25:34
I mean, the most important thing is that free speech includes the freedom to tell lies, to tell fictions, to tell fantasies
25:43
So it's not the same as the question of truth. Ideally, in a democracy, you also have the freedom to lie
25:51
You also have the freedom to spread fictions. You say ideally? Why is that ideal
25:56
Why is it ideal? That's part of freedom of speech. Okay. And this is something that is crucial to understand because the tech giants, they are constantly confusing the question of truth with the question of free speech
26:11
And there are two different questions. We don't want a kind of truth police that constantly tells human beings what they can and cannot say
26:23
There are, of course, limits even to that, but ideally, yes, people should be able
26:29
to also tell fictions and fantasies and so forth. This is information
26:35
It's not truth. The crucial role of distilling out of this ocean of information
26:44
the rare and costly kind of information which is truth. This is the role of several different, very important institutions in society
26:55
This is the role of scientists. This is the role of journalists
26:59
This is the role of judges. And the role or the aim of these institutions is, again, it's not to limit the freedom of speech of the people
27:12
but to tell us what is the truth. Now, the problem we now face is that there is a sustained attack on these institutions
27:24
and on the very notion that there is truth, because the dominant worldview in large parts of both the left and the right
27:37
on the left it takes a Marxist shape, and on the right a populist shape
27:41
is they tell us that the only reality is power. All the world, all the universe, all reality is just power
27:54
The only thing humans want is power. And any human interaction, like us having this conversation
28:03
is just a power struggle. And in every such interaction, when somebody tells you something
28:10
The question to ask, and this is where Donald Trump meets Karl Marx
28:15
and they shake hands. They agree on that. When somebody tells you something, the question to ask is not
28:22
is it true? There's no such thing. The question to ask is, who is winning and who is losing
28:29
Whose privileges, like what I have just said, whose privileges are being served by it
28:36
And the idea is that, again, scientists, journalists, judges, people at large, they are only pursuing power
28:46
And this is an extremely cynical and destructive view, which undermines all institutions
28:55
Luckily, it's not true, especially when we look at ourselves. If we start to understand humanity
29:04
just by looking at ourselves, most people will say, yes, I want some power in life
29:09
but it's not the only thing I want. I actually want truth also
29:15
I want to know the truth about myself, about my life, the truth about the world
29:21
And the reason that there is this deep yearning for truth is that you can't really be happy
29:27
if you don't know the truth about yourself and about your life
29:32
If you look at these people who are obsessed with power, people like Vladimir Putin or Benjamin Netanyahu or Donald Trump
29:41
they have a lot of power. They don't seem to be particularly happy
29:47
And when you realize that I'm a human being and I really want to know at least some truth about life
29:56
why isn't it true also of others? Yes, there are problems in institutions. There is corruption, there is influence and so on
30:05
But this is why we have a lot of different institutions. I'm going to throw another name at you. You talked about Putin. You talked about Trump
30:12
You talked about Netanyahu. He's not a politician, but he might want to be at some point
30:17
Elon Musk. Where do you put him in this? And the reason I ask, talking about truth and information
30:25
he's somebody who said that he wants to set all information free, because he believes that if the information is free, he says
30:32
that you, all of us, will be able to find and get to the truth
30:37
There are others who believe that all of that information will obscure the truth
30:42
I think, again, it's a very naive view of information and truth
30:46
The idea that you just open the floodgates and let information flood the world
30:52
and the truth will somehow rise to the top. You don't know anything about history if this is what you think
31:03
It just doesn't work like that. The truth, again, it's costly. It's rare
31:09
Like if you want to write a true story in a newspaper or an academic paper or whatever
31:15
you have to research so much. It's very difficult. It takes a lot of time, energy, money
31:22
If you want to create some fiction, it's the easiest thing in the world. Similarly, the truth is often very complicated because reality is very complicated
31:32
Whereas fiction and fantasy can be made as simple as possible. And people, in most cases, they like simple stories
31:41
And the truth is often painful. Whether you think about the truth about entire countries or the truth about individuals, the truth is often unattractive
31:52
It's painful. We don't want to know, for instance, the pain that we inflict sometimes on people in our life or on ourselves
32:03
Because the truth is costly and it's complicated and it can be painful, in a completely kind of free fight, it will lose
32:13
This is why we need institutions like newspapers or like academic institutions
32:19
And again, something very important about the tech giants, my problem with Facebook and Twitter and so forth, I don't expect them to censor their users
32:33
We have to be very, very careful about censoring the expression of real human beings
32:41
My problem is their algorithms. Even when they're telling untruths? Even when they're telling untruths
32:46
Even when they're not telling the truth. We have to be very careful about censoring the free expression of human beings
32:53
My problem is with the algorithms. That if you look at a historical case
33:00
like the massacre of the Rohingya in Myanmar in 2016, 2017, in which thousands of people were murdered
33:10
tens of thousands were raped, and hundreds of thousands were expelled and are still refugees in Bangladesh and elsewhere
33:17
This ethnic cleansing campaign was fueled by a propaganda campaign that generated intense hatred among Buddhist Burmese in Myanmar against the Rohingya
33:33
Now, this campaign, to a large extent, took place on Facebook. And the Facebook algorithms played a very central role in it
33:45
Now, whenever these accusations have been made, and they have been made by Amnesty and the United Nations and so forth
33:52
Facebook basically said, but we don't want to censor the free expression of people
33:59
That all these conspiracy theories about the Rohingya, that they are all terrorists and they want to destroy us
34:04
they were created by human beings. And who are we, Facebook, to censor them
34:10
And the problem there is that the role of the algorithms at that point was not in creating the stories
34:21
It was in disseminating, in spreading particular stories. Because people in Myanmar were creating a lot of different kinds of content in 2016, 2017
34:34
There were hate-filled conspiracy theories, but there were also sermons on compassion
34:39
and cooking lessons and biology lessons and so much content. And in the battle for human attention
34:48
what will get the attention of the users in Myanmar? The algorithms were the kingmakers
34:55
And the algorithms were given by Facebook a clear and simple goal
35:03
Increase user engagement. Keep more people, more time on the platform, because this is the basis for what is still Facebook's business model
35:17
Now, the algorithms, and this goes back to AIs that make decisions by themselves
35:22
and invent ideas by themselves. Nobody in Facebook wanted there to be an ethnic cleansing campaign
35:30
Most of them did not know anything about Myanmar and what was happening there
35:34
They just told the algorithms increase user engagement. And the algorithms experimented on millions of human guinea pigs
35:42
And they discovered that if you press the hate button and the fear button in a human mind
35:48
you keep that human glued to the screen. So they deliberately started spreading hate-filled conspiracy theories
35:57
and not cooking lessons or sermons on compassion. And this is the expectation
36:04
Don't censor the users. But if your algorithms deliberately spread hate and fear
36:14
because this is your business model, it's on you. This is your fault
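The dynamic Harari describes can be pictured with a toy model. The sketch below is a guess at the general mechanism, not Facebook's actual system; the content categories and engagement numbers are invented for illustration. An epsilon-greedy bandit whose only objective is engagement, run against users for whom outrage holds attention longest, ends up recommending outrage almost every time, even though no one coded that outcome in.

```python
import random

# Invented numbers: average minutes each kind of post keeps a user on the
# platform. The assumption, per the anecdote above, is that outrage holds
# attention longest.
TRUE_ENGAGEMENT = {
    "cooking lesson": 1.0,
    "sermon on compassion": 0.8,
    "hate-filled conspiracy": 3.0,
}

estimates = {post: 0.0 for post in TRUE_ENGAGEMENT}  # the algorithm's learned beliefs
counts = {post: 0 for post in TRUE_ENGAGEMENT}

def recommend(epsilon: float = 0.1) -> str:
    # Epsilon-greedy: mostly exploit the best-looking post type, sometimes explore.
    if random.random() < epsilon:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

for _ in range(10_000):  # 10,000 simulated sessions: the "human guinea pigs"
    post = recommend()
    minutes = random.gauss(TRUE_ENGAGEMENT[post], 0.5)  # noisy observed engagement
    counts[post] += 1
    # Incremental mean update. The only goal ever given: increase engagement.
    estimates[post] += (minutes - estimates[post]) / counts[post]

print(max(estimates, key=estimates.get))  # almost always "hate-filled conspiracy"
```

Nobody in the loop ever says "spread hate"; the outcome falls out of the objective plus trial and error, which is the sense in which the algorithm, not the user, makes the editorial choice.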
36:18
Okay, but here I'm going to make it more complicated. I'm going to make it more complicated. And the reason we're talking about Facebook and Elon Musk
36:23
is that the algorithms that are effectively building these new AI agents
36:30
are scraping all of this information off of these sites, this human user-generated content which we've decided everybody should be able to see
36:43
except once the AIs see it, they will see the conspiracy theories
36:48
They will see the misinformation. And the question is how the AI agent will ever learn what's real and what's not
36:57
That is a very, very old question. You know, the editor of the New York Times, the editor of the Wall Street Journal
37:06
have dealt with this question before. Why do we need to start again as if there has never been any history
37:14
Like the question, there is so much information out there. How should I know what's true or not
37:19
People have been there before. If you run Twitter or Facebook, you are running one of the biggest media companies in the world. So take a look at what media companies have been dealing with in previous generations and previous centuries
37:35
Do you think they should be liable for what's on their sites right now? The tech companies
37:40
The tech companies should be liable for the user-generated content? No, they should be liable for the actions of their algorithms
37:48
If their algorithms decided to put at the top of the news feed a hate-filled conspiracy theory
37:54
it's on them, not on the person who created the conspiracy theory in the first place
37:59
The same way that, you know, if the chief editor of the New York Times decided to put a hate-filled conspiracy theory at the top of the first page of the New York Times
38:09
And when you come and tell them, what have you done? They say, I haven't done anything
38:14
I didn't write it. I just decided to put it on the front page of the New York Times
38:19
That's all. That's a huge thing. Editors have a lot more power than the authors who create the content
38:29
It's very cheap to create content. The really key point is what gets the attention
38:36
You know, I'll give you an example from thousands of years ago. When Christianity began
38:43
there were so many stories circulating about Jesus and about the disciples and about the saints
38:50
Anybody could invent a story and say, Jesus said it. And there was no Bible
38:55
There was no New Testament in the time of Jesus or 100 years after Jesus
39:00
or even 200 years after Jesus. There are lots and lots of stories and texts and parables
39:06
At a certain point, the leaders of the Christian church in the 4th century
39:11
they said, this cannot go on like this. We have to have some order in this flood of information
39:18
How would Christians know what information to value, which texts to read
39:24
and what are forgeries or whatever that they should just ignore? And they had two church councils in Hippo and in Carthage, today in Tunisia
39:34
in the late 4th century, and this is where they didn't write the texts. The texts were written
39:43
generations previously by so many different people. They decided what will get into the Bible
39:51
and what will stay out. The book didn't come down from heaven in its current, complete form
39:57
There was a committee that decided which of all the texts will get in
40:05
And for instance, they decided that a text which claimed to be written by St. Paul
40:13
but many scholars today believe that it wasn't written by St. Paul, the first epistle to Timothy, which was a very misogynistic text
40:22
which said basically that the role of women is to be silent and to bear children
40:26
This got into the Bible. Whereas another text, the Acts of Paul and Thecla
40:33
which portrayed Thecla as a disciple of St. Paul, preaching and leading the Christian community and doing miracles and so forth
40:42
they said, nah, let's leave that one out. And the views of billions of Christians for almost 2,000 years
40:52
about women and their capacities and their role in the church and in the community
40:59
were decided not by the authors of 1 Timothy and the Acts of Paul and Thecla
41:07
but by this church committee who sat in what is today Tunisia
41:12
and went over all these things and decided this will be in and this will be out
41:17
This is enormous power, the power of curation, the editorial power. You know, you think about the role of newspapers in modern politics
41:28
It's the editors who have the most power. Lenin, before he was dictator of the Soviet Union, his one job was chief editor of Iskra
41:38
And Mussolini also. First he was a journalist, then he was editor
41:44
Immense power. And now the editors increasingly are the algorithms. You know, I'm here partly on a book tour, and I know that my number one customer is the algorithm
41:58
Like, if I can get the algorithm to recommend my book, the humans will follow
42:06
So it's an immense power. How are you planning to do that
42:11
That's above my pay grade. I mean, I'm just sitting here on the stage talking with you
42:16
There is a whole team that decides, oh, we'll do this, we'll do that, we will put it like this
42:22
But seriously, again, when you realize the immense power of the recommendation algorithm, the editor, this comes with responsibility
42:32
So again, I don't think that we should hold Twitter or Facebook responsible for what their users write
42:40
And we should be very careful about censoring users. Even when they write lies?
42:48
But we should hold the companies responsible for the choices and the decisions of their algorithms
42:55
You talk now about Lenin and we've talked about all sorts of dictators
42:58
One of the things you talk about in the book is the prospect that AI could be used to turn a democracy into a totalitarian state
43:08
Which I thought was fascinating because it would really require AI to cocoon all of us in a unique way, telling some kind of story that captures an entire country
43:20
How would that work in your mind? Again, we don't see reality
43:28
We see the information that we are exposed to. And if you have better technology to create and to control information, you have a much more powerful technology to control humans
43:44
Previously, there was always limitations to how much a central authority could control what people see and think and do
43:55
Even if you think about the most totalitarian regimes of the 20th century, like the USSR or Nazi Germany, they couldn't really follow everybody all the time
44:07
It was technically impossible. Like if you have 200 million Soviet citizens, to follow them around all the time, 24 hours a day, you need about 400 million KGB agents
44:21
Because even KGB agents, they need to sleep sometimes. They need to eat sometimes
44:25
They need two shifts. So 200 million citizens, you need 400 million agents
44:29
You don't have 400 million KGB agents. Even if you had, you still have a bigger problem
44:35
Because let's say that two agents follow each citizen 24 hours a day
44:39
What do they do in the end? They write a report. This is like 1940 or 1960
44:44
It's paper. They write a paper report. So every day, KGB headquarters in Moscow is flooded with 200 million reports about each citizen. Where they went, what they read, who they met. Somebody needs to analyze all that information. Where do they get the analysts
45:05
It's absolutely impossible, which is why even in the Soviet Union, privacy was still the default
45:13
You never knew who is watching and who is listening, but most of the time, nobody was watching and listening
45:18
And even if they were, most likely the report about you would be buried in some place in the archives of the KGB and nobody would ever read it
45:28
Now both problems of surveillance are solved by AI. To follow all the people of a country 24 hours a
45:38
day, you do not need millions of human agents. You have all these digital agents, the smartphones
45:44
the computers, the microphones, and so forth. You can put the whole population under 24-hour
45:50
surveillance. And you don't need human analysts to analyze all the information. You have AIs to do
45:57
it. What do you make of the idea that an entire generation has given up on the idea of privacy
46:05
Right? We all put our pictures on Instagram and Facebook and all of these sites. We hear about a
46:12
security hack at some website and we go back the next day and try to buy and put our credit card
46:17
on the same site again. We say that we're very anxious about privacy
46:22
We love to tell people how anxious we are about the privacy. And then we do things
46:27
that are the complete opposite. Because there is immense pressure on us
46:31
to do them. Part of it is despair. And people still cherish privacy
46:40
And part of the problem is that we haven't seen the consequences yet
46:45
Yet. Yet. We will see them quite soon. It's like this huge experiment conducted on billions of human guinea
46:53
pigs, but we still haven't seen the consequences of this annihilation of privacy. And it is
47:01
extremely, extremely dangerous. Again, going back to the freedom of speech issue, part of this
47:07
crisis of freedom of speech is the erosion of the difference between private and public
47:13
that there is a big difference if just the two of us are sitting alone somewhere talking just you and me
47:22
or if there is an entire audience and it's public. Now, I'm a very big believer in the right to stupidity
47:30
People have a right to be stupid in private. That when you talk in private with your best friends, with your family, whatever
47:40
you have a right to be really stupid, to be offensive. As a gay person, I would say that even politicians, if they tell homophobic jokes
47:50
in a private situation with their friends, this is none of my business. And it's actually good
47:56
that politicians would have a situation where they can just relax and say the first stupid
48:03
thing that pops up in their mind. This should not happen in public. Now it's going to be entered
48:10
into the algorithm. Yes. And the fact that anything I say, even in private
48:20
can now go viral. So there is no difference. And this is extremely harmful. Part of what's
48:28
happening now in the world is this kind of tension between organic animals, we are organic animals
48:36
and an inorganic digital system, which is increasingly controlling and shaping the entire world
48:45
Now, part of being an organic entity is that you live by cycles
48:51
day and night, winter and summer, growth and decay. Sometimes you're active
48:56
Sometimes you need to relax. You need to rest. Now, algorithms and AIs and computers, they are not organic
49:04
They never need rest. They are on all the time. And the big question is whether we adapt to them or they adapt to us
49:14
And more and more, of course, we have to adapt to them. We have to be on all the time
49:19
So the news cycle is always on. And everything we say, even when we are supposedly relaxing with friends, it can be public
49:28
So the whole of life becomes like this one long job interview
49:32
that any stupid thing you did in some college party when you were 18
49:37
it can meet you down the road 10, 20 years later. And this is destructive to how we function
49:46
You even think about the market. Like Wall Street, as far as I know, it's open Monday to Friday
49:52
9:30 to 4 in the afternoon. Like if on Friday, at five minutes past four
49:57
a new war erupts in the Middle East, the market will react only on Monday
50:02
It is still running by organic cycles. Now, what would happen to human bankers and financiers and what is happening when the market is always active
50:13
You can never relax. We've got some questions, I think, that are going to come out here on cards, which I want to ask in just a moment
50:19
I have a couple of questions before we get to them. One is, have you talked to folks like Sam Altman, who runs OpenAI, or the folks at Microsoft
50:29
I know Bill Gates was a big fan of your books in the past. Or the folks at Google
50:34
What do they say when you discuss this with them? And do you trust them
50:39
As humans, when you've met them, do you go, I trust you, Sam Altman
50:45
Most of them are afraid. Afraid of you or afraid of AI
50:50
Yeah, afraid of what they are doing, afraid of what is happening. They understand better than anybody else
50:56
the potential, including the destructive potential, of what they are creating. And they are very afraid of it
51:04
At the same time, their basic shtick is that I'm a good guy
51:08
and I'm very concerned about it. Now you have these other guys, they are bad
51:14
they don't have the same kind of responsibility that I have, so it would be very bad for humanity if they create it first
51:21
so I must be the one who creates it first, and you can trust me that I will know
51:26
I will at least do my best to keep it under control. And everybody is saying it
51:31
And I think that to some extent, they are genuine about it. There is, of course, also this other element in there of an extreme kind of pride and hubris
51:44
that they are doing the most important thing in basically not just the history of humanity, the history of life
51:51
You think they are? They could be, yes. If you think about the timeline of the universe, at least as far as we know it
52:01
So you have basically two stops. First stop four billion years ago
52:07
the first organic life forms emerge on planet Earth. And then for four billion years, nothing major happens
52:16
Like for four billion years, it's more of the same. It's more organic stuff. So you have amoebas and you have dinosaurs and you have humans
52:23
but it's all organic. And then here comes Elon Musk or Sam Altman. And that's the second important thing in the history of the universe: the beginning of inorganic evolution
52:35
Because AI is just at the very, very beginning of its evolutionary process
52:41
It's basically like 10 years old, 15 years old. We haven't seen anything yet
52:48
GPT-4 and all these things, they are the amoebas of AI evolution
52:54
and who knows what the AI dinosaurs are going to look like
53:01
But the name on the inflection point of the history of the universe
53:08
if that name is Elon Musk or that name is Sam Altman, that's a big thing
53:14
We've got a bunch of really great questions, and actually one of them is where I wanted to go before we even got to these
53:20
so we'll just go straight to it. It's actually a bit of a right turn of a question
53:24
given the conversation, that you mentioned Netanyahu at one point. So the question here on the card says you're Israeli
53:32
Do you really think that the Israel-Palestinian conflict is solvable? And I thought actually that was an important question
53:40
because it actually touches on some of these larger issues that you've raised about humanity
53:46
So on one level, absolutely, yes. It's not like one of these kind of mathematical problems
53:52
that we have a mathematical proof that there is no solution to this problem
53:59
No, there are solutions to the Israeli-Palestinian conflict because it's not a conflict about objective reality
54:07
It's not a conflict about land or food or resources. You say, okay, there is just not enough food
54:14
somebody has to starve to death. Or there is not enough land, somebody has to be thrown in the sea
54:20
This is not the case. There is enough food between the Mediterranean and Jordan to feed everybody
54:26
There is enough energy. There is even enough land. Yes, it's a very crowded place, but technically
54:32
there is enough space to build houses and synagogues and mosques and factories and hospitals
54:38
and schools for everybody. So there is no objective shortage. But you have people each in their own
54:45
information cocoon, each with their own mass delusion, each with their own fantasy
54:52
basically denying either the existence of the other side or the right of the other side to exist
55:01
And the war is basically an attempt to make the other side disappear. Like my mind
55:10
has no space in it. It's not that the land has no space for the people
55:15
My mind has no space in it for the other people, so I will try to make them disappear
55:23
The same way they don't exist in my mind, they also must not exist in reality
55:28
And this is on both sides. And again, it's not an objective problem
55:34
It's a problem of what is inside the minds of people. And therefore, there is a solution to it
55:41
Unfortunately, there is no motivation. What is the solution? For someone who grew up there, who's lived there, you're not there right now, right
55:50
I'm here right now. You're here right now. Again, if you go back, say, to the two-state solution, it's a completely workable solution in objective terms
56:03
and you can divide the land and you can divide the resources so that side by side
56:10
you have a Palestinian state and you have an Israel and they are both, they both exist and
56:16
they are both viable and they both provide security and prosperity and dignity to their
56:22
citizens, to their inhabitants. So how would you, how would you do that though? Because part of the
56:27
part of the issue is, it's a chicken-and-egg issue here. It seems like where the Israelis or Netanyahu
56:33
would say, look, unless we are fully secure and we feel completely good about our security
56:39
we can't really even entertain a conversation just about anything else. Right. And the Palestinians on the other side effectively say the opposite, which is to say
56:49
that you need to solve our issues here. The problem is that both sides are right
56:55
Each side thinks that the other side is trying to annihilate it and both are right
57:02
And both are right. But that's a problem if both are right. This is the problem, yes
57:07
The place to start, again, is basically in our minds. It's very difficult to change the minds of other people
57:15
The first crucial step is to say these other people, they exist, and they have a right to exist
57:25
The issue is that both sides suspect that they don't think that we should exist
57:31
Any compromise they would make is just because they are now a bit weak
57:38
So they are willing to compromise on something, but deep in their hearts, they think we should not exist
57:45
So that sooner or later, when they are stronger, when they have the opportunity, they will destroy us
57:51
And again, this is correct. This is what is in the hearts and minds of both sides
57:56
So the place to start is, you know, first inside our own heads, to come to recognize that the other side exists and it has a right to exist
58:10
That even if someday we'll have the power to completely annihilate them, we shouldn't do it because they have a right to exist
58:17
And this is something we can do for us. Let me ask you a question. I think people in Israel would say that people in Palestine have a right to exist
58:28
Okay. So you don't believe that, you think that's not the case
58:32
Unfortunately, for a significant percentage of the citizens of Israel, and certainly of the members of the present governing coalition, this is not the case
58:44
And you think they have no right to exist? Because I was going to say, they would say that they feel that the Palestinians want to annihilate them
58:53
And do you believe that? Yes. Again, they think that the Palestinians want to annihilate us, and they are right
59:00
They do want to annihilate us. But also, again, I would say, I don't have the numbers to give you
59:06
but a significant part of the Israeli public and a very significant part of the current ruling coalition in Israel
59:13
Well, they want to drive the Palestinians completely from the land. Again, maybe they say we are now too weak, so we can't do it
59:23
We have to compromise. We can only do a little bit. But ultimately, this is what they really think and want
59:31
And at least some members of the coalition are completely open about it
59:35
And their messianic fantasy, they actually want a bigger and bigger war
59:41
they want to set the entire Middle East on fire, because they think that when the smoke would clear
59:48
yes, there will be hundreds of thousands of casualties, it will be very difficult, it will be terrible
59:52
but in the end, we'll have the entire land to ourselves, and there will not be any more Palestinians between
59:59
the Mediterranean and the Jordan. Let me ask you one other very political question as it relates to this
1:00:06
and then actually we have a great segue back into our conversation about AI as it happens
1:00:12
Politically, here in the U.S., we have an election, and there's some very interesting questions about how former President Trump
1:00:20
if he were to become the president, would be good or bad for Israel
1:00:25
I think there's a perception that he would be good for Israel. You may disagree. And do you think Vice President Harris, if she were the president, would be good or bad for Israel, or good or bad for the Palestinians
1:00:39
How do you see that? And I can't predict what each will decide, but it's very clear that President Trump is undermining the global order
1:00:51
He's in favor of chaos. He is in favor of destroying the liberal global order that we had for the previous couple of decades
1:01:03
and which provided, with all the problems, with all the difficulties, the most peaceful era in human history
1:01:14
And when you destroy order and you have no alternative, what you get is chaos
1:01:20
And I don't think that chaos is good for Israel or that chaos is good for the Jews, for the Jewish people
1:01:29
And this is why I think that it will be very, even from a very kind of transactional, very narrow perspective
1:01:37
you know, America is a long, long way from Israel, from the Middle East
1:01:42
and an isolationist America that withdraws from the world order is not good news for Israel
1:01:51
So what do you say to the, there are a number of American Jews who say, and I'm Jewish
1:01:57
and this is not something that I say, but I hear it constantly. They say
1:02:00
Trump would be good for the Jews. Trump would be good for Israel. He would protect Israel
1:02:06
In what way? He would have an open checkbook and send military arms and everything that was asked
1:02:18
no questions, as long as it serves his interests. And if at a particular point it serves his
1:02:24
interests to make a deal with Putin or with Iran or with anybody at Israel's expense, he will do it
1:02:33
Fair enough. I mean, he's not committed. I asked the question to provoke an answer
1:02:41
Let me ask you this, and this is a great segue back into the conversation we've been having with the last hour
1:02:46
You've written about how humans have evolved to pursue power and knowledge, but power, I think, is the big focus
1:02:55
How does the pursuit of happiness fit into the narrative? And I don't think that humans are kind of obsessed only with power
1:03:05
I think that power is a relatively superficial thing in the human condition
1:03:12
It's a means to achieve various ends. It's not necessarily bad. It's not that power is always bad
1:03:19
No, it can be used for good. It can be used for bad. But the deep motivation of humans, I think, is the pursuit of happiness
1:03:27
and the pursuit of truth, which are related to one another because you can never really be happy
1:03:34
if you don't know the truth about yourself, about your life. Unfortunately, as often happens
1:03:42
the means becomes an end. That people become obsessed with power, not with what they can do with it. So very often they have immense power and they don't know what to do with it, or they do very bad things with it
1:03:59
This is an interesting one. What are some arguments for and against a future
1:04:03
in which humans no longer have relationships with other humans and only have relationships with AI
1:04:16
One thing to say about it is that AI is becoming better and better at understanding our feelings, our emotions, and therefore of developing relationships and intimate relationships with us
1:04:31
Because there is a deep yearning in human beings to be understood. We always want people to understand us, to understand how we feel
1:04:38
We want my husband, my parents, my teachers, my boss to understand how I feel
1:04:44
And very often we are disappointed. They don't understand how I feel
1:04:49
partly because they are too preoccupied with their own feelings to care about my feelings
1:04:56
AIs will not have this problem. They don't have any feelings of their own
1:05:02
and they can be 100% focused on deciphering, on analyzing your feelings
1:05:10
So, you know, in all these science fiction movies in which the robots are extremely cold and mechanical
1:05:16
and they can't understand the most basic human emotion, it's the complete opposite
1:05:23
Part of the issue we are facing is that they will be so good
1:05:28
at understanding human emotions and reacting in a way which is exactly calibrated
1:05:34
to your personality at this particular moment that we might become exasperated with the human beings who don't have this capacity
1:05:45
to understand our emotions and to react in such a calibrated way
1:05:53
There is a very big question, which we didn't deal with, it's a long question, of whether AIs will develop emotions, feelings of their own
1:06:02
whether they become conscious or not. At present, we don't see any sign of it
1:06:06
But even if AIs don't develop any feelings of their own, once we become emotionally attached to them, it is likely that we would start treating them as conscious entities, as sentient beings, and we'll confer on them the legal status of persons
1:06:29
In the US, there is actually a legal path already open for that
1:06:36
Corporations, according to US law, are legal persons. They have rights. They have freedom of speech, for instance
1:06:43
Now, you can incorporate an AI. When you incorporate a corporation like Google or Facebook or whatever
1:06:52
until today, this was to some extent just make-believe, because all the decisions of the corporations
1:06:59
had to be made by human beings, by the executives, the lawyers, the accountants
1:07:06
What happens if you incorporate an AI, it's now a legal person
1:07:11
and it can make decisions by itself. It doesn't need any human team to run it
1:07:16
So you start having legal persons, let's say in the US, which are not human
1:07:22
and in many ways are more intelligent than us, and they can start making money
1:07:27
For instance, going on TaskRabbit and offering their services to various things
1:07:32
like writing texts, so they earn money. And then they go to the market
1:07:37
they go to Wall Street and they invest that money. And because they're so intelligent, maybe they make billions. And so you have a situation in which perhaps the richest person in the U.S. is not a human being
1:07:54
And part of their rights is that they have, under the freedom of speech, the right to make
1:08:00
political contributions. So this AI person can contribute billions of dollars to some candidate in exchange for getting more rights for AIs
1:08:14
So there'll be an AI president. I'm hoping you're going to leave all of us maybe on a high note here with something optimistic
1:08:23
Here's the question. And it's my final question. Humans are not always able to think from other perspectives
1:08:31
Is AI able to think from multiple perspectives? I think the answer is yes
1:08:36
But do you think that AI will actually help us think this way
1:08:42
That's one of the positive scenarios about AI, that they will help us understand ourselves better
1:08:52
That their immense power will be used not to manipulate us, but to help us
1:08:59
And we have historical precedents for that. Like we have relationships with humans like doctors, like lawyers, accountants, therapists that know a lot of things about us
1:09:16
Some of like our most private information is held by these people
1:09:20
and they have a fiduciary duty to use our private information and their expertise to help us
1:09:30
And this is not new, you don't need to reinvent the wheel. This is already there. And it's obvious that if they use it to manipulate us
1:09:37
or if they sell it to a third party to manipulate us
1:09:43
this is basically against the law. They can go to prison for that. And we should have the same thing with AI
1:09:49
And we talked a lot about the dangers of AI, but obviously AI has enormous positive potential
1:09:55
Otherwise, we would not be developing it. It could provide us with the best healthcare in history
1:10:00
It could prevent most car accidents. It can also, you know, you can have armies of AI doctors and teachers and therapists who help us, including help us understand our own humanity
1:10:17
our own relationships, our own emotions better. This can happen if we make the right decisions
1:10:24
in the next few years. I would end maybe by saying that, again, it's not that we lack the power
1:10:34
At the present moment, what we lack is the understanding and the attention
1:10:39
This is potentially the biggest technological revolution in history, and it's moving extremely
1:10:46
fast. That's the key problem. It's just moving extremely fast. If you think about the U.S
1:10:52
elections, the coming U.S. elections, so whoever wins the elections over the next four years
1:10:58
some of the most important decisions they will have to make would be about AIs and regulating
1:11:05
AIs and AI safety and so forth. And it's not one of the main issues in the presidential debates
1:11:13
it's not even clear what is the difference, if there is one
1:11:18
between Republicans and Democrats on AI. So on specific issues, we start seeing differences
1:11:24
when it comes to issues of freedom of speech and regulation and so forth. But about the broader question
1:11:29
it's not clear at all. And again, the biggest danger of all is that we will just rush forward without thinking
1:11:40
and without developing the mechanisms to slow down or to stop if necessary
1:11:48
You know, if you think about it like a car. So when they taught me how to drive a car, the first thing I learned is how to use the brakes
1:11:59
That's the first thing I think they teach most people. Only after you know how to use the brakes
1:12:05
they teach you how to use the fuel pedal, the accelerator. And it's the same when you learn how to ski
1:12:11
I never learned how to ski, but people who have learned told me. The first thing they tell you is how to stop or how to fall
1:12:18
It's a bad idea to first teach you how to kind of, okay, go faster
1:12:23
And then when you're down the slope, they start shouting, okay, this is how you stop
1:12:28
And this is what we are doing with AI. Like you have this chorus of people in places like Silicon Valley
1:12:35
Let's go as fast as we can. If there is a problem down the road, we'll figure it out, how to stop
1:12:41
That's very, very dangerous. Yuval, before you go, let me ask you one final, final question, and it's news we could all use
1:12:48
You are writing this whole book about AI and technology, and you do not carry a smartphone
1:12:56
Is this true? I have a kind of emergency smartphone because of various services
1:13:01
How does your whole life work? But I don't carry it with me. I told you, you do not carry a phone
1:13:06
You don't have an email, the whole thing. No, I have email. You have email. And I try to use technology, but not to be used by it
1:13:16
And part of the answer is that I have a team who is carrying the smartphone and doing all that for me
1:13:22
So it's not so fair to say that I don't have it
1:13:29
But I think on a bigger issue, what we can say, it's a bit like with food
1:13:36
That 100 years ago, food was scarce. So people ate whatever they could
1:13:43
And if they found something full of sugar and fat, they ate as much of it as possible because it gave you a lot of energy
1:13:51
And now we are flooded by enormous amounts of food and junk food, which is artificially pumped full of fat and sugar and is creating immense health problems
1:14:05
And most people have realized that more food is not always good for me and that I need to have some kind of diet
1:14:13
And it's exactly the same with information. We need an information diet
1:14:18
that previously information was scarce, so we consumed whatever we could find
1:14:24
Now we are flooded by enormous amounts of information, and much of it is junk information
1:14:30
which is artificially filled with hatred and greed and fear. And we basically need to go on an information diet
1:14:39
and consume less information and be far more mindful about what we put inside
1:14:48
that information is the food for the mind, and you feed your mind with unhealthy information
1:14:55
you'll have a sick mind. It's as simple as that. Well, we want to thank you, Yuval
1:15:01
Add Nexus to your information diet, because it's an important document about our future and our world
1:15:09
I want to thank you for this fascinating conversation. Thank you, and thank you for all your questions
1:15:18
#Ethics
#Machine Learning & Artificial Intelligence