I'm Yuval Noah Harari. I'm a professor of history at the Hebrew University of Jerusalem and the author of Nexus, a history of information networks from the Stone Age to AI.

The key question of Nexus is: if humans are so smart, why are we so stupid? Why are we on the verge of destroying ourselves? We have managed to reach the moon, to split the atom, to decipher DNA, and yet with all our knowledge and wisdom, we are on the verge of ecological collapse, perhaps of a third world war, and we are also developing an extremely powerful technology, AI, which might get out of our control and enslave or destroy us.

So the book explores this strange dynamic in human history between our knowledge and wisdom and our self-destructiveness. This question has often been raised, and the traditional theological and mythological answer is that there is something wrong in human nature, something that makes us self-destructive. The answer that Nexus gives is different. The problem is not in our nature. The problem is in our information. Humans, yes, we are generally good and wise. But if you give good people bad information, they make bad decisions.

What the book explores is why the quality of our information did not improve over thousands of years of history: why even modern, very sophisticated societies in the 20th and 21st centuries have been as susceptible as Stone Age tribes to mass delusion and psychosis and to the rise of destructive ideologies like Stalinism or Nazism.
Chapter 1. The Rise of Alien Intelligence

Storytelling has always been important, from the Stone Age to the 21st century. Whenever a large number of people are trying to cooperate on something, whether it is to hunt a mammoth or to build an atom bomb, just knowing the facts about the objective world, about objective reality, is not enough. If you want, for instance, to build an atom bomb, you need to know some facts about physical reality. You need to know that E equals mc squared. If you try to build a bomb and you ignore the facts of reality, the bomb will not explode. But just knowing facts is not enough, because in order to build an atom bomb, you need millions of people to cooperate on the project.

You need physicists to write complicated equations, but you also need miners to mine uranium in distant places around the world. You need engineers and builders to build the reactor and the other facilities. And you need farmers to grow potatoes and rice and wheat so that all the physicists and engineers and builders and cleaners and plumbers in the nuclear facility will have something to eat. If they have to go themselves and grow potatoes and then come back to the reactor to do all their experiments, it won't work. So you really need hundreds of thousands, if not millions, of people cooperating on that.

Now, if you just tell them the facts of physics, that E equals mc squared, this is not going to motivate anybody to cooperate on the project. And this is where storytelling comes into the picture. What really motivates people could be religious stories, mythologies and theologies, or secular ideologies like communism or capitalism. And it's always the people who are experts in storytelling that give the orders to the people who merely know the facts of nuclear physics.
So in Iran today, the nuclear scientists get their orders from experts in Shiite theology. In the Soviet Union in the 1950s, the experts in nuclear physics got their orders from experts in communist ideology. And even in a completely free market economy, there are still stories at the basis of the system, because money and corporations are also stories that humans invented; they are not physical facts.

If you consider, for instance, a dollar bill, it has no objective value whatsoever, at least not for human beings. Maybe termites can eat it, but humans can't eat dollars. They can't drink them. There is nothing useful you can do with them. They nevertheless have value because the greatest storytellers in the world, the finance ministers, the bankers, the investors, tell us a story that this piece of paper has value: I can use it to buy bread or potatoes or bananas or anything else. And as long as millions of people believe in this story, they are willing to work, for instance, on constructing a nuclear reactor, because at the end of the month they get these few colorful pieces of paper. And today, of course, it's not even paper. Most of the money in the world today is not paper notes and metal coins. It's just digital information moving between computers. But as long as people still have trust in the story about this digital information, it works. People are willing to work hard for a whole month or a whole year just in order to get a few bits of data in their bank account.
When we think about this meeting between storytelling, which is a very ancient human capacity going back tens of thousands of years, and the new technology of AI, I don't think we should start with words like risk or threat or danger. It's better to just understand the immense importance of what is happening right now.

Throughout history, for tens of thousands of years, the only entities that could invent stories, whether stories about gods or stories about money, were human beings. So we lived cocooned inside a cultural world constructed by the human imagination. If you read a holy book, or an economic theory, or you listened to a song or a piece of music, it came out of the mind of another human being. We lived in a human world. Now, for the first time in history, there is another entity, another agent out there, that can create stories, economic theories, new kinds of currencies, music, poems, images, videos. And this new entity is AI.

What happens to human society, what happens to human life, if we increasingly live our lives cocooned inside the cultural artifacts coming from a non-human intelligence, from an alien intelligence? And, you know, the acronym AI traditionally stood for artificial intelligence. But I think it's more accurate to think about it as an acronym for alien intelligence. Because "artificial" gives the impression that this is an artifact that we create and control. With every passing year, AI is becoming less and less artificial and more and more alien, in the sense that we can't predict what kind of new stories and ideas and strategies it will come up with.
It thinks, it behaves, in a fundamentally alien way. I'll give two examples to clarify this, because there is a huge confusion around AI today. There is so much hype around AI, especially in the market. If you want to sell something to people today, you call it AI. So everything now becomes AI, and then people don't understand: so what is it?

If you think, let's say, about somebody trying to sell you a coffee machine, and they tell you this is an AI coffee machine, how do you know what that means and whether it's true? Not every automatic machine is an AI. If you press a button on this coffee machine, let's say for espresso, and the machine provides you with a cup of espresso, this is not AI. It simply follows the pre-programmed orders of its human creators. The hallmark of AI, what makes AI AI, is that it is able to learn and change by itself, and come up with decisions and ideas that we don't anticipate, can't anticipate.

So if you approach the coffee machine, and before you press any button the coffee machine tells you, "Hi, hello, I've been watching you for the last month, and based on all the information I gathered on you and on many other users, and based on the time of day and your facial expression and whatever, I predict that you now want an espresso, so I took the liberty to already prepare for you a cup of espresso," this is an AI. And if it now goes a step further and says, "I actually invented a new drink that you've never tasted before. I call it Bespresso. And I also took the liberty to prepare it for you, because I think you would like it," this is an AI coffee machine. And this is not just theory.
We are seeing it all around us. One of the key moments in the AI revolution, back in 2016, was when AlphaGo defeated Lee Sedol, the world champion at the game of Go. Go is a strategy board game, much more complex than chess, invented more than 2,000 years ago in China, and it became a cultural treasure in East Asia. For more than 2,000 years, tens of millions of people in East Asia played Go, and entire schools of thought, entire philosophies, evolved around this game, because it was seen as a mirror for life and as a good preparation for politics and for making decisions in the world. And people thought that we knew how to play Go.

Then AlphaGo taught itself how to play Go, and within a few weeks surpassed the wisdom accumulated by humanity, by tens of millions of people, over more than 2,000 years. The most amazing thing about its victory was that it used a strategy which was considered beyond the pale. When it played its crucial moves, Go experts didn't understand: what is this? Nobody plays Go like that. And it turned out to be a brilliant strategy. It also turned out that for more than 2,000 years, our human minds had explored only a very limited part of the landscape of Go. If you imagine all the ways you can play Go as a kind of planet with a geography, then humans were stuck on one island on the planet Go for more than 2,000 years, because human minds just couldn't conceive of going beyond this small island. Then AI came along, and within a very brief time it discovered entire new continents on the planet Go. And this is likely to happen in more and more fields: in finance, in art, in politics, in religion. So before we think about it in terms of risk or threat or opportunity, just think what it would mean to live on a planet which is increasingly shaped by the stories and the products of an alien intelligence.

Chapter 2. How Information Technology Shapes Society
Every time a new information technology was invented, it completely changed society, politics, culture. About 5,000 years ago, one of the most important revolutions in information technology occurred with the invention of writing. Now, from a technical perspective, it didn't seem like much, because the invention of writing, and we are in ancient Mesopotamia, what is today Iraq, about 5,000 years ago, basically involved mud and a stick. People started taking clay tablets, and clay is essentially just mud, and they take a stick, a reed, and they imprint signs on the clay tablet, and then preserve the clay tablet, and this is a document. It preserves records of various things. So this is the invention of writing: people playing with mud.

And this had immense impact. To give just one example, think about ownership. What does it mean to own something? Let's say I have a field. What does it mean that this field is mine? If you live in ancient Mesopotamia, or anywhere else in the world before writing, ownership means a communal agreement among my neighbors, the people in my village, that this field is mine. So they don't graze their goats there, and they don't pick fruits there without my permission. But because ownership means an agreement of the community, it limits the power of the individual. I can't sell my field to someone else unless I get the agreement of my neighbors, because they decide who owns what field. It also means that a distant king in some capital city, a thousand kilometers away, can't know who owns what, because there are no records. He can't know which field in which village belongs to whom. So it makes it very difficult to tax property, which makes it very difficult to build large kingdoms and empires.
Then mud comes along. Writing: you have these clay tablets. Suddenly, to own a field means that there is a piece of dry mud somewhere with some signs on it which says this field is mine. And this means that now I can sell my field to someone without getting the permission of my neighbors, because to transfer the field to that other person in exchange for, I don't know, some gold, I don't need the agreement of the neighbors. I just give the person this piece of clay, of dry mud. This is ownership. It also means that the king in the distant capital can now create an archive of all the property records in lots and lots of villages, and he has bureaucrats who know how to read his clay tablets. They know who owns each field in numerous villages. You can start to have taxation systems; you can start to have kingdoms and empires. So paradoxically, in this case, and there are many other influences, but in this case, the invention of the written document on the one hand empowers the individual and creates the basis for private property rights, and on the other hand creates the basis for large-scale authoritarian systems of kingdoms and empires.
We jump 5,000 years, from ancient Mesopotamia to the 20th century. The rise of mass media and mass information technology, telegraph and radio and television, on the one hand formed the basis for large-scale democratic systems, and on the other hand for large-scale totalitarian systems. Before the rise of modern information technology, it was impossible to create either large-scale democracies or large-scale totalitarian regimes, totalitarian regimes meaning regimes that try to control the totality of people's lives.

Ancient kings in Mesopotamia, or Roman emperors, or Chinese emperors, had a very limited capacity to collect information on the people in their kingdom. So yes, they raised taxes, and they used the taxes to pay for soldiers and build armies, but they could not micromanage the social and economic and cultural lives of every individual in the country. They didn't have the information necessary to do it. Large-scale totalitarianism appears in the 20th century for the first time, in the Soviet Union after the Bolshevik Revolution. And it's based on exactly the same technology that at exactly the same time led to the rise of the first mass democracies in the United States and the United Kingdom and other places around the world.
Chapter 3. The Rise of Inorganic Information

All information technologies up to the 21st century were organic networks, because ultimately it was all based on our organic brain. And this had a lot of implications. Organic entities live by cycles. We run by cycles. Sometimes it's day, sometimes it's night. There is winter and summer, there is growth and decay. There are times for activity, and then there are times for sleep and for rest. All information networks previously in history had these cycles. Even if you think about the financial markets, think about Wall Street. Wall Street, until today, has obeyed this organic logic. The market is open only Mondays to Fridays, from 9:30 in the morning, I think, until 4 o'clock in the afternoon. And then the weekend is off. This is how organic beings function. Even bankers and investors and financiers, as long as they are humans and not algorithms, need time to rest, and time to be with their family, with their friends. So the market takes a rest. And another thing is that there is always time off, and therefore there is also always private time.
Until the rise of AI, even the most totalitarian regimes, like the Soviet Union, could not monitor, could not surveil, everybody all the time. The Soviet Union did not have enough KGB agents to follow every Soviet citizen 24 hours a day. And even if it had somehow managed to follow all the people all the time, it didn't have enough analysts to go over all the information and make sense of it. Even if a KGB agent saw you do something and wrote a report about it, there was a very high chance that this report would just accumulate dust in the archives of the KGB, because there weren't enough analysts to read the millions and millions of reports written every day about all Soviet citizens. So organic information networks always run by cycles: there is always time to rest, and there is always a measure of privacy.

We now see the rise of a new type of information network, which is inorganic, which is based on AI. It need not have any breaks. It never rests. And there is no privacy, potentially. It could completely annihilate privacy. Computers don't care if it's night or day, if it's summer or winter. They don't need vacations; they don't have families they want to spend time with. They are always on, and therefore they might force us to be always on, always being watched, always being monitored. And this is destructive for organic animals like ourselves. If you force an organic being to be on all the time, it eventually collapses and dies. We see it happening all around us, with a 24-hour news cycle that never rests. The markets never rest. Politics never rests. So the people involved in these occupations can never really rest, and this takes a toll on them. It's very, very difficult, and it will soon become impossible. Anything you do or say at any time might be watched and recorded, and then it can meet you down the line, 10 or 20 years in the future. You do something stupid but legal at a college party today, when you're 18, and maybe in 20 years, when you run for political office or you want to be a judge or whatever, it's there. So basically the whole of life is becoming like one long job interview. Anything you do at any moment is part of your job interview 20 years from now.
Now, all this is made possible by the fact that AI is the first technology in history that can take decisions by itself. Until today, all our big information networks were managed, were populated, by human bureaucrats. Whether it's government offices or corporations or armies or banks or schools, all the decisions ultimately had to be made by the organic brain of a human being. Now AI has the capacity to make decisions by itself. So what we are facing is not, you know, a Hollywood science fiction scenario of one big evil computer trying to take over the world. No, it's nothing like that. It's more like millions and millions of AI bureaucrats that are given more and more authority to make decisions about us: in banks, in armies, in governments.

And again, there is good potential in that as well. They could provide us with the best health care in history. But there are, of course, huge risks. When power shifts from organic humans to these alien, inorganic AIs, it just becomes more and more difficult for us to understand the decisions that shape our life. What happens if you can no longer understand why the bank refused to give you a loan,
why the government or the army did this or did that? This is the world that we are entering.

A curious fact is that, at least in the United States, there is already a legal path open for AIs to become legal persons. Because in the U.S., unlike in other countries around the world, corporations are considered legal persons that even have rights like freedom of speech. Now, until today, this was a kind of legal fiction, because a corporation like Google could not make any decisions. Only the humans employed by Google made all the decisions. But now AI can make decisions by itself. So what happens if you incorporate an AI? You go through this legal process of incorporating an AI, and, I don't know, you call it Boogle. Now it's the Boogle Corporation. It has no human employees. It's run by an AI, and it is now a legal person that, according to U.S. law, has a lot of rights and freedoms.

So, for instance, it can open a bank account. Corporations open bank accounts; why can't the AI do it? It's a corporation. It can earn money. It can go online to websites like TaskRabbit and offer its services to clients, human or non-human, and earn money. Then it takes its money and invests it. And because it's so good at making investment decisions, it earns billions and billions. We could be in a situation where the richest person in the United States is not a human being. The richest person in the United States is an incorporated AI. And another thing the U.S. legal system allows is for these legal persons to make political donations, because it's considered part of freedom of speech. So now the richest person in the U.S. is giving billions of dollars to candidates in exchange for these candidates broadening the rights of AIs. This is no longer a kind of science fiction scenario. The legal and practical path to this situation is open.

Chapter 4. The Importance of Human Institutions
28:24
To deal with the era of AI, it should be clear that we cannot anticipate how this technology will develop over the next few decades
28:35
so it's impossible to kind of think about all the dangers in advance
28:40
and regulate against them or whatever. What we need is living institutions
28:46
staffed by the best human talent and with access to the best technology
28:52
that will be at the cutting edge of the technological development and will be able to identify and react
29:00
to dangers and threats as they arise. So I'm not talking about rigid regulation in advance
29:08
I'm talking about the need for new institutions. Because you can never rely on just the letter of the law or on a charismatic individual, some genius to do it
29:21
In history, humans again and again encounter these problems. And it always goes back to the same solution, institutions
29:28
And in good institutions, they are characterized by having strong self-correcting mechanisms
29:37
A self-correcting mechanism is a mechanism that allows an entity, a human being, an animal or an institution
29:45
to identify and correct its own mistakes. You don't have to rely on the environment, on something out there to correct your mistakes
29:55
You can correct your own mistakes. This is a basic feature of any
29:59
functioning organism. Like how does a child learn to walk? Yes, the child gets some instruction
30:07
from parents, from teachers, but mostly it's self-correction. You try to walk, you fall down
30:17
you get up again, you try something else, you again fall down, you get up again, and
30:22
step by step you learn how to walk by identifying and correcting your own mistakes
30:28
And this goes all the way to entire countries. This is the heart of democratic systems, is this self-correcting
30:39
What are elections? Elections are a self-correcting mechanism. You give power to a certain party or individual: let's try your policies. After some time, if you think you made a mistake, that this was the wrong policy, this was the wrong party, there is another round of elections, and you can say: we made a mistake last time, let's try something else this time. In dictatorships, there is no such self-correcting mechanism. If Putin or Maduro makes a terrible mistake, there is no mechanism within Russia today that can identify and correct Putin's mistakes. When we come to the challenge of AI, what we need are institutions that are able to identify and correct their own mistakes, and the mistakes of AI, as the technology develops.

Another important example of a self-correcting mechanism is the way that modern science works, in contrast to traditional religions. Traditional religions were characterized by claiming to be infallible, that their holy book, their sacred tradition, never makes any mistake. And therefore there is no mechanism, for instance, in Christianity or Judaism, to identify and correct mistakes in the Bible. I'm not talking just about factual mistakes, but also moral mistakes. The Ten Commandments, for instance, endorse slavery. The Tenth Commandment says that you should not covet your neighbor's field, or your neighbor's ox, or your neighbor's slaves. According to the Tenth Commandment, God has no problem with people owning slaves. He just has a problem with people coveting the slaves of somebody else. No, no, no. That's not good. Now, even today, with everything that has changed since these words were written in the first millennium BCE, there is no mechanism to correct the text of the Bible. You can interpret it in different ways, but you can't change the text.
This is in contrast to what we find both in science and in modern democracies. The U.S. Constitution originally also enabled slavery, but the U.S. Constitution also had an amendment mechanism, a self-correcting mechanism, and eventually the people of the United States amended the Constitution to forbid slavery. And science works in an analogous way: whether you have a theory of how the planets move or of how organisms evolve, the whole of science really is a self-correcting mechanism. The only thing that scientific journals publish is corrections to previous publications. Religious publications, no: they publish again and again the same teachings. But academic journals, in history or physics or medicine, never publish the same thing twice. The only thing they publish is corrections, either to past mistakes or to past lacunae. If there is something in the theory of Newton which is either incomplete or mistaken, then they will publish Einstein's correction to Newtonian physics.

Every large human system is based on an unlikely marriage between mythology and bureaucracy. If you think about a country,
for instance, mythology explains the rationale: why should the country even exist? Every country, to convince its own citizens that it should exist, tells them some kind of national or religious mythology, like: we are God's chosen people, and we have some very special role here on earth. So this is the mythology part. It gives the motivation, the inspiration, the reason. But then, to actually have a functioning country, it's not enough that the citizens believe in the mythology that they are God's chosen people with some mission on earth. You also need to build roads and hospitals and armies and sewage systems. You know, no large-scale city, at least no modern city, can function without a sewage system if you want to avoid epidemics. And in order to build a sewage system, you need a lot of workers and engineers, and you need to pay them, so you need to collect taxes from the citizens. So again, here mythology comes back into the picture: mythology encourages people, or explains to people, why they should pay their taxes honestly, so that other citizens in our country will enjoy good healthcare services and a good sewage system that protects us from cholera and dysentery and so forth.

And when I talk about national mythologies, it should be clear that there is nothing wrong about it. Nationalism and patriotism have been among the best inventions in human history. Most other social mammals, all other social mammals actually, care only about a small circle of animals that they know personally, that they have an intimate connection with. And this was also true for our human ancestors hundreds of thousands of years ago. The miracle of nationalism and patriotism is that it makes us care about millions of strangers that we have never met in our lives. And again, nationalism is not about hating foreigners and wanting to kill the others. It's about loving our compatriots, and showing this love, for instance, by paying taxes honestly, so that other people in the country will be defended against cholera by building a sewage system.

Chapter 5. Information Isn't Truth

The biggest misconception about information
is that information is truth. But information isn't truth. Most information is not truth. Truth is a very rare and costly type of information. If you want truth, you need to invest a lot in getting it.

I'll give a historical example. Let's think about images and portraits. What is the most common portrait in the world? What is the most famous face in the world? It is Jesus. Over the last 2,000 years, people have created billions and billions of portraits of Jesus, and they hang in countless churches and cathedrals and private homes and so forth, and not a single one of them is an authentic depiction of Jesus. All of them, 100 percent, are fictional depictions, because we have no idea what Jesus looked like. There is not a single portrait made during his lifetime. There is not a single sentence in the Bible that tells us whether he was fat or thin, tall or short, black hair, blonde hair, nothing. So all these images are fiction. And it's very easy to create fictional information. You don't need to research anything. You don't need evidence. You just come up with something and draw it. If you want to paint a truthful picture of anything, of a person, of an economy, of a war, you need to invest a lot of time and effort and money in research, to make sure that you get it right.

So if we just flood the world with information and expect the truth to float up, it will not; it will sink. The more we flood the world with information, unless we make the effort to construct institutions that invest in truth, the more we will be flooded by fiction and illusion and delusion and junk information.
So most information is not truth. And what most information tries to do is gain power by creating order, not by spreading the truth. If you want millions of people to cooperate on something, the easiest way to do it is to create some fictional mythology or ideology and convince a lot of people to believe in it. And the way to do that is to bombard them with more and more stories and images of your favorite mythology or ideology. This is how you gain power. Now, you do need to know some truth: a system that is completely oblivious to truth will collapse, of course. But in this balance, how much truth do you need in order to construct the Soviet Union, and how much fiction and delusion do you need? You need a little truth and a lot of fiction. And this is true of most of the large-scale political systems that have existed throughout human history.

We tend to think about totalitarianism and democracy as different ethical systems, but they are different information networks. Information flows differently in totalitarian versus democratic networks. Totalitarian networks are centralized: all the information flows to just one place, where all the decisions are being made. And they lack strong self-correcting mechanisms. There was no mechanism in the Soviet Union to identify and correct the mistakes of Stalin. Democracies, in contrast, are distributed information networks with lots of self-correcting mechanisms. Decisions in the United States are not made only in Washington; just a small part of all decisions are made there. Most decisions are made by private businesses and voluntary associations and individuals and so forth. And there are lots of mechanisms to correct the mistakes, even of the most powerful politicians and corporations. So this is the key difference, in terms of information flow, between totalitarianism and democracy.
In the 20th century, totalitarian systems worked worse than democratic systems. When all the information flowed to just one place, just to Moscow
42:04
the people there were overwhelmed by all the flood of information and they could not make the right decisions
42:11
and there was no mechanism to correct their mistakes and eventually the system collapsed
42:16
A distributed information system was much better when the decision makers were human beings
42:24
But AI could give an advantage to totalitarian systems in the 21st century. Why? Because AI can process enormous amounts of information much faster and more efficiently than any communist bureaucrat
42:44
When you flood a human with too much information, the human collapses
42:48
When you flood an AI with information, the AI becomes better. So there is a scenario that it's not deterministic
42:57
it's not certain, but there is a scenario that totalitarian systems will become better in the 21st century because of AI. Still, the other problem of totalitarian systems, that they have no self-correcting mechanisms, is still applicable
43:15
even in the age of AI. It makes it even more dangerous. Imagine a totalitarian system relying on AI
43:22
in which the AI makes a mistake. And AIs are fallible. AIs are not God. They make mistakes
43:30
If you give all the power to a totalitarian AI and you have no way to correct its mistakes
43:37
this could prove catastrophic to the entire human civilization. For human dictators, AI is an especially big problem because for an AI to take power in
43:49
a dictatorship is much, much easier than to seize power in a democracy
43:54
because all power in the dictatorship is already concentrated in the hands of just one paranoid
44:01
leader. The AI needs to learn how to manipulate just this single individual in order to take power
44:08
in the country. So the dangers of AI taking power in a country are much bigger in dictatorships
44:16
than in democracies. In democracies, the big problem is very different. Democracy is a conversation
44:23
The whole meaning of democracy is that you have large numbers of people conversing about the issues of the day
44:31
Now, imagine a large group of people standing in a circle and talking
44:37
and suddenly a group of robots enters the circle and starts talking very loudly and very emotionally and persuasively
44:47
And you can't tell the difference who is a human and who is a robot
44:53
That is a situation we are now living through. And it is no coincidence that the democratic conversation is breaking down all over the world
45:03
because the algorithms are hijacking it. We have the most sophisticated information technology in history
45:10
and we are losing the ability to talk with each other, to hold a reasoned conversation
45:16
In order to protect the conversation between people, we need to ban bots from the conversation
45:24
We need to ban fake humans. AIs should be welcome to talk with us only if they identify as AIs
45:32
If you talk online with someone and you don't know whether it's an AI or a human
45:38
this will destroy the democratic conversation, so we need to ban that
45:42
And if we want to ensure that we get the truth, the only way to do it is to invest in institutions
45:50
like academic research institutions, like newspapers, that invest a lot of effort in
45:59
finding the truth. If we just expect that a flood of information will bring us the truth
46:05
it will not. It will drown the rare and costly kind of information, which is truth
46:11
by a deluge of fake and junk information. And as individuals, my best recommendation is to go on an information diet
46:24
the same way that people go on food diets. Information is the food of the mind
46:29
We have learned that it's not good to our body to eat too much food or too much junk food
46:37
So lots of people are very mindful what they feed their body
46:41
We should be equally mindful about what we feed our mind. More information isn't always good for you
46:49
It's actually good to, from time to time, take time for information fasts
46:54
When we don't put anything more in, we just digest and detoxify
47:00
And similarly, we should watch the quality of the information we feed our mind
47:05
If we feed our mind with all this junk information full of greed and hate and fear, we will have sick minds