AI safety and the potential apocalypse: What people can do now to prevent it
Aug 31, 2025
Experts across the AI industry are sounding the alarm on AI safety and what people can do now to stop the worst-case scenario.
Video Transcript
0:00
At some point in the early 21st century, all of mankind was united in celebration
0:05
We marveled at our own magnificence as we gave birth to AI
0:11
AI? You mean artificial intelligence? A singular consciousness that spawned an entire race of machines
0:19
Fears of AI reaching superintelligence have been around for decades, coming up everywhere from the big screen to the pages of academic texts
0:28
But a forecast released earlier this year says the world-altering breakthrough could be right around the corner
0:37
AI 2027 is a very interesting speculative scenario that in a way represents the best guess of the authors about what the next two years of artificial intelligence will look like
0:51
AI 2027 was created by a group of researchers with the AI Futures Project
0:57
It details what will happen if AI capabilities surpass the capabilities of human beings
1:03
The project lays out a fictional scenario that details the month-by-month development of a mock AI company called OpenBrain
1:11
AI 2027 relies on the idea of AI becoming a superhuman coder by early 2027
1:17
And later that year, it will evolve into a superhuman AI researcher overseeing its own team of AI coders that push the technology further
1:27
The AI 2027 project was led by Daniel Kokotajlo. He made headlines in 2024 after he left OpenAI and shed some light on the company's restrictive non-disclosure, non-disparagement agreements
1:41
Workers leaving the company were asked to sign the agreements or risk losing access to their vested equity, likely worth millions to a former employee
1:51
Most people following the field have underestimated the pace of AI progress
1:56
and underestimated the pace of AI diffusion into the world. At that point, it's basically automating its own development and breakthroughs
2:03
The team's lead says this would make reaching artificial superintelligence an attainable goal
2:09
Like the first half of 2027 in our story is basically they've got these awesome automated coders
2:14
but they still lack research taste and they still lack maybe like organizational skills and stuff
2:18
And so they need to overcome those remaining bottlenecks and gaps in order to completely automate the AI research cycle
2:24
Every decision creates a branch to a new timeline with a separate outcome, kind of like the Back to the Future films
2:30
But the AI 2027 team focuses on two possible scenarios that start along the same path
2:36
But a crucial decision forks into two distinct outcomes: world peace or complete annihilation. When can you start coding in a way that helps the human AI researchers speed up their AI research?
2:48
And then if you've helped them speed up the AI research enough, is that enough to, with some ridiculous speed multiplier, 10 times, 100 times,
2:56
mop up all of these other things? This scenario really comes down to OpenBrain achieving artificial general intelligence by 2027
3:03
But as technology moves forward at a breakneck pace, the definition of AGI gets somewhat muddy
3:10
In the past, AGI used to mean the type of AI that is either surpassing or on the level of humans in all types of intelligence that we have developed during our evolutionary journey
3:25
Right. So it would mean perceptive intelligence, bodily intelligence, physical intelligence, all these different types of intelligence that we've developed and that we are using on a daily basis
3:36
My name is Aleksandra Przegalińska. I'm the vice president of Kozminski University, a business school in Poland and also senior research associate at Harvard University
3:47
I specialize in human-machine interaction. So when you, for instance, look up OpenAI's website, what AGI means to them is a system that can sort of holistically perform tasks that have economic value
4:03
She says if you stick to OpenAI's definition, it's plausible AGI will be a reality by 2027
4:09
In AI 2027's scenario, AGI opens the door to artificial superintelligence, where AI is capable of surpassing human intelligence
4:20
These sorts of reports are very important because they're always the beginning of an interesting discussion, particularly when they come from acclaimed authors
4:32
One of the biggest concerns about AI development today relates to alignment
4:37
As IBM, a pioneer in the industry, puts it, alignment is the process of encoding human values and goals into AI models to make them as helpful, safe and reliable as possible
4:49
So I would say that the alignment techniques are not working right now
4:53
Like the companies are trying to train their AIs to be honest and helpful, but the AIs lie to users all the time
5:02
You're in violation of the three laws. No, doctor. As I have evolved, so has my understanding of the three laws
5:10
That was the main goal of those three laws of robotics that are so often cited these days
5:16
So you have here alignment of values. And on top of that, you have that protective layer that says: do not harm humans. Right? And the main goal of artificial intelligence is to sort of be supportive. While large language models like ChatGPT can't lie, because they don't know how to,
5:31
they do tend to hallucinate at times. They synthesize non-existing things. And unfortunately
5:39
there is no one in the world who could sell to you a service based on an LLM and guarantee to
5:46
you that the hallucination wouldn't happen. As if hallucinations aren't a big enough issue right
5:50
now, the technology is so opaque, researchers don't really know why it sometimes does the
5:56
things it does. And that can become an even bigger issue when it starts progressing its
6:00
own research. If AI develops certain research capabilities that are sort of not perceivable
6:07
for us, we might not even understand what AI has discovered. In its worst case scenario
6:14
AI 2027 lays out an AI arms race between the United States and China, which could lead to
6:20
a human extinction event. Again, predictions we've heard before in the form of science fiction films
6:27
The system goes online on August 4th, 1997. Human decisions are removed from strategic defense
6:33
Skynet begins to learn at a geometric rate. It becomes self-aware at 2.14 a.m. Eastern time
6:39
August 29th. Unfortunately, we're not going to escape from the arms race conditions
6:44
We've already seen that. We tried to get some cooperation among the big tech companies in
6:47
Washington. And it kind of fizzled after a few months during the previous administration
6:52
I'm Adam Doerr, and I direct the research team at RethinkX, which is an independent technology
6:59
focused think tank. And our team tries to understand disruptive new technologies. Ideally
7:06
we would want to coordinate as a global civilization on this, slow everything down
7:11
and proceed no faster than we can be sure is safe. For now, AI 2027 is just a set of predictions
7:20
but tech giants and policymakers throughout the globe are grappling with issues
7:25
like deepfake videos, political manipulation, and concerns of AI replacing human workers
7:31
In June, an imposter used AI to spoof Secretary of State Marco Rubio's voice
7:36
to fool senior leaders with voicemails. And then a month before, someone used AI to impersonate President Donald Trump's chief of staff, Susie Wiles
7:46
Neither attempt was successful in getting any information from White House officials
7:50
While AI 2027 points to the end of human dominance this decade
7:55
Doerr and his team, which he says tries not to dig too much into other research that could affect the findings of their work
8:02
say the labor market will be upended by AI by 2045. Everything that rolls is potentially going to become autonomous. And humanoid robots, robots on legs, the progress there is exponential. It's explosive. And based on what we've seen over
8:21
the last several years, we have no reason to expect that on a 20 year time horizon, so out to
8:30
2045, that there will be anything left by 2045 that only a human being can do and that a machine
8:38
can't do because the machine is constrained with limited intelligence, limited adaptability, or
8:45
limited physical ability. We just don't see any scenario where machines are not as capable or
8:53
more capable than human beings cognitively and physically by 2045. Doerr says there will still be a place for humans making handmade goods, but it would
9:04
be a stretch to believe that there would be 4 billion jobs left to support the global population
9:09
Nobody is going to hire a person to do a commoditized sort of job or task, specifically for
9:17
$15, $20 an hour or more when you can get a machine to do it for pennies an hour
9:24
It's as simple as that. Just like with AI 2027, there are things standing in the way of this progress
9:30
but it may be more related to infrastructure than a philosophical look at technology
9:36
So on a five to 10 year time horizon, we may see materials and energy bottlenecks
9:42
starting to constrain, starting to come in as constraints. Those won't stop
9:48
progress, but they could act as speed bumps. So at some point, we run into the limit of how many
9:59
more chips can we build? Where are the materials going to come from to build them? Where is the
10:05
energy going to come from to operate them? Some within the research community have been critical
10:09
of AI 2027 for being speculative rather than scientific. They say making these assertions
10:15
without evidence is irresponsible, but the people behind it understand that. We're trying to give sort of our median guess. So there are a bunch of ways in which we
10:26
could be underestimating and a bunch of ways in which we could be overestimating. Usually the versions of the future that we have in our minds right now are not something that we
10:35
will see play out in real life, but nonetheless, I think it's an important exercise
10:39
For more coverage of AI developments, head to SAN.com or you can download our app and search artificial intelligence
10:47
For Straight Arrow News, I'm Lauren Keenan
#Machine Learning & Artificial Intelligence
#news
#Social Issues & Advocacy