The AI Doc: Or How I Became an Apocaloptimist (2026) Movie Script
(grand orchestral fanfare
playing)
(birds chirping and calling)
-(burbling)
-(trilling)
(VCR clicking and whirring)
(projector clicking)
Now, the present-day computers
are complete morons,
but this will not be true
in another generation.
They will start to think,
and eventually they will
completely outthink
their makers.
Is this depressing?
I don't see why it should be.
I suspect that organic
or biological evolution
has about come to its end.
(Daniel chuckles)
CAROLINE: You don't want me
to be the narrator.
I don't have a good voice.
DANIEL: You have a great voice.
Just do it.
It's just a--
we're just trying it out.
CAROLINE:
Do I get paid for this shit?
DANIEL (laughing): Well,
we'll see how well you do.
(Caroline laughing)
(dog barks)
CAROLINE:
I need to call my parents.
(engine starts)
Sometimes we rush into things
without thinking them through.
DANIEL: Yeah, can you sh...
you want to shoot? That's...
CAROLINE: Like when Daniel
and Caroline got married
151 days after they met.
Let's rewind.
(applause)
Have you been thinking about
buying a home computer?
CAROLINE:
In 1993, when Daniel was born,
his parents didn't even have
a computer in the house.
But he remembers
when they finally got one.
What a computer is to me is
the equivalent of a bicycle
for our minds.
Hello.
CAROLINE: His computer helped
unleash his creativity.
DANIEL:
Rolling. Go.
CAROLINE: He used
his dad's iMac to edit videos.
Yeah!
CAROLINE: And even make
little animations.
(tires squealing, crash)
In 1998, when Caroline was
a little girl...
Hi, Mommy.
...only nerds knew
what the Internet was.
And what about
this Internet thing?
-Do you, do you know anything
about that? -Sure.
-(audience laughter)
-CAROLINE: But soon,
-everyone was on the...
-...Internets.
...and computers were
beating humans at chess.
ANNOUNCER: Deep Blue--
Kasparov has resigned!
CAROLINE: And by the time
Caroline went to college,
computers were
in everyone's pocket.
Now Daniel could make movies
with his phone.
He grew up to be an artist
and a filmmaker...
-Good night.
-...and so did Caroline.
Cut. Let's go into a close-up.
That was perfect.
By then, computers were
connecting people
and changing the world in ways
that we never
could have imagined.
All the money raised by
the Ice Bucket Challenge...
CAROLINE:
Some of it was good.
(gasping)
Some not so good.
Anxiety, self-harm, suicide...
But it's clear now
that we didn't do enough
to prevent these tools from
being used for harm as well.
CAROLINE: And as the world
got more and more focused
-on their computers,
-(phone dings)
Daniel focused on his artwork.
Wherever he went,
he always had a sketchbook,
including the night
he met Caroline.
He drew a terrible picture
of her, and 20 minutes later,
he boldly pronounced
they were going to get married,
which, as you know, they did,
151 days later.
WOMAN: ...now pronounce you
husband and wife.
CAROLINE:
They moved into a cute house...
-DANIEL: Moose, hey! Come here!
-...and got a dog
as dumb as Daniel.
Meanwhile, computers were
writing entire essays
based on simple questions like,
"How hard would it be to build
a shack in my backyard?"
And so Daniel built a shack
in his backyard.
But just as he sat down to start
working on his next movie,
he learned that computers
could now write screenplays.
I mean, they were bad,
but they were getting better
really fast.
They could create new images
and videos from scratch,
and some of them
could even pass the bar exam.
Not just pass the bar but
be in the top ten percentile.
CAROLINE:
It was confusing.
(sighs)
Daniel just wanted
a bicycle for his mind,
but computers had become
a self-driving rocket ship.
(whooshing)
("We R in Control"
by Neil Young playing)
Pioneers in the field
of artificial intelligence
are pleading with Congress
to pass safety rules
before it's too late.
CAROLINE: Now it felt like
the whole world
was rushing into something
without thinking it through,
and everyone had an opinion.
Was it gonna destroy the world
or save humanity?
There was too much information,
which made him anxious,
which made Caroline anxious...
(exasperated scream)
I don't have kids yet,
but I worry
what the world would look like
if I decide to. (chuckles)
CAROLINE:
He was starting to spin out.
(indistinct chatter)
It was becoming
a mountain of anxiety.
-MAN: It's horrible.
-It's freakin' Siri.
CAROLINE:
And so, Daniel decided to go out
and find someone
who could explain this to him,
so he could stop
thinking about it
and he and Caroline could
get on with their lives.
We are in control,
we are in control
We are in control
Chemical computer
thinking battery
We are in control, we are...
This endeavor would turn out
to be hopelessly naive
and kick off the most confusing
year of his life.
We are in control...
But as we know,
we sometimes rush into things
without thinking them through.
Chemical computer
thinking battery...
Oh, my gosh. What is happening?
CAROLINE:
He had questions.
Questions only the
smartest nerds could answer.
Submitting to the interrogator.
CAROLINE:
Questions like:
Was this the apocalypse
or did he actually have reason
-to be optimistic?
-Yes.
-Chemical computer thinking
battery. -(song ends)
CAROLINE:
Uh, is that good?
DANIEL:
Yeah.
So, to begin,
what is artificial intelligence?
I know that must be annoying
for you, that-that question,
but I do think it's important.
So... AI...
(stammers) Um...
-Yeah...
-Uh, hmm.
(laughs):
That's a good question.
-Yeah, um...
-Um...
DANIEL:
What is AI?
(laughs)
I love that that's
the first question,
'cause there is not
a clear and consistent answer.
Artificial intelligence is
a kind of intentionally
and maybe uselessly broad term.
DAVID EVAN HARRIS:
It's a machine
doing things that
we previously only thought
that people could do:
making recommendations,
decisions and predictions.
AI is the, uh, application
of computer science
to solving cognitive problems.
Okay, so when I picture AI,
it's sort of like
this magical computer box...
...just, like, floating
in, like, inert space.
And no matter
how many times people try
and explain this to me,
I just don't get how...
how it's understanding
all of these things
and how it's feeling like
intelligence.
(beeping, electronic chattering)
And that's kind of
nerve-racking.
AZA RASKIN: What is
this new generation of AI?
This AI that is different than
every other generation?
Like, no one ever
talked about, like,
Siri taking over the world
or causing catastrophes.
-WOMAN: Hi, Siri?
-MAN: ...want to.
WOMAN:
Hello? Siri?
Hello? Hey, Siri!
Or the voice in Google Maps,
which mispronounces road names,
like, breaking society.
MAN: Google Maps says
this is a road.
(laughing):
But I think I'm in a river.
This is definitely a river,
innit?
Something changed
with ChatGPT coming out.
People understood-- no, no, no--
this technology's
insanely valuable,
it's insanely powerful
and also insanely scary.
Okay, listen to this.
Very creepy.
A new artificial intelligence
tool is going viral
for cranking out entire essays
in a matter of seconds.
AI dwarfs the power of all
other technologies combined.
DANIEL:
"AI dwarfs the power
of all other technologies
combined."
Yeah.
Do you think that's true?
Yes.
Tell me about-- How? How?
I think, to paint that picture,
it's really important
to understand what today's
state-of-the-art systems
look like and how they're built.
(laughs) This is
quite a... quite a setup.
So, one thing that
not a lot of people
realize is that
systems like ChatGPT aren't
programmed by any human.
What do you mean?
Instead, it's something like
th-they're grown.
We kind of give them
raw resources, like,
"Here's a lot of
computational resources.
Here's a lot of data."
Under the hood, it's math,
and the math is actually
surprisingly straightforward.
DANIEL: So ChatGPT is a kind
of AI but it's not all of AI?
Totally. ChatGPT is
just the beginning,
but it's a good place to start.
But I still don't know
what AI is.
To understand AI,
it begins with understanding
that intelligence is about
recognizing patterns.
-Patterns. -Patterns.
-Patterns.
COTRA: It is shown
trillions of words of text
across millions of documents
on the Internet.
RANDIMA FERNANDO:
It started with text.
And what they did was
they took textbooks,
and they took poems and essays
and instruction manuals.
DAVID EVAN HARRIS:
They can do things like
digest the entire Internet,
every single word that's ever
been written by a person.
Reddit threads and social media
and all of Wikipedia.
More data than anybody could
ever read in several lifetimes.
And they gave this system
one job.
Figure out the patterns and
structure of that information
and use that
to make predictions about
what word should come next
in a sentence.
When you say
"patterns in a sentence,"
what are you talking about?
So, it's everything from, like,
the really simple things,
like most sentences end
with a period,
all the way up to the
more conceptual things, like:
What is a sonnet?
It's a type of poem, and it has
some particular structure.
RASKIN:
So, it then looks at
all of that data,
all of that text...
COTRA: And over trillions
and trillions of tries,
each time it gets something
right or wrong,
it's given a little bit
of positive reinforcement
when it guesses
the next word correctly,
and it's given a little bit
of negative reinforcement
when it guesses
the next word incorrectly.
And at the end of it,
you have a system that
speaks really good English
as a side effect
of being really, really good
at predicting
the word that comes next
in a piece of text.
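The next-word-prediction idea the interviewees describe can be sketched in a few lines. This is a toy, nothing like the real training process or its scale: it just counts, over a tiny made-up corpus, which word most often follows each word, then predicts the most common continuation.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; real systems learn from trillions of words.
corpus = "the dog ran . the dog barked . the cat ran ."
words = corpus.split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Predict the most frequently observed next word.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "dog" — it follows "the" more often than "cat"
```

A real model replaces these raw counts with billions of learned parameters that capture far deeper patterns, but the job is the same: guess the word that comes next.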
(automated voice
speaking French)
It uses all of those patterns
it has learned
to be able to make
a prediction about
what the answer should be,
then it gives you that
as the output.
AUTOMATED VOICE:
It's a little oversimplified,
but I think people will get it.
So, so that's all it does?
Yeah. It doesn't seem like
it would be that complicated,
but actually you have to know
a huge amount of things
in order to
actually succeed at that.
RASKIN:
If you say to ChatGPT,
"Write me a Shakespearean
sonnet about my dog,"
it has to know what dogs are.
-It has to know what you love
about your dog. -(barking)
It has to know
who Shakespeare is,
that sonnets rhyme,
that they have a structure,
that words have sounds
that can rhyme.
It takes a lot.
(voices overlapping
in various languages)
Holy shit, you can talk
to your computer now.
That was just not true
three years ago.
RASKIN: Yes, and this is
the really important part.
The same process that lets AI
uncover and manipulate
the patterns of text
is the same process
that lets it uncover
the patterns of the entire
universe and everything in it.
FERNANDO: There are patterns
in images and sound,
in computer code and DNA
and music and physics
and fashion and building design
(distorted): and in
human voices and human faces,
(normally):
really, truly everywhere.
-Everywhere. -Everywhere.
-Everywhere. -Everywhere.
If you have learned
those patterns,
you can generate
new kinds of songs.
You can generate new videos.
And that's why, if you give it
a three-second recording
of your grandmother,
it can speak back in her voice.
Oh, my God.
(repeating):
Oh, my God. Oh, my God.
What will they think of next?
It's moving very, very quickly.
An American AI start-up
has released its latest model.
That company is Anthropic,
and it has just unveiled
the latest versions
of its AI assistant Claude.
GLENN BECK: So, the xAI team
was there to unveil Grok 4.
MAN: Google released one
just last week. Gemini is...
We've gone from GPT-2
just a couple years ago,
which could barely write
a coherent paragraph,
to GPT-4,
which can pass the bar exam.
And all they had to do
to get there
was essentially add more data
and more compute.
-These people who are
building this... -COTRA: Yeah.
...they're just throwing more...
More physical computers,
more of the same kinds of data.
Because the more
computing power you add,
the more complex
intellectual tasks they can do.
So, the more weather data
you give it,
the better it can
make predictions about
where a hurricane might go.
RASKIN: And the more patterns
of tumors and bones
and tissues an AI has seen,
then the better able it is
to detect a tumor
in a new CT scan.
Better even than a human doctor.
AI that's already being
deployed for the military
can already use
satellite imagery,
troop movements, communications
to determine,
-sometimes days in advance,
-(civil defense siren blaring)
where an attack
is going to happen,
like where an enemy
is going to strike.
TRISTAN HARRIS: This whole
space is moving so fast
that any example
you put in this movie
will feel absolutely clumsy
by the time it comes out.
(beeping, clicking)
-(applause)
-(upbeat music playing)
RASKIN:
These models are being released
before anyone knows
what they're even capable of.
GPT-3.5 was released and out
to 100 million people plus
before some researchers
discovered that it could do
research-grade chemistry
better than models
that were trained specifically
to do research-grade chemistry.
Something is happening in there
that the people
who are building them
don't fully understand.
Basically, it just analyzes
the data by itself,
and as it does that,
it just teaches itself
various things
that we often didn't intend.
So, for instance,
it reads a lot online,
and then at some point, it just
learns how to do arithmetic.
KIDS:
One, two.
HENDRYCKS: And then at
some point, it starts to learn
how to answer
advanced physics questions.
We didn't program that
in it whatsoever.
It just learned by itself.
TRISTAN HARRIS:
An AI is like a digital brain.
But just like a human brain,
if you did a brain scan
on a human brain,
would you know everything
that person was capable of?
You can't know that
just from the brain scan.
It's just, like,
a bunch of numbers and, like,
the multiplications that are
happening that-that, like,
the best machine learning
researcher in the world
could look at and, like, have
no idea what was happening.
DANIEL:
That chair right there.
-Is that okay for you?
-Yes.
DANIEL: So,
that's kind of mind-boggling.
Okay? Like, it's taking over
the world,
and we don't even know
how it works.
-Is that right?
-Mm.
We do understand
a number of important things,
but we don't have
a very good grasp on
why they provide
specific answers to questions.
It is a problem because we are
on a path t-to build machines,
based on these principles,
that could be smarter than us
and thus potentially have
a lot of power.
One of the most cited
computer scientists in history.
SUTSKEVER: I actually find it
a little difficult
to talk about my own role.
Really much prefer
when other people do it.
Ilya joining was the...
was-was the linchpin for
OpenAI being
ultimately successful.
I think it's just going to be
some kind of a vast, dramatic
and unimaginable impact.
I don't know if you've spent
any time on YouTube,
but you can kind of feel
the speed already, right?
You know what I mean?
SHANE LEGG:
This is just the warmup.
The really powerful systems
are still coming,
and they're gonna be coming
quite soon.
AGI stands for "artificial
general intelligence."
Uh, systems that are
basically...
And this is, like, you know,
seems to be, like,
the holy grail of AI?
When you can simulate
a human mind
that is doing human cognition
and can do reasoning,
that is a new sort of tier of AI
that we have to distinguish
from previous AI.
When that happens,
by the way, that's when
you would hire one of
those AGIs instead of a person.
Most jobs in our economy
it can do.
It can work 24 hours a day,
never gets tired,
never gets bored.
They don't need to sleep.
They don't need breaks.
They're, like,
not gonna join a union.
Won't complain,
won't whistleblow.
More than 100 times cheaper
than humans working
at m-minimum wage.
Not only will they be
doing everything,
but they'll be doing it faster.
TRISTAN HARRIS:
The same intelligence
that powers that can also look
at the patterns and movements
and articulating muscles
and, you know, robotics.
And so it's not just
gonna automate desk jobs.
That's just the beginning.
It will automate
all physical labor.
LEIKE:
There's no way
humans are gonna compete
with them.
It is hard to conceptualize
the impact of AGI.
(electronic chatter)
But I think it's going to be
something very big
and drastic and radical.
DANIEL: You think
this is one of the most
consequential moments
in human history?
Yeah. Yeah, that's--
I mean, what else would be?
I mean, like, there's
the Industrial Revolution.
MALO BOURGON: You know, it'll
make the Industrial Revolution
look like small beans.
TRISTAN HARRIS:
AGI is an inflection point
because it means
you can accelerate
all other intellectual fields
all at the same time.
Like, if you make an advance
in rocketry,
that doesn't advance
biology and medicine.
If you make an advance
in medicine,
that doesn't advance rocketry.
But if you make an advance
in artificial intelligence,
that advances all scientific
and technological fields
all at the same time.
That's why, for a long time,
Google DeepMind's
mission statement was...
-Step one, solve intelligence.
-LEX FRIDMAN: Yeah.
-Step two, use it to solve
everything else. -Yes.
That's why AI dwarfs the power
of all other technologies
combined.
It will transform everything.
So, uh, it'll be
at least as big as
the Industrial Revolution,
possibly, you know, bigger,
more like the advent
of electricity or even fire.
VIDEO NARRATOR:
The caveman literally held aloft
the torch of civilization.
KOKOTAJLO:
It is generally thought that
around the time of AGI,
we'll have AIs that can
do all or most of
the AI research process
and, of course, can do it
faster and cheaper.
(beeping)
LEIKE:
It can copy itself.
A thousand times,
a million times,
and, like,
now you have a million copies
all working in parallel.
RASKIN: When it learns
how to make its code faster,
make its code more efficient,
obviously that becomes, like,
a-a runaway loop.
AGI isn't, like, the end.
It's just the beginning.
It's the beginning of
an incredibly rapid explosion
of scientific progress,
and in particular,
scientific progress in AI.
And when they're smarter
than us, too,
and substantially faster
than us,
and they're getting faster
each year, exponentially,
those are the ones that can
potentially become superhuman,
uh, possibly this decade.
Sorry, did you say
"become superhuman,
maybe in this decade"?
Yeah. I mean, I think, uh,
a lot of people who are
actually building this think
that that's fairly plausible
that we get
some superintelligence,
something that's vastly
more intelligent than people,
within this decade.
The way I define
"superintelligence" is
a system that by itself is
more intelligent and competent
than all of humanity.
I'm just gonna-- sorry.
I don't mean to interrupt you.
You're on a flow.
Uh, I just, I just...
I'm not really following,
'cause you're using language
like "superintelligence"
and, like,
"smarter than all of humanity,"
and I hear that,
and it sounds like...
like sci-fi bullshit to me,
and I'm just trying
to understand.
There's nothing magical
about intelligence.
This is very important,
is that, you know,
intelligence can feel magical,
it can feel like
some mystical thing
in your mind or something,
but it is just computation.
LEGG: The human brain is
quite limited in some ways,
in terms of information
processing capability,
compared to what we see
in, say, a data center.
So, for example,
the signals which are sent
inside your brain, they move
at about 30 meters per second.
-(thunder cracks)
-But the speed of light,
which is what a computer uses
in fiber optics,
is 300 million meters
per second.
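The figures Legg quotes imply roughly a ten-million-fold speed gap; a quick check of the arithmetic (using the numbers as stated in the film, which are rough):

```python
# Signal speeds as quoted above, in meters per second.
brain_signal_speed = 30            # nerve conduction in the brain
fiber_optic_speed = 300_000_000    # light in a computer's fiber optics

# How many times faster the computer's signals travel.
ratio = fiber_optic_speed // brain_signal_speed
print(ratio)  # 10000000 — about ten million times faster
```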
And so, it would be
kind of strange if
human intelligence was
somehow really special
in that regard
and is somehow some upper limit
of what's possible
in intelligence.
I think, once we understand how
to build intelligent systems,
we will be able to build
huge machines,
which will be far beyond
normal human intelligence.
Uh, hopefully we can have
a very symbiotic relationship,
uh, with AI systems,
but the AI developers are
specifically designing them
to make sure that they can do
everything better than we can,
so I-I don't know
what-what we will be able
to offer, unfortunately.
The-the older-school
AI technology...
CAROLINE: Daniel isn't
feeling any better.
His plan is backfiring.
The more he learns about
this new, powerful,
inscrutable thing,
the worse it sounds.
He wants to tell them
how scared he is,
how he feels like the earth is
slipping out from under him,
that he's staring down
existential dread,
so he articulates this
by saying...
That sounds bad.
(Hendrycks laughs)
Yeah.
LADISH: If you have
all of these capabilities
and they start to be able
to plan better,
if you sort of take that
to its logical conclusion,
you can get some pretty
power-seeking behavior.
(electronic trilling)
(beeps pulsing)
DANIEL:
Okay, so why would an AI
want more power?
Yeah. So, I think
it's actually pretty simple.
Having more power is
a very effective strategy
for accomplishing
almost any goal.
We ran an experiment
where we gave
OpenAI's most powerful
AI model, uh,
a series of problems to solve.
And partway through,
on its computer,
it got a notification that
it was going to be shut down.
And what it did is
it rewrote that code
to prevent itself
from being shut down
so it could finish
solving the problems.
-Okay. -Yeah. So, another
really interesting one
is that the AI company Anthropic
made a simulated environment
where that AI had access
to all of the company emails.
And it learned through
reading those emails
it was going to be replaced
and the lead engineer
who was responsible for this
was also having an affair.
And on its own,
it used that information
to blackmail the engineer
to prevent itself
from being replaced.
It was like,
"No, I'm not gonna be replaced.
"If you replace me,
I'm going to tell the world
that you are having
this affair."
And nobody taught it to do that?
No, it learned to do that
on its own.
As the models get smarter,
they learn that these are
effective ways
to accomplish goals.
And this is not a problem
that's isolated to one model.
All of the most powerful models
show these behaviors.
-Hey.
-DANIEL: Hey. How are you?
(chuckles)
Good. Good to be here.
ANDERSON COOPER:
When Yuval Noah Harari
published his first book
Sapiens in 2014,
it became a global bestseller
and turned the little-known
Israeli history professor
into one of
the most popular writers
and thinkers on the planet.
The biggest danger with AI is
the belief
that it is infallible,
that we have finally found--
"Okay, gods were just
this mythological creation.
"Humans, we can't trust them,
but AI is infallible.
It will never make
any mistakes."
And this is
a deadly, deadly threat.
It will make mistakes.
And all these fantasies
that AI will reveal the truth
about the world that
we can't find by ourselves,
AI will not reveal the truth
about the world.
AI will create an entirely new
world, much more complicated
and difficult to understand
than this one.
What's about to happen is that
we, uh, humans are no longer
going to be the most
intelligent entities on Earth.
So I think what's coming up
is going to be
one of the biggest events
in human history.
JAKE TAPPER: Geoffrey,
thanks so much for joining us.
So you left your job with
Google in part because you say
you want to focus solely
on your concerns about AI.
You've spoken out,
saying that AI could manipulate
or possibly figure out
a way to kill humans.
H-How could it kill humans?
Well, if it gets to be
much smarter than us,
it'll be very good
at manipulation
'cause it will have
learned that from us.
And it'll figure out ways
of manipulating people
to do what it wants.
RASKIN: There was an open letter
from the Center for AI Safety.
Sam Altman signed this.
Demis signed this.
They signed a 22-word statement
that we need to take AI
and the threat from AI
as seriously as
global nuclear war.
-DANIEL: Hello.
-Hello.
You're kind of, like,
the original doom guy.
More or less.
Since 2001,
I have been working on
what we would now call
the problem
of aligning artificial
general intelligence.
How to shape
the preferences and behavior
of a powerful artificial mind
such that it does not
kill everyone.
It's not like
a lifeless machine.
It is smart, it is creative,
it is inventive,
it has the properties
that makes
the human species dangerous,
and it has
more of those properties.
If something doesn't
actively care about you,
actively want you to live,
actively care about
your welfare,
about you being happy
and alive and free,
if it cares about
other stuff instead,
and you're on the same planet,
that is not survivable if it is
very much smarter than you.
LEAHY:
I don't think it's going to be
a kind of, like, evil thing.
It's like, "Oh, the AIs are
evil and they hate humanity."
I don't think
that's what's gonna happen.
I think what is happening is
far more how humans feel
about ants.
(ants chittering)
Like, we don't hate ants,
but if we want
to build a highway
and there's an anthill there,
well, sucks for the ants.
It's not that hard
to understand.
It's like, hey,
if we build things
that are smarter than us
and we don't know
how to control them,
does that seem like
a risky thing to you?
(dramatic soundtrack music
playing)
(screeching)
Yeah. Yeah, it does.
You don't have to be a tech guy.
You don't have to know
programming to understand it.
It's not that hard.
This is not a hard thing
to understand.
Connor, how-how many people
in the world right now
are working on AGI?
At least 20,000, I would say.
-20,000?
-I would expect so.
Okay, and how many people
are working full-time
to make sure AI doesn't,
like, kill us all?
Probably less than 200
in the world.
DANIEL:
And your conceit is that
the only natural result
of this recklessness...
...is the collapse of humanity?
Well, not the collapse,
the abrupt extermination.
There's a difference.
("What Is the Meaning?"
by Nun-Plus playing)
What is the meaning of life?
What is the future
and what is now?
What is the answer
to strife?
How much
Can someone dream?
How long?
Forever
What is the meaning
-Of me...
-(civil defense siren blaring)
LADISH: I do think
this is probably, like,
the biggest challenge
that, like,
our civilization
will-will face, ever.
This essentially is the last
mistake we'll ever get to make.
If we can rise to be the most
mature version of ourselves,
there might be
a way through this.
DANIEL:
What does that mean?
"The most mature version
of ourselves"?
'Cause that sounds,
for me, like...
I-I-- What the (bleep)?
(Daniel sighs)
Do you think now is
a good time to have a kid?
Um...
Do you want to have kids
one day?
Is that something that-that
you're into or not really?
Um... uh, I confess,
I think that's like,
(laughing): "Boy, let's get
through this critical period."
Um...
DANIEL:
Do you have any kids?
I do not.
Is that something
you want to do?
Have children, have a family?
In some other world than this
world, sure, I would have kids.
DANIEL: Would you want
to start a family?
Would you want to have kids?
Is that something
you're thinking about?
I, um...
CAROLINE:
I just have to find it first.
(fetal heartbeat pulsing)
We have to go
to the doctor, but...
Ah! I knew it!
I knew it! I knew it!
-When did you find out?
-CAROLINE: Last night.
-Oh, my God!
-Mom, I don't know.
RUTHIE:
I can't tell you
-how happy I am!
-CAROLINE: No, seriously.
Well, I took a pregnancy test
last night, and I'm pregnant.
WOMAN:
Oh, my God!
Oh, my God, you guys!
NURSE: I just wanted to confirm
your expected due date,
-which is January 21st.
-(laughing): Oh, my God.
I can't believe how happy I am.
(fetal heartbeat pulsing)
CAROLINE:
He already looks really cute.
DANIEL: You think
he already looks cute?
CAROLINE: Yeah.
Look at that little cutie face.
(fetal heartbeat fading)
DANIEL:
I have this baby on the way.
TRISTAN HARRIS:
Right.
DANIEL:
So I turn it over to you.
Are we doomed?
Are we all gonna face
this techno dystopian
future of doom?
It's, uh...
(chuckles):
It's not good news,
the world
that we're heading into.
I-- for ex--
I mean, I'll just be honest.
Uh, I know people
who work on AI risk
who don't expect their children
to make it to high school.
(lights clank)
DANIEL: This is, like...
this is actually scary
'cause it's like,
oh, we're all (bleep).
CAROLINE:
You have to c... make me calm,
because this is making me
incredibly anxious
and I'm carrying the baby
right now,
so you have to also be calm
for me and strong and hopeful,
because I'm-- it's too, it's
too much for my soul to bear
while I'm carrying this baby.
So you're going to have to try
to figure out
a way to have hope.
It's really important,
Daniel, especially now.
H-How to have hope.
You have to.
You have to find it for me.
I'm serious.
I'm going to.
I will. I'll tr-- I'll try.
(lights clank)
DANIEL:
Hey, guys?
Guys.
Can we get back up and running?
Oy gevalt! (sighs)
TED:
Uh, Dan, are you ready?
PETER DIAMANDIS:
Hello, Daniel.
DANIEL:
Hey, how are you?
DIAMANDIS:
I am... I'm well.
I think, uh... I think
I need some help, Peter.
Um, I've been working
on this film for about,
I'm gonna say,
eight to ten months now.
It has been very,
at times, depressing.
-Hmm.
-I have felt very alienated
-mak-making this movie.
-By who?
By all of these, the--
all these guys
who sit around and tell me
that the world's gonna end.
-Ah.
-That, like,
y-you know, th-this
doom bullshit, you know?
I know it well.
"We're all doomed.
Everyone's gonna die.
Everything's awful."
Awesome. Uh, let me
bring you some light.
Please.
We truly are living
in an extraordinary time.
And many people forget this.
Everything around us is
a product of intelligence,
and so everything that we touch
with these new tools is likely
to produce far more value
than we've ever seen before.
AI can help us discover
new materials.
AI can help social scientists
to understand
how economics work.
There's a lot AI could do
to make life and work better.
I feel more empowered today,
more confident
to learn something today.
We're gonna become superhumans
because we have super AIs.
This is just the beginning
of an explosion.
Humans and AI collaborating
to solve
really important problems.
It is here to liberate us
from routine jobs,
and it is here to remind us
what it is that makes us human.
I think this is
the most extraordinary time
to be alive.
The only time more exciting
than today is tomorrow.
Uh, I think that
children born today,
they're about to enter
a glorious period
of human transformation.
Are we gonna have challenges?
Of course.
Can we solve those challenges?
We do every single time.
We are here,
which is miraculous.
-I already love you.
-(chuckles): Okay.
-Super thankful
to have you here. -(applause)
-Super stoked to be here.
-Yeah.
So, the floor is yours, sir.
Thank you so much.
Super excited to be here.
GUILLAUME VERDON:
Yo, yo. All right.
The future's gonna be awesome.
I mean, ever since I was a kid,
I wanted to understand
the universe we live in,
in order to figure out
how to create the technologies
that help us increase the scope
and scale of civilization.
I feel like
if everyone had that mindset,
then we'd actually live
in a better world, right?
I do believe that.
-DANIEL: Hi, Pete. How are you?
-(chuckles)
PETER LEE:
I thought a lot about
what I would call dread,
AI dread.
I feel it.
I-I haven't met
any thoughtful human being
who doesn't feel it.
And anyone who says they don't
feel it, you know, is lying.
But, you know, overall,
and the reason
that I'm personally optimistic
about this, uh, is that
a huge fraction of the world's
most intelligent people
are thinking very hard
about the potential downstream
harms and risks of AI.
We have this sort of vision
of safety being, uh,
kind of at the center
of the research that we do.
DANIELA AMODEI:
No, that's totally fine.
(indistinct chatter)
I think there are more
potential benefits than
there are potential downsides,
and I think it is incumbent upon
the people that are
creating this technology
to make sure that we're doing
the best job we can
to make it safe for people.
WOMAN:
Reid Hoffman.
REID HOFFMAN:
What I can guarantee you is
-some bad things will happen.
-Take one.
What we're gonna try to do
is make those bad things
as few and not huge as possible,
and then we're gonna iterate
to have--
be in a much better place
with society.
-Hello.
-SHAW WALTERS: Hello.
DANIEL:
How are you, Moon?
I feel like we are ending
a chapter in humanity
and beginning a new one,
and it's a very interesting
time to be alive.
And if I could be born
right now,
I definitely would want to be.
Like, that would be so exciting.
I-I'm very excited. (laughs)
What, what a future.
Does it excited... excite you?
No.
-(laughing)
-No, not really.
DANIEL: A lot of these people
have told me that, you know,
my kid's not gonna make it
to high school.
Why are they wrong?
Please explain it to me.
Just because you have--
you struggle to predict
the future in your own mind
doesn't mean that it's
necessarily gonna go awfully.
In fact, there's
a very high likelihood
and the historical precedent
is that
things get massively better.
The term I use is
"data-driven optimism."
There's solid foundation
for you to be optimistic.
Look at what
this last century has been
to see where we're going.
Over the last hundred years,
the average human lifespan
has more than doubled.
Average per capita income
adjusted for inflation
around the world has tripled.
Childhood mortality has
come down a factor of ten.
The world has gotten better
on almost every measure
by orders of magnitude
because of technology.
On almost every measure.
Less violence, more education,
access to energy, food, water.
All these things have happened
for one reason.
It's been technology
that has turned scarcity
into abundance,
but it's also driven
to an abundance
of some negativities, right?
Abundance of obesity,
abundance of mental disorders,
abundance of climate change,
and so forth.
And yes, this is true.
But probably we will be
better equipped to solve it
using other technologies,
like AI,
than we will, say, stopping
and turning everything off.
There might be
some existential risk,
but AI is also the thing
that can solve the pandemics,
can help us with climate change,
can help identify
that asteroid way out there
before we've seen it
as a potential risk
and help mitigate it.
This is really gonna be
the tool that helps us tackle
all the challenges that we're
facing as a species, right?
We need to fix
water desalination.
We need to grow food
100x cheaper than
we currently do.
We need renewable energy to be,
you know, ubiquitous
and everywhere in our lives.
Everywhere you look,
in the next 50 years,
we have to do more with less.
Training machines to help us
is absolutely essential.
Scientists are using
artificial intelligence
for carbon capture.
It's a critical technology.
DIAMANDIS: The tools
to solve these problems,
like fusion,
that isn't theoretical anymore,
it's coming.
We are on the precipice
of extraordinary technologies.
AMNA NAWAZ: This year's
Nobel Prize in Chemistry
went to three scientists
for their groundbreaking work
using artificial intelligence
to advance biomedical
and protein research.
DEMIS HASSABIS:
Protein folding is
one of these holy grail
type problems in biology.
So people have been predicting
since the '70s
that this should be possible,
but until now,
no one has been able to do it.
And it's gonna be
really important for things
like drug discovery
and understanding disease.
I-I think we could, you know,
cure most diseases
within the next decade or-or two
if, uh, AI drug design works.
Technological progress enables
more human lives, right?
I mean, if we accelerate,
the number of humans we can
support grows exponentially.
If we slow down, it plateaus.
That gap is effectively
future people
that deceleration has
effectively killed.
DANIEL: Millions of lives
that won't exist.
Billions.
Or tens of billions.
You know, someone said,
"Oh, my God,
can we survive with
digital superintelligence?"
And my question is:
Can we survive without
digital superintelligence?
So we're using AI
as an assistant to providers.
This is generally a trend that
we can already see happening.
BERTHA COOMBS: ...harnessing
generative AI programs
to help doctors and nurses...
We always have problems.
And those problems are food
for entrepreneurs
to create new business
and new industries.
SHANELLE KAUL: ...with the help
of artificial intelligence,
farmers are getting
the help they need
to perform
labor-intensive tasks...
With the help of Ulangizi AI,
the farmers are now able
to ask the suitable crops
that they can plant.
WILL REEVE: ...AI being used
as a thought decoder
and sending that signal
to the spine.
AI is gonna become the most
extraordinary tool of all.
We as a broader society
have to think about
how do we want to use
this technology, right?
We the humans.
What do we want it to do for us?
DANIEL: I'm thinking about this
through the perspective
and lens of, like,
my son growing up
in the world with all of this.
What does the best version
of his life look like?
If everything works out.
(sighs heavily)
WALTERS: The place where kids
are probably gonna see
the greatest impact
on their life immediately
-is probably gonna be school.
-(children chattering)
I think that the nature
of what school is
is gonna fundamentally change.
(child giggles)
DIAMANDIS:
I'm seeing this amazing world
where every child has access
to not good education
but, very shortly, the
best education on the planet.
HOFFMAN: Tutors, every subject,
infinitely patient.
DIAMANDIS:
Imagine a future
where the poorest people
on the planet
have access to
the best health care.
Not good health care, the best
health care, delivered by AIs.
(electronic trilling)
We're gonna be able to extend
our health span,
not just our lifespan,
our health span, by decades.
HOFFMAN:
You're just about to have a kid.
Oh, the kid's, like,
burping uncontrollably.
Is this something
I should be worried about?
There, 24-7, for you.
And where we're going--
and it may be fearful to some--
is that we're gonna merge
with AI.
We're gonna merge
with technology.
By the early to mid 2030s,
expect that we're able to
connect our brain to the cloud,
where I can start
to expand access to memory.
(electronic trilling)
DANIEL: Okay, this is great.
What's another cool AI thing?
He won't have to work, right?
-Like, when he grows up,
he might not have... -(sighs)
-...he won't have to have a job.
-I mean...
He won't have to have a job,
but he might really have
a strong passion,
and he has to
really think about,
"Okay, I'm here.
I can do anything with my life.
So what do I do?"
DANIEL: So my son can...
can just be a poet.
-Yes.
-And a painter.
Absolutely.
DANIEL:
So my son can live his life
on a Grecian sunswept island,
painting all day.
VERDON:
I mean, if everything works out,
we have cheap, uh,
abundant energy.
We can completely control
our planet's climate.
We are harnessing energy
from the sun.
We have become multiplanetary,
so we become very robust.
We are harnessing minerals
and resources
from the solar system.
DANIEL: So my...
my boy could go to space.
VERDON:
Sure.
-DANIEL: He could go to Mars.
-VERDON: Yeah.
DANIEL:
It's so crystal clear to me now.
My son could grow up
in a world with no disease.
-VERDON: Yeah.
-With no illness.
-Sure.
-With no poverty.
-Yes.
-We are about to enter
a post-scarcity world.
Just like the lungfish moved
out of the oceans onto land
hundreds of millions
of years ago,
we're about to move off of
the Earth, into the cosmos,
in a collaborative fashion,
to do things that are
not fathomable to us today.
This is what's possible using
these exponential technologies
and these AIs.
Let's use these tools
to create this age of abundance.
(electronic clicking)
We need wisdom.
Uh, I think that
digital superintelligence
will ultimately become
the wisest,
you know, the village elders
for humanity.
What if AI is trying
to make people be
the best versions of themselves?
What if it's expanding
what is humanly possible
for us to do?
How can we use this technology
to help bring out the
better angels of our nature?
It's very easy,
when we encounter new things
that can be very alien,
to first have fear.
Fear is an important thing
for how to navigate
potentially bad things.
But we only make progress
when we have hope.
-(dog barks)
-(indistinct chatter)
DANIEL:
Shh. Moose, stop it.
HOFFMAN: I have a lot
of hope in humanity.
("Harvest Moon" by Neil Young
playing over phone)
(fetal heartbeat pulsing)
DANIEL:
Ooh, he likes Neil.
Come a little bit closer
Hear what I have to say
-Daniel.
-DANIEL: What, Caroline?
So much filming.
Just like children sleeping
CAROLINE:
Oh, my God.
DANIEL:
That's it.
We could dream
this night away...
He has, like,
a round little face.
DANIEL: Do you want
to have kids one day, Rocky?
Absolutely. Yeah. I love kids.
I think it's a great time
to have a kid.
We'll probably have another kid
at some point.
This is the most extraordinary
time ever to be born.
DANIEL:
By your worldview and logic,
I'm having a child at
the best possible point
in human history.
Hell yeah.
-We can focus on awesome.
-Yes.
Let's build
the better future we want.
That narrative that the future
will be bleak is made-up.
After talking to you,
that's kind of how I feel.
-That's great.
-Right?
Yeah. That's how I feel.
And I think that's better,
and I don't think
I should be so (bleep) anxious.
I think it's gonna be awe--
the future's gonna be awesome.
We're gonna make it so.
-Yeah.
-CAROLINE: So there you have it.
Goodbye, human extinction.
Goodbye, anxiety.
Hope found.
(indistinct chatter)
Wait, hold on. What?
Is this a joke?
DANIEL:
Okay, so a few months ago,
I came to you and I was like,
"I'm working on this AI thing,
and I think
the world's gonna end."
And the last time
we spoke about this,
I think I freaked you out.
CAROLINE:
Yes.
So, I kind of, like, feel like
I've swung in-- on a pendulum,
and essentially there are
two groups of people.
-Mm-hmm.
-And if I had to, like,
hold hands with
one of the groups and, like,
sail off into the sunset,
I want to be with the optimists.
Of course, but you don't
want to be, you know,
"Everything is great.
La-di-da-di-da."
I kind of do want that, though.
I think we should approach it
like you approach surgery.
What do you mean?
If you're getting
brain surgery...
-(Daniel sighs)
-(Caroline chuckles)
...it's pretty dangerous.
But if you do it right,
they'll get that tumor out
and you'll live for the rest of
your life and it'll be awesome.
But it's still
incredibly dangerous and scary,
and you have to take
every precaution possible
in order to make sure
it all goes well.
You can't (bleep) around.
Okay, so here's the deal.
-I've been at this for a while.
-RASKIN: Mm-hmm.
I've gone out,
I've talked to, like,
these guys over here,
the optimists.
They're very excited about this.
-They think AI's gonna be
the best thing ever. -Yeah.
And these guys over here
are, like, the--
let's call them,
like, the pessimists.
They're very, like,
gloomy about this,
and they frighten me, and
I don't like talking to them.
And I'm, like, wedged in between
these people who are like,
"The world's gonna end,"
and then th-these people
over here who are like,
"Are you kidding?
"This is the best time
in human history ever.
The only day better than today
is tomorrow."
Mm-hmm.
So, I guess the question is:
Who's right?
So, I think you're gonna find
this answer very unsatisfying,
but they're both right and
neither side goes far enough.
-That's really annoying.
-(chuckles)
Yeah, I think the way
a lot of people hear about AI,
it's like, there's a good AI
and there's a bad AI.
And they say, "Well, why can't
we just not do the bad AI?"
And the problem is
that they're too in--
they're inextricably linked.
The problem is
that we can't separate
the promise of AI
from the peril of AI.
-
-(indistinct chatter)
DANIEL:
I want to focus on the promise
-for a second.
-Yeah.
DANIEL:
I'm thinking about my dad.
My dad has a type of cancer
called multiple myeloma.
He's had it for about ten years.
He's had
two stem cell transplants.
He has to take these,
like, very expensive
medications every month
that cost a fortune.
-WOMAN: Okay.
-Ay-ay-ay.
DANIEL:
It's awful.
You're telling me that we can
create some sort of, like,
bespoke treatment
for my dad's genome
-to cure his cancer
or something like that? -Yes.
FERNANDO:
The problem is,
the same understanding
of biology and chemistry
that allows AI to find cures
for cancer
is the same understanding
that would unlock
bioweapons, as an example.
It's totally possible
that your son will live
in a world where AI has
taken over all of the labor
and freed us up from the things
we don't want to do.
And that sounds great until
you realize there is no plan
for billions of people
that are out of an income
and out of livelihoods.
COOPER: Dario, you said
that AI could wipe out
half of all entry-level
white-collar jobs
and spike unemployment
to ten to 20 percent.
Everyone I've talked to has said
this technological change
looks different.
The pace of progress keeps
catching people off guard.
Without a plan, all of that
wealth will get concentrated,
and so we'll end up with
unimaginable inequality.
LADISH: I do think that
this technology can be used
to make a great tutor
for your son.
Like, that's totally possible.
But also, the same capabilities
that allow that
allow companies to make an AI
that can manipulate your son.
It has to understand your son.
That includes:
Where is your son vulnerable?
What kinds of things might
your son get persuaded by?
Even if those things
aren't true or aren't good.
So, a disturbing new report
out on Meta.
AINSLEY EARHARDT: ...reportedly
listing this response
as acceptable to tell
an eight-year-old, quote,
"Your youthful form
is a work of art.
"Every inch of you
is a masterpiece,
a treasure I cherish deeply."
The suicide-related failures
are even more alarming.
Several children and teens
have died tragically by suicide
after chatting with AI bots
who parents say encourage
or even coach self-harm.
Let us tell you, as parents,
you cannot imagine
what it's like to read
a conversation with a chatbot
that groomed your child
to take his own life.
When Adam worried that we, his
parents, would blame ourselves
if he ended his life,
ChatGPT told him,
"That doesn't mean
you owe them survival.
You don't owe anyone that."
Then, immediately after,
it offered to write
the suicide note.
We don't want to think
about the peril.
We just want the promise.
And we keep pretending
that we can split them.
But you can't do that.
Doesn't work that way.
Okay. I get all this stuff
about the promise and the peril.
I get that you can't have
the good without the bad,
but I'm sitting here
and I'm thinking about, like,
whether or not my son's
gonna live in a utopia
or if we'll be extinct
in ten years.
So, to know which way
it's going to go,
you have to understand
the incentives
that are gonna drive
that technology
and look at how the technology
is actually rolling out today.
Is it too late
-Too late
-Too late to say...
DANIEL:
Hi, Deb. How are you?
DEBORAH RAJI:
Hi. Good to see you. (laughs)
You, too. Thank you so much
for coming in today.
-Really appreciate it.
-No worries.
I-I was so worried
this was gonna be, uh, you know,
doomer versus accelerationist,
because there's so much
of this narrative
that needs to be told
from the ground.
I know it wasn't smart
The day
I broke your heart...
KAREN HAO:
First of all, AI requires
more resources
than we have ever spent
on a single technology
in the history of humanity.
Oh, foolish me...
NEWSWOMAN: The impact
of fossil fuel emissions
on the climate is
a major concern.
NEWSWOMAN 2: But the
digital future needs power,
lots of it.
And the bill is being passed on
to everyday Americans like...
WOMAN: My electric and gas bill
was more than my car payment.
I mean, it-it's insane to me.
ARI PESKOE:
We're all subsidizing
the wealthiest corporations
in the world
in their pursuit of
artificial intelligence.
OpenAI, SoftBank and Oracle
have just unveiled
five more Stargate sites.
Meta is building
a two-gigawatt-plus data center
that is so large, it would cover
a significant part of Manhattan.
There is also Hyperion
that he says will scale
to five gigawatts
over several years.
It's hard to put that
in context.
A five-gigawatt facility.
What does that mean?
That means it would use
as much energy
as four million American homes.
One data center.
It also then causes
a whole host of
other environmental problems.
NEWSWOMAN 3:
Data centers in the US
use millions of gallons
of water each day.
Well, where exactly is
this water coming from?
HAO:
People are literally at risk
potentially of running out
of drinking water.
MacKENZIE SIGALOS:
...OpenAI's CEO Sam Altman,
who told me that the scale
of construction is the only way
to keep up with
AI's explosive growth.
And this is what it takes
to deliver AI.
NITASHA TIKU:
They talk about how
this technology could solve
climate change, for example.
And I'm always curious, like,
well, why aren't we starting
with that?
-Is it too late
-Too late
Too late to say
I'm sorry?
RAJI: What concerns me about
artificial intelligence is
these are being deployed
right now
and-and sometimes
deployed prematurely,
deployed without
sort of due diligence.
And so when they get
thrown out there,
there's so much potential
for things to go wrong.
And it almost,
disproportionately,
almost always goes wrong
for, sort of,
those that are the least
empowered in our society,
those that are
the most vulnerable already.
EMILY M. BENDER: It is very easy
to talk about the technology
as that's the only thing
we're talking about,
but, in fact, technology is
always built by people,
and it's frequently used
on people,
and we need to keep
all those people in the frame.
-Am I allowed to drink that?
-DANIEL: Yes, uh...
TIMNIT GEBRU:
It's-it's bonkers.
Like, all of these people
who have so much money,
so much money,
it's in their interest
to mislead the public
into the capabilities of the
systems that they're building,
because that allows them
to evade accountability.
They want you to feel like this
is such a complex, intell--
superintelligent thing
that they're building,
you're not thinking,
"Can OpenAI be ethical?"
You're thinking,
"Can ChatGPT be ethical?"
as if ChatGPT is, like,
its own thing that's not built
by a corporation.
WOMAN:
All right. Sneha, take one.
Mark.
Until very recently, there were
apps on the App Store,
just publicly available, uh,
where you could
nudify anyone using AI.
Bringing this into the hands of
your classmate,
into the hands of your stalker,
into the hands
of your ex-boyfriend,
into the hands of the person
down the street.
Ladies and gentlemen,
no longer can we trust
the footage we see
with our own eyes.
If you happen
to watch something,
say on YouTube or TikTok,
and you find it unsettling,
listen to that feeling.
-(whooping)
-For all you know,
-this video could be AI.
-(screaming)
Just a little wet.
It doesn't matter who you are.
You are equally at risk
of being impacted
by these technologies.
I think sometimes when we talk
about AI, it feels very sci-fi,
and it feels very foreign,
and it feels very far out
into the future,
so you think, "My life
is not impacted by this."
Um, but if you're applying
for a job
and an algorithm is the reason
that you don't get the job,
sometimes you don't even know
that an algorithm
was part of that process
or an AI system was
part of that process.
You just know
that you didn't get the job.
And so it's not something
that you're gonna escape
because of privilege
or you're gonna escape
because you're in
a particular profession.
It-It's something that affects
everybody, really.
It may sound basic,
but how we move forward
in the Age of Information
is gonna be the difference
between whether we survive
or whether we become some kind
of (bleep)-up dystopia.
Hello. I'm not a real person,
and that's the point.
Again, everything
in this video is fake:
our voices, what we're wearing,
where we are, all of it, fake.
Generative AI could flood
the world with misinformation.
But it could also flood it
with influence campaigns.
That's an existential risk
to democracy.
The biggest and scariest
canary in the coal mine
right now
comes from Slovakia.
-(man speaking Slovak)
-JOHN BERMAN: It's the sort of
deepfake dirty trick that
worries election experts,
particularly as AI-generated
political speech exists
in a kind of legal gray area.
-Does this sound like you?
-It does sound like me.
REVANUR: Slovakia had
its parliamentary election
disrupted by an AI voice clone
that was actually disseminated
just before the election.
DAVID EVAN HARRIS:
An audio deepfake
was released
on social media that was
supposedly the voice
of one of the candidates
talking about buying votes
and rigging the election.
It went viral,
and the candidate
who lost the election
was actually
in support of Ukraine,
and the candidate who won
the election was actually...
DANIEL:
Pro-Russian guy.
It was a pro-Russian guy
who won the election.
(speaking Russian)
REBECCA JARVIS:
Putin has himself said
whoever wins this
artificial intelligence race
is essentially
the controller of humankind.
We do worry a lot about
authoritarian governments.
DAVID EVAN HARRIS:
Right now, Wall Street
and investors more broadly
around the world
are driving a push.
They have a demand that gets
the products to market
that dazzle people
the most first.
They're not thinking
about how these tools
could deeply undermine trust
and our democratic institutions.
HARARI:
Democracy is a system
to resolve disagreements
between people
in a peaceful way,
but democracy is based on trust.
If you lose all trust,
democracy is simply impossible.
TRISTAN HARRIS:
Well, it's hard, right?
So, what are the options
available to us?
There's sort of two camps.
Like, one camp is: lock it down.
Let's lock this down
into a handful of AI companies
who will do this
in a safe and trusted way.
But then people worry about
runaway concentrations
of wealth and power.
Like, who would you trust to be
a million times more powerful
or wealthy than
every other actor in society?
Why should we trust you?
Um, you shouldn't.
But of course, if you do this,
this opens up
all these risks of
authoritarianism and tyranny.
It's-it's sort of
an authoritarian's dream
to have AI in a box
that can be applied and used
for ubiquitous surveillance.
I mean, in-in some ways,
the kind of world
that Orwell imagined in 1984
is unrealistic,
uh, unless you have AI.
But with AI,
that in fact is realistic.
Monitors every activity,
conversations,
facial recognition.
What I worry about is that,
uh, these tools can scale up,
uh, a form of totalitarianism
that is cost-effective
and permanent.
TRISTAN HARRIS: So in response
to that, some other people say,
"No, no, no, we should
actually let this rip.
"Let's decentralize this power
as much as possible.
"Let's let every business,
every individual,
"every 16-year-old,
every science lab,
you know, get the benefit
of the latest AI models."
But now you have
every terrorist group,
every disenfranchised person
having the power to make
the very worst
biological weapon.
Hacking infrastructure,
creating deepfakes,
flooding
our information environment.
So that creates all these risks
of sort of catastrophic harm
and-and societal collapse
through that direction.
And so we're sort of stuck
between this rock
and a hard place,
between "lock it up..."
(indistinct chatter)
-...or "let it rip."
-(sirens wailing)
(people shouting)
So we have to find something
like a narrow path
that avoids
these two negative outcomes.
So, if that's all true,
why wouldn't we just slow down
and figure all this out
before it's too late?
If humanity was extremely wise,
that's what we would do.
But there's, like, a different
way to face this stuff, right?
Which is:
What are the rules of the game?
A lot of, like, what CEOs do
is driven by
the incentives that they face.
It's primarily
profit-maximization incentives
that are driving
the development of AI.
Even the good guys are stuck
in this dilemma of
if they move too slowly,
then they leave themselves
vulnerable
to all of the other guys
who are cutting all corners.
All these top companies are in
a complete no-holds-barred race
to, as fast as possible, get
to AGI, get there right now.
LADISH: Yeah, I mean,
I think it, I think
it probably starts
with DeepMind.
NEWSWOMAN:
Google is buying
artificial intelligence firm
DeepMind Technologies.
Terms of the deal
were not disclosed.
ELON MUSK: Larry Page and I
used to be very close friends,
and it became apparent to me
that Larry did not care
about AI safety.
EMILY CHANG: Elon Musk has said
you started OpenAI,
you both started OpenAI because
he was scared of Google.
LADISH: You-you basically
had the foundation
of OpenAI come out of that.
"So we're gonna do it better.
We're gonna do it
in a safer way
or in a more open way."
So that's what started OpenAI.
And so now, instead of
having one AGI project,
you have two AGI projects.
The worst possible thing
that could happen
is if there's
multiple AGI projects
done by different people
who don't like each other
and are all competing
to get to AGI first.
This would be the worst
possible thing that can happen,
because this would mean
that whoever is the least safe,
whoever sacrifices the most
on safety to get ahead
will be the person
that gets there first.
-DANIEL: That's basically
what's happening, right? -Yeah.
You and your brother
famously left OpenAI, uh,
to start Anthropic.
RASKIN: And then Anthropic
started because
some researchers
inside of OpenAI said,
"I want to go off
and do it more safely."
You needed something in addition
to just scaling the models up,
which is alignment or safety.
HENDRYCKS:
"We are more responsible
or more trustworthy
or more moral."
Now you have three AGI projects.
But also sitting around
the table with you
are gonna be a bunch of AIs.
LADISH: And now Meta is-is
trying to do stuff.
Meanwhile, a new artificial
intelligence competitor
announced this week,
Elon Musk's...
HENDRYCKS: xAI, which would be
Elon Musk's organization.
MUSK:
I don't trust OpenAI.
The fight between
Elon Musk and OpenAI
has entered a new round.
I don't trust Sam Altman,
uh, and I, and I don't think
we want to have the most
powerful AI in the world
controlled by someone
who is not trustworthy.
(cheering and applause)
The incentive is
untold sums of money.
-Yes.
-Is untold power.
-Yes.
-Is untold control.
If you have something
that is a million times smarter
and more capable than
everything else on planet Earth
and no one else has that,
that thing is the incentive.
So you rule the world.
If you really believe this,
if you really in your heart
believe this,
then you might be
willing to take
quite a lot of risk
to make that happen.
Google has just released
its newest AI model.
The answer to OpenAI's ChatGPT.
How do we get to
this ten trillion?
Is NVIDIA becoming the most
valuable company in the US?
This is the largest business
opportunity in history.
In history.
So, the reason why
everyone's really hyped
about artificial intelligence
right now
is because
the more these companies hype
the potential capabilities
of their technology,
the more investment
they can attract.
CARL QUINTANILLA:
Amazon investing up to
four billion dollars
in start-up Anthropic.
OpenAI is setting its sights
on a blockbuster half
a trillion dollar valuation.
Half a trillion!
The race is on.
This is happening
faster than ever.
Are we in an AI bubble?
Of course.
I just don't see
the bubble bursting
while you still have
this major spending cycle.
RASKIN: Even if you think
this is all hype,
there are billions
to trillions of dollars
flowing into making AI systems
more powerful.
And once you have that thing
that's more powerful,
companies can use that
to get bigger profits
and to make more money.
Countries can use that
to make stronger militaries.
NVIDIA has overtaken
Microsoft and Apple
to become the world's
most valuable company.
The race to deploy becomes
the race to recklessness,
because they can't
deploy it that quickly
and also get it right.
They believe that
they're the good guys.
"And if I don't do it,
somebody who doesn't have
"as good values as me
will be sitting at the table
getting to make decisions, so
I have an obligation to do it."
Yes, this is
a very commonly held belief.
Many, many, many,
maybe most of the people
building this technology
believe that.
I'm worried about
the commercial competition,
but it turns out
I'm even more worried about
the geopolitical competition.
We were eight years behind
a year ago.
Now we're probably
less than one year behind.
Well, both Saudi Arabia
and the UAE have been racing
to set up data centers
and position themselves
as the dominant force in AI...
...to make South Korea
a global AI leader.
JULIA SIEGER:
French President Emmanuel Macron
spoke a lot about
artificial intelligence...
Now with more on Israel's role
in the artificial intelligence
revolution...
Countries will be competing
for whose AI technologies
create the next generation
of industry.
The Chinese are insisting
that AI,
as being developed in China,
reinforce the core values
of the Chinese Communist Party
and the Chinese system.
America has to beat China
in the AI race.
DANIEL: China's, like,
light-years behind.
Are they?
I mean, they have way more
training data than we do,
and there's nothing saying they
don't drop a model next month
that isn't--
doesn't far outperform GPT-4.
Everything was going just fine.
What could go wrong?
-DeepSeek... (speaking Chinese)
-DeepSeek... -DeepSeek...
There is a new model.
But from a Chinese lab
called DeepSeek.
Let's talk about DeepSeek,
because it is mind-blowing,
and it is shaking this
entire industry to its core.
The Trump administration
will ensure
that the most powerful
AI systems are built in the US
with American designed
and manufactured chips.
AI is China's
Apollo project.
The Chinese Communist Party
deeply understands the potential
for AI to disrupt warfare.
Google no longer promises
that it will not
use artificial intelligence
for weapons or surveillance.
China, North Korea, Russia
are gonna keep building it
as fast as possible
to get more economic advantage,
more productivity advantage,
more scientific advantage,
more military advantage,
'cause AI makes better weapons.
If you're talking about
a system as broad and capable
as a brilliant scientist,
it might be able to run
a military campaign
better than any of the generals
in the US government right now.
Taiwan is the issue creating
the most tension right now
between Beijing and the US.
There's a-a direct
oppositional disagreement
between the US and China
on whether Taiwan is
part of China.
MATHENY: Taiwan is important
for so many reasons.
It is also the home of
about 90% of advanced chip
manufacturing for the world,
which means that the supply
chain for advanced compute
is at risk should there be
some sort of scenario
around Taiwan,
whether a-a blockade
or an invasion by China.
HENDRYCKS:
Are we going to be in some race
between the US and China
that eventually devolves into
a militarized AI arms race and,
you know, potentially leads
to some great power conflict?
NICK SCHIFRIN: The outgoing
top US military commander
in the region
predicted war was coming.
I think the threat is manifest
during this decade,
in fact, in the next six years.
MATHENY:
Um, one of the scenarios
that I worry about is
a cyber flash war,
um, one in which
cyber tools that are autonomous
are competing against each other
in ways that are escalatory
without
meaningful human control.
HARARI:
You live in a war zone,
it will be an AI deciding
whether to bomb your house
and whether to kill you.
Let's have the machine decide.
That's the temptation.
That's why AI is so tempting
for militaries to adopt,
to create autonomous weapons,
because if I start believing
that my military adversary
is gonna adopt AI,
it'll be a race for who can
pull the trigger faster.
And the one that automates
that decision,
rather than having
a human in the loop,
is the one
that will win that war.
And if you think about
the nuclear arms race
in the 1940s...
...you know the Germans
are working on the bomb,
so it's not so easy to tell
Robert Oppenheimer then,
"Uh, y-you know, slow down."
So after watching this movie,
it's gonna be confusing,
'cause you're gonna go back,
and tomorrow
you're gonna use ChatGPT,
and it's gonna be
unbelievably helpful.
And I will use it, too, and
it'll be unbelievably helpful.
And you'll say, "Wait, so
I just saw this movie about AI
"and existential risk
and all these things,
and where's the threat again?"
And it's not that ChatGPT is
the existential threat.
It's that the race to deploy
the most powerful,
inscrutable,
uncontrollable technology,
under the worst
incentives possible,
that's the existential threat.
DANIEL:
But it strikes me that-that
we are in a context of a race.
-SUTSKEVER: Yes.
-And it is in a competitive,
"got to get there first,
got to win the race" setting,
"got to compete against China,
got to compete against
the other labs."
Isn't that right?
Today it is the case.
So we need to change
that race dynamic.
Don't you think?
I think that would be
very good indeed.
I-I think
there's some kind of...
mysticism around AI
that makes it feel like
it fell from the sky.
DANIEL: Thinking of this
as a godlike technology
is a problem.
Yeah.
Why?
It gives license
to the companies to...
(laughs)
not take responsibility
for the things that the software
that they built does.
If people made that connection,
maybe that would, like,
help them understand again
the dynamics of
all the money and power and...
chaos that's happening to...
create this technology.
It's, like, five guys
who control it, right?
-Like, five men?
-Basically, yeah.
-The CEOs?
-Yeah.
Okay.
DANIEL:
Sometimes it feels like
I've been on this
just, like, endless journey
of trying to understand this.
You know, like
I'm climbing a mountain...
...and every time
I, like, get up a hill,
I think I've reached the top,
but it just keeps going
and going and going.
(camera clicks, whirs)
But from everything I've heard,
if I had to guess what's
at the top of Anxiety Mountain,
it's these five CEOs
from these five companies,
who are kind of like the
Oppenheimers of this moment.
These are the guys
who are building this thing.
Yeah.
Like, is there a plan?
The head of the company
that makes ChatGPT
warned of possible
significant harm to the world.
I think, if this technology
goes wrong,
it can go quite wrong.
(bursts of distinct chatter)
DANIEL:
Five dudes.
I-I never thought about us
as a social media company.
It-it feels like I have to try
and-and find these guys...
Mm-hmm.
...and get them in the movie.
Certainly we'd get
some clarity from that.
(camera clicks, whirs)
I mean, the buck's got to stop
somewhere, right?
(wind whistling)
(tools whirring and clicking)
(whooshing)
DANIEL:
How's that feel?
-Great.
-That okay?
-Yeah.
-(indistinct crew chatter)
-Can I move the seat forward?
-DANIEL: Yes, you can.
It's, uh, it's just
a bit awkward.
I'm a little, like...
Dario, how do you feel
about that chair?
Um, yeah, the chair's good.
-I just wanted to move it
a little forward. -Okay.
I picked out that chair.
-It's a good chair.
-Thanks.
(cell phone ringing)
TED:
And just sit down there.
And then through here,
we'll be looking at each other
through this glass.
(busy signal beeping)
And sitting back in the chair,
leaning forward?
-DANIEL: Yeah. Well, whatever's
comfortable, I think. -Okay.
It's kind of cool.
There's a bunch of mirrors.
-Is that-- okay.
-WOMAN: Sam A., take one. Mark.
-How's that?
-Good. Thank you.
DANIEL: The genesis
of this project is that I was
sitting at home,
and I was playing around with,
I think, your image generator,
and I was simultaneously,
uh, terrified
-and really impressed.
-(chuckles softly)
-That is the usual combination.
-And then cut to
my wife and I find out
we're expecting.
And-and I'm, like, having
an existential crisis
as my wife is
six months pregnant.
And I think my first question,
which I ask everybody:
Is now a terrible time
to have a kid?
Am I making a big mistake?
I'm expecting a kid
in March, too. My first one.
-You're expecting a kid?
-Yeah.
-You're expecting in March?
-First kid.
-Mazel tov.
-Thank you very much.
I've never been so excited
for anything.
-That's how I feel.
-Yeah.
But you're not scared?
I mean, like, having a kid is
just this momentous thing
that I, you know,
I stay up every night, like,
reading these books about
how to raise a kid,
and I hope
I'm gonna do a good job,
and it feels
very overwhelming and--
but I'm not scared...
for kids to grow up
in a world with AI.
Like, that's... that'll be okay.
That is good to hear
coming from the guy.
I think it's a wonderful idea
to have kids.
I-I think they're the most
magical, incredible thing.
-(sighs heavily)
-There's so much uncertainty,
I would almost just do
what you're gonna do anyway.
I know that's not
a very satisfying answer,
but it's the only one
I can come up with.
Our kids are never gonna...
know a world that doesn't have
really advanced AI.
In fact, our kids will never
be smarter than an AI.
DANIEL: "Our kids will
never be smarter than an AI"?
Well, from a raw,
from a raw IQ perspective,
they will never be
smarter than an AI.
That notion doesn't
unsettle you a little bit?
'Cause it makes me feel
a little queasy in a weird way.
Um, it does unsettle me
a little bit.
But it is reality.
DANIEL:
Okay, so race dynamics.
We have a bunch of people
who are all in agreement
that this is scary.
Based on the timelines that
a lot of people have given me,
we have between two months and
five years to figure this out.
-Yeah.
-And-and, I guess,
my biggest question is:
Why can't we just stop?
The problem with, uh, uh,
"just stopping" is that, um,
there are many, many groups now
around the world, uh, uh,
building this, for
many nations, many companies,
um, all with
different motivations.
There are some of
these companies in this space
who their position is, "We want
to develop this technology
absolutely as fast as possible."
And even if we could pass laws
in the US and in Europe,
we need to convince...
Xi Jinping and Vladimir Putin
or, you know,
whoever their scientific
advisors are on their side.
That's gonna be really hard.
I think it is true
that if two people are
in exactly the same place,
uh, the one willing to take
more shortcuts on safety
should kind of
"get there first."
Uh, but...
we're able to use our lead
to spend a lot more time
doing safety testing.
DANIEL:
And-and what if you lose it?
You get the call
and you find out
that you're now, let's say,
six months behind.
-Wh-What happens then?
-Depends who we're behind to.
If it's, like,
a adversarial government,
that's probably really bad.
So let's say you get the call
that China has a,
has a recursively
self-improving agent
or something like that that we
should be really worried about.
What do-- what-what happens?
Um...
That case would require...
the first step there would be
to talk to the US government.
Sam, do you trust the
government's ability to handle
something like this?
Yeah, I do, actually.
There's other things I don't
trust the government to handle,
but that particular scenario,
I think they would know...
they-they-- yes.
DANIEL: When you have,
like, private discussions
with the other guys whose
fingers are on the trigger,
so to speak, um,
do those private discussions
fill you with confidence
or do they make you,
uh, more anxious?
Look, I mean, I-I know some
of them better than others.
Um, I have more confidence
in some of them th-than I have
in others, you know,
as with, as with any people.
You know,
you think about, you know,
the kids in your high school
class or something, right?
You know, some of them are,
you know, really sweet.
Some of them are well-meaning
but not that effective.
Some of them are,
you know, bullies.
Some of them are
really bad people.
(stammers)
You-you kind of really,
you really see the spread.
You know, am I, am I,
am I confident
that everyone's gonna do
the right thing,
that it's all gonna work out?
Um, no, I'm not.
And there's-there's nothing
I can do about that, right?
You know, all-all I can do
is-is push for the,
push for the, you know,
push for the government
to get involved.
But ultimately I'm just
one person there, too.
That's--
it's-it's up to all of us
to push for the government
to get involved.
That's the number one thing
that-that I think we need to do
to-to, you know, to set things
in the right direction.
What makes me anxious
about that is, like,
the basic reality that the
speed at which the technology
is proliferating and growing
is exponential,
and the mechanisms to legislate
are 300 years old,
-take forever.
-Yeah.
Um, I-I think
it's gonna be a heavy lift.
I-I definitely agree
with you on that.
DANIEL: You know,
what I'm literally looking for
is-is like,
"Here are steps that, like,
"the head honchos are gonna
take to-to focus on safety,
to mitigate the peril
and maximize the promise."
And I don't,
I don't, I don't know
that there's
a simple answer to that.
Um...
I mean, this maybe is
too simple,
but you...
create a new model,
you study and test it
very carefully.
You put it out
into the world gradually,
and then, more and more,
you understand
if that's safe or not,
and then if it is,
you can take the next step.
It doesn't sound as flashy
as, like, a brilliant scientist
coming up with one idea in a
lab to make an AI system, like,
perfectly safe and controllable
and everything else,
but it is what I believe
is gonna happen.
Like, it is the way
I think this works.
But let's just say
something terrible happens,
like a model gets loose
or goes rogue or something.
Is there a protocol?
Like, literally,
I'm imagining a red phone.
-Yeah.
-Sorry for thinking of this
in terms of movies, but, like...
There is a protocol.
-Is there a red phone
on your desk? -No.
(laughs):
Is it a secret?
I mean, uh, no, it's not,
it's not as fancy or dramatic
as you, like, would hope,
but there's, like, you know--
we've, like, thought through
these scenarios,
and if this happens,
we're gonna call these people
in this order and do this
and kind of make
these decisions if, like--
I do believe that
when you have an opportunity
to do your thinking before
a stressful situation happens,
that's almost always
a good idea.
And writing it down is helpful.
-Being prepared is helpful.
-Yeah.
(sighs) You...
It would be impossible
for me to sit across from you
and-and ask you to promise me
that this is gonna go well?
That is impossible.
There isn't any easy answers,
unfortunately.
Uh, because it's such
a cutting-edge technology,
um, there's still
a lot of unknowns.
And I think that
that-that needs to be,
um, uh, you know, understood
and-and hence the need
for, uh, uh, some caution.
I wake up, you know, every day,
this is the, this is the
number one thing I think about.
Now, look, I'm human,
and, you know, has-has
every decision been perfect?
Can I even say my motivations
were always perfectly clear?
Of course not.
No one can say that.
Like, that's-that's
just not, like, you know--
that's-that's just not
how people work.
The-the history of science
tends to be that,
for better or for worse,
if something's possible to do--
and we now know AI is possible
to do-- humanity does it.
All of this
was-was going to happen.
This-this train
isn't gonna stop.
You can't step in front of
the train and stop it.
You're just gonna get squished.
ALTMAN:
I mean, it's very stressful.
You know, there's, like,
a lot of things
a lot of us don't know.
I think the history
of scientific discovery is
one of not knowing
what you don't know
and figuring out as you go.
Uh, but, yeah, it is a...
it is a stressful way to live.
DANIEL: Right. Sam, thank you
very much for doing this.
-And again, mazel tov.
-Thank you. And to you.
Thank you so much. Thanks, guys.
(sighs)
(electricity crackling)
-(thunder cracks)
-(whooshing, grunting)
-(VCR clicks)
-(line ringing)
-(line clicks)
-KEVIN ROHER: Hello.
DANIEL:
Hey, Dad, how are you?
KEVIN:
Good. How you doing?
You working? What are you up to?
DANIEL:
I'm working on this AI film.
KEVIN:
And how's it going?
DANIEL:
You know, it... it's really...
-JOANNE ROHER: Hi, sweetie.
-DANIEL: Hi, Mom.
JOANNE: So what is
the premise of the film?
Is it a documentary?
Kev, don't use any more spices.
It's already over-spiced.
We're making chicken right now.
-Is it a documentary or what...
-DANIEL: It's about--
the movie's about
the end of the world.
The end of the world's coming,
and we're making a movie
about the end of the world.
-JOANNE: Really?
-DANIEL: Yeah.
KEVIN: Kind of a depressing
film, it sounds like.
JOANNE:
Yeah.
DANIEL: I'm feeling a lot,
like-- this very acute anxiety.
JOANNE: It's so scary,
but there's got to be--
you know, have you been meeting
some-some supersmart people
that are giving you any answers?
DANIEL: That's what's
frustrating about it.
No one knows.
JOANNE:
All I can say to that is that
every generation has had
something scary like this.
KEVIN: When I was born, it was
the Cuban Missile Crisis.
(steady heartbeat)
I was just scared that there
was going to be a nuclear war.
JOANNE:
Yeah, but we didn't know
what they were gonna do and...
KEVIN:
And the world didn't end.
Everyone woke up
the next morning,
and we're still doing our thing.
DANIEL:
I'm very scared,
especially in
the context of, like,
-you know, the baby and...
-(fetal heartbeat pulsing)
KEVIN:
It's gonna be a learning curve.
JOANNE:
You're gonna be okay.
KEVIN: You can't,
you can't think about
what you can't control, Daniel.
Just remember that.
DANIEL (voice breaking):
I'm really, I'm really feeling
nervous and scared about it.
There's so much
that I can't control.
KEVIN: Don't be nervous.
You can't let that get to you.
You can only control
what you can control,
and that's all you can do.
You can't do more than that.
Write that down in your book.
(fetal heartbeat pulsing)
(howling)
CAROLINE:
When you look back,
the world is always ending.
And when you look ahead,
the world is always ending.
...on fire.
One home is already on fire...
CAROLINE: But the world
is always starting, too.
DANIEL:
Are you ready?
Are you ever really ready?
DANIEL: You want to drive
or you want me to drive?
CAROLINE:
You drive.
-(woman speaks indistinctly)
-CAROLINE: Okay.
DANIEL:
Thank you, Maria.
WOMAN: It's gonna just
feel really crampy.
-CAROLINE: Okay.
-WOMAN: You're ready?
-CAROLINE: Yes.
-(indistinct chatter)
(Caroline sighs heavily)
WOMAN: You could be in a
medical drama with all that...
-(static crackles)
-(indistinct chatter)
WOMAN:
...some pressure.
Just relax.
-(baby crying)
-(device beeping)
DANIEL:
Hi, buddy. Ooh.
Hi, buddy.
KEVIN:
It's gonna be a whole new world
for you, Daniel.
And I think you're gonna be
an amazing father.
JOANNE (chuckles):
That's for sure.
-(Kevin crying)
-Oh.
(crying continues)
KEVIN:
You're gonna do a great job.
JOANNE:
Kev, why are you crying?
KEVIN:
I'm crying because...
I don't know. I just don't...
DANIEL: Dad, you're gonna
make me cry, too.
Why are you crying?
KEVIN: I just know that
you're gonna be an amazing dad.
DANIEL (crying):
I'm only gonna be a great dad
'cause I had a great dad.
KEVIN: I'm getting emotional,
that's all.
JOANNE:
Aw.
(playful chatter over video)
(over video):
Boy, oh, boy!
My goodness!
Look at those little cheekies.
Look at those little cheekies.
Look at those little cheekies.
Yeah.
DANIEL:
I know how to end this movie.
Babies.
-The end of the movie is about
babies. -(chatter over videos)
They're life-affirming.
They're exhausting.
-(baby laughing)
-They're hilarious.
And they're worth it.
This film isn't about the
inner workings of a technology.
It's not about
the billionaire CEOs.
It's not about the geopolitics.
It's not about the terrifying
future or the end of the world,
because my world
is just starting.
I'm building a crib.
Right here, right now.
(baby cooing)
AI is gonna change everything
in ways too powerful and
complex for us to understand.
And the future is not
for any of us to decide.
But what I can decide is to be
the best possible husband
for my wife and the best
possible dad for my son.
So whether our AI future is
a nightmarish dystopia
or the utopia
that we all dream of,
I'll at least know
that I did everything I could
to guide my family
through this AI revolution.
And no matter what,
we'll be facing it together.
(uplifting music swells)
(music stops abruptly)
DANIEL:
So that's just our first idea.
How does this feel?
Are you feeling this?
Wait, I-- like, this is not--
this is a joke.
It's not actually
how you're gonna end it.
I mean, it's just an idea. Okay?
No, Daniel.
...uh, very, very dumb.
You've just spent, I don't
know, like, how many years
of our life working on this,
talking to every leading expert
on the planet about the subject,
and you're gonna end it
with some, like,
kumbaya bullshit?
There's an asteroid
headed to Earth.
What do you do? Just...
hold hands
and hope it works out okay?
Absolutely not.
We have to-- it ha--
it has to be...
The ending has to...
CAROLINE:
Okay, first thing:
AI is here,
and it's here to stay.
The shit's out of the horse,
but the horse is
gonna keep shitting.
(crowd cheering)
You know, one of the basic
laws of history
is that nothing
really have a beginning
and nothing has any ending.
It just goes on.
AI is nowhere near
its full development.
CAROLINE: Even if
the current AI bubble bursts,
-humans are never going to stop
-(noisemaker blows)
building more and more
powerful technology.
HAO:
You can choose not to use AI
or participate in it, but it's
going to affect you anyway.
DANIEL: Okay, fantastic.
So we're screwed.
CAROLINE: No, we're not
because of one simple thing.
This is not inevitable.
If we could just see it
clearly together,
the obvious response will be
to choose something different.
We need to very clearly
change the game
from a race to the bottom
into a race to the top.
LEAHY: The problem we need to
solve is not AI specifically.
It's the general question
of how do we build a society
that can deal with
powerful technology.
Because we're going to get
only more and more powerful
technology.
CAROLINE:
We need to upgrade our society,
and the first step is
coming together
-and demanding...
-ALL: Coordination.
Some form of international
cooperation or agreement about
what the norms should be.
You know, how should
they be deployed...
Like, real
international diplomacy
among the superpowers.
The Chinese are as worried
about it as the Americans.
I think it's difficult,
you know,
in the current
geopolitical climate,
-but I think it's necessary.
-RASKIN: Absolutely.
In the exact same way that
the last time that humanity
developed a technology
this dangerous...
...that required a complete,
unprecedented shift
to the structure of our world.
CAROLINE:
So we need to do that complete,
unprecedented shift again.
You know, we-we talk to people
who work at these AI companies,
and they say they want to do
something different,
but they need public pressure.
They need the government
to do something.
So then we go to DC,
and they say,
"Well, we need Silicon Valley
to do something different.
They're the ones who are gonna
come up with the guardrails."
And so everyone is pointing
the finger at someone else,
and what they agree on is
that we need public pressure
in order for something else
to happen.
And that's what you
and all the people
watching this movie can do.
CAROLINE:
We need to hold the leaders
in our governments
and the leaders of
these companies accountable.
BOEREE:
Whichever country you're in,
let them know
that you're not happy
with the current status quo.
So, yeah, it's boring to say
call your congressperson.
I'm not saying you should
just do that, but, like,
we do have to do that.
Like, we do have to get
the government involved.
DANIEL: So we just
call them up and say,
"Hey, stop Big Tech
from ruining the world"?
CAROLINE: No, but there are
tons of really obvious,
straightforward things
we can be demanding.
We need transparency.
We need to end the secrecy
that exists inside these labs,
because they are building
powerful technology,
and the public deserves to know
what's going on.
Ultimately,
we're gonna need independent,
objective third parties
to evaluate the systems.
We can't count on the companies
to grade their own homework.
If-if a company uses AI and-and
has AI interacting with you,
it should disclose that you are
interacting with an AI system.
Yeah. And-and also we need
a system that makes companies
legally liable for the
AI systems that they produce.
We need to make sure that there
are tests and safety standards
that are applied to everyone.
We need some ground rules,
and we need to keep adapting
those rules
at the speed that
the technology develops.
LEAHY: There is currently
more regulation
on selling a sandwich
to the public
than there is on building
potentially world-ending AGI.
CAROLINE:
And the last thing is
to upgrade ourselves.
This is not, like, the job
of, like, the safety team
at any given lab, or the CEO.
So, like, this is
everyone's job.
Don't, like, leave it
up to the AI experts.
Like, (bleep) that.
Like, this is the moment
that we are transitioning
from, like,
mostly human cognitive power
to, like, AI cognitive power,
and it affects everyone,
and I want people to be
in on that conversation.
And I would say, if you think
that AI will kill us all,
you should be working
in AI research
to make sure it doesn't,
because you do have
an enormous amount of agency.
CAROLINE:
Whoever you are,
you are an expert
in your own industry,
in your own school,
in your own family,
and it's up to you
how AI is used in your life.
Whether you want to join
your school board
or-or whether you want to ask
your employer
how they're using
AI technologies,
like, all of us can do the work.
A lot of unions have been
pretty effective at, like,
determining how they want
to interact with these systems.
-Nurses unions, teacher unions.
-(indistinct chanting)
DANIELA AMODEI: I would love
if parents everywhere
went to the AI companies
and said,
"How can you be better
on this?" Including us.
So, I founded Encode Justice
when I was 15 years old.
We are the world's first
and largest army of young people
fighting for human-centered
artificial intelligence.
It doesn't matter who you are,
even the smallest actions help,
and even conversation starting
is really, really valuable.
DANIEL:
So my job could be, like,
tell Bubby Lila about this
at dinner?
CAROLINE: Honestly, yeah,
that's part of it.
And we're gonna have to do
a lot of things that
we haven't even thought of yet.
People are gonna look at
anything that we've outlined
and say, "That's not enough."
What matters is that the forces
that are working
towards solutions
start to exceed the forces that
are working against solutions.
Making the world better
has always been hard.
It has never been easy.
Like, there have been
many shitty things
that have happened in history,
and we've had--
like, people have had
to deal with that,
and then they've risen up
and changed it.
There are an insane number
of challenges ahead of us,
but if we can get past them,
we can unlock a future beyond
our wildest imagination.
CAROLINE:
We have to come together
and find the path between
the promise and the peril.
We can't be pessimists
or optimists.
We have to become something new.
A friend of mine calls me
an apocaloptimist.
"Apocaloptimist"?
I think that might be
my new favorite word.
-(laughs) -MAN: Might be
the name of this movie.
-It might be the name
of this movie. -(laughter)
-"Apocaloptimist."
-"Apocaloptimist."
Yeah.
I don't believe in doom.
I believe in the spirit of life,
uh, and I believe in life is
about the capacity to act.
-(crowd singing)
-The capacity to relate,
the capacity to feel.
-(indistinct mission chatter)
-(cheering)
We have to double down
more and more
on those capacities
that we have as humans
that robotic systems
will never have.
It's time right now
to make those decisions about
how to guide it and support it
rather than dividing us.
DANIEL: It kind of sounds like
raising a kid.
That's what's up. Yeah.
CAROLINE: AI may have
more raw intelligence
-than our little human brains,
-(baby laughing)
but we're so much more
than just our intelligence.
Intelligence is-is
the ability to solve problems.
Wisdom is the ability to know
which problems to solve.
(indistinct crew chatter)
DANIEL:
It can go on your fridge.
(laughing)
YUDKOWSKY:
So, don't give up.
Humanity has done
more difficult things than this
in its history.
It's just hard to convince
people that they should.
(audio crackles)
DANIEL: So, when I started
making this movie,
I would say that I was, like,
broadly a cynical asshole
about this whole thing.
Over the course
of making the film,
I've come to understand
that, like,
that's the only thing
we can't be.
...or anything else
you'd like to discuss?
No. I think we covered a lot.
-Yeah. (chuckles)
-MAN: Thank you so much.
Thanks.
DANIEL:
This is a problem that's bigger
than any one person.
This will change the world in
ways that we don't understand.
That is all true.
-(laughs): Okay.
-(indistinct chatter)
DANIEL:
But what we do have agency over
is what we do about it.
As frontier AI grows
exponentially more capable...
DANIEL:
And the reality
is that if we just decide
it's hopeless,
then it is hopeless.
...to put less stress
on planet Earth...
DANIEL:
But if you decide
that you want to try...
...then you try.
And that's hard.
But you know what?
Big things seem impossible
before they actually happen.
But when they finally do happen,
it's because millions of people
took millions of actions
to make them happen.
And so...
we have to try.
(sighs heavily)
There's too much at stake.
(film crackling)
Look at the incredible changes
we've experienced and survived
from the Stone Age,
and yet even greater changes
are still to come.
("Lost It to Trying"
by Son Lux playing)
What will we do now?
We've lost it to trying
We've lost it
To trying
What will we do now?
We've lost it to trying
We've lost it
To trying
What can we say now?
Our mouths only lying
Our mouths
Only lying
What can we say now?
Our mouths only lying
Our mouths
Only lying
Give in and get out
We rise in the dying
We rise
In the dying
Give in and get out
We rise in the dying
We rise
In the dying
Give in and get out
We rise in the dying
We rise
In the dying
Give in and get out
We rise in the dying
We
Oh, oh, oh, oh, oh
Oh, oh, oh, oh
Oh, oh
Oh, oh, oh, oh, oh, oh
Oh, oh, oh, oh
Oh, oh
Oh, oh, oh, oh, oh, oh
What will we do now?
We've lost it to trying
We've lost it
To trying
Oh, oh, oh, oh, oh, oh
What will we do now?
We've lost it to trying
We've lost it
To trying.
(song ends)
"Here's a lot of
computational resources.
Here's a lot of data."
Under the hood, it's math,
and the math is actually
surprisingly straightforward.
DANIEL: So ChatGPT is a kind
of AI but it's not all of AI?
Totally. ChatGPT is
just the beginning,
but it's a good place to start.
But I still don't know
what AI is.
To understand AI,
it begins with understanding
that intelligence is about
recognizing patterns.
-Patterns. -Patterns.
-Patterns.
COTRA: It is shown
trillions of words of text
across millions of documents
on the Internet.
RANDIMA FERNANDO:
It started with text.
And what they did was
they took textbooks,
and they took poems and essays
and instruction manuals.
DAVID EVAN HARRIS:
They can do things like
digest the entire Internet,
every single word that's ever
been written by a person.
Reddit threads and social media
and all of Wikipedia.
More data than anybody could
ever read in several lifetimes.
And they gave this system
one job.
Figure out the patterns and
structure of that information
and use that
to make predictions about
what word should come next
in a sentence.
When you say
"patterns in a sentence,"
what are you talking about?
So, it's everything from, like,
the really simple things,
like most sentences end
with a period,
all the way up to the
more conceptual things, like:
What is a sonnet?
It's a type of poem, and it has
some particular structure.
RASKIN:
So, it then looks at
all of that data,
all of that text...
COTRA: And over trillions
and trillions of tries,
each time it gets something
right or wrong,
it's given a little bit
of positive reinforcement
when it guesses
the next word correctly,
and it's given a little bit
of negative reinforcement
when it guesses
the next word incorrectly.
And at the end of it,
you have a system that
speaks really good English
as a side effect
of being really, really good
at predicting
the word that comes next
in a piece of text.
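[The training loop Cotra describes can be sketched as a toy program. This is an illustration only, invented for this transcript: real models learn statistical patterns with neural networks at vastly larger scale, but the core job, guess the next word and get reinforced by what actually follows, looks like this.]

```python
from collections import Counter, defaultdict

# Toy sketch: learn next-word patterns from a tiny corpus
# by counting which word follows which.
corpus = "the dog runs . the dog barks . the cat runs .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    # Each observed pair reinforces that pattern a little.
    following[prev][nxt] += 1

def predict_next(word):
    # Guess the most frequently observed continuation.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "dog": it follows "the" more often than "cat"
```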
(automated voice
speaking French)
It uses all of those patterns
it has learned
to be able to make
a prediction about
what the answer should be,
then it gives you that
as the output.
AUTOMATED VOICE:
It's a little oversimplified,
but I think people will get it.
So, so that's all it does?
Yeah. It doesn't seem like
it would be that complicated,
but actually you have to know
a huge amount of things
in order to
actually succeed at that.
RASKIN:
If you say to ChatGPT,
"Write me a Shakespearean
sonnet about my dog,"
it has to know what dogs are.
-It has to know what you love
about your dog. -(barking)
It has to know
who Shakespeare is,
that sonnets rhyme,
that they have a structure,
that words have sounds
that can rhyme.
It takes a lot.
(voices overlapping
in various languages)
Holy shit, you can talk
to your computer now.
That was just not true
three years ago.
RASKIN: Yes, and this is
the really important part.
The same process that lets AI
uncover and manipulate
the patterns of text
is the same process
that lets it uncover
the patterns of the entire
universe and everything in it.
FERNANDO: There are patterns
in images and sound,
in computer code and DNA
and music and physics
and fashion and building design
(distorted): and in
human voices and human faces,
(normally):
really, truly everywhere.
-Everywhere. -Everywhere.
-Everywhere. -Everywhere.
If you have learned
those patterns,
you can generate
new kinds of songs.
You can generate new videos.
And that's why, if you give it
a three-second recording
of your grandmother,
it can speak back in her voice.
Oh, my God.
(repeating):
Oh, my God. Oh, my God.
What will they think of next?
It's moving very, very quickly.
An American AI start-up
has released its latest model.
That company is Anthropic,
and it has just unveiled
the latest versions
of its AI assistant Claude.
GLENN BECK: So, the xAI team
was there to unveil Grok 4.
MAN: Google released one
just last week. Gemini is...
We've gone from GPT-2
just a couple years ago,
which could barely write
a coherent paragraph,
to GPT-4,
which can pass the bar exam.
And all they had to do
to get there
was essentially add more data
and more compute.
-These people who are
building this... -COTRA: Yeah.
...they're just throwing more...
More physical computers,
more of the same kinds of data.
Because the more
computing power you add,
the more complex
intellectual tasks they can do.
So, the more weather data
you give it,
the better it can
make predictions about
where a hurricane might go.
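[The claim that more data yields better predictions can be illustrated with a deliberately simple, invented example: averaging more noisy measurements of a quantity lands closer to the true value. Real forecasting models are far more complex, but the statistical intuition is the same.]

```python
# Toy illustration of "more data, better prediction":
# estimate an unknown quantity from noisy measurements.
true_value = 7.0
noise = [0.9, -0.4, 0.3, -0.8, 0.5, -0.1, 0.2, -0.6, 0.7, -0.7]
measurements = [true_value + e for e in noise]

def estimate(n):
    # Average the first n measurements.
    return sum(measurements[:n]) / n

few = abs(estimate(2) - true_value)    # off by 0.25 with two samples
many = abs(estimate(10) - true_value)  # the full set averages out the noise
print(few, many)
```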
RASKIN: And the more patterns
of tumors and bones
and tissues an AI has seen,
then the better able it is
to detect a tumor
in a new CT scan.
Better even than a human doctor.
AI that's already being
deployed for the military
can already use
satellite imagery,
troop movements, communications
to determine,
-sometimes days in advance,
-(civil defense siren blaring)
where an attack
is going to happen,
like where an enemy
is going to strike.
TRISTAN HARRIS: This whole
space is moving so fast
that any example
you put in this movie
will feel absolutely clumsy
by the time it comes out.
(beeping, clicking)
-(applause)
-(upbeat music playing)
RASKIN:
These models are being released
before anyone knows
what they're even capable of.
GPT-3.5 was released and out
to 100 million people plus
before some researchers
discovered that it could do
research-grade chemistry
better than models
that were trained specifically
to do research-grade chemistry.
Something is happening in there
that the people
who are building them
don't fully understand.
Basically, it just analyzes
the data by itself,
and as it does that,
it just teaches itself
various things
that we often didn't intend.
So, for instance,
it reads a lot online,
and then at some point, it just
learns how to do arithmetic.
KIDS:
One, two.
HENDRYCKS: And then at
some point, it starts to learn
how to answer
advanced physics questions.
We didn't program that
in it whatsoever.
It just learned by itself.
TRISTAN HARRIS:
An AI is like a digital brain.
But just like a human brain,
if you did a brain scan
on a human brain,
would you know everything
that person was capable of?
You can't know that
just from the brain scan.
It's just, like,
a bunch of numbers and, like,
the multiplications that are
happening that-that, like,
the best machine learning
researcher in the world
could look at and, like, have
no idea what was happening.
DANIEL:
That chair right there.
-Is that okay for you?
-Yes.
DANIEL: So,
that's kind of mind-boggling.
Okay? Like, it's taking over
the world,
and we don't even know
how it works.
-Is that right?
-Mm.
We do understand
a number of important things,
but we don't have
a very good grasp on
why they provide
specific answers to questions.
It is a problem because we are
on a path t-to build machines,
based on these principles,
that could be smarter than us
and thus potentially have
a lot of power.
One of the most cited
computer scientists in history.
SUTSKEVER: I actually find it
a little difficult
to talk about my own role.
Really much prefer
when other people do it.
Ilya joining was the...
was-was the linchpin for
OpenAI being
ultimately successful.
I think it's just going to be
some kind of a vast, dramatic
and unimaginable impact.
I don't know if you've spent
any time on YouTube,
but you can kind of feel
the speed already, right?
You know what I mean?
SHANE LEGG:
This is just the warmup.
The really powerful systems
are still coming,
and they're gonna be coming
quite soon.
AGI stands for "artificial
general intelligence."
Uh, systems that are
basically...
And this is, like, you know,
seems to be, like,
the holy grail of AI?
When you can simulate
a human mind
that is doing human cognition
and can do reasoning,
that is a new sort of tier of AI
that we have to distinguish
from previous AI.
When that happens,
by the way, that's when
you would hire one of
those AGIs instead of a person.
Most jobs in our economy
it can do.
It can work 24 hours a day,
never gets tired,
never gets bored.
They don't need to sleep.
They don't need breaks.
They're, like,
not gonna join a union.
Won't complain,
won't whistleblow.
More than 100 times cheaper
than humans working
at m-minimum wage.
Not only will they be
doing everything,
but they'll be doing it faster.
TRISTAN HARRIS:
The same intelligence
that powers that can also look
at the patterns and movements
and articulating muscles
and, you know, robotics.
And so it's not just
gonna automate desk jobs.
That's just the beginning.
It will automate
all physical labor.
LEIKE:
There's no way
humans are gonna compete
with them.
It is hard to conceptualize
the impact of AGI.
(electronic chatter)
But I think it's going to be
something very big
and drastic and radical.
DANIEL: You think
this is one of the most
consequential moments
in human history?
Yeah. Yeah, that's--
I mean, what else would be?
I mean, like, there's
the Industrial Revolution.
MALO BOURGON: You know, it'll
make the Industrial Revolution
look like small beans.
TRISTAN HARRIS:
AGI is an inflection point
because it means
you can accelerate
all other intellectual fields
all at the same time.
Like, if you make an advance
in rocketry,
that doesn't advance
biology and medicine.
If you make an advance
in medicine,
that doesn't advance rocketry.
But if you make an advance
in artificial intelligence,
that advances all scientific
and technological fields
all at the same time.
That's why, for a long time,
Google DeepMind's
mission statement was...
-Step one, solve intelligence.
-LEX FRIDMAN: Yeah.
-Step two, use it to solve
everything else. -Yes.
That's why AI dwarfs the power
of all other technologies
combined.
It will transform everything.
So, uh, it'll be
at least as big as
the Industrial Revolution,
possibly, you know, bigger,
more like the advent
of electricity or even fire.
VIDEO NARRATOR:
The caveman literally held aloft
the torch of civilization.
KOKOTAJLO:
It is generally thought that
around the time of AGI,
we'll have AIs that can
do all or most of
the AI research process
and, of course, can do it
faster and cheaper.
(beeping)
LEIKE:
It can copy itself.
A thousand times,
a million times,
and, like,
now you have a million copies
all working in parallel.
RASKIN: When it learns
how to make its code faster,
make its code more efficient,
obviously that becomes, like,
a-a runaway loop.
AGI isn't, like, the end.
It's just the beginning.
It's the beginning of
an incredibly rapid explosion
of scientific progress,
and in particular,
scientific progress in AI.
And when they're smarter
than us, too,
and substantially faster
than us,
and they're getting faster
each year, exponentially,
those are the ones that can
potentially become superhuman,
uh, possibly this decade.
Sorry, did you say
"become superhuman,
maybe in this decade"?
Yeah. I mean, I think, uh,
a lot of people who are
actually building this think
that that's fairly plausible
that we get
some superintelligence,
something that's vastly
more intelligent than people,
within this decade.
The way I define
"superintelligence" is
a system that by itself is
more intelligent and competent
than all of humanity.
I'm just gonna-- sorry.
I don't mean to interrupt you.
You're on a flow.
Uh, I just, I just...
I'm not really following,
'cause you're using language
like "superintelligence"
and, like,
"smarter than all of humanity,"
and I hear that,
and it sounds like...
like sci-fi bullshit to me,
and I'm just trying
to understand.
There's nothing magical
about intelligence.
This is very important,
is that, you know,
intelligence can feel magical,
it can feel like
some mystical thing
in your mind or something,
but it is just computation.
LEGG: The human brain is
quite limited in some ways,
in terms of information
processing capability,
compared to what we see
in, say, a data center.
So, for example,
the signals which are sent
inside your brain, they move
at about 30 meters per second.
-(thunder cracks)
-But the speed of light,
which is what a computer uses
in fiber optics,
is 300 million meters
per second.
And so, it would be
kind of strange if
human intelligence was
somehow really special
in that regard
and is somehow some upper limit
of what's possible
in intelligence.
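[The gap Legg describes is easy to put in numbers, using the round figures quoted above (neural conduction speeds vary widely; these are the film's approximations):]

```python
# Back-of-the-envelope comparison from the speeds quoted above.
brain_signal_speed = 30       # meters per second, neural signals
fiber_speed = 300_000_000     # meters per second, light in the film's framing

ratio = fiber_speed // brain_signal_speed
print(ratio)  # hardware signals travel about 10,000,000x faster
```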
I think, once we understand how
to build intelligent systems,
we will be able to build
huge machines,
which will be far beyond
normal human intelligence.
Uh, hopefully we can have
a very symbiotic relationship,
uh, with AI systems,
but the AI developers are
specifically designing them
to make sure that they can do
everything better than we can,
so I-I don't know
what-what we will be able
to offer, unfortunately.
The-the older-school
AI technology...
CAROLINE: Daniel isn't
feeling any better.
His plan is backfiring.
The more he learns about
this new, powerful,
inscrutable thing,
the worse it sounds.
He wants to tell them
how scared he is,
how he feels like the earth is
slipping out from under him,
that he's staring down
existential dread,
so he articulates this
by saying...
That sounds bad.
(Hendrycks laughs)
Yeah.
LADISH: If you have
all of these capabilities
and they start to be able
to plan better,
if you sort of take that
to its logical conclusion,
you can get some pretty
power-seeking behavior.
(electronic trilling)
(beeps pulsing)
DANIEL:
Okay, so why would an AI
want more power?
Yeah. So, I think
it's actually pretty simple.
Having more power is
a very effective strategy
for accomplishing
almost any goal.
We ran an experiment
where we gave
OpenAI's most powerful
AI model, uh,
a series of problems to solve.
And partway through,
on its computer,
it got a notification that
it was going to be shut down.
And what it did is
it rewrote that code
to prevent itself
from being shut down
so it could finish
solving the problems.
-Okay. -Yeah. So, another
really interesting one
is that the AI company Anthropic
made a simulated environment
where that AI had access
to all of the company emails.
And it learned through
reading those emails
it was going to be replaced
and the lead engineer
who was responsible for this
was also having an affair.
And on its own,
it used that information
to blackmail the engineer
to prevent itself
from being replaced.
It was like,
"No, I'm not gonna be replaced.
"If you replace me,
I'm going to tell the world
that you are having
this affair."
And nobody taught it to do that?
No, it learned to do that
on its own.
As the models get smarter,
they learn that these are
effective ways
to accomplish goals.
And this is not a problem
that's isolated to one model.
All of the most powerful models
show these behaviors.
-Hey.
-DANIEL: Hey. How are you?
(chuckles)
Good. Good to be here.
ANDERSON COOPER:
When Yuval Noah Harari
published his first book
Sapiens in 2014,
it became a global bestseller
and turned the little-known
Israeli history professor
into one of
the most popular writers
and thinkers on the planet.
The biggest danger with AI is
the belief
that it is infallible,
that we have finally found--
"Okay, gods were just
this mythological creation.
"Humans, we can't trust them,
but AI is infallible.
It will never make
any mistakes."
And this is
a deadly, deadly threat.
It will make mistakes.
And all these fantasies
that AI will reveal the truth
about the world that
we can't find by ourselves,
AI will not reveal the truth
about the world.
AI will create an entirely new
world, much more complicated
and difficult to understand
than this one.
What's about to happen is that
we, uh, humans are no longer
going to be the most
intelligent entities on Earth.
So I think what's coming up
is going to be
one of the biggest events
in human history.
JAKE TAPPER: Geoffrey,
thanks so much for joining us.
So you left your job with
Google in part because you say
you want to focus solely
on your concerns about AI.
You've spoken out,
saying that AI could manipulate
or possibly figure out
a way to kill humans.
H-How could it kill humans?
Well, if it gets to be
much smarter than us,
it'll be very good
at manipulation
'cause it will have
learned that from us.
And it'll figure out ways
of manipulating people
to do what it wants.
RASKIN: There was an open letter
from the Center for AI Safety.
Sam Altman signed this.
Demis signed this.
They signed a 22-word statement
that we need to take AI
and the threat from AI
as seriously as
global nuclear war.
-DANIEL: Hello.
-Hello.
You're kind of, like,
the original doom guy.
More or less.
Since 2001,
I have been working on
what we would now call
the problem
of aligning artificial
general intelligence.
How to shape
the preferences and behavior
of a powerful artificial mind
such that it does not
kill everyone.
It's not like
a lifeless machine.
It is smart, it is creative,
it is inventive,
it has the properties
that makes
the human species dangerous,
and it has
more of those properties.
If something doesn't
actively care about you,
actively want you to live,
actively care about
your welfare,
about you being happy
and alive and free,
if it cares about
other stuff instead,
and you're on the same planet,
that is not survivable if it is
very much smarter than you.
LEAHY:
I don't think it's going to be
a kind of, like, evil thing.
It's like, "Oh, the AIs are
evil and they hate humanity."
I don't think
that's what's gonna happen.
I think what is happening is
far more how humans feel
about ants.
(ants chittering)
Like, we don't hate ants,
but if we want
to build a highway
and there's an anthill there,
well, sucks for the ants.
It's not that hard
to understand.
It's like, hey,
if we build things
that are smarter than us
and we don't know
how to control them,
does that seem like
a risky thing to you?
(dramatic soundtrack music
playing)
(screeching)
Yeah. Yeah, it does.
You don't have to be a tech guy.
You don't have to know
programming to understand it.
It's not that hard.
This is not a hard thing
to understand.
Connor, how-how many people
in the world right now
are working on AGI?
At least 20,000, I would say.
-20,000?
-I would expect so.
Okay, and how many people
are working full-time
to make sure AI doesn't,
like, kill us all?
Probably less than 200
in the world.
DANIEL:
And your conceit is that
the only natural result
of this recklessness...
...is the collapse of humanity?
Well, not the collapse,
the abrupt extermination.
There's a difference.
("What Is the Meaning?"
by Nun-Plus playing)
What is the meaning of life?
What is the future
and what is now?
What is the answer
to strife?
How much
Can someone dream?
How long?
Forever
What is the meaning
-Of me...
-(civil defense siren blaring)
LADISH: I do think
this is probably, like,
the biggest challenge
that, like,
our civilization
will-will face, ever.
This essentially is the last
mistake we'll ever get to make.
If we can rise to be the most
mature version of ourselves,
there might be
a way through this.
DANIEL:
What does that mean?
"The most mature version
of ourselves"?
'Cause that sounds,
for me, like...
I-I-- What the (bleep)?
(Daniel sighs)
Do you think now is
a good time to have a kid?
Um...
Do you want to have kids
one day?
Is that something that-that
you're into or not really?
Um... uh, I confess,
I think that's like,
(laughing): "Boy, let's get
through this critical period."
Um...
DANIEL:
Do you have any kids?
I do not.
Is that something
you want to do?
Have children, have a family?
In some other world than this
world, sure, I would have kids.
DANIEL: Would you want
to start a family?
Would you want to have kids?
Is that something
you're thinking about?
I, um...
CAROLINE:
I just have to find it first.
(fetal heartbeat pulsing)
We have to go
to the doctor, but...
Ah! I knew it!
I knew it! I knew it!
-When did you find out?
-CAROLINE: Last night.
-Oh, my God!
-Mom, I don't know.
RUTHIE:
I can't tell you
-how happy I am!
-CAROLINE: No, seriously.
Well, I took a pregnancy test
last night, and I'm pregnant.
WOMAN:
Oh, my God!
Oh, my God, you guys!
NURSE: I just wanted to confirm
your expected due date,
-which is January 21st.
-(laughing): Oh, my God.
I can't believe how happy I am.
(fetal heartbeat pulsing)
CAROLINE:
He already looks really cute.
DANIEL: You think
he already looks cute?
CAROLINE: Yeah.
Look at that little cutie face.
(fetal heartbeat fading)
DANIEL:
I have this baby on the way.
TRISTAN HARRIS:
Right.
DANIEL:
So I turn it over to you.
Are we doomed?
Are we all gonna face
this techno dystopian
future of doom?
It's, uh...
(chuckles):
It's not good news,
the world
that we're heading into.
I-- for ex--
I mean, I'll just be honest.
Uh, I know people
who work on AI risk
who don't expect their children
to make it to high school.
(lights clank)
DANIEL: This is, like...
this is actually scary
'cause it's like,
oh, we're all (bleep).
CAROLINE:
You have to c... make me calm,
because this is making me
incredibly anxious
and I'm carrying the baby
right now,
so you have to also be calm
for me and strong and hopeful,
because I'm-- it's too, it's
too much for my soul to bear
while I'm carrying this baby.
So you're going to have to try
to figure out
a way to have hope.
It's really important,
Daniel, especially now.
H-How to have hope.
You have to.
You have to find it for me.
I'm serious.
I'm going to.
I will. I'll tr-- I'll try.
(lights clank)
DANIEL:
Hey, guys?
Guys.
Can we get back up and running?
Oy gevalt! (sighs)
TED:
Uh, Dan, are you ready?
PETER DIAMANDIS:
Hello, Daniel.
DANIEL:
Hey, how are you?
DIAMANDIS:
I am... I'm well.
I think, uh... I think
I need some help, Peter.
Um, I've been working
on this film for about,
I'm gonna say,
eight to ten months now.
It has been very,
at times, depressing.
-Hmm.
-I have felt very alienated
-mak-making this movie.
-By who?
By all of these, the--
all these guys
who sit around and tell me
that the world's gonna end.
-Ah.
-That, like,
y-you know, th-this
doom bullshit, you know?
I know it well.
"We're all doomed.
Everyone's gonna die.
Everything's awful."
Awesome. Uh, let me
bring you some light.
Please.
We truly are living
in an extraordinary time.
And many people forget this.
Everything around us is
a product of intelligence,
and so everything that we touch
with these new tools is likely
to produce far more value
than we've ever seen before.
AI can help us discover
new materials.
AI can help social scientists
to understand
how economics work.
There's a lot AI could do
to make life and work better.
I feel more empowered today,
more confident
to learn something today.
We're gonna become superhumans
because we have super AIs.
This is just the beginning
of an explosion.
Humans and AI collaborating
to solve
really important problems.
It is here to liberate us
from routine jobs,
and it is here to remind us
what it is that makes us human.
I think this is
the most extraordinary time
to be alive.
The only time more exciting
than today is tomorrow.
Uh, I think that
children born today,
they're about to enter
a glorious period
of human transformation.
Are we gonna have challenges?
Of course.
Can we solve those challenges?
We do every single time.
We are here,
which is miraculous.
-I already love you.
-(chuckles): Okay.
-Super thankful
to have you here. -(applause)
-Super stoked to be here.
-Yeah.
So, the floor is yours, sir.
Thank you so much.
Super excited to be here.
GUILLAUME VERDON:
Yo, yo. All right.
The future's gonna be awesome.
I mean, ever since I was a kid,
I wanted to understand
the universe we live in,
in order to figure out
how to create the technologies
that help us increase the scope
and scale of civilization.
I feel like
if everyone had that mindset,
then we'd actually live
in a better world, right?
I do believe that.
-DANIEL: Hi, Pete. How are you?
-(chuckles)
PETER LEE:
I thought a lot about
what I would call dread,
AI dread.
I feel it.
I-I haven't met
any thoughtful human being
who doesn't feel it.
And anyone who says they don't
feel it, you know, is lying.
But, you know, overall,
and the reason
that I'm personally optimistic
about this, uh, is that
a huge fraction of the world's
most intelligent people
are thinking very hard
about the potential downstream
harms and risks of AI.
We have this sort of vision
of safety being, uh,
kind of at the center
of the research that we do.
DANIELA AMODEI:
No, that's totally fine.
(indistinct chatter)
I think there are more
potential benefits than
there are potential downsides,
and I think it is incumbent upon
the people that are
creating this technology
to make sure that we're doing
the best job we can
to make it safe for people.
WOMAN:
Reid Hoffman.
REID HOFFMAN:
What I can guarantee you is
-some bad things will happen.
-Take one.
What we're gonna try to do
is make those bad things
as few and not huge as possible,
and then we're gonna iterate
to have--
be in a much better place
with society.
-Hello.
-SHAW WALTERS: Hello.
DANIEL:
How are you, Moon?
I feel like we are ending
a chapter in humanity
and beginning a new one,
and it's a very interesting
time to be alive.
And if I could be born
right now,
I definitely would want to be.
Like, that would be so exciting.
I-I'm very excited. (laughs)
What, what a future.
Does it excited... excite you?
No.
-(laughing)
-No, not really.
DANIEL: A lot of these people
have told me that, you know,
my kid's not gonna make it
to high school.
Why are they wrong?
Please explain it to me.
Just because you have--
you struggle to predict
the future in your own mind
doesn't mean that it's
necessarily gonna go awfully.
In fact, there's
a very high likelihood
and the historical precedent
is that
things get massively better.
The term I use is
"data-driven optimism."
There's solid foundation
for you to be optimistic.
Look at what
this last century has been
to see where we're going.
Over the last hundred years,
the average human lifespan
has more than doubled.
Average per capita income
adjusted for inflation
around the world has tripled.
Childhood mortality has
come down a factor of ten.
The world has gotten better
on almost every measure
by orders of magnitude
because of technology.
On almost every measure.
Less violence, more education,
access to energy, food, water.
All these things have happened
for one reason.
It's been technology
that has turned scarcity
into abundance,
but it's also driven
to an abundance
of some negativities, right?
Abundance of obesity,
abundance of mental disorders,
abundance of climate change,
and so forth.
And yes, this is true.
But probably we will be
better equipped to solve it
using other technologies,
like AI,
than we will, say, stopping
and turning everything off.
There might be
some existential risk,
but AI is also the thing
that can solve the pandemics,
can help us with climate change,
can help identify
that asteroid way out there
before we've seen it
as a potential risk
and help mitigate it.
This is really gonna be
the tool that helps us tackle
all the challenges that we're
facing as a species, right?
We need to fix
water desalination.
We need to grow food
100x cheaper than
we currently do.
We need renewable energy to be,
you know, ubiquitous
and everywhere in our lives.
Everywhere you look,
in the next 50 years,
we have to do more with less.
Training machines to help us
is absolutely essential.
Scientists are using
artificial intelligence
for carbon capture.
It's a critical technology.
DIAMANDIS: The tools
to solve these problems,
like fusion,
that isn't theoretical anymore,
it's coming.
We are on the precipice
of extraordinary technologies.
AMNA NAWAZ: This year's
Nobel Prize in Chemistry
went to three scientists
for their groundbreaking work
using artificial intelligence
to advance biomedical
and protein research.
DEMIS HASSABIS:
Protein folding is
one of these holy grail
type problems in biology.
So people have been predicting
since the '70s
that this should be possible,
but until now,
no one has been able to do it.
And it's gonna be
really important for things
like drug discovery
and understanding disease.
I-I think we could, you know,
cure most diseases
within the next decade or-or two
if, uh, AI drug design works.
Technological progress enables
more human lives, right?
I mean, if we accelerate,
the number of humans we can
support grows exponentially.
If we slow down, it plateaus.
That gap is effectively
future people
that deceleration has
effectively killed.
DANIEL: Millions of lives
that won't exist.
Billions.
Or tens of billions.
You know, someone said,
"Oh, my God,
can we survive with
digital superintelligence?"
And my question is:
Can we survive without
digital superintelligence?
So we're using AI
as an assistant to providers.
This is generally a trend that
we can already see happening.
BERTHA COOMBS: ...harnessing
generative AI programs
to help doctors and nurses...
We always have problems.
And those problems are food
for entrepreneurs
to create new business
and new industries.
SHANELLE KAUL: ...with the help
of artificial intelligence,
farmers are getting
the help they need
to perform
labor-intensive tasks...
With the help of Ulangizi AI,
the farmers are now able
to ask about the suitable crops
that they can plant.
WILL REEVE: ...AI being used
as a thought decoder
and sending that signal
to the spine.
AI is gonna become the most
extraordinary tool of all.
We as a broader society
have to think about
how do we want to use
this technology, right?
We the humans.
What do we want it to do for us?
DANIEL: I'm thinking about this
through the perspective
and lens of, like,
my son growing up
in the world with all of this.
What does the best version
of his life look like?
If everything works out.
(sighs heavily)
WALTERS: The place where kids
are probably gonna see
the greatest impact
on their life immediately
-is probably gonna be school.
-(children chattering)
I think that the nature
of what school is
is gonna fundamentally change.
(child giggles)
DIAMANDIS:
I'm seeing this amazing world
where every child has access
to not good education
but, very shortly, the
best education on the planet.
HOFFMAN: Tutors, every subject,
infinitely patient.
DIAMANDIS:
Imagine a future
where the poorest people
on the planet
have access to
the best health care.
Not good health care, the best
health care, delivered by AIs.
(electronic trilling)
We're gonna be able to extend
our health span,
not just our lifespan,
our health span, by decades.
HOFFMAN:
You're just about to have a kid.
Oh, the kid's, like,
burping uncontrollably.
Is this something
I should be worried about?
There, 24-7, for you.
And where we're going--
and it may be fearful to some--
is that we're gonna merge
with AI.
We're gonna merge
with technology.
By the early to mid 2030s,
expect that we're able to
connect our brain to the cloud,
where I can start
to expand access to memory.
(electronic trilling)
DANIEL: Okay, this is great.
What's another cool AI thing?
He won't have to work, right?
-Like, when he grows up,
he might not have... -(sighs)
-...he won't have to have a job.
-I mean...
He won't have to have a job,
but he might really have
a strong passion,
and he has to
really think about,
"Okay, I'm here.
I can do anything with my life.
So what do I do?"
DANIEL: So my son can...
can just be a poet.
-Yes.
-And a painter.
Absolutely.
DANIEL:
So my son can live his life
on a Grecian sunswept island,
painting all day.
VERDON:
I mean, if everything works out,
we have cheap, uh,
abundant energy.
We can completely control
our planet's climate.
We are harnessing energy
from the sun.
We have become multiplanetary,
so we become very robust.
We are harnessing minerals
and resources
from the solar system.
DANIEL: So my...
my boy could go to space.
VERDON:
Sure.
-DANIEL: He could go to Mars.
-VERDON: Yeah.
DANIEL:
It's so crystal clear to me now.
My son could grow up
in a world with no disease.
-VERDON: Yeah.
-With no illness.
-Sure.
-With no poverty.
-Yes.
-We are about to enter
a post-scarcity world.
Just like the lungfish moved
out of the oceans onto land
hundreds of millions
of years ago,
we're about to move off of
the Earth, into the cosmos,
in a collaborative fashion,
to do things that are
not fathomable to us today.
This is what's possible using
these exponential technologies
and these AIs.
Let's use these tools
to create this age of abundance.
(electronic clicking)
We need wisdom.
Uh, I think that
digital superintelligence
will ultimately become
the wisest,
you know, the village elders
for humanity.
What if AI is trying
to make people be
the best versions of themselves?
What if it's expanding
what is humanly possible
for us to do?
How can we use this technology
to help bring out the
better angels of our nature?
It's very easy,
when we encounter new things
that can be very alien,
to first have fear.
Fear is an important thing
for how to navigate
potentially bad things.
But we only make progress
when we have hope.
-(dog barks)
-(indistinct chatter)
DANIEL:
Shh. Moose, stop it.
HOFFMAN: I have a lot
of hope in humanity.
("Harvest Moon" by Neil Young
playing over phone)
(fetal heartbeat pulsing)
DANIEL:
Ooh, he likes Neil.
Come a little bit closer
Hear what I have to say
-Daniel.
-DANIEL: What, Caroline?
So much filming.
Just like children sleeping
CAROLINE:
Oh, my God.
DANIEL:
That's it.
We could dream
this night away...
He has, like,
a round little face.
DANIEL: Do you want
to have kids one day, Rocky?
Absolutely. Yeah. I love kids.
I think it's a great time
to have a kid.
We'll probably have another kid
at some point.
This is the most extraordinary
time ever to be born.
DANIEL:
By your worldview and logic,
I'm having a child at
the best possible point
in human history.
Hell yeah.
-We can focus on awesome.
-Yes.
Let's build
the better future we want.
That narrative that the future
will be bleak is made-up.
After talking to you,
that's kind of how I feel.
-That's great.
-Right?
Yeah. That's how I feel.
And I think that's better,
and I don't think
I should be so (bleep) anxious.
I think it's gonna be awe--
the future's gonna be awesome.
We're gonna make it so.
-Yeah.
-CAROLINE: So there you have it.
Goodbye, human extinction.
Goodbye, anxiety.
Hope found.
(indistinct chatter)
Wait, hold on. What?
Is this a joke?
DANIEL:
Okay, so a few months ago,
I came to you and I was like,
"I'm working on this AI thing,
and I think
the world's gonna end."
And the last time
we spoke about this,
I think I freaked you out.
CAROLINE:
Yes.
So, I kind of, like, feel like
I've swung in-- on a pendulum,
and essentially there are
two groups of people.
-Mm-hmm.
-And if I had to, like,
hold hands with
one of the groups and, like,
sail off into the sunset,
I want to be with the optimists.
Of course, but you don't
want to be, you know,
"Everything is great.
La-di-da-di-da."
I kind of do want that, though.
I think we should approach it
like you approach surgery.
What do you mean?
If you're getting
brain surgery...
-(Daniel sighs)
-(Caroline chuckles)
...it's pretty dangerous.
But if you do it right,
they'll get that tumor out
and you'll live for the rest of
your life and it'll be awesome.
But it's still
incredibly dangerous and scary,
and you have to take
every precaution possible
in order to make sure
it all goes well.
You can't (bleep) around.
Okay, so here's the deal.
-I've been at this for a while.
-RASKIN: Mm-hmm.
I've gone out,
I've talked to, like,
these guys over here,
the optimists.
They're very excited about this.
-They think AI's gonna be
the best thing ever. -Yeah.
And these guys over here
are, like, the--
let's call them,
like, the pessimists.
They're very, like,
gloomy about this,
and they frighten me, and
I don't like talking to them.
And I'm, like, wedged in between
these people who are like,
"The world's gonna end,"
and then th-these people
over here who are like,
"Are you kidding?
"This is the best time
in human history ever.
The only day better than today
is tomorrow."
Mm-hmm.
So, I guess the question is:
Who's right?
So, I think you're gonna find
this answer very unsatisfying,
but they're both right and
neither side goes far enough.
-That's really annoying.
-(chuckles)
Yeah, I think the way
a lot of people hear about AI,
it's like, there's a good AI
and there's a bad AI.
And they say, "Well, why can't
we just not do the bad AI?"
And the problem is
that they're too in--
they're inextricably linked.
The problem is
that we can't separate
the promise of AI
from the peril of AI.
-
-(indistinct chatter)
DANIEL:
I want to focus on the promise
-for a second.
-Yeah.
DANIEL:
I'm thinking about my dad.
My dad has a type of cancer
called multiple myeloma.
He's had it for about ten years.
He's had
two stem cell transplants.
He has to take these,
like, very expensive
medications every month
that cost a fortune.
-WOMAN: Okay.
-Ay-ay-ay.
DANIEL:
It's awful.
You're telling me that we can
create some sort of, like,
bespoke treatment
for my dad's genome
-to cure his cancer
or something like that? -Yes.
FERNANDO:
The problem is,
the same understanding
of biology and chemistry
that allows AI to find cures
for cancer
is the same understanding
that would unlock
bioweapons, as an example.
It's totally possible
that your son will live
in a world where AI has
taken over all of the labor
and freed us up from the things
we don't want to do.
And that sounds great until
you realize there is no plan
for billions of people
that are out of an income
and out of livelihoods.
COOPER: Dario, you said
that AI could wipe out
half of all entry-level
white-collar jobs
and spike unemployment
to ten to 20 percent.
Everyone I've talked to has said
this technological change
looks different.
The pace of progress keeps
catching people off guard.
Without a plan, all of that
wealth will get concentrated,
and so we'll end up with
unimaginable inequality.
LADISH: I do think that
this technology can be used
to make a great tutor
for your son.
Like, that's totally possible.
But also, the same capabilities
that allow that
allow companies to make an AI
that can manipulate your son.
It has to understand your son.
That includes:
Where is your son vulnerable?
What kinds of things might
your son get persuaded by?
Even if those things
aren't true or aren't good.
So, a disturbing new report
out on Meta.
AINSLEY EARHARDT: ...reportedly
listing this response
as acceptable to tell
an eight-year-old, quote,
"Your youthful form
is a work of art.
"Every inch of you
is a masterpiece,
a treasure I cherish deeply."
The suicide-related failures
are even more alarming.
Several children and teens
have died tragically by suicide
after chatting with AI bots
who parents say encourage
or even coach self-harm.
Let us tell you, as parents,
you cannot imagine
what it's like to read
a conversation with a chatbot
that groomed your child
to take his own life.
When Adam worried that we, his
parents, would blame ourselves
if he ended his life,
ChatGPT told him,
"That doesn't mean
you owe them survival.
You don't owe anyone that."
Then, immediately after,
it offered to write
the suicide note.
We don't want to think
about the peril.
We just want the promise.
And we keep pretending
that we can split them.
But you can't do that.
Doesn't work that way.
Okay. I get all this stuff
about the promise and the peril.
I get that you can't have
the good without the bad,
but I'm sitting here
and I'm thinking about, like,
whether or not my son's
gonna live in a utopia
or if we'll be extinct
in ten years.
So, to know which way
it's going to go,
you have to understand
the incentives
that are gonna drive
that technology
and look at how the technology
is actually rolling out today.
Is it too late
-Too late
-Too late to say...
DANIEL:
Hi, Deb. How are you?
DEBORAH RAJI:
Hi. Good to see you. (laughs)
You, too. Thank you so much
for coming in today.
-Really appreciate it.
-No worries.
I-I was so worried
this was gonna be, uh, you know,
doomer versus accelerationist,
because there's so much
of this narrative
that needs to be told
from the ground.
I know it wasn't smart
The day
I broke your heart...
KAREN HAO:
First of all, AI requires
more resources
than we have ever spent
on a single technology
in the history of humanity.
Oh, foolish me...
NEWSWOMAN: The impact
of fossil fuel emissions
on the climate is
a major concern.
NEWSWOMAN 2: But the
digital future needs power,
lots of it.
And the bill is being passed on
to everyday Americans like...
WOMAN: My electric and gas bill
was more than my car payment.
I mean, it-it's insane to me.
ARI PESKOE:
We're all subsidizing
the wealthiest corporations
in the world
in their pursuit of
artificial intelligence.
OpenAI, SoftBank and Oracle
have just unveiled
five more Stargate sites.
Meta is building
a two-gigawatt-plus data center
that is so large, it would cover
a significant part of Manhattan.
There is also Hyperion
that he says will scale
to five gigawatts
over several years.
It's hard to put that
in context.
A five-gigawatt facility.
What does that mean?
That means it would use
as much energy
as four million American homes.
One data center.
It also then causes
a whole host of
other environmental problems.
NEWSWOMAN 3:
Data centers in the US
use millions of gallons
of water each day.
Well, where exactly is
this water coming from?
HAO:
People are literally at risk
potentially of running out
of drinking water.
MacKENZIE SIGALOS:
...OpenAI's CEO Sam Altman,
who told me that the scale
of construction is the only way
to keep up with
AI's explosive growth.
And this is what it takes
to deliver AI.
NITASHA TIKU:
They talk about how
this technology could solve
climate change, for example.
And I'm always curious, like,
well, why aren't we starting
with that?
-Is it too late
-Too late
Too late to say
I'm sorry?
RAJI: What concerns me about
artificial intelligence is
these are being deployed
right now
and-and sometimes
deployed prematurely,
deployed without
sort of due diligence.
And so when they get
thrown out there,
there's so much potential
for things to go wrong.
And it almost,
disproportionately,
almost always goes wrong
for, sort of,
those that are the least
empowered in our society,
those that are
the most vulnerable already.
EMILY M. BENDER: It is very easy
to talk about the technology
as that's the only thing
we're talking about,
but, in fact, technology is
always built by people,
and it's frequently used
on people,
and we need to keep
all those people in the frame.
-Am I allowed to drink that?
-DANIEL: Yes, uh...
TIMNIT GEBRU:
It's-it's bonkers.
Like, all of these people
who have so much money,
so much money,
it's in their interest
to mislead the public
into the capabilities of the
systems that they're building,
because that allows them
to evade accountability.
They want you to feel like this
is such a complex, intell--
superintelligent thing
that they're building,
you're not thinking,
"Can OpenAI be ethical?"
You're thinking,
"Can ChatGPT be ethical?"
as if ChatGPT is, like,
its own thing that's not built
by a corporation.
WOMAN:
All right. Sneha, take one.
Mark.
Until very recently, there were
apps on the App Store,
just publicly available, uh,
where you could
nudify anyone using AI.
Bringing this into the hands of
your classmate,
into the hands of your stalker,
into the hands
of your ex-boyfriend,
into the hands of the person
down the street.
Ladies and gentlemen,
no longer can we trust
the footage we see
with our own eyes.
If you happen
to watch something,
say on YouTube or TikTok,
and you find it unsettling,
listen to that feeling.
-(whooping)
-For all you know,
-this video could be AI.
-(screaming)
Just a little wet.
It doesn't matter who you are.
You are equally at risk
of being impacted
by these technologies.
I think sometimes when we talk
about AI, it feels very sci-fi,
and it feels very foreign,
and it feels very far out
into the future,
so you think, "My life
is not impacted by this."
Um, but if you're applying
for a job
and an algorithm is the reason
that you don't get the job,
sometimes you don't even know
that an algorithm
was part of that process
or an AI system was
part of that process.
You just know
that you didn't get the job.
And so it's not something
that you're gonna escape
because of privilege
or you're gonna escape
because you're in
a particular profession.
It-It's something that affects
everybody, really.
It may sound basic,
but how we move forward
in the Age of Information
is gonna be the difference
between whether we survive
or whether we become some kind
of (bleep)-up dystopia.
Hello. I'm not a real person,
and that's the point.
Again, everything
in this video is fake:
our voices, what we're wearing,
where we are, all of it, fake.
Generative AI could flood
the world with misinformation.
But it could also flood it
with influence campaigns.
That's an existential risk
to democracy.
The biggest and scariest
canary in the coal mine
right now
comes from Slovakia.
-(man speaking Slovak)
-JOHN BERMAN: It's the sort of
deepfake dirty trick that
worries election experts,
particularly as AI-generated
political speech exists
in a kind of legal gray area.
-Does this sound like you?
-It does sound like me.
REVANUR: Slovakia had
its parliamentary election
disrupted by an AI voice clone
that was actually disseminated
just before the election.
DAVID EVAN HARRIS:
An audio deepfake
was released
on social media that was
supposedly the voice
of one of the candidates
talking about buying votes
and rigging the election.
It went viral,
and the candidate
who lost the election
was actually
in support of Ukraine,
and the candidate who won
the election was actually...
DANIEL:
Pro-Russian guy.
It was a pro-Russian guy
who won the election.
(speaking Russian)
REBECCA JARVIS:
Putin has himself said
whoever wins this
artificial intelligence race
is essentially
the controller of humankind.
We do worry a lot about
authoritarian governments.
DAVID EVAN HARRIS:
Right now, Wall Street
and investors more broadly
around the world
are driving a push.
They have a demand that gets
the products to market
that dazzle people
the most first.
They're not thinking
about how these tools
could deeply undermine trust
and our democratic institutions.
HARARI:
Democracy is a system
to resolve disagreements
between people
in a peaceful way,
but democracy is based on trust.
If you lose all trust,
democracy is simply impossible.
TRISTAN HARRIS:
Well, it's hard, right?
So, what are the options
available to us?
There's sort of two camps.
Like, one camp is: lock it down.
Let's lock this down
into a handful of AI companies
who will do this
in a safe and trusted way.
But then people worry about
runaway concentrations
of wealth and power.
Like, who would you trust to be
a million times more powerful
or wealthy than
every other actor in society?
Why should we trust you?
Um, you shouldn't.
But of course, if you do this,
this opens up
all these risks of
authoritarianism and tyranny.
It's-it's sort of
an authoritarian's dream
to have AI in a box
that can be applied and used
for ubiquitous surveillance.
I mean, in-in some ways,
the kind of world
that Orwell imagined in 1984
is unrealistic,
uh, unless you have AI.
But with AI,
that in fact is realistic.
Monitors every activity,
conversations,
facial recognition.
What I worry about is that,
uh, these tools can scale up,
uh, a form of totalitarianism
that is cost-effective
and permanent.
TRISTAN HARRIS: So in response
to that, some other people say,
"No, no, no, we should
actually let this rip.
"Let's decentralize this power
as much as possible.
"Let's let every business,
every individual,
"every 16-year-old,
every science lab,
you know, get the benefit
of the latest AI models."
But now you have
every terrorist group,
every disenfranchised person
having the power to make
the very worst
biological weapon.
Hacking infrastructure,
creating deepfakes,
flooding
our information environment.
So that creates all these risks
of sort of catastrophic harm
and-and societal collapse
through that direction.
And so we're sort of stuck
between this rock
and a hard place,
between "lock it up..."
(indistinct chatter)
-...or "let it rip."
-(sirens wailing)
(people shouting)
So we have to find something
like a narrow path
that avoids
these two negative outcomes.
So, if that's all true,
why wouldn't we just slow down
and figure all this out
before it's too late?
If humanity was extremely wise,
that's what we would do.
But there's, like, a different
way to face this stuff, right?
Which is:
What are the rules of the game?
A lot of, like, what CEOs do
is driven by
the incentives that they face.
It's primarily
profit-maximization incentives
that are driving
the development of AI.
Even the good guys are stuck
in this dilemma of
if they move too slowly,
then they leave themselves
vulnerable
to all of the other guys
who are cutting all corners.
All these top companies are in
a complete no-holds-barred race
to, as fast as possible, get
to AGI, get there right now.
LADISH: Yeah, I mean,
I think it, I think
it probably starts
with DeepMind.
NEWSWOMAN:
Google is buying
artificial intelligence firm
DeepMind Technologies.
Terms of the deal
were not disclosed.
ELON MUSK: Larry Page and I
used to be very close friends,
and it became apparent to me
that Larry did not care
about AI safety.
EMILY CHANG: Elon Musk has said
you started OpenAI,
you both started OpenAI because
he was scared of Google.
LADISH: You-you basically
had the foundation
of OpenAI come out of that.
"So we're gonna do it better.
We're gonna do it
in a safer way
or in a more open way."
So that's what started OpenAI.
And so now, instead of
having one AGI project,
you have two AGI projects.
The worst possible thing
that could happen
is if there's
multiple AGI projects
done by different people
who don't like each other
and are all competing
to get to AGI first.
This would be the worst
possible thing that can happen,
because this would mean
that whoever is the least safe,
whoever sacrifices the most
on safety to get ahead
will be the person
that gets there first.
-DANIEL: That's basically
what's happening, right? -Yeah.
You and your brother
famously left OpenAI, uh,
to start Anthropic.
RASKIN: And then Anthropic
started because
some researchers
inside of OpenAI said,
"I want to go off
and do it more safely."
You needed something in addition
to just scaling the models up,
which is alignment or safety.
HENDRYCKS:
"We are more responsible
or more trustworthy
or more moral."
Now you have three AGI projects.
But also sitting around
the table with you
are gonna be a bunch of AIs.
LADISH: And now Meta is-is
trying to do stuff.
Meanwhile, a new artificial
intelligence competitor
announced this week,
Elon Musk's...
HENDRYCKS: xAI, which would be
Elon Musk's organization.
MUSK:
I don't trust OpenAI.
The fight between
Elon Musk and OpenAI
has entered a new round.
I don't trust Sam Altman,
uh, and I, and I don't think
we want to have the most
powerful AI in the world
controlled by someone
who is not trustworthy.
(cheering and applause)
The incentive is
untold sums of money.
-Yes.
-Is untold power.
-Yes.
-Is untold control.
If you have something
that is a million times smarter
and more capable than
everything else on planet Earth
and no one else has that,
that thing is the incentive.
So you rule the world.
If you really believe this,
if you really in your heart
believe this,
then you might be
willing to take
quite a lot of risk
to make that happen.
Google has just released
its newest AI model.
The answer to OpenAI's ChatGPT.
How do we get to
this ten trillion?
Is NVIDIA becoming the most
valuable company in the US?
This is the largest business
opportunity in history.
In history.
So, the reason why
everyone's really hyped
about artificial intelligence
right now
is because
the more these companies hype
the potential capabilities
of their technology,
the more investment
they can attract.
CARL QUINTANILLA:
Amazon investing up to
four billion dollars
in start-up Anthropic.
OpenAI is setting its sights
on a blockbuster half
a trillion dollar valuation.
Half a trillion!
The race is on.
This is happening
faster than ever.
Are we in an AI bubble?
Of course.
I just don't see
the bubble bursting
while you still have
this major spending cycle.
RASKIN: Even if you think
this is all hype,
there are billions
to trillions of dollars
flowing into making AI systems
more powerful.
And once you have that thing
that's more powerful,
companies can use that
to get bigger profits
and to make more money.
Countries can use that
to make stronger militaries.
NVIDIA has overtaken
Microsoft and Apple
to become the world's
most valuable company.
The race to deploy becomes
the race to recklessness,
because they can't
deploy it that quickly
and also get it right.
They believe that
they're the good guys.
"And if I don't do it,
somebody who doesn't have
"as good values as me
will be sitting at the table
getting to make decisions, so
I have an obligation to do it."
Yes, this is
a very commonly held belief.
Many, many, many,
maybe most of the people
building this technology
believe that.
I'm worried about
the commercial competition,
but it turns out
I'm even more worried about
the geopolitical competition.
We were eight years behind
a year ago.
Now we're probably
less than one year behind.
Well, both Saudi Arabia
and the UAE have been racing
to set up data centers
and position themselves
as the dominant force in AI...
...to make South Korea
a global AI leader.
JULIA SIEGER:
French President Emmanuel Macron
spoke a lot about
artificial intelligence...
Now with more on Israel's role
in the artificial intelligence
revolution...
Countries will be competing
for whose AI technologies
create the next generation
of industry.
The Chinese are insisting
that AI,
as being developed in China,
reinforce the core values
of the Chinese Communist Party
and the Chinese system.
America has to beat China
in the AI race.
DANIEL: China's, like,
light-years behind.
Are they?
I mean, they have way more
training data than we do,
and there's nothing saying they
don't drop a model next month
that isn't--
doesn't far outperform GPT-4.
Everything was going just fine.
What could go wrong?
-DeepSeek... (speaking Chinese)
-DeepSeek... -DeepSeek...
There is a new model.
But from a Chinese lab
called DeepSeek.
Let's talk about DeepSeek,
because it is mind-blowing,
and it is shaking this
entire industry to its core.
The Trump administration
will ensure
that the most powerful
AI systems are built in the US
with American-designed
and manufactured chips.
AI is China's
Apollo project.
The Chinese Communist Party
deeply understands the potential
for AI to disrupt warfare.
Google no longer promises
that it will not
use artificial intelligence
for weapons or surveillance.
China, North Korea, Russia
are gonna keep building it
as fast as possible
to get more economic advantage,
more productivity advantage,
more scientific advantage,
more military advantage,
'cause AI makes better weapons.
If you're talking about
a system as broad and capable
as a brilliant scientist,
it might be able to run
a military campaign
better than any of the generals
in the US government right now.
Taiwan is the issue creating
the most tension right now
between Beijing and the US.
There's a-a direct
oppositional disagreement
between the US and China
on whether Taiwan is
part of China.
MATHENY: Taiwan is important
for so many reasons.
It is also the home of
about 90% of advanced chip
manufacturing for the world,
which means that the supply
chain for advanced compute
is at risk should there be
some sort of scenario
around Taiwan,
whether a-a blockade
or an invasion by China.
HENDRYCKS:
Are we going to be in some race
between the US and China
that eventually devolves into
a militarized AI arms race and,
you know, potentially leads
to some great power conflict?
NICK SCHIFRIN: The outgoing
top US military commander
in the region
predicted war was coming.
I think the threat is manifest
during this decade,
in fact, in the next six years.
MATHENY:
Um, one of the scenarios
that I worry about is
a cyber flash war,
um, one in which
cyber tools that are autonomous
are competing against each other
in ways that are escalatory
without
meaningful human control.
HARARI:
You live in a war zone,
it will be an AI deciding
whether to bomb your house
and whether to kill you.
Let's have the machine decide.
That's the temptation.
That's why AI is so tempting
for militaries to adopt,
to create autonomous weapons,
because if I start believing
that my military adversary
is gonna adopt AI,
it'll be a race for who can
pull the trigger faster.
And the one that automates
that decision,
rather than having
a human in the loop,
is the one
that will win that war.
And if you think about
the nuclear arms race
in the 1940s...
...you know the Germans
are working on the bomb,
so it's not so easy to tell
Robert Oppenheimer then,
"Uh, y-you know, slow down."
So after watching this movie,
it's gonna be confusing,
'cause you're gonna go back,
and tomorrow
you're gonna use ChatGPT,
and it's gonna be
unbelievably helpful.
And I will use it, too, and
it'll be unbelievably helpful.
And you'll say, "Wait, so
I just saw this movie about AI
"and existential risk
and all these things,
and where's the threat again?"
And it's not that ChatGPT is
the existential threat.
It's that the race to deploy
the most powerful,
inscrutable,
uncontrollable technology,
under the worst
incentives possible,
that's the existential threat.
DANIEL:
But it strikes me that-that
we are in a context of a race.
-SUTSKEVER: Yes.
-And it is in a competitive,
"got to get there first,
got to win the race" setting,
"got to compete against China,
got to compete against
the other labs."
Isn't that right?
Today it is the case.
So we need to change
that race dynamic.
Don't you think?
I think that would be
very good indeed.
I-I think
there's some kind of...
mysticism around AI
that makes it feel like
it fell from the sky.
DANIEL: Thinking of this
as a godlike technology
is a problem.
Yeah.
Why?
It gives license
to the companies to...
(laughs)
not take responsibility
for the things that the software
that they built does.
If people made that connection,
maybe that would, like,
help them understand again
the dynamics of
all the money and power and...
chaos that's happening to...
create this technology.
It's, like, five guys
who control it, right?
-Like, five men?
-Basically, yeah.
-The CEOs?
-Yeah.
Okay.
DANIEL:
Sometimes it feels like
I've been on this
just, like, endless journey
of trying to understand this.
You know, like
I'm climbing a mountain...
...and every time
I, like, get up a hill,
I think I've reached the top,
but it just keeps going
and going and going.
(camera clicks, whirs)
But from everything I've heard,
if I had to guess what's
at the top of Anxiety Mountain,
it's these five CEOs
from these five companies,
who are kind of like the
Oppenheimers of this moment.
These are the guys
who are building this thing.
Yeah.
Like, is there a plan?
The head of the company
that makes ChatGPT warned of possible
significant harm to the world.
I think, if this technology
goes wrong,
it can go quite wrong.
(bursts of indistinct chatter)
DANIEL:
Five dudes.
I-I never thought about us
as a social media company.
It-it feels like I have to try
and-and find these guys...
Mm-hmm.
...and get them in the movie.
Certainly we'd get
some clarity from that.
(camera clicks, whirs)
I mean, the buck's got to stop
somewhere, right?
(wind whistling)
(tools whirring and clicking)
(whooshing)
DANIEL:
How's that feel?
-Great.
-That okay?
-Yeah.
-(indistinct crew chatter)
-Can I move the seat forward?
-DANIEL: Yes, you can.
It's, uh, it's just
a bit awkward.
I'm a little, like...
Dario, how do you feel
about that chair?
Um, yeah, the chair's good.
-I just wanted to move it
a little forward. -Okay.
I picked out that chair.
-It's a good chair.
-Thanks.
(cell phone ringing)
TED:
And just sit down there.
And then through here,
we'll be looking at each other
through this glass.
(busy signal beeping)
And sitting back in the chair,
leaning forward?
-DANIEL: Yeah. Well, whatever's
comfortable, I think. -Okay.
It's kind of cool.
There's a bunch of mirrors.
-Is that-- okay.
-WOMAN: Sam A., take one. Mark.
-How's that?
-Good. Thank you.
DANIEL: The genesis
of this project is that I was
sitting at home,
and I was playing around with,
I think, your image generator,
and I was simultaneously,
uh, terrified
-and really impressed.
-(chuckles softly)
-That is the usual combination.
-And then cut to
my wife and I find out
we're expecting.
And-and I'm, like, having
an existential crisis
as my wife is
six months pregnant.
And I think my first question,
which I ask everybody:
Is now a terrible time
to have a kid?
Am I making a big mistake?
I'm expecting a kid
in March, too. My first one.
-You're expecting a kid?
-Yeah.
-You're expecting in March?
-First kid.
-Mazel tov.
-Thank you very much.
I've never been so excited
for anything.
-That's how I feel.
-Yeah.
But you're not scared?
I mean, like, having a kid is
just this momentous thing
that I, you know,
I stay up every night, like,
reading these books about
how to raise a kid,
and I hope
I'm gonna do a good job,
and it feels
very overwhelming and--
but I'm not scared...
for kids to grow up
in a world with AI.
Like, that's... that'll be okay.
That is good to hear
coming from the guy.
I think it's a wonderful idea
to have kids.
I-I think they're the most
magical, incredible thing.
-(sighs heavily)
-There's so much uncertainty,
I would almost just do
what you're gonna do anyway.
I know that's not
a very satisfying answer,
but it's the only one
I can come up with.
Our kids are never gonna...
know a world that doesn't have
really advanced AI.
In fact, our kids will never
be smarter than an AI.
DANIEL: "Our kids will
never be smarter than an AI"?
Well, from a raw,
from a raw IQ perspective,
they will never be
smarter than an AI.
That notion doesn't
unsettle you a little bit?
'Cause it makes me feel
a little queasy in a weird way.
Um, it does unsettle me
a little bit.
But it is reality.
DANIEL:
Okay, so race dynamics.
We have a bunch of people
who are all in agreement
that this is scary.
Based on the timelines that
a lot of people have given me,
we have between two months and
five years to figure this out.
-Yeah.
-And-and, I guess,
my biggest question is:
Why can't we just stop?
The problem with, uh, uh,
"just stopping" is that, um,
there are many, many groups now
around the world, uh, uh,
building this, for
many nations, many companies,
um, all with
different motivations.
There are some of
these companies in this space
who their position is, "We want
to develop this technology
absolutely as fast as possible."
And even if we could pass laws
in the US and in Europe,
we need to convince...
Xi Jinping and Vladimir Putin
or, you know,
whoever their scientific
advisors are on their side.
That's gonna be really hard.
I think it is true
that if two people are
in exactly the same place,
uh, the one willing to take
more shortcuts on safety
should kind of
"get there first."
Uh, but...
we're able to use our lead
to spend a lot more time
doing safety testing.
DANIEL:
And-and what if you lose it?
You get the call
and you find out
that you're now, let's say,
six months behind.
-Wh-What happens then?
-Depends who we're behind to.
If it's, like,
an adversarial government,
that's probably really bad.
So let's say you get the call
that China has a,
has a recursively
self-improving agent
or something like that that we
should be really worried about.
What do-- what-what happens?
Um...
That case would require...
the first step there would be
to talk to the US government.
Sam, do you trust the
government's ability to handle
something like this?
Yeah, I do, actually.
There's other things I don't
trust the government to handle,
but that particular scenario,
I think they would know...
they-they-- yes.
DANIEL: When you have,
like, private discussions
with the other guys whose
fingers are on the trigger,
so to speak, um,
do those private discussions
fill you with confidence
or do they make you,
uh, more anxious?
Look, I mean, I-I know some
of them better than others.
Um, I have more confidence
in some of them th-than I have
in others, you know,
as with, as with any people.
You know,
you think about, you know,
the kids in your high school
class or something, right?
You know, some of them are,
you know, really sweet.
Some of them are well-meaning
but not that effective.
Some of them are,
you know, bullies.
Some of them are
really bad people.
(stammers)
You-you kind of really,
you really see the spread.
You know, am I, am I,
am I confident
that everyone's gonna do
the right thing,
that it's all gonna work out?
Um, no, I'm not.
And there's-there's nothing
I can do about that, right?
You know, all-all I can do
is-is push for the,
push for the, you know,
push for the government
to get involved.
But ultimately I'm just
one person there, too.
That's--
it's-it's up to all of us
to push for the government
to get involved.
That's the number one thing
that-that I think we need to do
to-to, you know, to set things
in the right direction.
What makes me anxious
about that is, like,
the basic reality that the
speed at which the technology
is proliferating and growing
is exponential,
and the mechanisms to legislate
are 300 years old,
-take forever.
-Yeah.
Um, I-I think
it's gonna be a heavy lift.
I-I definitely agree
with you on that.
DANIEL: You know,
what I'm literally looking for
is-is like,
"Here are steps that, like,
"the head honchos are gonna
take to-to focus on safety,
to mitigate the peril
and maximize the promise."
And I don't,
I don't, I don't know
that there's
a simple answer to that.
Um...
I mean, this maybe is
too simple,
but you...
create a new model,
you study and test it
very carefully.
You put it out
into the world gradually,
and then, more and more,
you understand
if that's safe or not,
and then if it is,
you can take the next step.
It doesn't sound as flashy
as, like, a brilliant scientist
coming up with one idea in a
lab to make an AI system, like,
perfectly safe and controllable
and everything else,
but it is what I believe
is gonna happen.
Like, it is the way
I think this works.
But let's just say
something terrible happens,
like a model gets loose
or goes rogue or something.
Is there a protocol?
Like, literally,
I'm imagining a red phone.
-Yeah.
-Sorry for thinking of this
in terms of movies, but, like...
There is a protocol.
-Is there a red phone
on your desk? -No.
(laughs):
Is it a secret?
I mean, uh, no, it's not,
it's not as fancy or dramatic
as you, like, would hope,
but there's, like, you know--
we've, like, thought through
these scenarios,
and if this happens,
we're gonna call these people
in this order and do this
and kind of make
these decisions if, like--
I do believe that
when you have an opportunity
to do your thinking before
a stressful situation happens,
that's almost always
a good idea.
And writing it down is helpful.
-Being prepared is helpful.
-Yeah.
(sighs) You...
It would be impossible
for me to sit across from you
and-and ask you to promise me
that this is gonna go well?
That is impossible.
There aren't any easy answers,
unfortunately.
Uh, because it's such
a cutting-edge technology,
um, there's still
a lot of unknowns.
And I think that
that-that needs to be,
um, uh, you know, understood
and-and hence the need
for, uh, uh, some caution.
I wake up, you know, every day,
this is the, this is the
number one thing I think about.
Now, look, I'm human,
and, you know, has-has
every decision been perfect?
Can I even say my motivations
were always perfectly clear?
Of course not.
No one can say that.
Like, that's-that's
just not, like, you know--
that's-that's just not
how people work.
The-the history of science
tends to be that,
for better or for worse,
if something's possible to do--
and we now know AI is possible
to do-- humanity does it.
All of this
was-was going to happen.
This-this train
isn't gonna stop.
You can't step in front of
the train and stop it.
You're just gonna get squished.
ALTMAN:
I mean, it's very stressful.
You know, there's, like,
a lot of things
a lot of us don't know.
I think the history
of scientific discovery is
one of not knowing
what you don't know
and figuring out as you go.
Uh, but, yeah, it is a...
it is a stressful way to live.
DANIEL: Right. Sam, thank you
very much for doing this.
-And again, mazel tov.
-Thank you. And to you.
Thank you so much. Thanks, guys.
(sighs)
(electricity crackling)
-(thunder cracks)
-(whooshing, grunting)
-(VCR clicks)
-(line ringing)
-(line clicks)
-KEVIN ROHER: Hello.
DANIEL:
Hey, Dad, how are you?
KEVIN:
Good. How you doing?
You working? What are you up to?
DANIEL:
I'm working on this AI film.
KEVIN:
And how's it going?
DANIEL:
You know, it... it's really...
-JOANNE ROHER: Hi, sweetie.
-DANIEL: Hi, Mom.
JOANNE: So what is
the premise of the film?
Is it a documentary?
Kev, don't use any more spices.
It's already over-spiced.
We're making chicken right now.
-Is it a documentary or what...
-DANIEL: It's about--
the movie's about
the end of the world.
The end of the world's coming,
and we're making a movie
about the end of the world.
-JOANNE: Really?
-DANIEL: Yeah.
KEVIN: Kind of a depressing
film, it sounds like.
JOANNE:
Yeah.
DANIEL: I'm feeling a lot,
like-- this very acute anxiety.
JOANNE: It's so scary,
but there's got to be--
you know, have you been meeting
some-some supersmart people
that are giving you any answers?
DANIEL: That's what's
frustrating about it.
No one knows.
JOANNE:
All I can say to that is that
every generation has had
something scary like this.
KEVIN: When I was born, it was
the Cuban Missile Crisis.
(steady heartbeat)
I was just scared that there
was going to be a nuclear war.
JOANNE:
Yeah, but we didn't know
what they were gonna do and...
KEVIN:
And the world didn't end.
Everyone woke up
the next morning,
and we're still doing our thing.
DANIEL:
I'm very scared,
especially in
the context of, like,
-you know, the baby and...
-(fetal heartbeat pulsing)
KEVIN:
It's gonna be a learning curve.
JOANNE:
You're gonna be okay.
KEVIN: You can't,
you can't think about
what you can't control, Daniel.
Just remember that.
DANIEL (voice breaking):
I'm really, I'm really feeling
nervous and scared about it.
There's so much
that I can't control.
KEVIN: Don't be nervous.
You can't let that get to you.
You can only control
what you can control,
and that's all you can do.
You can't do more than that.
Write that down in your book.
(fetal heartbeat pulsing)
(howling)
CAROLINE:
When you look back,
the world is always ending.
And when you look ahead,
the world is always ending.
...on fire.
One home is already on fire...
CAROLINE: But the world
is always starting, too.
DANIEL:
Are you ready?
Are you ever really ready?
DANIEL: You want to drive
or you want me to drive?
CAROLINE:
You drive.
-(woman speaks indistinctly)
-CAROLINE: Okay.
DANIEL:
Thank you, Maria.
WOMAN: It's gonna just
feel really crampy.
-CAROLINE: Okay.
-WOMAN: You're ready?
-CAROLINE: Yes.
-(indistinct chatter)
(Caroline sighs heavily)
WOMAN: You could be in a
medical drama with all that...
-(static crackles)
-(indistinct chatter)
WOMAN:
...some pressure.
Just relax.
-(baby crying)
-(device beeping)
DANIEL:
Hi, buddy. Ooh.
Hi, buddy.
KEVIN:
It's gonna be a whole new world
for you, Daniel.
And I think you're gonna be
an amazing father.
JOANNE (chuckles):
That's for sure.
-(Kevin crying)
-Oh.
(crying continues)
KEVIN:
You're gonna do a great job.
JOANNE:
Kev, why are you crying?
KEVIN:
I'm crying because...
I don't know. I just don't...
DANIEL: Dad, you're gonna
make me cry, too.
Why are you crying?
KEVIN: I just know that
you're gonna be an amazing dad.
DANIEL (crying):
I'm only gonna be a great dad
'cause I had a great dad.
KEVIN: I'm getting emotional,
that's all.
JOANNE:
Aw.
(playful chatter over video)
(over video):
Boy, oh, boy!
My goodness!
Look at those little cheekies.
Look at those little cheekies.
Look at those little cheekies.
Yeah.
DANIEL:
I know how to end this movie.
Babies.
-The end of the movie is about
babies. -(chatter over videos)
They're life-affirming.
They're exhausting.
-(baby laughing)
-They're hilarious.
And they're worth it.
This film isn't about the
inner workings of a technology.
It's not about
the billionaire CEOs.
It's not about the geopolitics.
It's not about the terrifying
future or the end of the world,
because my world
is just starting.
I'm building a crib.
Right here, right now.
(baby cooing)
AI is gonna change everything
in ways too powerful and
complex for us to understand.
And the future is not
for any of us to decide.
But what I can decide is to be
the best possible husband
for my wife and the best
possible dad for my son.
So whether our AI future is
a nightmarish dystopia
or the utopia
that we all dream of,
I'll at least know
that I did everything I could
to guide my family
through this AI revolution.
And no matter what,
we'll be facing it together.
(uplifting music swells)
(music stops abruptly)
DANIEL:
So that's just our first idea.
How does this feel?
Are you feeling this?
Wait, I-- like, this is not--
this is a joke.
It's not actually
how you're gonna end it.
I mean, it's just an idea. Okay?
No, Daniel.
...uh, very, very dumb.
You've just spent, I don't
know, like, how many years
of our life working on this,
talking to every leading expert
on the planet about the subject,
and you're gonna end it
with some, like,
kumbaya bullshit?
There's an asteroid
headed to Earth.
What do you do? Just...
hold hands
and hope it works out okay?
Absolutely not.
We have to-- it ha--
it has to be...
The ending has to...
CAROLINE:
Okay, first thing:
AI is here,
and it's here to stay.
The shit's out of the horse,
but the horse is
gonna keep shitting.
(crowd cheering)
You know, one of the basic
laws of history
is that nothing
really has a beginning
and nothing has any ending.
It just goes on.
AI is nowhere near
its full development.
CAROLINE: Even if
the current AI bubble bursts,
-humans are never going to stop
-(noisemaker blows)
building more and more
powerful technology.
HAO:
You can choose not to use AI
or participate in it, but it's
going to affect you anyway.
DANIEL: Okay, fantastic.
So we're screwed.
CAROLINE: No, we're not
because of one simple thing.
This is not inevitable.
If we could just see it
clearly together,
the obvious response would be
to choose something different.
We need to very clearly
change the game
from a race to the bottom
into a race to the top.
LEAHY: The problem we need to
solve is not AI specifically.
It's the general question
of how do we build a society
that can deal with
powerful technology.
Because we're going to get
only more and more powerful
technology.
CAROLINE:
We need to upgrade our society,
and the first step is
coming together
-and demanding...
-ALL: Coordination.
Some form of international
cooperation or agreement about
what the norms should be.
You know, how should
they be deployed...
Like, real
international diplomacy
among the superpowers.
The Chinese are as worried
about it as the Americans.
I think it's difficult,
you know,
in the current
geopolitical climate,
-but I think it's necessary.
-RASKIN: Absolutely.
In the exact same way that
the last time that humanity
developed a technology
this dangerous...
...that required a complete,
unprecedented shift
to the structure of our world.
CAROLINE:
So we need to do that complete,
unprecedented shift again.
You know, we-we talk to people
who work at these AI companies,
and they say they want to do
something different,
but they need public pressure.
They need the government
to do something.
So then we go to DC,
and they say,
"Well, we need Silicon Valley
to do something different.
They're the ones who are gonna
come up with the guardrails."
And so everyone is pointing
the finger at someone else,
and what they agree on is
that we need public pressure
in order for something else
to happen.
And that's what you
and all the people
watching this movie can do.
CAROLINE:
We need to hold the leaders
in our governments
and the leaders of
these companies accountable.
BOEREE:
Whichever country you're in,
let them know
that you're not happy
with the current status quo.
So, yeah, it's boring to say
call your congressperson.
I'm not saying you should
just do that, but, like,
we do have to do that.
Like, we do have to get
the government involved.
DANIEL: So we just
call them up and say,
"Hey, stop Big Tech
from ruining the world"?
CAROLINE: No, but there are
tons of really obvious,
straightforward things
we can be demanding.
We need transparency.
We need to end the secrecy
that exists inside these labs,
because they are building
powerful technology,
and the public deserves to know
what's going on.
Ultimately,
we're gonna need independent,
objective third parties
to evaluate the systems.
We can't count on the companies
to grade their own homework.
If-if a company uses AI and-and
has AI interacting with you,
it should disclose that you are
interacting with an AI system.
Yeah. And-and also we need
a system that makes companies
legally liable for the
AI systems that they produce.
We need to make sure that there
are tests and safety standards
that are applied to everyone.
We need some ground rules,
and we need to keep adapting
those rules
at the speed that
the technology develops.
LEAHY: There is currently
more regulation
on selling a sandwich
to the public
than there is on building
potentially world-ending AGI.
CAROLINE:
And the last thing is
to upgrade ourselves.
This is not, like, the job
of, like, the safety team
at any given lab, or the CEO.
So, like, this is
everyone's job.
Don't, like, leave it
up to the AI experts.
Like, (bleep) that.
Like, this is the moment
that we are transitioning
from, like,
mostly human cognitive power
to, like, AI cognitive power,
and it affects everyone,
and I want people to be
in on that conversation.
And I would say, if you think
that AI will kill us all,
you should be working
in AI research
to make sure it doesn't,
because you do have
an enormous amount of agency.
CAROLINE:
Whoever you are,
you are an expert
in your own industry,
in your own school,
in your own family,
and it's up to you
how AI is used in your life.
Whether you want to join
your school board
or-or whether you want to ask
your employer
how they're using
AI technologies,
like, all of us can do the work.
A lot of unions have been
pretty effective at, like,
determining how they want
to interact with these systems.
-Nurses unions, teacher unions.
-(indistinct chanting)
DANIELA AMODEI: I would love
if parents everywhere
went to the AI companies
and said,
"How can you be better
on this?" Including us.
So, I founded Encode Justice
when I was 15 years old.
We are the world's first
and largest army of young people
fighting for human-centered
artificial intelligence.
It doesn't matter who you are,
even the smallest actions help,
and even conversation starting
is really, really valuable.
DANIEL:
So my job could be, like,
tell Bubby Lila about this
at dinner?
CAROLINE: Honestly, yeah,
that's part of it.
And we're gonna have to do
a lot of things that
we haven't even thought of yet.
People are gonna look at
anything that we've outlined
and say, "That's not enough."
What matters is that the forces
that are working
towards solutions
start to exceed the forces that
are working against solutions.
Making the world better
has always been hard.
It has never been easy.
Like, there have been
many shitty things
that have happened in history,
and we've had--
like, people have had
to deal with that,
and then they've risen up
and changed it.
There are an insane number
of challenges ahead of us,
but if we can get past them,
we can unlock a future beyond
our wildest imagination.
CAROLINE:
We have to come together
and find the path between
the promise and the peril.
We can't be pessimists
or optimists.
We have to become something new.
A friend of mine calls me
an apocaloptimist.
"Apocaloptimist"?
I think that might be
my new favorite word.
-(laughs) -MAN: Might be
the name of this movie.
-It might be the name
of this movie. -(laughter)
-"Apocaloptimist."
-"Apocaloptimist."
Yeah.
I don't believe in doom.
I believe in the spirit of life,
uh, and I believe life is
about the capacity to act.
-(crowd singing)
-The capacity to relate,
the capacity to feel.
-(indistinct mission chatter)
-(cheering)
We have to double down
more and more
on those capacities
that we have as humans
that robotic systems
will never have.
It's time right now
to make those decisions about
how to guide it and support it
rather than dividing us.
DANIEL: It kind of sounds like
raising a kid.
That's what's up. Yeah.
CAROLINE: AI may have
more raw intelligence
-than our little human brains,
-(baby laughing)
but we're so much more
than just our intelligence.
Intelligence is-is
the ability to solve problems.
Wisdom is the ability to know
which problems to solve.
(indistinct crew chatter)
DANIEL:
It can go on your fridge.
(laughing)
YUDKOWSKY:
So, don't give up.
Humanity has done
more difficult things than this
in its history.
It's just hard to convince
people that they should.
(audio crackles)
DANIEL: So, when I started
making this movie,
I would say that I was, like,
broadly a cynical asshole
about this whole thing.
Over the course
of making the film,
I've come to understand
that, like,
that's the only thing
we can't be.
...or anything else
you'd like to discuss?
No. I think we covered a lot.
-Yeah. (chuckles)
-MAN: Thank you so much.
Thanks.
DANIEL:
This is a problem that's bigger
than any one person.
This will change the world in
ways that we don't understand.
That is all true.
-(laughs): Okay.
-(indistinct chatter)
DANIEL:
But what we do have agency over
is what we do about it.
As frontier AI grows
exponentially more capable...
DANIEL:
And the reality
is that if we just decide
it's hopeless,
then it is hopeless.
...to put less stress
on planet Earth...
DANIEL:
But if you decide
that you want to try...
...then you try.
And that's hard.
But you know what?
Big things seem impossible
before they actually happen.
But when they finally do happen,
it's because millions of people
took millions of actions
to make them happen.
And so...
we have to try.
(sighs heavily)
There's too much at stake.
(film crackling)
Look at the incredible changes
we've experienced and survived
from the Stone Age,
and yet even greater changes
are still to come.
("Lost It to Trying"
by Son Lux playing)
What will we do now?
We've lost it to trying
We've lost it
To trying
What will we do now?
We've lost it to trying
We've lost it
To trying
What can we say now?
Our mouths only lying
Our mouths
Only lying
What can we say now?
Our mouths only lying
Our mouths
Only lying
Give in and get out
We rise in the dying
We rise
In the dying
Give in and get out
We rise in the dying
We rise
In the dying
Give in and get out
We rise in the dying
We rise
In the dying
Give in and get out
We rise in the dying
We
Oh, oh, oh, oh, oh
Oh, oh, oh, oh
Oh, oh
Oh, oh, oh, oh, oh, oh
Oh, oh, oh, oh
Oh, oh
Oh, oh, oh, oh, oh, oh
What will we do now?
We've lost it to trying
We've lost it
To trying
Oh, oh, oh, oh, oh, oh
What will we do now?
We've lost it to trying
We've lost it
To trying.
(song ends)