AI Revolution (2024) Movie Script
"Viewers like you make
this program possible".
"Support your local PBS station".
MILES O'BRIEN:
Machines that think like humans.
Our dream to create machines
in our own image
that are smart and intelligent
goes back to antiquity.
Well, can it bring it to me?
O'BRIEN:
Is it possible that the dream
of artificial intelligence
has become reality?
They're able to do things
that we didn't
think they could do.
MANOLIS KELLIS:
Go was thought to be a game
where machines would never win.
The number of choices
for every move is enormous.
O'BRIEN:
And now, the possibilities
seem endless.
MUSTAFA SULEYMAN:
And this is going to be
one of the greatest boosts
to productivity in the history
of our species.
That looks like just a hint
of some type of smoke.
O'BRIEN:
Identifying problems
before a human can...
LECIA SEQUIST:
We taught the model to recognize
developing lung cancer.
O'BRIEN:
and inventing new drugs.
PETRINA KAMYA:
I never thought
that we would be able
to be doing the things
we're doing with A.I.
O'BRIEN:
But along with the hope...
[imitating Obama]:
This is a dangerous time.
O'BRIEN:
comes deep concern.
One of the first drops
in the feared flood
of A.I.-created disinformation.
We have lowered barriers
to entry to manipulate reality.
We're going to live in a world
where we don't know what's real.
The risks are uncertain
and potentially enormous.
O'BRIEN:
How powerful is A.I.?
How does it work?
And how can we reap
its extraordinary benefits...
Sybil looked here,
and anticipated
that there would be a problem.
O'BRIEN:
without jeopardizing
our future?
"A.I. Revolution"
right now, on "NOVA!"
[whirring]
Tell me the backstory
on Inflection A.I.
[voiceover]:
Our story begins
with the making of this story.
PI [on computer]:
The story of Inflection A.I.
is an exciting one.
O'BRIEN [voiceover]:
I was researching
an interview subject.
Who is Mustafa Suleyman?
[voiceover]:
Something I've done
a thousand times
in my 40-year career.
PI [on computer]:
Mustafa Suleyman is a
true pioneer
in the field
of artificial intelligence.
[voiceover]:
But this time, it was different:
I wasn't typing out
search terms.
What is machine learning?
O'BRIEN [voiceover]:
I was having a conversation
with a computer.
PI:
Sounds like an
exciting project, Miles.
[voiceover]:
It felt like something
big had changed.
PI:
Machine learning, ML, is a type
of artificial intelligence.
O'BRIEN [voiceover]:
And as it happened,
I was focused on one of the
innovators of this revolution.
Okay, so if I do this...
[voiceover]:
Mustafa Suleyman is co-founder
of a startup called Inflection.
It makes an artificial
intelligence assistant
called Pi.
So now you can speak...
O'BRIEN [voiceover]:
I met them both in London.
It's fundamentally
different, isn't it?
Yeah,
it's a conversational style.
All of us humans
learn through stories,
and through narrative, and
through interactive dialogue.
And now, the machine can
kind of come alive,
and talk to you
about whatever it is
that's on top of your mind.
O'BRIEN:
Tell me about the PBS program
"NOVA."
[voiceover]:
Chatbots can offer up
quick answers, write poems,
finish essays,
and translate languages
among many other things.
PI [over phone]:
"NOVA" is a science
documentary series...
O'BRIEN [voiceover]:
They aren't perfect,
but they have put artificial
intelligence in our hands,
and into
the public consciousness.
And it seems
we're equal parts leery
and intrigued.
SULEYMAN:
A.I. is a tool
for helping us to understand
the world around us,
predict what's likely to happen,
and then invent
solutions that help improve
the world around us.
My motivation was to try
to use A.I. tools
to, uh, you know,
invent the future.
The rise
in artificial intelligence...
REPORTER:
A.I. technology is developing...
O'BRIEN [voiceover]:
Lately, it seems a dark future
is already here...
The technology could replace
millions of jobs...
O'BRIEN [voiceover]:
if you listen
to the news reporting.
The moment civilization
was transformed.
O'BRIEN [voiceover]:
So how can
artificial intelligence help us,
and how might it hurt us?
At the center of
the public handwringing:
how should we put
guardrails around it?
We definitely need
more regulations in place...
O'BRIEN [voiceover]:
Artificial intelligence
is moving fast
and changing the world.
Can we keep up?
Non-human minds
smarter than our own.
O'BRIEN [voiceover]:
The news coverage may make it
seem like
artificial intelligence
is something new.
At a moment of revolution...
O'BRIEN [voiceover]:
But human beings have been
thinking about this
for a very long time.
I have a very fine brain.
Our dream to create machines
in our own image
that are smart and intelligent
goes back to antiquity.
Uh, it's,
it's something that has,
has permeated the evolution
of society and of science.
[mortars firing]
O'BRIEN [voiceover]:
The modern origins
of artificial intelligence
can be traced
back to World War II,
and the prodigious
human brain of Alan Turing.
The legendary
British mathematician
developed a machine
capable of deciphering
coded messages from the Nazis.
After the war, he was among
the first to predict computers
might one day match
the human brain.
There are no surviving
recordings of Turing's voice,
but in 1951, he gave
a short lecture on BBC radio.
We asked an A.I.-generated voice
to read a passage.
TURING A.I. VOICE:
I think it is probable,
for instance,
that at the end of the century,
it will be possible
to program a machine
to answer questions
in such a way
that it will be extremely
difficult to guess
whether the answers are being
given by a man
or by the machine.
O'BRIEN [voiceover]:
And so,
the Turing test was born.
Could anyone build a machine
that could converse
with a human in a way
that is indistinguishable
from another person?
In 1956,
a group of pioneering scientists
spent the summer
brainstorming
at Dartmouth College.
And they told the world that
they had coined
a new academic field of study.
They called it
artificial intelligence.
O'BRIEN [voiceover]:
For decades,
their aspirations remained
far ahead of
the capabilities of computers.
In 1978,
"NOVA" released its first film
on artificial intelligence.
We have seen the first
crude beginnings
of artificial intelligence...
O'BRIEN [voiceover]:
And the legendary science
fiction writer,
Arthur C. Clarke was,
as always, prescient.
It doesn't really exist yet at
any level,
because our most complex
computers are still morons,
high-speed morons,
but still morons.
Nevertheless, we have
the possibility of machines
which can outpace their
creators,
and therefore,
become more intelligent than us.
At the time, researchers were
developing "expert systems,"
purpose-built
to perform specific tasks.
So the thing that we need to do
to make machines understand, um,
you know, our world,
is to put all our knowledge
into a machine
and then provide it
with some rules.
O'BRIEN [voiceover]:
Classic A.I. reached a pivotal
moment in 1997
when an artificial intelligence
program devised by IBM,
called "Deep Blue" defeated
world chess champion
and grandmaster Garry Kasparov.
It searched about 200 million
positions a second,
navigating through
a tree of possibilities
to determine the best move.
RUS:
The program analyzed
the board configuration,
could project forward
millions of moves
to examine millions of
possibilities,
and then picked the best path.
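To make that concrete, here is a minimal sketch, in Python, of the kind of game-tree search Rus describes. It plays the toy game of Nim (take one, two, or three stones; whoever takes the last stone wins) rather than chess, and its brute-force projection stands in for Deep Blue's far more elaborate, proprietary search and evaluation heuristics.

```python
# Toy game-tree search in the spirit of Deep Blue's, applied to Nim:
# take 1-3 stones; the player who takes the last stone wins.

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def minimax(stones, maximizing):
    """Score a state by projecting every line of play to the end."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else +1
    scores = [minimax(stones - m, not maximizing)
              for m in legal_moves(stones)]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move whose subtree scores best -- 'the best path'."""
    return max(legal_moves(stones),
               key=lambda m: minimax(stones - m, False))

print(best_move(10))  # -> 2, leaving 8, a losing count for the opponent
```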
O'BRIEN [voiceover]:
Effective, but brittle,
Deep Blue wasn't
strategizing as a human does.
From the outset, artificial
intelligence researchers
imagined making machines
that think like us.
The human brain, with
more than 80 billion neurons,
learns not by following rules,
but rather by taking
in a steady stream of data,
and looking for patterns.
KELLIS:
The way that learning
actually works
in the human brain is by
updating the weights
of the synaptic connections
that are underlying this
neural network.
O'BRIEN [voiceover]:
Manolis Kellis is a
Professor of Computer Science
at the Massachusetts Institute
of Technology.
So we have trillions
of parameters in our brain
that we can adjust
based on experience.
I'm getting a reward.
I will update
the strength of the connections
that led to this reward...
I'm getting punished,
I will diminish the strength
of the connections
that led to the punishment.
So this is
the original neural network.
We did not invent it,
we, you know, we inherited it.
O'BRIEN [voiceover]:
But could an artificial
neural network
be made in our own image?
Turing imagined it.
But computers were nowhere near
powerful enough to do it
until recently.
It's only with the advent
of extraordinary data sets
that we have, uh,
since the early 2000s,
that we were able to build up
enough images,
enough annotations,
enough text to be able
to finally train
these sufficiently powerful
models.
O'BRIEN [voiceover]:
An artificial neural network is,
in fact,
modeled on the human brain.
It uses interconnected nodes,
or neurons,
that communicate with
each other.
Each node receives
inputs from other nodes
and processes those inputs
to produce outputs,
which are then passed on to
still other nodes.
It learns by adjusting
the strength of the connections
between the nodes based on
the data it is exposed to.
This process
of adjusting the connections
is called training,
and it allows an
artificial neural network
to recognize patterns
and learn from its experiences
like humans do.
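To make the description above concrete, here is a minimal sketch, in Python, of one artificial neuron and one training step. The update rule is a standard gradient nudge; real networks stack millions of such units and train them with full backpropagation.

```python
import math, random

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed into a 0-1 activation.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train_step(inputs, target, weights, bias, lr=0.5):
    out = neuron(inputs, weights, bias)
    # Strengthen connections that push the output toward the target;
    # weaken the ones that push it away.
    grad = (target - out) * out * (1.0 - out)
    weights = [w + lr * grad * x for w, x in zip(weights, inputs)]
    return weights, bias + lr * grad

# Train on the logical-OR pattern.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = [random.uniform(-1, 1) for _ in range(2)], 0.0
for _ in range(5000):
    x, t = random.choice(data)
    w, b = train_step(x, t, w, b)
print([round(neuron(x, w, b)) for x, _ in data])  # -> [0, 1, 1, 1]
```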
A child,
how is it learning so fast?
It is learning so fast
because it's constantly
predicting the future
and then seeing what happens
and updating their weights in
their neural network
based on what just happened.
Now you can take this
self-supervised learning
paradigm
and apply it to machines.
O'BRIEN [voiceover]:
At first, some of these
artificial neural networks
were trained on vintage
Atari video games
like "Space Invaders"
and "Breakout."
Games reduce the complexity
of the real world
to a very narrow set
of actions that can be taken.
O'BRIEN [voiceover]:
Before he started Inflection,
Mustafa Suleyman co-founded
a company called
DeepMind in 2010.
It was acquired by Google
four years later.
When an A.I. plays a game,
we show it frame-by-frame,
every pixel
in the moving image.
And so the A.I. learns
to associate pixels
with actions that it can take
moving left or right
or pressing the fire button.
O'BRIEN [voiceover]:
When it obliterates blocks
or shoots aliens,
the connections between the
nodes that enabled that success
are strengthened.
In other words, it is rewarded.
When it fails, no reward.
Eventually,
all those reinforced connections
overrule the weaker ones.
The program has learned
how to win.
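DeepMind's Atari agents were deep Q-networks; the reward principle itself fits in a few lines. Here is a toy, tabular sketch in Python, with an invented two-state "game" standing in for the real screen pixels.

```python
import random
from collections import defaultdict

q = defaultdict(float)            # learned value of (state, action)
actions = ["left", "right", "fire"]

def choose(state, explore=0.1):
    if random.random() < explore:        # occasionally try something new
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def update(state, action, reward, lr=0.1):
    # Nudge the stored value toward the reward actually received.
    q[(state, action)] += lr * (reward - q[(state, action)])

# Invented environment: "fire" only scores when an alien is overhead.
for _ in range(2000):
    state = random.choice(["alien_overhead", "no_alien"])
    action = choose(state)
    reward = 1.0 if (state, action) == ("alien_overhead", "fire") else 0.0
    update(state, action, reward)

print(choose("alien_overhead", explore=0))  # -> "fire"
```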
This sort of repeated allocation
of reward
for repetitive behavior
is a great way to train a dog.
It's a great way to teach a kid.
It's a great way for us
as adults to adapt our behavior.
And in fact,
it's actually a good way
to train machine learning
algorithms to get better.
O'BRIEN [voiceover]:
In 2014, DeepMind began work
on an artificial neural network
called "AlphaGo"
that could play the ancient,
and deceptively complex,
board game of Go.
KELLIS:
Go was thought to be a game
where machines would never win.
The number of choices
for every move is enormous.
O'BRIEN [voiceover]:
But at DeepMind,
they were counting on
the astounding growth
of compute power.
And I think that's the key
concept to try to grasp,
is that we are massively,
exponentially growing
the amount of computation used,
and in some sense,
that computation is a proxy
for how intelligent
the model is.
O'BRIEN [voiceover]:
AlphaGo was trained two ways.
First, it was fed a large
data set of expert Go games
so that it could
learn how to play the game.
This is known
as supervised learning.
Then the software played against
itself many millions of times,
so-called
reinforcement learning.
This gradually improved
its skills and strategies.
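The same two-phase recipe can be shown in miniature. This sketch, in Python, swaps Go for the far simpler game of Nim (take one to three stones; taking the last stone wins) and AlphaGo's deep networks for a lookup table; only the shape of the training, imitation first and self-play second, is the point.

```python
import random

START, MOVES = 10, (1, 2, 3)
value = {}    # learned preference for (stones_remaining, move)

def legal(stones):
    return [m for m in MOVES if m <= stones]

def pick(stones, explore=0.0):
    if random.random() < explore:
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda m: value.get((stones, m), 0.0))

# Phase 1: supervised learning from "expert" demonstrations.
def expert(stones):   # optimal Nim play: leave a multiple of 4
    m = stones % 4
    return m if m in legal(stones) else random.choice(legal(stones))

for _ in range(500):
    stones = START
    while stones:
        m = expert(stones)
        value[(stones, m)] = value.get((stones, m), 0.0) + 1.0
        stones -= m

# Phase 2: reinforcement learning by self-play.
for _ in range(5000):
    stones, history = START, []
    while stones:
        m = pick(stones, explore=0.2)
        history.append((stones, m))
        stones -= m
    # Whoever moved last won; reward the winner's moves and
    # penalize the loser's (players alternate through the history).
    for i, (s, m) in enumerate(reversed(history)):
        sign = +1 if i % 2 == 0 else -1
        value[(s, m)] = value.get((s, m), 0.0) + 0.1 * sign

print(pick(10))  # -> 2, handing the opponent a losing multiple of 4
```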
In March 2016,
AlphaGo faced Lee Sedol,
one of the world's
top-ranking players
in a five-game match in
Seoul, South Korea.
AlphaGo not only won,
but also made a move so novel,
the Go cognoscenti
thought it was a huge blunder.
That's a very surprising move.
There's no question to me
that these A.I. models
are creative.
They're incredibly creative.
O'BRIEN [voiceover]:
It turns out the move
was a stroke of brilliance.
And this
emergent creative behavior
was a hint of what was to come:
generative A.I.
Meanwhile,
a company called OpenAI
was creating
a generative A.I. model
that would become ChatGPT.
It allows users
to engage in a dialogue
with a machine
that seems uncannily human.
It was first released in 2018,
but it was a subsequent version
that became a global sensation
in late 2022.
This promises to be
the viral sensation
that could completely reset
how we do things.
Cranking out entire essays
in a matter of seconds.
O'BRIEN [voiceover]:
Not only did it wow the public,
it also caught
artificial intelligence
innovators off guard.
YOSHUA BENGIO:
It surprised me a lot
that they're able
to do things that
we didn't think they could do
simply by
learning to imitate
how humans respond.
And I thought this
kind of abilities would take
many more years or decades.
O'BRIEN [voiceover]:
ChatGPT is
a large language model.
LLMs start by consuming massive
amounts of text:
books, articles and websites,
which are publicly available on
the internet.
By recognizing patterns
in billions of words,
they can make guesses
at the next word in a sentence.
That's how ChatGPT
generates unique answers
to your questions.
If I ask for a haiku
about the blue sky,
it writes something
that seems completely original.
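Large language models do this with billions of learned parameters; the underlying next-word game can be shown with a toy "bigram" model that merely counts which word follows which in a tiny corpus.

```python
import random
from collections import defaultdict

corpus = ("the blue sky meets the blue sea and "
          "the sky turns gold as the sea turns dark").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)        # the learned "patterns"

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        if not follows[word]:        # no observed continuation
            break
        word = random.choice(follows[word])   # guess the next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the blue sky turns gold as the sea turns"
```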
KELLIS:
If you're good at predicting
this next word,
it means you're understanding
something about the sentence.
What the style
of the sentence is,
what the feeling
of the sentence is.
And you can't tell whether
this was a human or a machine.
That's basically the definition
of the Turing test.
O'BRIEN [voiceover]:
So, how is this changing
our world?
Well, it might change my world...
as an arm amputee.
Ready for my casting call, right?
MONROE [chuckling]:
Yes.
Let's do it. All right.
O'BRIEN [voiceover]:
That's Brian Monroe of
the Hanger Clinic.
He's been my prosthetist
since an injury
took my arm above the elbow
ten years ago.
So what we're going to do today
is take a mold of your arm.
Uh-huh.
Kind of is like
a cast for a broken bone.
O'BRIEN [voiceover]:
Up until now, I have used
a body-powered prosthetic.
Harness and a cable allow me
to move it
by shrugging my shoulders.
The technology is
more than a century old.
But artificial intelligence,
coupled with small
electric motors,
is finally pushing prosthetics
into the 21st century.
Which brings me to Chicago
and the offices of
a small company called Coapt.
I met the C.E.O., Blair Lock,
a pioneer in the push
to apply artificial intelligence
to artificial limbs.
So, what do we have here?
What are we going to do?
This allows us to very easily
test how your control would be
using a pretty simple cuff;
this has electrodes in it,
and we'll let the power
of the electronics
that are doing
the machine learning
see what you're capable
of.
All right, let's give it a try.
[voiceover]:
Like most amputees,
I feel my missing hand almost
as if it was still there...
A phantom.
Everything will touch.
Is that okay?
Yeah.
Not too tight?
No. All good.
Okay.
O'BRIEN [voiceover]:
It's almost entirely immobile,
stuck in molasses.
Make a fist, not too hard.
O'BRIEN [voiceover]:
But I am able to imagine
moving it ever so slightly.
And I'm gonna have you squeeze
into that a little bit harder.
Very good, and I see the
pattern on the screen
change a little bit.
O'BRIEN [voiceover]:
And when I do,
I generate an array of faint
electrical signals in my stump.
That's your muscle information.
It feels,
it feels like I'm overcoming
something that's really stuck.
I don't know,
is that enough signal?
Should be.
Oh, okay.
We don't need a lot of signal,
we're going for information
in the signal,
not how loud it is.
O'BRIEN [voiceover]:
And this is where artificial
intelligence comes in.
Using a virtual
prosthetic depicted on a screen,
I trained a machine learning
algorithm to become fluent
in the language
of my nerves and muscles.
We see eight different signals
on the screen.
All eight of those sensor sites
are going to feed in together
and let the algorithm
sort out the data.
What you are experiencing
is your ability
to teach the system
what is hand-closed to you.
And that's different
than what it would be to me.
O'BRIEN [voiceover]:
I told the software
what motion I desired,
open, close, or rotate,
then imagined moving
my phantom limb accordingly.
This generates an array
of electromyographic,
or EMG, signals in
my remaining muscles.
I was training the A.I.
to connect the pattern
of these electrical signals
with a specific movement.
LOCK:
The system adapts,
and as you add more data
and use it over time,
it becomes more robust,
and it learns
to improve upon use.
O'BRIEN:
Is it me that's learning, or
the algorithm that's learning?
Or are we learning together?
LOCK:
You're learning together.
Okay.
O'BRIEN [voiceover]:
So, how does the Coapt pattern
recognition system work?
It's called a Bayesian
classification model.
As I train the software,
it labels my
various EMG patterns
into corresponding
classes of movement...
Hand open, hand closed,
wrist rotation, for example.
As I use the arm,
it compares the electrical
signals I'm transmitting
to the existing library
of classifications I taught it.
It relies on
statistical probability
to choose the best match.
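Coapt's production software is proprietary, but the Bayesian idea can be sketched. In this Python toy, each movement class is summarized by the mean and variance of its training signals, and a new signal gets the label under which it is statistically most probable; the three-sensor readings are invented for illustration.

```python
import math

def fit(examples):
    """examples: {class_name: [feature_vector, ...]} -> per-class stats."""
    stats = {}
    for label, vecs in examples.items():
        cols = list(zip(*vecs))
        means = [sum(c) / len(c) for c in cols]
        varis = [sum((x - m) ** 2 for x in c) / len(c) + 1e-6
                 for c, m in zip(cols, means)]
        stats[label] = (means, varis)
    return stats

def classify(stats, vec):
    """Label a new signal with its statistically most probable class."""
    def log_prob(means, varis):
        return sum(-((x - m) ** 2) / (2 * v) - 0.5 * math.log(2 * math.pi * v)
                   for x, m, v in zip(vec, means, varis))
    return max(stats, key=lambda label: log_prob(*stats[label]))

# Invented 3-sensor EMG amplitudes recorded during training.
training = {
    "hand_open":  [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "hand_close": [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}
model = fit(training)
print(classify(model, [0.85, 0.15, 0.1]))  # -> "hand_open"
```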
And this is just one way
machine learning
is quietly
revolutionizing medicine.
Computer scientist
Regina Barzilay
first started working on
artificial intelligence
in the 1990s, just as
rule-based A.I. like Deep Blue
was giving way
to neural networks.
She used the techniques
to decipher dead languages.
You might call it
a small language model.
Something that is fun and
intellectually very challenging,
but it's not like
it's going to change our life.
O'BRIEN [voiceover]:
And then her life changed
in an instant.
CONSTANCE LEHMAN:
We see a spot there.
O'BRIEN [voiceover]:
In 2014, she was diagnosed
with breast cancer.
BARZILAY [voiceover]:
When you go through the
treatment,
there are a lot of
people who are suffering.
I was interested in
what I can do about it, and
clearly it was not continuing
deciphering dead languages,
and it was quite a journey.
O'BRIEN [voiceover]:
Not surprisingly, she began that
journey with mammograms.
LEHMAN:
It's a little bit
more prominent.
O'BRIEN [voiceover]:
She and Constance Lehman,
a radiologist at
Massachusetts General Hospital,
realized the Achilles heel
in the diagnostic system
is the human eye.
BARZILAY [voiceover]:
So the question that we ask is,
what is the likelihood
of these patients
to develop cancer
within the next five years?
We, with our human eyes,
cannot really make these
assertions
because the patterns
are so subtle.
LEHMAN: Now, is that different
from the surrounding tissue?
O'BRIEN [voiceover]:
It's a perfect use case
for pattern recognition
using what is known as
a convolutional neural network.
Here's an example
of how CNNs get smart:
they comb through a picture with
many virtual magnifying glasses.
Each one is looking for
a specific kind of puzzle piece,
like an edge,
a shape, or a texture.
Then it makes
simplified versions,
repeating the process
on larger and larger sections.
Eventually
the puzzle can be assembled.
And it's time to make a guess.
Is it a cat? A dog? A tree?
Sometimes the guess is right,
but sometimes it's wrong.
And here's the learning part:
with a process
called backpropagation,
labeled images are sent back to
correct the previous operation.
So the next time
it plays the guessing game,
it will be even better.
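Each "magnifying glass" is really a small grid of numbers, a filter, slid across the image. Here is a dependency-free Python sketch of that first step, with a hand-written vertical-edge filter standing in for the filters a real CNN learns through backpropagation.

```python
image = [  # 5x5 grayscale: dark left half, bright right half
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]
kernel = [  # responds where brightness jumps from left to right
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, ker):
    k, out = len(ker), []
    for r in range(len(img) - k + 1):
        row = []
        for c in range(len(img[0]) - k + 1):
            row.append(sum(img[r + i][c + j] * ker[i][j]
                           for i in range(k) for j in range(k)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)   # the edge column lights up: [3, 3, 0] on every row
```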
To validate the model,
Regina and her team gathered up
more than 128,000 mammograms
collected at seven sites
in four countries.
More than 3,800 of them
led to a cancer diagnosis
within five years.
You just give to it the image,
and then
the five years of outcomes,
and it can learn the likelihood
of getting a cancer diagnosis.
O'BRIEN [voiceover]:
The software, called Mirai,
was a success.
In fact, it is between
75% and 84% accurate
in predicting
future cancer diagnoses.
Then, a friend of
Regina's developed lung cancer.
SEQUIST:
In lung cancer, it's actually
sort of mind boggling
how much has changed.
O'BRIEN [voiceover]:
Her friend saw oncologist
Lecia Sequist.
She and Regina wondered
if artificial intelligence
could be applied
to CAT scans of patients' lungs.
SEQUIST:
We taught the model
to recognize the patterns
of developing lung cancer
by using thousands of CAT scans
from patients who were
participating
in a clinical trial.
From the new study?
Oh, interesting.
Correct.
SEQUIST [voiceover]:
We had a lot of information
about them.
We had demographic information,
we had health information,
and we had outcomes information.
O'BRIEN [voiceover]:
They call the model Sybil.
In the retrospective study,
right,
so the retrospective data...
O'BRIEN [voiceover]:
Radiologist Florian Fintelmann
showed me what it can do.
FINTELMANN:
This is earlier,
and this is later.
There is nothing
that I can perceive, pick up,
or describe.
There's no, what we call,
a precursor lesion
on this CT scan.
Sybil looked here
and then anticipated that
there would be a problem
based on the baseline
scan.
What is it seeing?
That's the million dollar
question.
And, and maybe not
the million dollar question.
Does it really matter? Does it?
O'BRIEN [voiceover]:
When they compared
the predictions
to actual outcomes from previous
cases, Sybil fared well.
It correctly forecast cancer
between 80% and 95% of the time,
depending on the population
it studied.
The technique is
still in the trial phase.
But once it is deployed,
it could provide
a potent tool for prevention.
The hope is that if you
can predict very early on
that the patient
is in the wrong way,
you can do clinical trials,
you can develop the drugs
that are doing the prevention,
rather than treatment
of very advanced disease
that we are doing today.
O'BRIEN [voiceover]:
Which takes us back to DeepMind
and AlphaGo.
The fun and games
were just the beginning,
a means to an end.
We have always set out
at DeepMind
to, um, use our technologies to
make the world a better place.
O'BRIEN [voiceover]:
In 2021,
the company released AlphaFold.
It is pattern
recognition software
designed to make
it easier for researchers
to understand proteins,
long chains of amino acids
involved in nearly
every function in our bodies.
How a protein folds
into a specific,
three-dimensional shape
determines how it interacts
with other molecules.
SULEYMAN:
There's this correlation between
what the protein does
and how it's structured.
So if we can predict
how the protein folds,
then we can say something
about its function.
O'BRIEN:
If we know how a disease's
protein is shaped, or folded,
we can sometimes create
a drug to disable it.
But the shape of millions
of proteins remained a mystery.
DeepMind trained AlphaFold
on thousands of
known protein structures.
It leveraged this knowledge
to predict
200 million protein structures,
nearly all the proteins
known to science.
SULEYMAN:
You take some high-quality
known data,
and you use that to, you know,
make a prediction about how
a similar piece of information
is likely to unfold
over some time series,
and the structure
of proteins is,
you know, in that sense,
no different to
making a prediction in
the game of Go or in Atari
or in a mammography scan,
or indeed,
in a large language model.
KAMYA:
These thin sticks here?
Yeah? They represent
the amino acids
that make up a protein.
O'BRIEN [voiceover]:
Theoretical chemist
Petrina Kamya works for
a company called
Insilico Medicine.
It uses AlphaFold
and its own deep-learning models
to make accurate predictions
about protein structures.
What we're doing in drug design
is we're designing a molecule
that is analogous
to the natural molecule
that binds to the protein,
but instead it will lock it,
if this molecule
is involved in a disease
where it's hyperactive.
O'BRIEN [voiceover]:
If the molecule fits well,
it can inhibit the
disease-causing proteins.
So you're filtering it down
like you're choosing
an Airbnb or something to,
you know, number of bedrooms,
whatever.
To suit your needs.
[laughs]
Exactly, right.
Right, yeah.
That's a very good analogy.
It's sort of like Airbnb.
So you are putting in
your criteria,
and then Airbnb will filter out
all the different properties
based on your criteria.
So you can be very, very
restrictive
or you can be very,
very free... Right.
In terms of guiding the
generative algorithms
and telling them
what types of molecules
you want them to generate.
O'BRIEN [voiceover]:
It will take 48 to 72 hours
of computing time
to identify the best
candidates ranked in order.
How long would it have taken you
to figure that out
as a computational chemist?
I would have thought of
some of these,
but not all of them.
Okay.
O'BRIEN [voiceover]:
While there are no shortcuts
for human trials,
nor should we hope for that,
this could greatly speed up
the drug development pipeline.
There will not be the need
to invest so heavily
in preclinical discovery,
and so,
drugs can therefore be cheaper.
And you can go
after those diseases
that are otherwise neglected,
because you don't have
to invest so heavily
in order for you
to come up with a drug,
a viable drug.
O'BRIEN [voiceover]:
But medicine isn't
the only place
where A.I. is breaking
new frontiers.
It's conducting
financial analysis
and helping with fraud detection.
[mechanical whirring]
It's now being deployed
to discover novel materials
and could help us build
clean energy technology.
And it is even helping
to save lives
as the climate crisis
boils over.
[indistinct radio chatter]
In St. Helena, California,
dispatchers at the
CAL FIRE Sonoma-Lake-Napa
Command Center
caught a break in 2023.
Wildfires blackened nearly
700 acres of their territory.
We were at 400,000 acres
in 2020.
Something like that would
generate a response from us...
O'BRIEN [voiceover]:
Chief Mike Marcucci has
been fighting fires
for more than 30 years.
MARCUCCI [voiceover]:
Once we started having
these devastating fires,
we needed more intel.
The need for intelligence
is just overwhelming
in today's fire service.
O'BRIEN [voiceover]:
Over the past 20 years,
California
has installed a network
of more than
1,000 remotely operated
pan, tilt, zoom surveillance
cameras on mountaintops.
PETE AVANSINO:
Vegetation fire,
Highway 29 at Doton Road.
O'BRIEN [voiceover]:
All those cameras generate
petabytes of video.
CAL FIRE partnered with
scientists at U.C. San Diego
to train a neural network
to spot the early signs
of trouble.
It's called ALERT California.
SeLEGUE:
So here's one
that just popped up.
Here's an anomaly.
O'BRIEN [voiceover]:
CAL FIRE Staff Chief of Fire
and Intelligence Philip SeLegue
showed me how it works
while it was in action,
detecting nascent fires,
micro fires.
That looks like
just a little hint
of some type of smoke
that was there...
O'BRIEN [voiceover]:
Based on this,
dispatchers can orchestrate
a fast response.
A.I. has given us the ability
to detect and to see
where those fires are starting.
AVANSINO:
Transport 1447
responding via MDC.
O'BRIEN [voiceover]:
For all they know,
they have nipped
some megafires in the bud.
The successes are the fires
that you don't hear about
in the news.
O'BRIEN [voiceover]:
Artificial intelligence
can't put out
wildfires just yet.
Human firefighters
still need to do that job.
But researchers are pushing hard
to combine neural networks
with mobility and dexterity.
This is where people
get nervous.
Will they take our jobs?
Or could they turn against us?
But at M.I.T.,
they're exploring ideas
to make robots
good human partners.
We are interested in
making machines
that help people with
physical and cognitive tasks.
So this is really great,
it has the stiffness
that we wanted...
O'BRIEN [voiceover]:
Daniela Rus is director of
M.I.T.'s Computer Science
and Artificial Intelligence Lab.
Oh, can you bring it to me?
O'BRIEN [voiceover]:
CSAIL.
They are different, like,
kind of like muscles
or actuators.
RUS [voiceover]:
We can do so much more
when we get people and machines
working together.
We can get better reach.
We can get lift,
precision, strength, vision.
All of these are
physical superpowers
we can get through machines.
O'BRIEN [voiceover]:
So, they're focusing
on making it safe for humans
to work in close proximity
to machines.
They're using some of
the technology that's inside
my prosthetic arm.
Electrodes that can read
the faint EMG signals generated
as our nerves command
our muscles to move.
They have the capability to
interact with a human,
to understand the human,
to step in and help the human
as needed.
I am at your disposal with
187 other languages,
along with their various
dialects and sub tongues.
O'BRIEN [voiceover]:
But making robots as useful
as they are in the movies
is a big challenge.
Most neural networks run on
powerful supercomputers...
Thousands of processors
occupying entire buildings.
RUS:
We have brains that require
massive computation,
which you cannot include
on a self-contained body.
We address the size challenge by
making liquid networks.
O'BRIEN [voiceover]:
Liquid networks.
So it looks like
an autonomous vehicle
like I've seen before,
but it is a little
different, right?
ALEXANDER AMINI:
Very different.
This is an autonomous vehicle
that can drive in
brand-new environments
that it has never seen
before for the first time.
O'BRIEN [voiceover]:
Most self-driving cars
today rely,
to some extent,
on detailed databases
that help them recognize
their immediate environment.
Those robot cars get lost
in unfamiliar terrain.
O'BRIEN:
In this case,
you're not relying on
a huge, expansive
neural network.
You're running on
19 neurons, right?
Correct.
O'BRIEN [voiceover]:
Computer scientist
Alexander Amini
took me on a ride
in an autonomous vehicle
with a liquid neural
network brain.
AMINI:
We've become very accustomed
to relying on
big, giant data centers
and cloud compute.
But in an autonomous vehicle,
you cannot make
such assumptions, right?
You need to be able to operate,
even if you lose
internet connectivity
and you cannot
talk to the cloud anymore,
your entire neural network,
the brain of the car,
needs to live on the car,
and that imposes a lot
of interesting constraints.
O'BRIEN [voiceover]:
To build a brain smart enough
and small enough to do this job,
they took some inspiration
from nature,
a lowly worm
called C. elegans.
Its brain contains all of
300 neurons,
but it's a very
different kind of neuron.
It can capture
more complex behaviors
in every single piece
of that puzzle.
And also the wiring,
how a neuron talks to
another neuron
is completely different
than what we see
in today's neural networks.
O'BRIEN [voiceover]:
Autonomous cars that tap
into today's neural networks
require huge amounts of
compute power in the cloud.
But this car is using
just 19 liquid neurons.
A worm at the wheel...
sort of.
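In the liquid time-constant papers published by the CSAIL group, each neuron is a small differential equation whose effective time constant shifts with the input. This is a rough single-neuron sketch in Python using Euler integration; the equation form follows those papers, not MIT's actual code.

```python
import math

def gate(x, i, w=1.0, b=0.0):
    # Input-dependent gate: how strongly the input drives the neuron.
    return 1.0 / (1.0 + math.exp(-(w * i + b - x)))

def liquid_step(x, i, tau=1.0, A=1.0, dt=0.05):
    # dx/dt = -(1/tau + f(x, i)) * x + f(x, i) * A
    f = gate(x, i)
    return x + dt * (-(1.0 / tau + f) * x + f * A)

x = 0.0
for t in range(200):
    stimulus = 1.0 if t < 100 else 0.0   # input on, then off
    x = liquid_step(x, stimulus)
print(round(x, 3))  # the state relaxes back once the input stops
```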
AMINI [voiceover]:
Today's A.I. models
are really
pushing the boundaries
of the scale of compute
that we have.
They're also pushing
the boundaries
of the data sets that we have.
And that's not sustainable,
because ultimately,
we need to deploy A.I.
onto the device itself, right?
Onto the cars,
onto the surgical robots.
All of these edge devices
that actually makes
the decisions.
O'BRIEN [voiceover]:
The A.I. worm may, in fact,
turn.
The portability of
artificial intelligence
was on my mind when it came time
to pick up
my new myoelectric arm...
equipped with
Coapt A.I. pattern recognition.
All right, let's just check this
real quick...
O'BRIEN [voiceover]:
A few weeks after
my trip to Chicago,
I met Brian Monroe
at his home office
outside Washington, D.C.
Are you happy with
the way it came out?
Yeah.
Would you tell me otherwise?
[laughing]:
Yeah, I would, yeah...
O'BRIEN [voiceover]:
As usual,
he did a great job
making a tight socket.
How's the socket feel?
Does it feel like
it's sliding down or
falling out...
No, it fits like a glove.
O'BRIEN [voiceover]:
It's really important in
this case,
because the electrodes designed
to read the signals
from my muscles...
have to stay in place snugly
in order to generate
accurate, reliable commands
to the actuators in my new hand.
Wait, is that you?
That's me.
[voiceover]:
He also provided me with
a human-like bionic hand.
But getting it
to work just right
took some time.
That's open and it's closing.
It's backwards?
Yeah. Now try.
If it's reversed,
I can swap the electrodes.
There we go.
That's got it. Is it the right direction?
Yeah.
Uh-huh. Okay.
O'BRIEN [voiceover]:
It's a long way from the movies,
and I'm no Luke Skywalker.
But my new arm and I
are now together.
And I'm heartened to know
that I have the freedom
and independence
to teach and tweak it
on my own.
That's kind of cool.
Yeah.
[voiceover]:
Hopefully we will listen to
each other.
It's pretty awesome.
O'BRIEN [voiceover]:
But we might want to listen
with a skeptical ear.
JORDAN PEELE [imitating Obama]:
You see, I would never
say these things,
at least not in
a public address,
but someone else would.
Someone like Jordan Peele.
This is a dangerous time.
O'BRIEN [voiceover]:
It's even more dangerous now
than it was in 2018
when comedian Jordan Peele
combined his pitch-perfect
Obama impression
with A.I. software to make
this convincing fake video.
or whether we become some
kind of [bleep] up dystopia.
O'BRIEN [voiceover]:
Fakes are about as old as
photography itself.
Mussolini, Hitler, and Stalin
all ordered that pictures be
doctored or redacted,
erasing those
who fell out of favor,
consolidating power,
manipulating their followers
through images.
HANY FARID:
They've always been manipulated,
throughout history, but...
There was literally,
you can count on one hand,
the number of people
in the world who could do this.
But now,
you need almost no skill.
And we said, "Give us an image
"of a middle-aged woman,
newscaster,
sitting at her desk,
reading the news."
O'BRIEN [voiceover]:
Hany Farid is a professor
of computer science
at U.C. Berkeley.
[on computer]:
And this is your daily dose
of future flash.
O'BRIEN [voiceover]:
He and his team
are trying to navigate
the house of mirrors
that is the world of
A.I.-enabled deepfake imagery.
Not perfect.
She's not blinking,
but it's pretty good.
And by the way, he did this
in a day and a half.
FARID [voiceover]:
It's the
classic automation story.
We have lowered
barriers to entry
to manipulate reality.
And when you do that,
more and more people will do it.
Some good people will do it,
but lots of bad people
will do it.
There'll be some
interesting use cases,
and there'll be a lot of
nefarious use cases.
Okay, so, um...
Glasses off.
How's the framing?
Everything okay?
[voiceover]:
About a week before
I got on a plane to see him...
Hold on.
O'BRIEN [voiceover]:
He asked me to meet him on Zoom
so he could get a good recording
of my voice and mannerisms.
And I assume
you're recording, Miles.
O'BRIEN [voiceover]:
And he turned the table on me
a little bit,
asking me a lot of questions
to get a good sampling.
FARID [on computer]:
How are you feeling about
the role of A.I.
as it enters into our world
on a daily basis?
I think it's very important,
first of all,
to calibrate the concern level.
Let's take it away from
the "Terminator" scenario...
[voiceover]:
The "Terminator" scenario.
Come with me
if you want to live.
O'BRIEN [voiceover]:
You know, a malevolent
neural network
hellbent on exterminating
humanity.
You're really real.
O'BRIEN [voiceover]:
In the film series,
the cyborg assassin
is memorably played
by Arnold Schwarzenegger.
Hany thought it would be fun
to use A.I.
to turn Arnold into me.
Okay.
O'BRIEN [voiceover]:
A week later, I showed up at
Berkeley's
School of Information,
ironically located in
the oldest building on campus.
So you had me do
this strange thing on Zoom.
Here I am.
What did you do with me?
Yeah, well, it's gonna teach you
to let me record
your Zoom call, isn't it?
I did this
with some trepidation.
[voiceover]:
I was excited to see what tricks
were up his sleeve.
FARID [voiceover]:
I uploaded 90 seconds of audio,
and I clicked a box saying
"Miles has given me
permission to use his voice,"
which I don't actually
think you did.
[chuckles]
Um, and, I waited about,
eh, maybe 20 seconds,
and it said, "Okay, what would
you like for Miles to say?"
And I started typing,
and I generated an audio
of you saying
whatever I wanted you to say.
We are synthesizing,
at much, much lower resolution.
O'BRIEN [voiceover]:
You could have knocked me over
with a feather
when I watched this.
A.I. O'BRIEN:
Terminators were
science fiction back then,
but if you follow the
recent A.I. media coverage,
you might think that Terminators
are just around the corner.
The reality is...
O'BRIEN [voiceover]:
The eyes and the mouth
need some work,
but it sure does sound like me.
And consider what happened
in May of 2023.
Someone posted
this A.I.-generated image
of what appeared to be
a terrorist bombing
at the Pentagon.
NEWS ANCHOR:
Today we may have witnessed
one of the first drops
in the feared flood
of A.I.-created
disinformation.
O'BRIEN [voiceover]:
It was shared on Twitter
via what seemed to be
a verified account
from Bloomberg News.
NEWS ANCHOR:
It only took seconds
to spread fast.
The Dow now down about
200 points...
Two minutes later,
the stock market dropped
a half a trillion dollars
from a single fake image.
Anybody could've made
that image,
whether it was intentionally
manipulating the market
or unintentionally,
in some ways,
it doesn't really matter.
O'BRIEN [voiceover]:
So what are the technological
innovations that make this tool
widely available?
One technique is called
the generative
adversarial network,
or GAN.
Two algorithms
in a dizzying
student-teacher back and forth.
Let's say it's learning how to
generate a cat.
FARID:
And it starts by
just splatting down
a bunch of pixels onto a canvas.
And it sends it over to
a discriminator.
And the discriminator has access
to millions and millions
of images
of the category that you want.
And it says,
"Nope, that doesn't look
like all these other things."
So it goes back to the generator
and says, "Try again."
Modifies some pixels,
sends it back
to the discriminator,
and they do this in
what's called
an adversarial loop.
O'BRIEN [voiceover]:
And eventually,
after many thousands of volleys,
the generator
finally serves up a cat.
And the discriminator says,
"Do more like that."
Today, we have a whole new way
of doing these things.
They're called diffusion-based.
What diffusion does
is it has vacuumed up
billions of images
with captions
that are descriptive.
O'BRIEN [voiceover]:
It starts by making those
labeled images
visually noisy on purpose.
FARID:
And then it corrupts it more,
and it goes backwards
and corrupts it more,
and goes backwards
and corrupts it more
and goes backwards...
And it does that
six billion times.
O'BRIEN [voiceover]:
Eventually it corrupts it
so it's unrecognizable
from the original image.
Now that it knows how
to turn an image into nothing,
it can reverse the process,
turning seemingly nothing,
into a beautiful image.
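Here is that round trip as a schematic, on an eight-pixel "image" in Python. The forward pass corrupts the image step by step into noise; the reverse pass walks back. In a real diffusion model, the denoising step is a trained network conditioned on a text prompt; here it simply nudges pixels toward the known clean image so the two directions are visible.

```python
import random

clean = [0.0, 0.2, 0.5, 1.0, 1.0, 0.5, 0.2, 0.0]
STEPS, BETA = 50, 0.05

def corrupt(img):
    # Mix in a little noise; repeated enough, nothing recognizable remains.
    return [(1 - BETA) * p + BETA * random.gauss(0, 1) for p in img]

def denoise(img):
    # Stand-in for the learned network: step back toward the data.
    return [p + BETA * (c - p) for p, c in zip(img, clean)]

x = list(clean)
for _ in range(STEPS):
    x = corrupt(x)            # forward: image -> noise
for _ in range(STEPS * 3):
    x = denoise(x)            # reverse: noise -> image
print([round(p, 1) for p in x])  # close to the clean image again
```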
FARID:
What it's learned is how to take
a completely nondescript image,
just pure noise,
and go back to a coherent image,
conditioned on a text prompt.
You're basically
reverse engineering an image
down to the pixel.
Yeah, exactly, yeah.
And it's... and by the way...
If you had asked me,
"Will this work?"
I would have said,
"No, there's no way
this system works."
It just, it just doesn't
seem like it should work.
And that's sort of the magic
of when you get this much data
and very powerful algorithms
and very powerful computing
to be able to crunch
these massive data sets.
I mean, we're not
going to contain it.
That's done.
[voiceover]:
I sat down with Hany
and two of his grad students:
Justin Norman
and Sarah Barrington.
We looked at some of
the A.I. trickery
they have seen and made.
Somebody else
wrote some base code,
and they grew on to it,
and grew on to it, and
grew on to it, and eventually...
O'BRIEN [voiceover]:
In a world where anything
can be manipulated
with such ease
and seeming authenticity,
how are we to know
what's real anymore?
How you look at the world,
how you interact with
people in it,
and where you look for
your threats... all of that changes.
O'BRIEN [voiceover]:
Generative A.I. is now
part of a larger ecosystem
that is built on mistrust.
We're going to live
in a world where
we don't know what's real.
FARID [voiceover]:
There is distrust of
governments,
there is distrust of media,
there is distrust of academics.
And now throw on top of that
video evidence.
So-called video evidence.
I think this is
the very definition
of throwing jet fuel onto
a dumpster fire.
And it's already happening,
and I imagine
we will see more of it.
[Arnold's voice]:
Come with me
if you want to live.
O'BRIEN [voiceover]:
But it also can be
kind of fun.
As Hany promised,
here's my face
on the Terminator's body.
[gunfire blasting]
Long before A.I. might take
an existential turn
against humanity,
we will need to
reckon with the likes...
Go! Now!
O'BRIEN [voiceover]:
Of the Milesinator.
TRAILER NARRATOR:
This time, he's back.
[booming]
O'BRIEN [voiceover]:
Who will no doubt, be back.
Trust me.
O'BRIEN [voiceover]:
Trust,
but always verify.
So, what kind of A.I. magic
is readily available online?
It's pretty simple
to make it look
like you're fluent
in another language.
[speaking Mandarin]:
It was pretty easy to do,
I just had to upload
a video and wait.
[speaking German]:
And, suddenly,
I look pretty darn smart.
[speaking Greek]:
Sure, it's fun,
but I think you can see
where it leads to mischief
and possibly even mayhem.
[voiceover]:
Yoshua Bengio is an
artificial intelligence pioneer.
He says he didn't spend
much time
thinking about
science fiction dystopia
as he was creating
the technology.
But as his brilliant ideas
became reality,
reality set in.
BENGIO:
And the more I read,
the more I thought about it...
the more concerned I got.
If we are not honest
with ourselves,
we're gonna fool ourselves.
We're gonna...
lose.
O'BRIEN [voiceover]:
Avoiding that outcome
is now his main priority.
He has signed
several public warnings
issued by A.I. thought leaders,
including this stark
single-sentence statement
in May of 2023.
"Mitigating the risk of
extinction from A.I.
"should be a global priority
"alongside other
societal scale risks,
"such as pandemics
and nuclear war."
As we approach more and more
capable A.I. systems
that might even become stronger
than humans in many areas,
they become
more and more dangerous.
Can't we just pull
the plug on the thing?
Oh, that's
the safest thing to do,
pull the plug.
Before it gets so powerful that
it prevents us from
pulling the plug.
DAVE:
Open the pod bay doors, Hal.
HAL:
I'm sorry, Dave,
I'm afraid I can't do that.
O'BRIEN [voiceover]:
It may be some time
before computers are able
to act like
movie supervillains...
HAL:
Goodbye.
O'BRIEN [voiceover]:
But there are near-term dangers
already emerging.
Besides deepfakes and
misinformation,
A.I. can also supercharge bias
and hate content,
replace human jobs...
This is why we're striking,
everybody.
[crowd exclaiming]
O'BRIEN [voiceover]:
And make it easier
for terrorists
to create bioweapons.
And A.I. systems are so complex
that they are difficult
to comprehend,
all but impossible to audit.
RUS [voiceover]:
Nobody really understands
how those systems
reach their decisions.
So we have to be
much more thoughtful
about how we
test and evaluate them
before releasing them.
They're concerned
whether a machine will be able
to begin to think for itself.
O'BRIEN [voiceover]:
The U.S. and Europe have begun
charting a strategy
to try to ensure safe, secure,
and trustworthy
artificial intelligence.
RISHI SUNAK:
in a way that will
be safe for our communities...
O'BRIEN [voiceover]:
But how to do that
in the midst of a frenetic race
to dominate a technology
with a predicted economic impact
of 13 trillion dollars by 2030?
There is such a strong
commercial incentive
to develop this
and win the competition
against the other companies,
not to mention
the other countries,
that it's hard
to stop that train.
But that's what
governments should be doing.
NEWS ANCHOR:
The titans of social media
didn't want to come to
Capitol Hill.
O'BRIEN [voiceover]:
Historically, the tech industry
has bridled against regulation.
You have an army of lawyers
and lobbyists
that have fought us on this...
SULEYMAN [voiceover]:
There's no question that
guardrails
will slow things down.
But the risks are uncertain
and potentially enormous.
So, it makes sense for us
to start having
the conversation right now.
O'BRIEN [voiceover]:
For me, the conversation
about A.I. is personal.
Okay, no network detected.
Okay, um...
Oh, here we go.
Okay.
And now I'm going to open,
open, open, open, open...
[voiceover]:
I used the Coapt app
to train the A.I.
inside my new prosthetic.
It says all of my
training data is good,
it's four of five stars.
And now let's try to close.
[whirring]
All right.
Seems to be doing
what it was told.
[voiceover]:
Was my new arm listening?
Maybe.
I decided to make things
simpler.
I took off the hand and
attached a myoelectric hook.
[quietly]:
All right.
[voiceover]:
Function over form.
Not a conversation piece
necessarily at a cocktail party
like this thing is.
This looks more like
Luke Skywalker, I suppose.
But this thing has a tremendous
amount of function to it.
Although, right now,
it wants to stay open.
[voiceover]:
And that problem persisted.
Find a tripod plate...
[voiceover]:
When I tried using it
to set up my basement studio
for a live broadcast.
Come on, close.
[voiceover]:
I was quickly frustrated.
[item drops, audio beep]
Really annoying.
Not useful.
[voiceover]:
The hook continuously
opened on its own.
[clattering]Damn it!
[voiceover]:
So I completely reset
and retrained the arm.
And... reset,
there we go.
Add data...
[voiceover]:
But the software was
artificially unhappy.
"Electrodes are not
making good skin contact."
Maybe that is my problem,
ultimately.
[voiceover]:
My problem really is
I haven't given this
enough time.
Amputees tell me it can take
many months to really learn
how to use an arm like this one.
The choke point isn't
artificial intelligence.
Dead as a doornail.
[voiceover]:
But rather, what is the best way
to communicate
my intentions to it?
Little reboot there, I guess.
All right.
Close.
Open, close.
[voiceover]:
It turns out machine learning
isn't smart enough to
give me a replacement arm
like Luke Skywalker got.
Nor is it capable
of creating the Terminator.
Right now, it seems many
hopes and fears
for artificial intelligence...
Oh!
[voiceover]:
are rooted
in science fiction.
But we are walking down a road
to the unknown.
The door is opening to
a revolution.
[door closes]
"Viewers like you make
this program possible".
"Support your local PBS station".
MILES O'BRIEN:
Machines that think like humans.
Our dream to create machines
in our own image
that are smart and intelligent
goes back to antiquity.
Well, can it bring it to me?
O'BRIEN:
Is it possible that the dream
of artificial intelligence
has become reality?
They're able to do things
that we didn't
think they could do.
MANOLIS KELLIS:
Go was thought to be a game
where machines would never win.
The number of choices
for every move is enormous.
O'BRIEN:
And now, the possibilities
seem endless.
MUSTAFA SULEYMAN:
And this is going to be
one of the greatest boosts
to productivity in the history
of our species.
That looks like just a hint
of some type of smoke.
O'BRIEN:
Identifying problems
before a human can...
LECIA SEQUIST:
We taught the model to recognize
developing lung cancer.
O'BRIEN:
and inventing new drugs.
PETRINA KAMYA:
I never thought
that we would be able
to be doing the things
we're doing with A.I..
O'BRIEN:
But along with the hope...
[imitating Obama]:
This is a dangerous time.
O'BRIEN:
comes deep concern.
One of the first drops
in the feared flood
of A.I.-created disinformation.
We have lowered barriers
to entry to manipulate reality.
We're going to live in a world
where we don't know what's real.
The risks are uncertain
and potentially enormous.
O'BRIEN:
How powerful is A.I.?
How does it work?
And how can we reap
its extraordinary benefits...
Sybil looked here,
and anticipated
that there would be a problem.
O'BRIEN:
without jeopardizing
our future?
"A.I. Revolution"
right now, on "NOVA!"
[whirring]
Tell me the backstory
on inflection A.I..
[voiceover]:
Our story begins
with the making of this story.
PI [on computer]:
The story of Inflection A.I.
is an exciting one.
O'BRIEN [voiceover]:
I was researching
an interview subject.
Who is Mustafa Suleyman?
[voiceover]:
Something I've done
a thousand times
in my 40-year career.
PI [on computer]:
Mustafa Suleyman is a
true pioneer
in the field
of artificial intelligence.
[voiceover]:
But this time, it was different:
I wasn't typing out
search terms.
What is machine learning?
O'BRIEN [voiceover]:
I was having a conversation
with a computer.
PI:
Sounds like an
exciting project, Miles.
[voiceover]:
It felt like something
big had changed.
PI:
Machine learning, ML, is a type
of artificial intelligence.
O'BRIEN [voiceover]:
And as it happened,
I was focused on one of the
innovators of this revolution.
Okay, so if I do this...
[voiceover]:
Mustafa Suleyman is co-founder
of a startup called Inflection.
It makes an artificial
intelligence assistant
called Pi.
So now you can speak...
O'BRIEN [voiceover]:
I met them both in London.
It's fundamentally
different, isn't it?
Yeah,
it's a conversational style.
All of us humans
learn through stories,
and through narrative, and
through interactive dialogue.
And now, the machine can
kind of come alive,
and talk to you
about whatever it is
that's on top of your mind.
O'BRIEN:
Tell me about the PBS program
"NOVA."
[voiceover]:
Chatbots can offer up
quick answers, write poems,
finish essays,
and translate languages
among many other things.
PI [over phone]:
"NOVA" is a science
documentary series...
O'BRIEN [voiceover]:
They aren't perfect,
but they have put artificial
intelligence in our hands,
and into
the public consciousness.
And it seems
we're equal parts leery
and intrigued.
SULEYMAN:
A.I. is a tool
for helping us to understand
the world around us,
predict what's likely to happen,
and then invent
solutions that help improve
the world around us.
My motivation was to try
to use A.I. tools
to, uh, you know,
invent the future.
The rise
in artificial intelligence...
REPORTER:
A.I. technology is developing...
O'BRIEN [voiceover]:
Lately, it seems a dark future
is already here...
The technology could replace
millions of jobs...
O'BRIEN [voiceover]:
if you listen
to the news reporting.
The moment civilization
was transformed.
O'BRIEN [voiceover]:
So how can
artificial intelligence help us,
and how might it hurt us?
At the center of
the public handwringing:
how should we put
guardrails around it?
We definitely need
more regulations in place...
O'BRIEN [voiceover]:
Artificial intelligence
is moving fast
and changing the world.
Can we keep up?
Non-human minds
smarter than our own.
O'BRIEN [voiceover]:
The news coverage may make it
seem like
artificial intelligence
is something new.
At a moment of revolution...
O'BRIEN [voiceover]:
But human beings have been
thinking about this
for a very long time.
I have a very fine brain.
Our dream to create machines
in our own image
that are smart and intelligent
goes back to antiquity.
Uh, it's,
it's something that has,
has permeated the evolution
of society and of science.
[mortars firing]
O'BRIEN [voiceover]:
The modern origins
of artificial intelligence
can be traced
back to World War II,
and the prodigious
human brain of Alan Turing.
The legendary
British mathematician
developed a machine
capable of deciphering
coded messages from the Nazis.
After the war, he was among
the first to predict computers
might one day match
the human brain.
There are no surviving
recordings of Turing's voice,
but in 1951, he gave
a short lecture on BBC radio.
We asked an A.I.-generated voice
to read a passage.
TURING A.I. VOICE:
I think it is probable,
for instance,
that at the end of the century,
it will be possible
to program a machine
to answer questions
in such a way
that it will be extremely
difficult to guess
whether the answers are being
given by a man
or by the machine.
O'BRIEN [voiceover]:
And so,
the Turing test was born.
Could anyone build a machine
that could converse
with a human in a way
that is indistinguishable
from another person?
In 1956,
a group of pioneering scientists
spent the summer
brainstorming
at Dartmouth College.
And they told the world that
they have coined
a new academic field of study.
They called it
artificial intelligence
O'BRIEN [voiceover]:
For decades,
their aspirations remained
far ahead of
the capabilities of computers.
In 1978,
"NOVA" released its first film
on artificial intelligence.
We have seen the first
crude beginnings
of artificial intelligence...
O'BRIEN [voiceover]:
And the legendary science
fiction writer,
Arthur C. Clark was,
as always, prescient.
It doesn't really exist yet at
any level,
because our most complex
computers are still morons,
high-speed morons,
but still morons.
Nevertheless, we have
the possibility of machines
which can outpace their
creators,
and therefore,
become more intelligent than us.
At the time, researchers were
developing "expert systems,"
purpose-built
to perform specific tasks.
So the thing that we need to do
to make machine understand, um,
you know, our world,
is to put all our knowledge
into a machine
and then provide it
with some rules.
O'BRIEN [voiceover]:
Classic A.I. reached a pivotal
moment in 1997
when an artificial intelligence
program devised by IBM,
called "Deep Blue" defeated
world chess champion
and grandmaster Garry Kasparov.
It searched about 200 million
positions a second,
navigating through
a tree of possibilities
to determine the best move.
RUS:
The program analyzed
the board configuration,
could project forward
millions of moves
to examine millions of
possibilities,
and then picked the best path.
O'BRIEN [voiceover]:
Effective, but brittle,
Deep Blue wasn't
strategizing as a human does.
From the outset, artificial
intelligence researchers
imagined making machines
that think like us.
The human brain, with
more than 80 billion neurons,
learns not by following rules,
but rather by taking
in a steady stream of data,
and looking for patterns.
KELLIS:
The way that learning
actually works
in the human brain is by
updating the weights
of the synaptic connections
that are underlying this
neural network.
O'BRIEN [voiceover]:
Manolis Kellis is a
Professor of Computer Science
at the Massachusetts Institute
of Technology.
So we have trillions
of parameters in our brain
that we can adjust
based on experience.
I'm getting a reward.
I will update
the strength of the connections
that led to this reward...
I'm getting punished,
I will diminish the strength
of the connections
that led to the punishment.
So this is
the original neural network.
We did not invent it,
we, you know, we inherited it.
O'BRIEN [voiceover]:
But could an artificial
neural network
be made in our own image?
Turing imagined it.
But computers were nowhere near
powerful enough to do it
until recently.
It's only with the advent
of extraordinary data sets
that we have, uh,
since the early 2000s,
that we were able to build up
enough images,
enough annotations,
enough text to be able
to finally train
these sufficiently powerful
models.
O'BRIEN [voiceover]:
An artificial neural network is,
in fact,
modeled on the human brain.
It uses interconnected nodes,
or neurons,
that communicate with
each other.
Each node receives
inputs from other nodes
and processes those inputs
to produce outputs,
which are then passed on to
still other nodes.
It learns by adjusting
the strength of the connections
between the nodes based on
the data it is exposed to.
This process
of adjusting the connections
is called training,
and it allows an
artificial neural network
to recognize patterns
and learn from its experiences
like humans do.
A child,
how is it learning so fast?
It is learning so fast
because it's constantly
predicting the future
and then seeing what happens
and updating their weights in
their neural network
based on what just happened.
Now you can take this
self-supervised learning
paradigm
and apply it to machines.
O'BRIEN [voiceover]:
At first, some of these
artificial neural networks
were trained on vintage
Atari video games
like "Space Invaders"
and "Breakout."
Games reduce the complexity
of the real world
to a very narrow set
of actions that can be taken.
O'BRIEN [voiceover]:
Before he started Inflection,
Mustafa Suleyman co-founded
a company called
DeepMind in 2010.
It was acquired by Google
four years later.
When an A.I. plays a game,
we show it frame-by-frame,
every pixel
in the moving image.
And so the A.I. learns
to associate pixels
with actions that it can take
moving left or right
or pressing the fire button.
O'BRIEN [voiceover]:
When it obliterates blocks
or shoots aliens,
the connections between the
nodes that enabled that success
are strengthened.
In other words, it is rewarded.
When it fails, no reward.
Eventually,
all those reinforced connections
overrule the weaker ones.
The program has learned
how to win.
This sort of repeated allocation
of reward
for repetitive behavior
is a great way to train a dog.
It's a great way to teach a kid.
It's a great way for us
as adults to adapt our behavior.
And in fact,
it's actually a good way
to train machine learning
algorithms to get better.
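A toy sketch of that reward loop in Python, with three made-up actions standing in for joystick moves; it is illustrative only, not DeepMind's code.

```python
import numpy as np

# Three invented actions; "fire" secretly earns reward most often.
rng = np.random.default_rng(1)
prefs = np.zeros(3)                    # connection-strength stand-ins
true_reward_prob = [0.2, 0.3, 0.8]     # left, right, fire

for step in range(2000):
    probs = np.exp(prefs) / np.exp(prefs).sum()   # softmax: prefs -> odds
    action = rng.choice(3, p=probs)
    reward = float(rng.random() < true_reward_prob[action])

    # Reinforcement: strengthen what led to reward, weaken what didn't.
    prefs[action] += 0.1 * (reward - 0.5)

print("learned action probabilities:", probs.round(3))
```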
O'BRIEN [voiceover]:
In 2014, DeepMind began work
on an artificial neural network
called "AlphaGo"
that could play the ancient,
and deceptively complex,
board game of Go.
KELLIS:
Go was thought to be a game
where machines would never win.
The number of choices
for every move is enormous.
O'BRIEN [voiceover]:
But at DeepMind,
they were counting on
the astounding growth
of compute power.
And I think that's the key
concept to try to grasp,
is that we are massively,
exponentially growing
the amount of computation used,
and in some sense,
that computation is a proxy
for how intelligent
the model is.
O'BRIEN [voiceover]:
AlphaGo was trained two ways.
First, it was fed a large
data set of expert Go games
so that it could
learn how to play the game.
This is known
as supervised learning.
Then the software played against
itself many millions of times,
so-called
reinforcement learning.
This gradually improved
its skills and strategies.
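The two-phase recipe can be sketched in Python with a far humbler game standing in for Go: five-stone Nim, where taking the last stone wins. The "expert" moves and learning rates below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
prefs = np.zeros((6, 2))   # prefs[stones, move-1]; move = take 1 or 2

def choose(stones):
    moves = [1, 2] if stones >= 2 else [1]
    logits = np.array([prefs[stones, m - 1] for m in moves])
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return int(rng.choice(moves, p=p))

# Phase 1: supervised learning from "expert" play (a few known-good
# moves: always leave the opponent a multiple of 3 stones).
for stones, move in [(1, 1), (2, 2), (4, 1), (5, 2)]:
    prefs[stones, move - 1] += 2.0

# Phase 2: reinforcement learning by self-play -- millions of games
# in the real system; a few thousand suffice for this toy.
for game in range(5000):
    stones, player, history = 5, 0, {0: [], 1: []}
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player
        player = 1 - player
    for pl in (0, 1):
        for st, mv in history[pl]:
            prefs[st, mv - 1] += 0.05 if pl == winner else -0.05

print("from 5 stones, the learned move is to take", choose(5))
```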
In March 2016,
AlphaGo faced Lee Sedol,
one of the world's
top-ranking players
in a five-game match in
Seoul, South Korea.
AlphaGo not only won,
but also made a move so novel,
the Go cognoscenti
thought it was a huge blunder.
That's a very surprising move.
There's no question to me
that these A.I. models
are creative.
They're incredibly creative.
O'BRIEN [voiceover]:
It turns out the move
was a stroke of brilliance.
And this
emergent creative behavior
was a hint of what was to come:
generative A.I.
Meanwhile,
a company called OpenAI
was creating
a generative A.I. model
that would become ChatGPT.
It allows users
to engage in a dialogue
with a machine
that seems uncannily human.
It was first released in 2018,
but it was a subsequent version
that became a global sensation
in late 2022.
This promises to be
the viral sensation
that could completely reset
how we do things.
Cranking out entire essays
in a matter of seconds.
O'BRIEN [voiceover]:
Not only did it wow the public,
it also caught
artificial intelligence
innovators off guard.
YOSHUA BENGIO:
It surprised me a lot
that they're able
to do things that
we didn't think they could do
simply by
learning to imitate
how humans respond.
And I thought this
kind of abilities would take
many more years or decades.
O'BRIEN [voiceover]:
ChatGPT is
a large language model.
LLMs start by consuming massive
amounts of text:
books, articles and websites,
which are publicly available on
the internet.
By recognizing patterns
in billions of words,
they can make guesses
at the next word in a sentence.
That's how ChatGPT generates unique answers
to your questions.
If I ask for a haiku
about the blue sky,
it writes something
that seems completely original.
KELLIS:
If you're good at predicting
this next word,
it means you're understanding
something about the sentence.
What the style
of the sentence is,
what the feeling
of the sentence is.
And you can't tell whether
this was a human or a machine.
That's basically the definition
of the Turing test.
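Stripped to a caricature, next-word prediction can be shown with simple word counts; real large language models use neural networks trained on billions of words, but the objective is the same. The scrap of text below is invented.

```python
from collections import Counter, defaultdict

# Learn which word tends to follow which in a tiny invented corpus.
text = ("the blue sky is clear the blue sea is calm "
        "the sky is blue and the sea is blue").split()

follows = defaultdict(Counter)
for word, nxt in zip(text, text[1:]):
    follows[word][nxt] += 1          # count: which words follow which?

# Generate by repeatedly guessing a likely next word.
word, out = "the", ["the"]
for _ in range(8):
    word = follows[word].most_common(1)[0][0]
    out.append(word)
print(" ".join(out))
```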
O'BRIEN [voiceover]:
So, how is this changing
our world?
Well, it might change my world...
as an arm amputee.
Ready for my casting call, right?
MONROE [chuckling]:
Yes.
Let's do it. All right.
O'BRIEN [voiceover]:
That's Brian Monroe of
the Hanger Clinic.
He's been my prosthetist
since an injury
took my arm above the elbow
ten years ago.
So what we're going to do today
is take a mold of your arm.
Uh-huh.
Kind of is like
a cast for a broken bone.
O'BRIEN [voiceover]:
Up until now, I have used
a body-powered prosthetic.
A harness and a cable allow me
to move it
by shrugging my shoulders.
The technology is
more than a century old.
But artificial intelligence,
coupled with small
electric motors,
is finally pushing prosthetics
into the 21st century.
Which brings me to Chicago
and the offices of
a small company called Coapt.
I met the C.E.O., Blair Lock,
a pioneer in the push
to apply artificial intelligence
to artificial limbs.
So, what do we have here?
What are we going to do?
This allows us to very easily
test how your control would be
using a pretty simple cuff;
this has electrodes in it,
and we'll let the power
of the electronics
that are doing
the machine learning
see what you're capable
of.
All right, let's give it a try.
[voiceover]:
Like most amputees,
I feel my missing hand almost
as if it was still there...
A phantom.
Everything will touch.
Is that okay?
Yeah.
Not too tight?
No. All good.
Okay.
O'BRIEN [voiceover]:
It's almost entirely immobile,
stuck in molasses.
Make a fist, not too hard.
O'BRIEN [voiceover]:
But I am able to imagine
moving it ever so slightly.
And I'm gonna have you squeeze
into that a little bit harder.
Very good, and I see the
pattern on the screen
change a little bit.
O'BRIEN [voiceover]:
And when I do,
I generate an array of faint
electrical signals in my stump.
That's your muscle information.
It feels,
it feels like I'm overcoming
something that's really stuck.
I don't know,
is that enough signal?
Should be.
Oh, okay.
We don't need a lot of signal,
we're going for information
in the signal,
not how loud it is.
O'BRIEN [voiceover]:
And this is where artificial
intelligence comes in.
Using a virtual
prosthetic depicted on a screen,
I trained a machine learning
algorithm to become fluent
in the language
of my nerves and muscles.
We see eight different signals
on the screen.
All eight of those sensor sites
are going to feed in together
and let the algorithm
sort out the data.
What you are experiencing
is your ability
to teach the system
what is hand-closed to you.
And that's different
than what it would be to me.
O'BRIEN [voiceover]:
I told the software
what motion I desired,
open, close, or rotate,
then imagined moving
my phantom limb accordingly.
This generates an array
of electromyographic,
or EMG, signals in
my remaining muscles.
I was training the A.I.
to connect the pattern
of these electrical signals
with a specific movement.
LOCK:
The system adapts,
and as you add more data
and use it over time,
it becomes more robust,
and it learns
to improve upon use.
O'BRIEN:
Is it me that's learning, or
the algorithm that's learning?
Or are we learning together?
LOCK:
You're learning together.
Okay.
O'BRIEN [voiceover]:
So, how does the Coapt pattern
recognition system work?
It's called a Bayesian
classification model.
As I train the software,
it sorts my
various EMG patterns
into corresponding
classes of movement...
Hand open, hand closed,
wrist rotation, for example.
As I use the arm,
it compares the electrical
signals I'm transmitting
to the existing library
of classifications I taught it.
It relies on
statistical probability
to choose the best match.
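A hedged sketch of that kind of classifier in Python; this is not Coapt's actual software, and the simulated signals and class names are invented. Each movement class gets a statistical profile of the eight electrode channels, and a new signal is assigned to its most probable class.

```python
import numpy as np

rng = np.random.default_rng(2)
classes = ["hand_open", "hand_close", "wrist_rotate"]

# Simulated training data: 100 samples of 8-channel EMG per class,
# each class with its own characteristic pattern.
centers = rng.normal(size=(3, 8))
train = {c: centers[i] + 0.3 * rng.normal(size=(100, 8))
         for i, c in enumerate(classes)}

# "Training" = store per-class means and variances of each channel.
stats = {c: (x.mean(axis=0), x.var(axis=0) + 1e-6)
         for c, x in train.items()}

def classify(signal):
    # Pick the class with the highest Gaussian log-likelihood
    # (equal priors) -- the statistically best match.
    def loglik(c):
        mu, var = stats[c]
        return -0.5 * np.sum(np.log(var) + (signal - mu) ** 2 / var)
    return max(classes, key=loglik)

test = centers[1] + 0.3 * rng.normal(size=8)
print(classify(test))   # expected: hand_close
```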
And this is just one way
machine learning
is quietly
revolutionizing medicine.
Computer scientist
Regina Barzilay
first started working on
artificial intelligence
in the 1990s, just as
rule-based A.I. like Deep Blue
was giving way
to neural networks.
She used the techniques
to decipher dead languages.
You might call it
a small language model.
Something that is fun and
intellectually very challenging,
but it's not like
it's going to change our life.
O'BRIEN [voiceover]:
And then her life changed
in an instant.
CONSTANCE LEHMAN:
We see a spot there.
O'BRIEN [voiceover]:
In 2014, she was diagnosed
with breast cancer.
BARZILAY [voiceover]:
When you go through the
treatment,
there are a lot of
people who are suffering.
I was interested in
what I can do about it, and
clearly it was not continuing
deciphering dead languages,
and it was quite a journey.
O'BRIEN [voiceover]:
Not surprisingly, she began that
journey with mammograms.
LEHMAN:
It's a little bit
more prominent.
O'BRIEN [voiceover]:
She and Constance Lehman,
a radiologist at
Massachusetts General Hospital,
realized the Achilles heel
in the diagnostic system
is the human eye.
BARZILAY [voiceover]:
So the question that we ask is,
what is the likelihood
of these patients
to develop cancer
within the next five years?
We, with our human eyes,
cannot really make these
assertions
because the patterns
are so subtle.
LEHMAN: Now, is that different
from the surrounding tissue?
O'BRIEN [voiceover]:
It's a perfect use case
for pattern recognition
using what is known as
a convolutional neural network.
Here's an example
of how CNNs get smart:
they comb through a picture with
many virtual magnifying glasses.
Each one is looking for
a specific kind of puzzle piece,
like an edge,
a shape, or a texture.
Then it makes
simplified versions,
repeating the process
on larger and larger sections.
Eventually
the puzzle can be assembled.
And it's time to make a guess.
Is it a cat? A dog? A tree?
Sometimes the guess is right,
but sometimes it's wrong.
And here's the learning part:
with a process
called backpropagation,
labeled images are sent back to
correct the previous operation.
So the next time
it plays the guessing game,
it will be even better.
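One ingredient of that process, a single filter combing an image for a feature, fits in a few lines of Python. This toy sketch uses an invented six-by-six image; training via backpropagation would adjust the filter values, which are fixed here.

```python
import numpy as np

# An invented 6x6 "image": dark on the left, bright on the right.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# One small filter (the "magnifying glass"): it scores high wherever
# dark pixels sit immediately left of bright ones -- a vertical edge.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

# Convolve: slide the filter over every 2x2 patch and score the match.
fmap = np.array([[np.sum(image[i:i+2, j:j+2] * kernel)
                  for j in range(5)] for i in range(5)])

# Pool: summarize a region by its strongest response (simplified).
print("feature map:\n", fmap)
print("strongest edge response:", fmap.max())   # peaks at the real edge
```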
To validate the model,
Regina and her team gathered up
more than 128,000 mammograms
collected at seven sites
in four countries.
More than 3,800 of them
led to a cancer diagnosis
within five years.
You just give to it the image,
and then
the five years of outcomes,
and it can learn the likelihood
of getting a cancer diagnosis.
O'BRIEN [voiceover]:
The software, called Mirai,
was a success.
In fact, it is between
75% and 84% accurate
in predicting
future cancer diagnoses.
Then, a friend of
Regina's developed lung cancer.
SEQUIST:
In lung cancer, it's actually
sort of mind boggling
how much has changed.
O'BRIEN [voiceover]:
Her friend saw oncologist
Lecia Sequist.
She and Regina wondered
if artificial intelligence
could be applied
to CAT scans of patients' lungs.
SEQUIST:
We taught the model
to recognize the patterns
of developing lung cancer
by using thousands of CAT scans
from patients who were
participating
in a clinical trial.
From the new study?
Oh, interesting.
Correct.
SEQUIST [voiceover]:
We had a lot of information
about them.
We had demographic information,
we had health information,
and we had outcomes information.
O'BRIEN [voiceover]:
They call the model Sybil.
In the retrospective study,
right,
so the retrospective data...
O'BRIEN [voiceover]:
Radiologist Florian Fintelmann
showed me what it can do.
FINTELMANN:
This is earlier,
and this is later.
There is nothing
that I can perceive, pick up,
or describe.
There's no, what we call,
a precursor lesion
on this CT scan.
Sybil looked here
and then anticipated that
there would be a problem
based on the baseline scan.
What is it seeing?
That's the million dollar
question.
And, and maybe not
the million dollar question.
Does it really matter? Does it?
O'BRIEN [voiceover]:
When they compared
the predictions
to actual outcomes from previous
cases, Sybil fared well.
It correctly forecast cancer
between 80% and 95% of the time,
depending on the population
it studied.
The technique is
still in the trial phase.
But once it is deployed,
it could provide
a potent tool for prevention.
The hope is that if you
can predict very early on
that the patient
is headed the wrong way,
you can do clinical trials,
you can develop the drugs
that are doing the prevention,
rather than treatment
of very advanced disease
that we are doing today.
O'BRIEN [voiceover]:
Which takes us back to DeepMind
and AlphaGo.
The fun and games
were just the beginning,
a means to an end.
We have always set out
at DeepMind
to, um, use our technologies to
make the world a better place.
O'BRIEN [voiceover]:
In 2021,
the company released AlphaFold.
It is pattern
recognition software
designed to make
it easier for researchers
to understand proteins,
long chains of amino acids
involved in nearly
every function in our bodies.
How a protein folds
into a specific,
three-dimensional shape
determines how it interacts
with other molecules.
SULEYMAN:
There's this correlation between
what the protein does
and how it's structured.
So if we can predict
how the protein folds,
then we can say something
about its function.
O'BRIEN:
If we know how a disease's
protein is shaped, or folded,
we can sometimes create
a drug to disable it.
But the shape of millions
of proteins remained a mystery.
DeepMind trained AlphaFold
on thousands of
known protein structures.
It leveraged this knowledge
to predict
200 million protein structures,
nearly all the proteins
known to science.
SULEYMAN:
You take some high-quality
known data,
and you use that to, you know,
make a prediction about how
a similar piece of information
is likely to unfold
over some time series,
and the structure
of proteins is,
you know, in that sense,
no different to
making a prediction in
the game of Go or in Atari
or in a mammography scan,
or indeed,
in a large language model.
KAMYA:
These thin sticks here?
Yeah?
They represent
the amino acids
that make up a protein.
O'BRIEN [voiceover]:
Theoretical chemist
Petrina Kamya works for
a company called
Insilico Medicine.
It uses AlphaFold
and its own deep-learning models
to make accurate predictions
about protein structures.
What we're doing in drug design
is we're designing a molecule
that is analogous
to the natural molecule
that binds to the protein,
but instead it will lock it,
if this molecule
is involved in a disease
where it's hyperactive.
O'BRIEN [voiceover]:
If the molecule fits well,
it can inhibit the
disease-causing proteins.
So you're filtering it down
like you're choosing
an Airbnb or something to,
you know, number of bedrooms,
whatever. To suit your needs.
[laughs]
Exactly, right.
Right, yeah.
That's a very good analogy.
It's sort of like Airbnb.
So you are putting in
your criteria,
and then Airbnb will filter out
all the different properties
based on your criteria.
So you can be very, very
restrictive
or you can be very,
very free... Right.
In terms of guiding the
generative algorithms
and telling them
what types of molecules
you want them to generate.
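A toy sketch of that criteria-driven filtering in Python; the candidate molecules, property names, and numbers are all made up for illustration.

```python
# Invented candidates with made-up properties, standing in for the
# molecules a generative model proposes.
candidates = [
    {"name": "mol_a", "weight": 320, "solubility": 0.8, "binding": 7.2},
    {"name": "mol_b", "weight": 610, "solubility": 0.4, "binding": 8.9},
    {"name": "mol_c", "weight": 410, "solubility": 0.6, "binding": 8.1},
    {"name": "mol_d", "weight": 295, "solubility": 0.2, "binding": 6.5},
]

# The "criteria" -- restrictive or permissive, as in the conversation.
ok = [m for m in candidates
      if m["weight"] < 500 and m["solubility"] >= 0.5]

# Rank the survivors so the best candidates come out on top.
for m in sorted(ok, key=lambda m: m["binding"], reverse=True):
    print(m["name"], m["binding"])
```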
O'BRIEN [voiceover]:
It will take 48 to 72 hours
of computing time
to identify the best
candidates ranked in order.
How long would it have taken you
to figure that out
as a computational chemist?
I would have thought of
some of these,
but not all of them.
Okay.
O'BRIEN [voiceover]:
While there are no shortcuts
for human trials,
nor should we hope for that,
this could greatly speed up
the drug development pipeline.
There will not be the need
to invest so heavily
in preclinical discovery,
and so,
drugs can be cheaper.
And you can go
after those diseases
that are otherwise neglected,
because you don't have
to invest so heavily
in order for you
to come up with a drug,
a viable drug.
O'BRIEN [voiceover]:
But medicine isn't
the only place
where A.I. is breaking
new frontiers.
It's conducting
financial analysis
and helping with fraud detection.
[mechanical whirring]
It's now being deployed
to discover novel materials
and could help us build
clean energy technology.
And it is even helping
to save lives
as the climate crisis
boils over.
[indistinct radio chatter]
In St. Helena, California,
dispatchers at the
CAL FIRE Sonoma-Lake-Napa
Command Center
caught a break in 2023.
Wildfires blackened nearly
700 acres of their territory.
We were at 400,000 acres
in 2020.
Something like that would
generate a response from us...
O'BRIEN [voiceover]:
Chief Mike Marcucci has
been fighting fires
for more than 30 years.
MARCUCCI [voiceover]:
Once we started having
these devastating fires,
we needed more intel.
The need for intelligence
is just overwhelming
in today's fire service.
O'BRIEN [voiceover]:
Over the past 20 years,
California
has installed a network
of more than
1,000 remotely operated
pan, tilt, zoom surveillance
cameras on mountaintops.
PETE AVANSINO:
Vegetation fire,
Highway 29 at Doton Road.
O'BRIEN [voiceover]:
All those cameras generate
petabytes of video.
CAL FIRE partnered with
scientists at U.C. San Diego
to train a neural network
to spot the early signs
of trouble.
It's called ALERT California.
SELEGUE:
So here's one
that just popped up.
Here's an anomaly.
O'BRIEN [voiceover]:
CAL FIRE Staff Chief of Fire
and Intelligence Philip SeLegue
showed me how it works
while it was in action,
detecting nascent fires,
micro fires.
That looks like
just a little hint
of some type of smoke
that was there...
O'BRIEN [voiceover]:
Based on this,
dispatchers can orchestrate
a fast response.
A.I. has given us the ability
to detect and to see
where those fires are starting.
AVANSINO:
Transport 1447
responding via MDC.
O'BRIEN [voiceover]:
For all they know,
they have nipped
some megafires in the bud.
The successes are the fires
that you don't hear about
in the news.
O'BRIEN [voiceover]:
Artificial intelligence
can't put out
wildfires just yet.
Human firefighters
still need to do that job.
But researchers are pushing hard
to combine neural networks
with mobility and dexterity.
This is where people
get nervous.
Will they take our jobs?
Or could they turn against us?
But at M.I.T.,
they're exploring ideas
to make robots
good human partners.
We are interested in
making machines
that help people with
physical and cognitive tasks.
So this is really great,
it has the stiffness
that we wanted...
O'BRIEN [voiceover]:
Daniela Rus is director of
M.I.T.'s Computer Science
and Artificial Intelligence Lab.
Oh, can you bring it to me?
O'BRIEN [voiceover]:
CSAIL.
They are different, like,
kind of like muscles
or actuators.
RUS [voiceover]:
We can do so much more
when we get people and machines
working together.
We can get better reach.
We can get lift,
precision, strength, vision.
All of these are
physical superpowers
we can get through machines.
O'BRIEN [voiceover]:
So, they're focusing
on making it safe for humans
to work in close proximity
to machines.
They're using some of
the technology that's inside
my prosthetic arm.
Electrodes that can read
the faint EMG signals generated
as our nerves command
our muscles to move.
They have the capability to
interact with a human,
to understand the human,
to step in and help the human
as needed.
I am at your disposal with
187 other languages,
along with their various
dialects and sub tongues.
O'BRIEN [voiceover]:
But making robots as useful
as they are in the movies
is a big challenge.
Most neural networks run on
powerful supercomputers...
Thousands of processors
occupying entire buildings.
RUS:
We have brains that require
massive computation,
which you cannot include
on a self-contained body.
We address the size challenge by
making liquid networks.
O'BRIEN [voiceover]:
Liquid networks.
So it looks like
an autonomous vehicle
like I've seen before,
but it is a little
different, right?
ALEXANDER AMINI:
Very different.
This is an autonomous vehicle
that can drive in
brand-new environments
that it has never seen
before for the first time.
O'BRIEN [voiceover]:
Most self-driving cars
today rely,
to some extent,
on detailed databases
that help them recognize
their immediate environment.
Those robot cars get lost
in unfamiliar terrain.
O'BRIEN:
In this case,
you're not relying on
a huge, expansive
neural network.
You're running on
19 neurons, right?
Correct.
O'BRIEN [voiceover]:
Computer scientist
Alexander Amini
took me on a ride
in an autonomous vehicle
with a liquid neural
network brain.
AMINI:
We've become very accustomed
to relying on
big, giant data centers
and cloud compute.
But in an autonomous vehicle,
you cannot make
such assumptions, right?
You need to be able to operate
even if you lose
internet connectivity
and you cannot
talk to the cloud anymore.
Your entire neural network,
the brain of the car,
needs to live on the car,
and that imposes a lot
of interesting constraints.
O'BRIEN [voiceover]:
To build a brain smart enough
and small enough to do this job,
they took some inspiration
from nature,
a lowly worm
called C. elegans.
Its brain contains all of
300 neurons,
but it's a very
different kind of neuron.
It can capture
more complex behaviors
in every single piece
of that puzzle.
And also the wiring,
how a neuron talks to
another neuron
is completely different
than what we see
in today's neural networks.
O'BRIEN [voiceover]:
Autonomous cars that tap
into today's neural networks
require huge amounts of
compute power in the cloud.
But this car is using
just 19 liquid neurons.
A worm at the wheel...
sort of.
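A rough Python sketch of what makes a liquid neuron different, loosely based on published liquid time-constant ideas, not MIT's vehicle code: the state evolves continuously in time, and the input modulates each neuron's time constant. The wiring and input signal below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 19                                   # the famous 19 neurons
W = 0.4 * rng.normal(size=(n, n))        # invented recurrent wiring
w_in = rng.normal(size=n)                # invented input connections

x = np.zeros(n)                          # continuous neuron states
dt = 0.05
for t in range(400):
    u = np.sin(0.1 * t)                  # a toy sensor reading
    f = np.tanh(W @ x + w_in * u)        # synaptic drive
    tau = 1.0 / (1.0 + np.abs(f))        # input-dependent time constant
    x = x + dt * (-x + f) / tau          # continuous-time state update

print("state of the 19 neurons:", np.round(x, 2))
```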
AMINI [voiceover]:
Today's A.I. models
are really
pushing the boundaries
of the scale of compute
that we have.
They're also pushing
the boundaries
of the data sets that we have.
And that's not sustainable,
because ultimately,
we need to deploy A.I.
onto the device itself, right?
Onto the cars,
onto the surgical robots.
All of these edge devices
that actually make
the decisions.
O'BRIEN [voiceover]:
The A.I. worm may, in fact,
turn.
The portability of
artificial intelligence
was on my mind when it came time
to pick up
my new myoelectric arm...
equipped with
Coapt A.I. pattern recognition.
All right, let's just check this
real quick...
O'BRIEN [voiceover]:
A few weeks after
my trip to Chicago,
I met Brian Monroe
at his home office
outside Washington, D.C.
Are you happy with
the way it came out?
Yeah.
Would you tell me otherwise?
[laughing]:
Yeah, I would, yeah...
O'BRIEN [voiceover]:
As usual,
he did a great job
making a tight socket.
How's the socket feel?
Does it feel like
it's sliding down or
falling out...
No, it fits like a glove.
O'BRIEN [voiceover]:
It's really important in
this case,
because the electrodes designed
to read the signals
from my muscles...
have to stay in place snugly
in order to generate
accurate, reliable commands
to the actuators in my new hand.
Wait, is that you?
That's me.
[voiceover]:
He also provided me with
a human-like bionic hand.
But getting it
to work just right
took some time.
That's open and it's closing.
It's backwards?
Yeah. Now try.
If it's reversed,
I can swap the electrodes.
There we go.
That's got it.
Is it the right direction?
Yeah.
Uh-huh. Okay.
O'BRIEN [voiceover]:
It's a long way from the movies,
and I'm no Luke Skywalker.
But my new arm and I
are now together.
And I'm heartened to know
that I have the freedom
and independence
to teach and tweak it
on my own.
That's kind of cool.
Yeah.
[voiceover]:
Hopefully we will listen to
each other.
It's pretty awesome.
O'BRIEN [voiceover]:
But we might want to listen
with a skeptical ear.
JORDAN PEELE [imitating Obama]:
You see, I would never
say these things,
at least not in
a public address,
but someone else would.
Someone like Jordan Peele.
This is a dangerous time.
O'BRIEN [voiceover]:
It's even more dangerous now
than it was in 2018
when comedian Jordan Peele
combined his pitch-perfect
Obama impression
with A.I. software to make
this convincing fake video.
or whether we become some
kind of [bleep] up dystopia.
O'BRIEN [voiceover]:
Fakes are about as old as
photography itself.
Mussolini, Hitler, and Stalin
all ordered that pictures be
doctored or redacted,
erasing those
who fell out of favor,
consolidating power,
manipulating their followers
through images.
HANY FARID:
They've always been manipulated,
throughout history, but...
There was literally...
you could count on one hand
the number of people
in the world who could do this.
But now,
you need almost no skill.
And we said, "Give us an image
"of a middle-aged woman,
newscaster,
sitting at her desk,
reading the news."
O'BRIEN [voiceover]:
Hany Farid is a professor
of computer science
at U.C. Berkeley.
[on computer]:
And this is your daily dose
of future flash.
O'BRIEN [voiceover]:
He and his team
are trying to navigate
the house of mirrors
that is the world of
A.I.-enabled deepfake imagery.
Not perfect.
She's not blinking,
but it's pretty good.
And by the way, he did this
in a day and a half.
FARID [voiceover]:
It's the
classic automation story.
We have lowered
barriers to entry
to manipulate reality.
And when you do that,
more and more people will do it.
Some good people will do it,
but lots of bad people
will do it.
There'll be some
interesting use cases,
and there'll be a lot of
nefarious use cases.
Okay, so, um...
Glasses off.
How's the framing?
Everything okay?
[voiceover]:
About a week before
I got on a plane to see him...
Hold on.
O'BRIEN [voiceover]:
He asked me to meet him on Zoom
so he could get a good recording
of my voice and mannerisms.
And I assume
you're recording, Miles.
O'BRIEN [voiceover]:
And he turned the table on me
a little bit,
asking me a lot of questions
to get a good sampling.
FARID [on computer]:
How are you feeling about
the role of A.I.
as it enters into our world
on a daily basis?
I think it's very important,
first of all,
to calibrate the concern level.
Let's take it away from
the "Terminator" scenario...
[voiceover]:
The "Terminator" scenario.
Come with me
if you want to live.
O'BRIEN [voiceover]:
You know, a malevolent
neural network
hellbent on exterminating
humanity.
You're really real.
O'BRIEN [voiceover]:
In the film series,
the cyborg assassin
is memorably played
by Arnold Schwarzenegger.
Hany thought it would be fun
to use A.I.
to turn Arnold into me.
Okay.
O'BRIEN [voiceover]:
A week later, I showed up at
Berkeley's
School of Information,
ironically located in
the oldest building on campus.
So you had me do
this strange thing on Zoom.
Here I am.
What did you do with me?
Yeah, well, it's gonna teach you
to let me record
your Zoom call, isn't it?
I did this
with some trepidation.
[voiceover]:
I was excited to see what tricks
were up his sleeve.
FARID [voiceover]:
I uploaded 90 seconds of audio,
and I clicked a box saying
"Miles has given me
permission to use his voice,"
which I don't actually
think you did.
[chuckles]
Um, and, I waited about,
eh, maybe 20 seconds,
and it said, "Okay, what would
you like for Miles to say?"
And I started typing,
and I generated an audio
of you saying
whatever I wanted you to say.
We are synthesizing,
at much, much lower resolution.
O'BRIEN [voiceover]:
You could have knocked me over
with a feather
when I watched this.
A.I. O'BRIEN:
Terminators were
science fiction back then,
but if you follow the
recent A.I. media coverage,
you might think that Terminators
are just around the corner.
The reality is...
O'BRIEN [voiceover]:
The eyes and the mouth
need some work,
but it sure does sound like me.
And consider what happened
in May of 2023.
Someone posted
this A.I.-generated image
of what appeared to be
a terrorist bombing
at the Pentagon.
NEWS ANCHOR:
Today we may have witnessed
one of the first drops
in the feared flood
of A.I.-created
disinformation.
O'BRIEN [voiceover]:
It was shared on Twitter
via what seemed to be
a verified account
from Bloomberg News.
NEWS ANCHOR:
It only took seconds
to spread fast.
The Dow now down about
200 points...
Two minutes later,
the stock market dropped
a half a trillion dollars
from a single fake image.
Anybody could've made
that image,
whether it was intentionally
manipulating the market
or unintentionally,
in some ways,
it doesn't really matter.
O'BRIEN [voiceover]:
So what are the technological
innovations that make this tool
widely available?
One technique is called
the generative
adversarial network,
or GAN.
Two algorithms
in a dizzying
student-teacher back and forth.
Let's say it's learning how to
generate a cat.
FARID:
And it starts by
just splatting down
a bunch of pixels onto a canvas.
And it sends it over to
a discriminator.
And the discriminator has access
to millions and millions
of images
of the category that you want.
And it says,
"Nope, that doesn't look
like all these other things."
So it goes back to the generator
and says, "Try again."
Modifies some pixels,
sends it back
to the discriminator,
and they do this in
what's called
an adversarial loop.
O'BRIEN [voiceover]:
And eventually,
after many thousands of volleys,
the generator
finally serves up a cat.
And the discriminator says,
"Do more like that."
Today, we have a whole new way
of doing these things.
They're called diffusion-based.
What diffusion does
is it has vacuumed up
billions of images
with captions
that are descriptive.
O'BRIEN [voiceover]:
It starts by making those
labeled images
visually noisy on purpose.
FARID:
And then it corrupts it more,
and it goes backwards
and corrupts it more,
and goes backwards
and corrupts it more
and goes backwards...
And it does that
six billion times.
O'BRIEN [voiceover]:
Eventually it corrupts it
until it is unrecognizable
as the original image.
Now that it knows how
to turn an image into nothing,
it can reverse the process,
turning seemingly nothing,
into a beautiful image.
FARID:
What it's learned is how to take
a completely nondescript image,
just pure noise,
and go back to a coherent image,
conditioned on a text prompt.
You're basically
reverse engineering an image
down to the pixel.
Yeah, exactly, yeah.
And it's... and by the way...
If you had asked me,
"Will this work?"
I would have said,
"No, there's no way
this system works."
It just, it just doesn't
seem like it should work.
And that's sort of the magic
of when you get this much data
and very powerful algorithms
and very powerful computing
to be able to crunch
these massive data sets.
I mean, we're not
going to contain it.
That's done.
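The forward half of that process, corrupting a clean image step by step, can be sketched in Python. The eight-pixel "image" and noise schedule are invented; a real system trains a network to predict the added noise, so the whole corruption process can later be run in reverse, guided by a caption.

```python
import numpy as np

rng = np.random.default_rng(5)
image = np.array([0., 0., 1., 1., 1., 1., 0., 0.])   # a crisp "image"

x = image.copy()
beta = 0.3                               # how much noise per step
for step in range(10):
    noise = rng.normal(size=x.shape)
    x = np.sqrt(1 - beta) * x + np.sqrt(beta) * noise  # corrupt it more
    # (training would teach a network to predict `noise` here,
    #  which is what lets generation run this film backwards)

print("after 10 corruption steps:", x.round(2))        # ~ pure noise
```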
[voiceover]:
I sat down with Hany
and two of his grad students:
Justin Norman
and Sarah Barrington.
We looked at some of
the A.I. trickery
they have seen and made.
Somebody else
wrote some base code
and they built on it,
and built on it, and
built on it, and eventually...
O'BRIEN [voiceover]:
In a world where anything
can be manipulated
with such ease
and seeming authenticity,
how are we to know
what's real anymore?
How you look at the world,
how you interact with
people in it,
and where you look for
your threats... all of that changes.
O'BRIEN [voiceover]:
Generative A.I. is now
part of a larger ecosystem
that is built on mistrust.
We're going to live
in a world where
we don't know what's real.
FARID [voiceover]:
There is distrust of
governments,
there is distrust of media,
there is distrust of academics.
And now throw on top of that
video evidence.
So-called video evidence.
I think this is
the very definition
of throwing jet fuel onto
a dumpster fire.
And it's already happening,
and I imagine
we will see more of it.
[Arnold's voice]:
Come with me
if you want to live.
O'BRIEN [voiceover]:
But it also can be
kind of fun.
As Hany promised,
here's my face
on the Terminator's body.
[gunfire blasting]
Long before A.I. might take
an existential turn
against humanity,
we will need to
reckon with the likes...
Go! Now!
O'BRIEN [voiceover]:
Of the Milesinator.
TRAILER NARRATOR:
This time, he's back.
[booming]
O'BRIEN [voiceover]:
Who will, no doubt, be back.
Trust me.
O'BRIEN [voiceover]:
Trust,
but always verify.
So, what kind of A.I. magic
is readily available online?
It's pretty simple
to make it look
like you're fluent
in another language.
[speaking Mandarin]:
It was pretty easy to do,
I just had to upload
a video and wait.
[speaking German]:
And, suddenly,
I look pretty darn smart.
[speaking Greek]:
Sure, it's fun,
but I think you can see
where it leads to mischief
and possibly even mayhem.
[voiceover]:
Yoshua Bengio is an
artificial intelligence pioneer.
He says he didn't spend
much time
thinking about
science fiction dystopia
as he was creating
the technology.
But as his brilliant ideas
became reality,
reality set in.
BENGIO:
And the more I read,
the more I thought about it...
the more concerned I got.
If we are not honest
with ourselves,
we're gonna fool ourselves.
We're gonna...
lose.
O'BRIEN [voiceover]:
Avoiding that outcome
is now his main priority.
He has signed
several public warnings
issued by A.I. thought leaders,
including this stark
single-sentence statement
in May of 2023.
"Mitigating the risk of
extinction from A.I.
"should be a global priority
"alongside other
societal scale risks,
"such as pandemics
and nuclear war."
As we approach more and more
capable A.I. systems
that might even become stronger
than humans in many areas,
they become
more and more dangerous.
Can't we just pull
the plug on the thing?
Oh, that's
the safest thing to do,
pull the plug.
Before it gets so powerful that
it prevents us from
pulling the plug.
DAVE:
Open the pod bay doors, Hal.
HAL:
I'm sorry, Dave,
I'm afraid I can't do that.
O'BRIEN [voiceover]:
It may be some time
before computers are able
to act like
movie supervillains...
HAL:
Goodbye.
O'BRIEN [voiceover]:
But there are near-term dangers
already emerging.
Besides deepfakes and
misinformation,
A.I. can also supercharge bias
and hate content,
replace human jobs...
This is why we're striking,
everybody.
[crowd exclaiming]
O'BRIEN [voiceover]:
And make it easier
for terrorists
to create bioweapons.
And A.I. systems are so complex
that they are difficult
to comprehend,
all but impossible to audit.
RUS [voiceover]:
Nobody really understands
how those systems
reach their decisions.
So we have to be
much more thoughtful
about how we
test and evaluate them
before releasing them.
They're concerned
whether the machine will be able
to begin to think for itself.
O'BRIEN [voiceover]:
The U.S. and Europe have begun
charting a strategy
to try to ensure safe, secure,
and trustworthy
artificial intelligence.
RISHI SUNAK:
...in a way that will
be safe for our communities...
O'BRIEN [voiceover]:
But how to do that
in the midst of a frenetic race
to dominate a technology
with a predicted economic impact
of 13 trillion dollars by 2030?
There is such a strong
commercial incentive
to develop this
and win the competition
against the other companies,
not to mention
the other countries,
that it's hard
to stop that train.
But that's what
governments should be doing.
NEWS ANCHOR:
The titans of social media
didn't want to come to
Capitol Hill.
O'BRIEN [voiceover]:
Historically, the tech industry
has bridled against regulation.
You have an army of lawyers
and lobbyists
that have fought us on this...
SULEYMAN [voiceover]:
There's no question that
guardrails
will slow things down.
But the risks are uncertain
and potentially enormous.
So, it makes sense for us
to start having
the conversation right now.
O'BRIEN [voiceover]:
For me, the conversation
about A.I. is personal.
Okay, no network detected.
Okay, um...
Oh, here we go.
Okay.
And now I'm going to open,
open, open, open, open...
[voiceover]:
I used the Coapt app
to train the A.I.
inside my new prosthetic.
It says all of my
training data is good,
it's four out of five stars.
And now let's try to close.
[whirring]
All right.
Seems to be doing
what it was told.
[voiceover]:
Was my new arm listening?
Maybe.
I decided to make things
simpler.
I took off the hand and
attached a myoelectric hook.
[quietly]:
All right.
[voiceover]:
Function over form.
Not a conversation piece
necessarily at a cocktail party
like this thing is.
This looks more like
Luke Skywalker, I suppose.
But this thing has a tremendous
amount of function to it.
Although, right now,
it wants to stay open.
[voiceover]:
And that problem persisted.
Find a tripod plate...
[voiceover]:
When I tried using it
to set up my basement studio
for a live broadcast.
Come on, close.
[voiceover]:
I was quickly frustrated.
[item drops, audio beep]
Really annoying.
Not useful.
[voiceover]:
The hook continuously
opened on its own.
[clattering]Damn it!
[voiceover]:
So I completely reset
and retrained the arm.
And... reset,
there we go.
Add data...
[voiceover]:
But the software was
artificially unhappy.
"Electrodes are not
making good skin contact."
Maybe that is my problem,
ultimately.
[voiceover]:
My problem really is
I haven't given this
enough time.
Amputees tell me it can take
many months to really learn
how to use an arm like this one.
The choke point isn't
artificial intelligence.
Dead as a doornail.
[voiceover]:
But rather, finding the best way
to communicate
my intentions to it.
Little reboot there, I guess.
All right.
Close.
Open, close.
[voiceover]:
It turns out machine learning
isn't smart enough to
give me a replacement arm
like Luke Skywalker got.
Nor is it capable
of creating the Terminator.
Right now, it seems many
hopes and fears
for artificial intelligence...
Oh!
[voiceover]:
are rooted
in science fiction.
But we are walking down a road
to the unknown.
The door is opening to
a revolution.
[door closes]