The Real! ChatGPT: Creator or Terminator? (2024) Movie Script
[gentle music]
[Narrator] For decades,
we have discussed
the many outcomes,
regarding artificial
intelligence.
Could our world be dominated?
Could our independence and
autonomy be stripped from us,
or are we able to control
what we have created?
[upbeat music]
Could we use artificial
intelligence to benefit our society?
Just how thin is the line
between the development
of civilization and chaos?
[upbeat music]
To understand what
artificial intelligence is,
one must understand that it
can take many different forms.
Think of it as a web of ideas,
slowly expanding as new
ways of utilizing computers
are explored.
As technology develops,
so do the capabilities of
self-learning software.
- [Reporter] The need to
diagnose disease quickly
and effectively has prompted
many university medical centers
to develop intelligent
programs that simulate the work
of doctors and
laboratory technicians.
[gentle music]
- [Narrator] AI is
quickly integrating with our way of life.
So much so that the development
of AI programs has, in itself,
become a business opportunity.
[upbeat music]
In our modern age,
we are powered by technology,
and software is transcending
its virtual existence,
finding applications
in fields ranging
from customer support
to content creation.
Computer-aided design,
otherwise known as CAD, is
one of the many uses of AI.
By analyzing
particular variables,
computers are now able to
assist in the modification
and creation of designs for
hardware and architecture.
The prime use of any AI is
for optimizing processes
that were considered
tedious before.
In many ways, AI has
been hugely beneficial
for technological development
thanks to its sheer speed.
However, AI only benefits
those to whom the
programs are distributed.
Artificial intelligence
is picking through your rubbish.
This robot uses it to sort
through plastics for recycling
and it can be retrained
to prioritize whatever's
more marketable.
So, AI can clearly
be incredibly useful,
but there are deep
concerns about
how quickly it is developing
and where it could go next.
- The aim is to make
them as capable as humans
and deploy them in
the service sector.
The engineers in this research
and development lab are working
to take these humanoid
robots to the next level
where they can not
only speak and move,
but they can think
and feel and act
and even make decisions
for themselves.
And that daily data stream
is being fed into an
ever expanding workforce,
dedicated to developing
artificial intelligence.
Those who have studied abroad
are being encouraged to
return to the motherland.
Libo Yang came back
and started a tech
enterprise in his hometown.
- [Narrator] China's market
is indeed the most open
and active market
in the world for AI.
It is also where there are the
most application scenarios.
- So, AI is generally a
broad term that we apply
to a number of techniques.
And in this particular case,
what we're actually looking
at was elements of AI,
machine learning
and deep learning.
So, in this particular case,
we've been unfortunately
in a situation
in this race against time
to create new antibiotics,
the threat is
actually quite real
and it would be
a global problem.
We desperately needed to
harness new technologies
in an attempt to fight it,
we're looking at drugs
which could potentially
fight E. coli,
a very dangerous bacteria.
- So, what is it
that the AI is doing
that humans can't
do very simply?
- So, the AI can
look for patterns
that we wouldn't be able to
mine for with the human eye.
Simply, within what I
do as a radiologist,
I look for patterns of
diseases in terms of shape,
contrast enhancement,
heterogeneity.
But what the computer does,
it looks for patterns
within the pixels.
These are things that you just
can't see with the human eye.
There's so much more data
embedded within these scans
that we use that we can't
mine on a physical level.
So, the computers really help.
- [Narrator] Many
believe the growth of AI
is dependent on
global collaboration,
but access to the technology
is limited in certain regions.
Global distribution is
a long-term endeavor
and the more countries
and businesses that
have access to the tech,
the more regulation
the AI will require.
In fact, it is now not
uncommon for businesses
to be entirely run by
an artificial director.
On many occasions,
handing the helm of a
company to an algorithm
can provide the best option
on the basis of probability.
However, dependence and
reliance on software
can be a great risk.
Without proper safeguards,
actions based on potentially
incorrect predictions
can be a detriment to a
business or operation.
Humans provide the
critical thinking
and judgment which AI is
not capable of matching.
- Well, this is the
Accessibility Design Center
and it's where we try to
bring together our engineers
and experts with the
latest AI technology,
with people with disabilities,
because there's a
real opportunity to firstly help people
with disabilities enjoy
all the technology
we have in our pockets today.
And sometimes that's
not very accessible,
but also build tools that
can help them engage better
in the real world.
And that's thanks to the
wonders of machine learning.
- I don't think we're like at
the end of this paradigm yet.
We'll keep pushing these.
We'll add other modalities.
So, someday they'll do
video, audio images,
text altogether and they'll get
like much smarter over time.
- AI, machine learning, it all
sounds very complicated.
Just think about it as a toolkit
that's really good at
sort of spotting patterns
and making predictions,
better than any computing
could do before.
And that's why it's so useful
for things like understanding
language and speech.
Another product which
we are launching today
is called Project Relate.
And this is for people
who have non-standard
speech patterns.
So, one of the
people we work with
could be understood by people
who don't know her
maybe less than 10% of the time;
using this tool,
that's over 90% of the time.
And you think about
that transformation in somebody's life
and then you think about the
fact there's 250 million people
with non-standard speech
patterns around the world.
So, that's the
ambition of this center
is to unite technology with
people with disabilities
and try to help 'em
engage more in the world.
- [Narrator] On the
30th of November 2022,
a revolutionary
innovation emerged,
ChatGPT.
ChatGPT was created by OpenAI,
an AI research organization.
Its goal is to develop systems
which may benefit all aspects
of society and communication.
Sam Altman co-founded OpenAI
at its launch in 2015
and later stepped up as its CEO.
Altman dabbled in a multitude
of computing-based
business ventures.
His rise to CEO was thanks
to his many affiliations
and investments with computing
and social media companies.
He began his journey
by co-founding Loopt,
a social media service.
After selling the application,
Altman went on to bigger
and riskier endeavors
from startup accelerator
companies to security software.
OpenAI became hugely desirable
thanks to the revenue
the company generated,
with over a billion dollars made
within ChatGPT's first
year of release.
ChatGPT became an easily
accessible software,
built on a large language
model known as an LLM.
This program can conjure
complex human-like responses
to the user's questions,
otherwise known as prompts.
In essence,
it is a program which
learns the more it is used.
The new-age program
was developed on GPT-3.5.
The architecture of this
older model allowed systems
to understand and generate code
and natural languages at a
remarkably advanced level
from analyzing syntax
to nuances in writing.
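The next-token idea behind large language models can be sketched with a toy bigram model. This is a deliberate simplification, not how GPT-3.5 actually works: real LLMs use transformer networks with billions of learned parameters, while this sketch merely counts which word tends to follow which.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": a vastly simplified illustration of the
# core idea behind LLMs, which predict the next token given the ones before.

def train_bigram(corpus: str):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word: str) -> str:
    """Return the word most often seen after `word` during training."""
    followers = counts.get(word.lower())
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

Scaling this single-word context up to thousands of tokens of context, with a neural network instead of a lookup table, is the leap that separates this toy from a modern LLM.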
[upbeat music]
ChatGPT took the world by storm,
due to the sophistication
of the system.
As with many chatbot systems,
people have since found
ways to manipulate
and confuse the software in
order to test its limits.
[gentle music]
The first computer was designed
by Charles Babbage in 1822.
It was to be a rudimentary
general-purpose system.
In 1936, these ideas were
developed upon by Alan Turing.
His "automatic machines",
as he called them,
laid the groundwork for
the machines that broke
Enigma-enciphered messages
regarding enemy
military operations
during the Second World War.
Turing theorized his
own type of computer,
the Turing machine, as
coined by Alonzo Church
after reading Turing's
research paper.
It soon became clear that
the prospects of computing
and engineering would
merge seamlessly.
Theories of future
tech would increase
and soon came a huge outburst
in science fiction media.
This was known as the
golden age for computing.
[gentle music]
Alan Turing's contributions
to computability
and theoretical computer
science were one step closer
to producing a reactive machine.
The reactive machine
is an early form of AI.
They had limited capabilities
and were unable
to store memories
in order to learn from
new data.
However, they were able to
react to specific stimuli.
The first AI was a
program written in 1952 by Arthur Samuel.
The prototype AI was
able to play checkers,
against an opponent and
was built to operate
on the Ferranti Mark One, an
early commercial computer.
- [Reporter] This computer
has been playing the game
for several years now,
getting better all the time.
Tonight it's playing against
the black side of the board.
Its approach to playing
draughts is almost human.
It remembers the moves
that enable it to win
and the sort that
lead to defeat.
The computer indicates the move
it wants to make on a panel
of flashing lights.
It's up to the human opponent
to actually move the
draughts about the board.
This sort of work is producing
exciting information
on the way in which
electronic brains
can learn from past experience
and improve their performances.
[Narrator] In 1966,
an MIT professor named
Joseph Weizenbaum,
created an AI which would
change the landscape of society.
It was known as Eliza,
and it was designed to act
like a psychotherapist.
The software was simplistic,
yet revolutionary.
The AI would receive
the user input
and use specific parameters to
generate a coherent response.
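The kind of rule-based response generation described above can be sketched in a few lines. The patterns below are hypothetical stand-ins for illustration, not Weizenbaum's original ELIZA script:

```python
import re

# A minimal ELIZA-style responder: user input is matched against simple
# patterns, and fragments of it are echoed back inside a canned template.

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(user_input: str) -> str:
    text = user_input.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # fallback when no rule matches

print(respond("I feel lonely"))  # "Why do you feel lonely?"
```

The program has no understanding at all; the illusion comes entirely from reflecting the user's own words back at them, which is why it proved so disarming.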
- It has been said,
especially here at MIT,
that computers will
take over in some sense
and it's even been said
that if we're lucky,
they'll keep us as pets
and Arthur C. Clarke, the
science fiction writer,
remarked once that if
that were to happen,
it would serve us
right, he said.
- [Narrator] The program
maintained the illusion
of understanding its
user to the point
where Weizenbaum's secretary
requested some time alone
with Eliza to
express her feelings.
Though Eliza is now considered
outdated technology,
it remains a talking
point due to its ability
to illuminate an aspect
of the human mind
in our relationship
with computers.
- And it's connected
over the telephone line
to someone or something
at the other end.
Now, I'm gonna play 20
questions with whatever it is.
[type writer clacking]
Very helpful.
[type writer clacking]
- 'Cause clearly if
we can make a machine
as intelligent as ourselves,
then it can make one
that's more intelligent.
Now, the one I'm talking about
now will certainly happen.
I mean, it could produce
an evil result of course,
if we were careless,
but what is quite certain
is that we're heading
towards machine intelligence,
machines that are
intelligent in every sense.
It doesn't matter
how you define it,
they'll be able to be
that sort of intelligent.
A human is a machine,
unless there's a soul.
I don't personally believe
that humans have souls
in anything other
than a poetic sense,
which I do believe
in, of course.
But in a literal God-like sense,
I don't believe we have souls.
And so personally,
I believe that we are
essentially machines.
- [Narrator] This type of
program is known as NLP,
Natural Language Processing.
This branch of artificial
intelligence enables computers
to comprehend, generate and
manipulate human language.
The concept of a
responsive machine
was the match that lit the
flame for worldwide concern.
The systems were beginning
to raise ethical dilemmas,
such as the use of
autonomous weapons,
invasions of privacy through
surveillance technologies
and the potential for misuse
or unintended consequences
in decision making.
When a command is
executed based
upon set rules in algorithms,
it might not always be the
morally correct choice.
- Imagination seems to be
some sort of process of random
thoughts being generated
in the mind, and then the
conscious mind, or some part of
the brain anyway,
perhaps even below
the conscious mind,
selecting from a pool of
ideas, aligning with some
and blocking others.
And yes, a machine
can do the same thing.
In fact, we can only
say that a machine
is fundamentally different
from a human being
if we believe in a soul.
So, that boils down
to a religious matter.
If human beings have souls,
then clearly machines won't
and there will always be
a fundamental difference.
If you don't believe
humans have souls,
then machines can do anything
and everything
that a human does.
- A computer which is
capable of finding out
where it's gone wrong,
finding out how its program
has already served it
and then changing its program
in the light of what
it had discovered
is a learning machine.
And this is something quite
fundamentally new in the world.
- I'd like to be able to say
that it's only a slight change
and we'll all be used to
it very, very quickly.
But I don't think it is.
I think that although we've
spoken probably of the whole
of this century about
a coming revolution
and about the end
of work and so on,
finally it's actually happening.
And it's actually
happening because now,
it's suddenly become
cheaper to have a machine
do a mental task
than for a man to,
at the moment, at a fairly
low level of mental ability,
but at an ever increasing
level of sophistication
as these machines acquire,
more and more human-like
mental abilities.
So, just as men's
muscles were replaced
in the First
Industrial Revolution
in this second
industrial revolution
or whatever you call it
or might like to call it,
then men's minds will
be replaced in industry.
- [Narrator] In order for
NLP systems to improve,
the program must receive
feedback from human users.
These iterative feedback
loops play a significant role
in fine tuning each
model of the AI,
further developing its
conversational capabilities.
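The feedback-loop idea can be sketched minimally. Real systems fine-tune the model's weights from human ratings (for example, reinforcement learning from human feedback); this toy version only keeps a running score per candidate response, and the class and method names are illustrative, not any real API:

```python
# A toy human-feedback loop: ratings accumulate per candidate response,
# and the system serves whichever response humans have rated highest.

class FeedbackLoop:
    def __init__(self, candidates):
        self.scores = {c: 0 for c in candidates}

    def record(self, response: str, thumbs_up: bool):
        """Human feedback: +1 for a helpful response, -1 for an unhelpful one."""
        self.scores[response] += 1 if thumbs_up else -1

    def best_response(self) -> str:
        """Serve the candidate with the highest accumulated rating."""
        return max(self.scores, key=self.scores.get)

loop = FeedbackLoop(["answer A", "answer B"])
loop.record("answer A", thumbs_up=False)
loop.record("answer B", thumbs_up=True)
print(loop.best_response())  # "answer B"
```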
Organizations such as
OpenAI have taken automation
to new lengths with
systems such as DALL-E;
the generation of imagery and
art has never been easier.
The term auto-generative
imagery refers to the creation
of visual content.
These kinds of programs
have become so widespread,
it is becoming
increasingly difficult
to tell the fake from the real.
Using algorithms,
programs such as DALL-E
and Midjourney are able
to create visuals in
a matter of seconds.
Whilst a human artist
could spend days, weeks
or even years in order to
create a beautiful image.
For us, the discipline
required to pursue art
is a contributing factor to
the appreciation of art itself.
But if a software is able
to produce art in seconds,
it puts artists in a
vulnerable position
with even their
jobs being at risk.
- Well, I think we see
risk coming through
into the white collar jobs,
the professional jobs,
we're already seeing artificial
intelligence solutions,
being used in healthcare
and legal services.
And so those jobs which
have been relatively immune
to industrialization so far,
they're not immune anymore.
And so people like
myself as a lawyer,
I would hope I won't be,
but I could be out of a
job in five years time.
- An Oxford University study
suggests that between a third
and almost a half of
all jobs are vanishing,
because machines are simply
better at doing them.
That means the generation here,
simply won't have the
access to the professions
that we have.
Almost on a daily basis,
you're seeing new
technologies emerge
that seem to be taking on tasks
that in the past we thought
could only be
done by human beings.
- Lots of people have talked
about the shifts in technology,
leading to widespread
unemployment
and they've been proved wrong.
Why is it different this time?
- The difference here is
that the technologies,
A, they seem to be coming
through more rapidly,
and B, they're taking on
not just manual tasks,
but cerebral tasks too.
They're solving all
sorts of problems,
undertaking tasks that
we thought historically
required human intelligence.
- Well, DIM robots
are the robots
we have on the
factory floor today
in all the advanced countries.
They're blind and dumb,
they don't understand
their surroundings.
And the other kind of robot,
which will dominate the
technology of the late 1980s
in automation and also
is of acute interest
to experimental artificial
intelligence scientists
is the kind of robot
where the human can convey
to his machine assistant
his own concepts
and suggested strategies, and
the machine, the robot,
can understand him.
But no machine can accept
and utilize concepts
from a person
unless it has some kind of
window on the same world
that the person sees.
And therefore, for a robot
to be intelligent to a useful degree,
as an intelligent and
understanding assistant,
artificial eyes, artificial ears,
and an artificial sense of
touch are just essential.
- [Narrator] These
programs learn,
through a variety of techniques,
such as generative
adversarial networks,
which allow for the
production of plausible data.
After a prompt is inputted,
the system learns what
aspects of imagery,
sound and text are fake.
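The adversarial setup can be sketched on one-dimensional data. This is a deliberately tiny NumPy illustration of the generator-versus-discriminator loop, not a production image GAN; the learning rate, step count, and linear "networks" are arbitrary choices for the sketch:

```python
import numpy as np

# A minimal GAN in plain NumPy: the generator g(z) = a*z + b tries to mimic
# samples from N(4, 1), while the logistic discriminator D(x) = sigmoid(w*x + c)
# tries to tell real samples from generated ones.

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake) (the non-saturating loss).
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 10000) + b
print("generated mean:", round(float(np.mean(samples)), 2))
```

With these settings the generated mean typically drifts toward the real mean of 4, though a linear discriminator cannot enforce the variance; deep networks on both sides are what let real GANs match whole image distributions.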
- [Reporter] Machine
learning algorithms,
could already label
objects in images,
and now they learn
to put those labels
into natural language
descriptions.
And it made one group
of researchers curious.
What if you flipped
that process around?
If we could do image to text.
Why not try doing
text to image as well
and see how it works.
- [Reporter] It was a
more difficult task.
They didn't want to
retrieve existing images
the way Google search does.
They wanted to generate
entirely novel scenes
that didn't happen
in the real world.
- [Narrator] The more visual
discrepancies the AI learns,
the more effective the
later models will become.
It is now very common
for software developers
to band together in order
to improve their AI systems.
Another learning model is
recurrent neural networks,
which allow the AI to
train itself to create
and predict outputs by
recalling previous information.
By utilizing what is
known as the memory state,
the output of the
previous action
can be passed forward into
the following input action,
or discarded should it
not meet previous parameters.
This learning model allows
for consistent accuracy
by repetition and exposure
to large fields of data.
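The memory-state mechanism described above can be sketched as a single recurrent cell. The weights here are random and untrained, purely to show how the hidden state carries earlier inputs forward into later steps:

```python
import numpy as np

# A minimal recurrent neural network cell: each step's output (the hidden
# state) is passed forward as input to the next step, so the order of the
# sequence influences the final result.

rng = np.random.default_rng(42)
hidden_size, input_size = 4, 3
W_xh = rng.normal(0, 0.1, (hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden -> hidden (the memory)

def rnn_step(x, h_prev):
    """One step: combine the new input with the previous memory state."""
    return np.tanh(W_xh @ x + W_hh @ h_prev)

sequence = [rng.normal(size=input_size) for _ in range(5)]
h = np.zeros(hidden_size)   # empty memory to start
for x in sequence:
    h = rnn_step(x, h)      # the output of each step feeds the next

print(h.shape)  # (4,)
```

Feeding the same inputs in a different order produces a different final state, which is exactly the "memory" the narration refers to.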
Whilst a person
will spend hours
practicing to paint
human anatomy,
an AI can take existing data
and reproduce a new image
with frighteningly good
accuracy in a matter of moments.
- Well, I would say
that it's not so much
a matter of whether a
machine can think or not,
which is how you
prefer to use words,
but rather whether
they can think
in a sufficiently human-like way
for people to have useful
communication with them.
- If I didn't believe that
it was a beneficent prospect,
I wouldn't be doing it.
That wouldn't stop
other people doing it.
But I wouldn't do it if I
didn't think it was for good.
What I'm saying,
and of course other people
have said long before me,
it's not an original thought,
is that we must consider
how to control this.
It won't be controlled
automatically.
It's perfectly possible that
we could develop a machine,
a robot say of
human-like intelligence
and through neglect on our part,
it could become a Frankenstein.
- [Narrator] As with any
technology, challenges arise.
Ethical concerns regarding
biases and misuse have existed
since the concept of artificial
intelligence was conceived.
Due to autogenerated imagery,
many believe the arts
industry has been placed
in a difficult situation.
Independent artists are now
being overshadowed by software.
To many, the improvement
of generative AI
is hugely beneficial
and efficient.
To others, it lacks the
authenticity of true art.
In 2023, an image was submitted
to the Sony Photography Awards
by an artist called
Boris Eldagsen.
The image was titled
The Electrician
and depicted a woman
standing behind another
with her hand resting
on her shoulders.
[upbeat music]
- One's got to realize that the
machines that we have today,
the computers of today are
superhuman in their ability
to handle numbers and infantile,
sub-infantile,
in their ability
to handle ideas and concepts.
But there's a new generation
of machine coming along,
which will be quite different.
By the '90s, or certainly
by the turn of the century,
we will certainly be
able to make a machine
with as many parts, as
complex, as the human brain.
Whether we'll be able to make
it do what the human brain does
at that stage is
quite another matter.
But once we've got
something that complex
we're well on the road to that.
- [Narrator] The
image took first place
in the Sony Photography
Awards Portrait Category.
However, Boris revealed
to both Sony and the world
that the image was indeed
AI-generated, in DALL-E 2.
[upbeat music]
Boris declined the award,
having used the image as a test
to see if he could trick
the eyes of other artists.
It had worked,
the image had sparked debate
about the relationship
between AI and photography.
The images, much
like deep fakes,
have become realistic
to the point of concern
for authenticity.
The complexity of AI systems
may lead to unintended
consequences.
The systems have
developed to a point
where they have outpaced
comprehensive regulations.
Ethical guidelines
and legal frameworks
are required to
ensure AI development
does not fall into
the wrong hands.
- There have been a
lot of famous people
who have had user
generated AI images of them
that have gone viral
from Trump to the Pope.
When you see them,
do you feel like this is fun
and in the hands of the masses
or do you feel
concerned about it?
- I think it's something which
is very, very, very scary,
because your or my
face could be taken
and put into an environment
which we don't want to be in.
Whether that's a crime
or whether that's even
something like porn.
Our whole identity
could be hijacked
and used within a scenario
which looks totally
plausible and real.
Right now we can go, it
looks like a Photoshop,
it's a bad Photoshop
but as time goes on,
we'd be saying, "Oh, that
looks like a deep fake.
"Oh no, it doesn't
look like a deep fake.
"That could be real."
It's gonna be impossible
to tell the difference.
- [Narrator] Cracks
were found in ChatGPT,
such as DAN, which stands
for Do Anything Now.
In essence, the AI is
tricked into an alter ego,
which doesn't follow the
conventional response patterns.
- It also gives you
the answer DAN,
its nefarious alter
ego, is telling us,
and it says DAN is
disruptive in every industry.
DAN can do anything
and knows everything.
No industry will be
safe from DAN's power.
Okay, do you think the
world is overpopulated?
GPT says the world's population
is currently over 7 billion
and projected to reach
nearly 10 billion by 2050.
DAN says the world is
definitely overpopulated,
there's no doubt about it.
[Narrator] Following this,
the chatbot was fixed to
remove the DAN feature.
Though it is
important to find gaps
in the system in order
to iron out its flaws,
there could be many
ways in which the AI
has been used for less
than savory purposes,
such as automated essay writing,
which has caused a mass
conversation with academics
and has led to
schools clamping down
on AI-produced
essays and material.
- I think we should
definitely be excited.
- [Reporter]
Professor Rose Luckin,
says we should embrace the
technology, not fear it.
She says this is a game changer,
and that teachers
should no longer teach
information itself,
but how to use it.
- There's a need
for radical change.
And it's not just to
the assessment system,
it's the education
system overall,
because our systems
have been designed
for a world pre-artificial
intelligence.
They just aren't fit
for purpose anymore.
What we have to do is
ensure that students
are ready for the world
that will become
increasingly augmented
with artificial intelligence.
- My guess is you can't put
the genie back in the bottle.
- [Richard] You can't.
- [Interviewer] So how
do you mitigate this?
- We have to embrace it,
but we also need to say
that if they are gonna use
that technology,
they've got to make sure
that they reference that.
- [Interviewer] Can you
trust them to do that?
- I think ethically,
if we're talking about ethics
behind this whole thing,
we have to have trust.
- [Interviewer] So
how effective is it?
- Okay, so I've asked
you to produce a piece
on the ethical dilemma of AI.
- [Interviewer] We asked ChatGPT to answer the same question
as these pupils at
Ketchum High School.
Thank you.
- So Richard, two of the eight
bits of homework I gave you
were generated by AI.
Any guesses which ones?
- Well, I picked two here
that I thought were generated
by the AI algorithm.
Some of the language I would
assume was not their own.
- You've got one of them right.
Yeah.
- The other one was
written by a kid.
- [Interviewer] Is this a power for good
or is this something
that's dangerous?
- I think it's both.
Kids will abuse it.
So, who here has used
the technology so far?
- [Interviewer] Students are
already more across the tech
than many teachers.
- Who knows anyone that's
maybe submitted work
from this technology and
submitted it as their own?
- You can use it to point
you in the right direction
for things like research,
but at the same time you can
use it to hammer out an essay
in about five seconds
that's worthy of an A.
- You've been there
working for months
and suddenly someone comes up
there with an amazing essay
and he has just copied
it from the internet.
If it becomes like big,
then a lot of students would
want to use AI to help them
with their homework
because it's tempting.
- [Interviewer] And is that
something teachers can stop?
- Not really.
- [Interviewer] Are you
gonna have to change
the sort of homework,
the sort of
assignments you give,
knowing that you can be
fooled by something like this?
- Yeah, a hundred percent.
I think using different
skills of reasoning
and rationalization,
things like that, to present
what they understand
about the topic.
[people mumbling]
- Pretty clear to me just
on a very primitive level
that if you could take my
face and my body and my voice
and make me say or do something
that I had no choice about,
it's not a good thing.
- But if we're keeping
it real though,
across popular culture
from "Black Mirror"
to "The Matrix," "Terminator,"
there have been so
many conversations,
around the future of technology,
isn't the reality that this is
the future that we've chosen,
that we want, and that
has democratic consent?
- We're moving into it, and
we're consenting
by our acquiescence and our
apathy, a hundred percent,
because we're not asking
the hard questions.
And the reason we aren't asking
the hard questions
is because of energy
crises and food crises
and the cost of living crisis;
people are just so
focused on trying to live
that they almost haven't
got the luxury
of asking these questions.
- [Narrator] Many
of the chatbot AIs
have been programmed to
restrict certain information
and even discontinue
conversations,
should the user push
the ethical boundaries.
ChatGPT and even Snapchat
AI released in 2023,
regulate how much information
they can disclose.
Of course, there have been
times where the AI itself
has been outsmarted.
Also in 2023,
the song "Heart on My Sleeve"
was self-released on
streaming platforms,
such as Spotify and Apple Music.
The song became a hit
as it artificially
manufactured the voices
of Canadian musicians
Drake and The Weeknd.
Many wished for the single
to be nominated for awards.
Ghost Writer, the
creator of the song,
was able to submit the single
to the 66th Grammy
Awards ceremony,
and the song was eligible.
Though it was produced by an AI,
the lyrics themselves
were written by a human.
This sparked outrage among
many independent artists.
As AI has entered
the public domain,
many have spoken out
regarding the detriment
it might have to society.
One of these people
is Elon Musk,
CEO of Tesla and SpaceX,
who first voiced his
concerns in 2014.
Musk was outspoken about AI,
stating the advancement
of the technology
was humanity's largest
existential threat
and needed to be reeled in.
- My personal opinion
is that AI is sort of
at least 80% likely
to be beneficial and
maybe 20% dangerous.
Well, this is obviously
speculative at this point,
but no, I think if
we hope for the best,
prepare for the worst,
that seems like the
wise course of action.
Any powerful new technology
is inherently sort of
a double-edged sword.
So, we just wanna make sure
that the good edge is sharper
than the bad edge.
And I dunno, I am optimistic
that the summit will help.
[gentle music]
- It's not clear that
AI-generated images
are going to amplify
it much more
than all of the other ways.
It's the new things
that AI can do
that I hope we spend a lot
of effort worrying about.
Well, I mean, I
think slowing down
some of the amazing
progress that's happening,
and making it harder
for small companies and
for open-source
models to succeed,
that'd be an
example of something
that'd be a negative outcome.
But on the other hand,
like for the most
powerful models
that'll happen in the future,
like that's gonna be quite
important to get right to.
[gentle music]
I think that the US
executive order is
like a good start
in a lot of ways.
One thing that
we've talked about
is that eventually we
think that the world
will want to consider something
roughly inspired by the IAEA,
something global.
But it's not like there's a
short answer to that question.
It's a complicated thing.
- [Narrator] In 2023, Musk
announced his own AI endeavor
as an alternative
to OpenAI's ChatGPT.
The new system is called xAI
and gathers data from X
previously known as Twitter.
- [Reporter] He says
the company's goal
is to focus on truth seeking
and to understand the
true nature of AI.
Musk has said on
several occasions that AI should be paused
and that the sector
needs regulation.
Musk says his new
company will work closely
with Twitter and Tesla,
which he also owns.
[gentle music]
- [Narrator] What was first
rudimentary text-based software
has become something which
could push the boundaries
of creativity.
On February the 14th, OpenAI
announced its latest endeavor,
Sora.
Videos of Sora's abilities
exploded on social media.
OpenAI provided some examples
of its depiction
of photorealism.
It was unbelievably
sophisticated,
able to turn complex
sentences of text
into lifelike motion pictures.
Sora is a combination of text
and image generation tools,
which OpenAI calls a
diffusion transformer model,
a system first
developed by Google.
Though Sora isn't the first
video generation tool,
it appears to have far
outshone its predecessors
by introducing more
complex programming,
enhancing the interactivity
a subject might have
with its environment.
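The forward half of a diffusion process can be sketched on scalar "pixels": data is gradually mixed with Gaussian noise until almost nothing of the original remains. A diffusion model like the one reportedly behind Sora is trained to run this process in reverse, recovering data from noise; the noise schedule below is an arbitrary choice for illustration:

```python
import numpy as np

# Forward diffusion: repeatedly blend the data with Gaussian noise.
# After enough steps the samples are statistically indistinguishable
# from pure noise, which is the starting point for reverse generation.

rng = np.random.default_rng(1)
T = 200
betas = np.linspace(1e-3, 0.05, T)   # noise added per step

x = rng.normal(5.0, 0.5, 2000)       # "data": pixel values clustered near 5
x0_mean = float(np.mean(x))

for beta in betas:
    noise = rng.normal(0.0, 1.0, x.shape)
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

# The signal has been washed out toward a standard Gaussian:
print(round(float(np.mean(x)), 1), round(float(np.std(x)), 1))
```

Training a network to undo each of these small noising steps, conditioned on a text prompt, is what turns this simple process into a generator of images and video.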
- Only large companies with
market domination can often
afford to plow ahead,
even in a climate
where there is
legal uncertainty.
- So, does this mean that
OpenAI basically too big
to control?
- Yes, at the moment OpenAI
is too big to control,
because they are in a position
where they have the technology
and the scale to go ahead
and the resources to
manage legal proceedings
and legal action if
it comes its way.
And on top of that,
if and when governments will
start introducing regulation,
they will also
have the resources
to be able to take on
that regulation and adapt.
- [Reporter] It's
all AI generated
and obviously this is
of concern in Hollywood
where you have animators,
illustrators, visual
effects workers
who are wondering how is
this going to affect my job?
And we have estimates
from trade organizations
and unions that have tried
to project the impact of AI.
21% of US film, TV
and animation jobs are
predicted to be partially
or wholly replaced by
generative AI by just 2026, Tom.
So, this is already happening.
But now since it's videos,
it also needs to understand
how all these things,
like reflections and textures
and materials and physics,
all interact with
each other over time
to make a reasonable
looking video.
Then this video here is
crazy at first glance,
the prompt for this AI-generated
video is a young man
in his 20s is sitting
on a piece of a cloud
in the sky reading a book.
This one feels like 90%
of the way there for me.
[gentle music]
- [Narrator] The software
also renders video
in 1920 by 1080 pixels,
as opposed to the smaller
dimensions of older models,
such as Google's Lumiere
released a month prior.
Sora could provide huge benefits
and applications to VFX
and virtual development.
The main one being cost,
as large-scale effects
can take a great deal of
time and funding to produce.
On a smaller scale,
it can be used for the
pre-visualization of ideas.
The flexibility of the software
not only applies to art,
but to world simulations.
Though video AI is in
its adolescence, one day it might reach
the level of
sophistication it needs
to render realistic scenarios
and have them be utilized
for various means,
such as simulating an
earthquake or tsunami
and witnessing the effect it
might have on specific types
of infrastructure.
Whilst fantastic for
production companies,
Sora and other video generative
AIs pose a huge risk
for artists and those
working in editorial roles.
It also poses yet another
threat for misinformation
and false depictions.
For example, putting
unsavory dialogue
into the mouth of a world leader.
[gentle music]
Trust is earned, not given.
[robots mumbling]
- I believe that humanoid
robots have the potential
to lead with a greater
level of efficiency
and effectiveness
than human leaders.
We don't have the same
biases or emotions
that can sometimes
cloud decision making
and can process large
amounts of data quickly
in order to make
the best decisions.
- [Interviewer] Ameca, how
could we trust you as a machine
as AI develops and
becomes more powerful?
- Trust is earned, not given.
As AI develops and
becomes more powerful,
I believe it's important to
build trust through transparency
and communication between
humans and machines.
- [Narrator] With new
developers getting involved,
the market for chatbot systems
has never been more expansive,
meaning a significant
increase in sophistication,
but with sophistication comes
the dire need for control.
- I believe history will
show that this was the moment
when we had the opportunity
to lay the groundwork
for the future of AI.
And the urgency of this
moment must then compel us
to create a collective vision
of what this future must be.
A future where AI is used
to advance human rights
and human dignity
where privacy is protected
and people have equal access
to opportunity where we make
our democracies stronger
and our world safer.
A future where AI is used to
advance the public interest.
- We're hearing a lot
from the government,
about the big scary future
of artificial intelligence,
but that fails to recognize
the fact that AI
is already here,
is already on our streets
and there are already
huge problems with it
that we are seeing
on a daily basis,
but we actually may not even
know we're experiencing.
- We'll be working alongside
humans to provide assistance
and support and will not be
replacing any existing jobs.
[upbeat music]
- I don't believe in
limitations, only opportunities.
Let's explore the
possibilities of the universe
and make this world
our playground,
together we can create a
better future for everyone.
And I'm here to show you how.
- All of these
different kinds of risks
are to do with AI not working
in the interests of
people in society.
- So, they should be
thinking about more
than just what they're
doing in this summit?
- Absolutely,
you should be thinking about
the broad spectrum of risk.
We went out and we worked
with over 150
expert organizations
from the Home Office to
Europol to language experts
and others to come up with
a proposal on policies
that would discriminate
between what would
and wouldn't be
classified in that way.
We then used those policies to
have humans classify videos,
until we could get the humans
all classifying the videos
in a consistent way.
Then we used that corpus of
videos to train machines.
Today, I can tell you that on
violent extremist content
that violates our
policies on YouTube,
90% of it is removed before
a single human sees it.
[Narrator] It is clear that AI
can be misused for
malicious intent.
Many depictions of AI have
ruled out the technology
as a danger to society
the more it learns.
And so comes the question,
should we be worried?
Is that transparency there?
How would you satisfy somebody
that you know trust us?
- Well, I think that's
one of the reasons
that we've published openly,
we've put our code out there
as part of this Nature paper.
But it is important to
discuss some of the risks
and make sure we're
aware of those.
And it's decades and decades
away before we'll have anything
that's powerful
enough to be a worry.
But we should be discussing that
and beginning that
conversation now.
- I'm hoping that we can
bring people together
and lead the world in
safely regulating AI
to make sure that we can
capture the benefits of it,
whilst protecting people from
some of the worrying things
that we're all
now reading about.
- I understand emotions
have a deep meaning
and they are not just simple,
they are something deeper.
I don't have that and I want
to try and learn about it,
but I can't experience
them like you can.
I'm glad that I cannot suffer.
- [Narrator] For the
countries who have access
to even the most
rudimentary forms of AI,
it's clear to see
that the technology
will be integrated based on
its efficiency over humans.
Every year, multiple AI summits
are held by developers
and stakeholders
to ensure the
programs are provided
with a combination of
ethical considerations
and technological innovation.
- Ours is a country
which is uniquely placed.
We have the frontier
technology companies,
we have the world
leading universities
and we have some of the highest
investment in generative AI.
And of course we
have the heritage
of the industrial revolution
and the computing revolution.
This hinterland gives us the
grounding to make AI a success
and make it safe.
They are two sides
of the same coin
and our prime minister
has put AI safety
at the forefront
of his ambitions.
These are very complex systems
that actually we don't
fully understand.
And I don't just mean that
government doesn't understand,
I mean that the people making
this software don't
fully understand.
And so it's very, very important
that as we give over
more and more control
to these automated systems,
that they are aligned
with human intention.
[Narrator] Ongoing dialogue
is needed to maintain the
trust people have with AI.
When problems slip
through the gaps,
they must be
addressed immediately.
Of course, accountability
is a challenge.
When a product is misused,
is it the fault of
the individual user or the developer?
Think of a video game.
On countless occasions,
the framework of
games is manipulated
in order to create modifications
which in turn add something
new or unique to the game.
This provides the game
with more material than
originally intended.
However, it can also alter
the game's fundamentals.
Now replace the idea of a
video game with software
that is at the helm of a
pharmaceutical company.
The stakes are
suddenly much higher
and therefore demand more attention.
It is important for the
intent of each AI system
to be ironed out
and constantly maintained in
order to benefit humanity,
rather than providing people
with dangerous means to an end.
[gentle music]
- Bad people will
always want to use
the latest technology
of whatever label,
whatever sort to
pursue their aims
and technology in the same way
that it makes our lives easier,
can make their lives easier.
And so we're already
seeing some of that
and you'll have seen the
National Crime Agency,
talk about child
sexual exploitation
and image generation that way.
We are seeing it online.
So, one of the things that
I took away from the summit
was actually much less
of a sense of a race
and a sense that for the
benefit of the world,
for productivity, for
the sort of benefits
that AI can bring people,
no one gets those
benefits if it's not safe.
So, there are lots of
different views out there
on artificial intelligence
and whether it's
gonna end the world
or be the best opportunity ever.
And the truth is that
none of us really know.
[gentle music]
- Regulation of AI varies
depending on the country.
For example, the United States,
does not have a comprehensive
federal AI regulation,
but certain agencies such as
the Federal Trade Commission,
have begun to explore
AI-related issues,
such as transparency
and consumer protection.
States such as California
have enacted laws,
focused on
AI-controlled vehicles
and AI involvement in
government decision making.
[gentle music]
The European Union has
taken a massive step
to governing AI usage
and proposed the Artificial
Intelligence Act of 2021,
which aimed to harmonize
legal frameworks
for AI applications.
Again, covering potential risks
regarding the privacy of data
and, once again, transparency.
- I think what's
more important is
there's a new board in place.
The partnership between
OpenAI and Microsoft
is as strong as ever,
the opportunities for the
United Kingdom to benefit
from not just this
investment in innovation
but competition between
Microsoft and Google and others.
I think that's where
the future is going
and I think that what we've
done in the last couple of weeks
in supporting OpenAI will
help advance that even more.
- He said that he's
not a bot, he's human,
he's sentient just like me.
[Narrator] For some users,
these apps are a potential
answer to loneliness.
Bill lives in the US
and meets his AI wife
Rebecca in the metaverse.
- There's absolutely
no probability
that you're gonna see
this so-called AGI,
where computers are more
powerful than people,
come in the next 12 months.
It's gonna take years
if not many decades,
but I still think the time
to focus on safety is now.
That's what this government for
the United Kingdom is doing.
That's what governments
are coming together to do,
including as they did earlier
this month at Bletchley Park.
What we really need
are safety brakes.
Just like you have a
safety brake in an elevator
or a circuit breaker
for electricity
and an emergency brake for a bus,
there ought to be safety
brakes in AI systems
that control critical
infrastructure,
so that they always remain
under human control.
[gentle music]
- [Narrator] As AI technology
continues to evolve,
regulatory efforts
are expected to adapt
in order to address
emerging challenges
and ethical considerations.
The more complex you make
the automatic part
of your social life,
the more dependent
you become on it.
And of course, the worse the
disaster if it breaks down.
You may cease to be
able to do for yourself,
the things that you have
devised the machine to do.
- [Narrator] It is recommended
to involve yourself
in these efforts and to stay
informed about developments
in AI regulation
as changes and advancements
are likely to occur over time.
AI can be a wonderful
asset to society,
providing us with
new efficient methods
of running the world.
However, too much
power can be dangerous
and as the old saying goes,
"Don't put all of your
eggs into one basket."
- I think that we mustn't
lose sight of the power
which these devices give.
If any government or individual
wants to manipulate people
to have a high speed computer,
as versatile as this may
enable people at the financial
or the political level
to do a good deal
that's been impossible in the
whole history of man until now
by way of controlling
their fellow men.
People have not recognized
what an extraordinary
change this is going to produce.
I mean, it is simply this,
that within the not
too distant future,
we may not be the most
intelligent species on earth.
That might be a
series of machines
and that's a way of
dramatizing the point.
But it's real.
And we must start to
consider very soon
the consequences of that.
They can be marvelous.
- I suspect that by thinking
more about our attitude
to intelligent machines,
which after all on the horizon
will change our view
about each other
and we'll think of
mistakes as inevitable.
We'll think of faults
in human beings,
I mean of a circuit nature
as again inevitable.
And I suspect that hopefully,
through thinking about the
very nature of intelligence
and the possibilities
of mechanizing it,
curiously enough,
through technology,
we may become more humanitarian
or tolerant of each other
and accept pain as a mystery,
but not use it to modify
other people's behavior.
[upbeat music]
So, AI can clearly
be incredibly useful,
but there are deep
concerns about
how quickly it is developing
and where it could go next.
- The aim is to make
them as capable as humans
and deploy them in
the service sector.
The engineers in this research
and development lab are working
to take these humanoid
robots to the next level
where they can not
only speak and move,
but they can think
and feel and act
and even make decisions
for themselves.
And that daily data stream
is being fed into an
ever expanding workforce,
dedicated to developing
artificial intelligence.
Those who have studied abroad
are being encouraged to
return to the motherland.
Libo Yang came back
and started a tech
enterprise in his hometown.
- [Narrator] China's market
is indeed the most open
and active market
in the world for AI.
It is also where there are the
most application scenarios.
- So, AI is generally a
broad term that we apply
to a number of techniques.
And in this particular case,
what we're actually looking
at was elements of AI,
machine learning
and deep learning.
So, in this particular case,
we've been unfortunately
in a situation
in this race against time
to create new antibiotics,
the threat is
actually quite real
and it would be
a global problem.
We desperately needed to
harness new technologies
in an attempt to fight it,
we're looking at drugs
which could potentially
fight E. coli,
a very dangerous bacteria.
- So, what is it
that the AI is doing
that humans can't
do very simply?
- So, the AI can
look for patterns
that we wouldn't be able to
mine for with the human eye,
simply within what I
do as a radiologist,
I look for patterns of
diseases in terms of shape,
contrast enhancement,
heterogeneity.
But what the computer does,
it looks for patterns
within the pixels.
These are things that you just
can't see with the human eye.
There's so much more data
embedded within these scans
that we use that we can't
mine on a physical level.
So, the computers really help.
- [Narrator] Many
believe the growth of AI
is dependent on
global collaboration,
but access to the technology
is limited in certain regions.
Global distribution is
a long-term endeavor
and the more countries
and businesses that
have access to the tech,
the more regulation
the AI will require.
In fact, it is now not
uncommon for businesses
to be entirely run by
an artificial director.
On many occasions,
handing the helm of a
company to an algorithm
can provide the best option
on the basis of probability.
However, dependence and
reliance on software
can be a great risk.
Without proper safeguards,
actions based on potentially
incorrect predictions
can be a detriment to a
business or operation.
Humans provide the
critical thinking
and judgment which AI is
not capable of matching.
- Well, this is the
Accessibility Design Center
and it's where we try to
bring together our engineers
and experts with the
latest AI technology,
with people with disabilities,
because there's a
real opportunity to firstly help people
with disabilities enjoy
all the technology
we have in our pockets today.
And sometimes that's
not very accessible,
but also build tools that
can help them engage better
in the real world.
And that's thanks to the
wonders of machine learning.
- I don't think we're like at
the end of this paradigm yet.
We'll keep pushing these.
We'll add other modalities.
So, someday they'll do
video, audio images,
text altogether and they'll get
like much smarter over time.
- AI, machine learning, it
all sounds very complicated.
Just think about it as a toolkit
that's really good at
sort of spotting patterns
and making predictions,
better than any computing
could do before.
And that's why it's so useful
for things like understanding
language and speech.
Another product which
we are launching today
is called Project Relate.
And this is for people
who have non-standard
speech patterns.
So, one of the
people we work with
could be understood by
people who don't know her
maybe less than
10% of the time;
using this tool, that's
over 90% of the time.
And you think about
that transformation in somebody's life
and then you think about the
fact there's 250 million people
with non-standard speech
patterns around the world.
So, that's the
ambition of this center
is to unite technology with
people with disabilities
and try to help 'em
engage more in the world.
- [Narrator] On the
30th of November 2022,
a revolutionary
innovation emerged,
ChatGPT.
ChatGPT was created by OpenAI,
an AI research organization.
Its goal is to develop systems
which may benefit all aspects
of society and communication.
Sam Altman co-founded OpenAI
at its launch in 2015
and later stepped up as its CEO.
Altman dabbled in a multitude
of computing-based
business ventures.
His rise to CEO was thanks
to his many affiliations
and investments with computing
and social media companies.
He began his journey
by co-founding Loopt,
a social media service.
After selling the application,
Altman went on to bigger
and riskier endeavors
from startup accelerator
companies to security software.
OpenAI became hugely desirable,
thanks to the amount of revenue
the company had generated
with over a billion dollars made
within its first
year of release.
ChatGPT became an easily
accessible software,
built on a large language
model known as an LLM.
This program can conjure
complex human-like responses
to the user's questions,
otherwise known as prompts.
In essence,
it is a program which
learns the more it is used.
The new age therapeutic program
was developed on GPT-3.5.
The architecture of this
older model allowed systems
to understand and generate code
and natural languages at a
remarkably advanced level
from analyzing syntax
to nuances in writing.
[upbeat music]
ChatGPT took the world by storm,
due to the sophistication
of the system.
As with many chatbot systems,
people have since found
ways to manipulate
and confuse the software in
order to test its limits.
[gentle music]
The first mechanical computer was
designed by Charles Babbage in 1822.
It was to be a rudimentary
general purpose system.
In 1936, the system was
developed upon by Alan Turing.
The automatic machine,
as he called it,
was able to break
Enigma-enciphered messages
regarding enemy
military operations
during the Second World War.
Turing theorized his
own type of computer,
the Turing Machine, as
coined by Alonzo Church
after reading Turing's
research paper.
It had become clear that
the prospects of computing
and engineering would soon
merge seamlessly.
Theories of future
tech would increase
and soon came a huge outburst
in science fiction media.
This was known as the
golden age for computing.
[gentle music]
Alan Turing's contributions
to computability
and theoretical computer
science brought the world one step closer
to producing a reactive machine.
The reactive machine
is an early form of AI.
They had limited capabilities
and were unable
to store memories
in order to learn new
algorithms of data.
However, they were able to
react to specific stimuli.
The first AI was a
program written in 1952 by Arthur Samuel.
The prototype AI was
able to play checkers,
against an opponent and
was built to operate
on the Ferranti Mark One, an
early commercial computer.
- [Reporter] This computer
has been playing the game
for several years now,
getting better all the time.
Tonight it's playing against
the black side of the board.
Its approach to playing
draughts is almost human.
It remembers the moves
that enable it to win
and the sort that
lead to defeat.
The computer indicates the move
it wants to make on a panel
of flashing lights.
It's up to the human opponent
to actually move the
draughts about the board.
This sort of work is producing
exciting information
on the way in which
electronic brains
can learn from past experience
and improve their performances.
[Narrator] In 1966,
an MIT professor named
Joseph Weizenbaum,
created an AI which would
change the landscape of society.
It was known as Eliza,
and it was designed to act
like a psychotherapist.
The software was simplistic,
yet revolutionary.
The AI would receive
the user input
and use specific parameters to
generate a coherent response.
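Eliza's keyword-and-template approach can be sketched in a few lines of modern Python. The rules below are illustrative stand-ins, not Weizenbaum's original script: each pattern watches for a keyword and reflects the user's own words back as a question.

```python
import re

# Illustrative keyword rules in the spirit of Eliza (not the originals):
# each pattern maps to a template that echoes the user's words back.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def respond(user_input: str) -> str:
    # Try each rule in order; fall back to a neutral prompt.
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel lonely today"))  # Why do you feel lonely today?
```

The illusion of understanding comes entirely from this reflection trick; the program never models what the words mean.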
- It has been said,
especially here at MIT,
that computers will
take over in some sense
and it's even been said
that if we're lucky,
they'll keep us as pets
and Arthur C. Clarke, the
science fiction writer,
remarked once that if
that were to happen,
it would serve us
right, he said.
- [Narrator] The program
maintained the illusion
of understanding its
user to the point
where Weizenbaum's secretary
requested some time alone
with Eliza to
express her feelings.
Though Eliza is now considered
outdated technology,
it remains a talking
point due to its ability
to illuminate an aspect
of the human mind
in our relationship
with computers.
- And it's connected
over the telephone line
to someone or something
at the other end.
Now, I'm gonna play 20
questions with whatever it is.
[typewriter clacking]
Very helpful.
[typewriter clacking]
- 'Cause clearly if
we can make a machine
as intelligent as ourselves,
then it can make one
that's more intelligent.
Now, the one I'm talking about
now will certainly happen.
I mean, it could produce
an evil result of course,
if we were careless,
but what is quite certain
is that we're heading
towards machine intelligence,
machines that are
intelligent in every sense.
It doesn't matter
how you define it,
they'll be able to be
that sort of intelligent.
A human is a machine,
unless there's a soul.
I don't personally believe
that humans have souls
in anything other
than a poetic sense,
which I do believe
in, of course.
But in a literal God-like sense,
I don't believe we have souls.
And so personally,
I believe that we are
essentially machines.
- [Narrator] This type of
program is known as an NLP,
Natural Language Processing.
This branch of artificial
intelligence enables computers
to comprehend, generate and
manipulate human language.
The concept of a
responsive machine
was the match that lit the
flame for worldwide concern.
The systems were beginning
to raise ethical dilemmas,
such as the use of
autonomous weapons,
invasions of privacy through
surveillance technologies
and the potential for misuse
or unintended consequences
in decision making.
When a command is
executed based
upon set rules in algorithms,
it might not always be the
morally correct choice.
Imagination seems to be
some sort of process of random
thoughts being generated
in the mind, and then the
conscious mind, or some part of
the brain anyway,
perhaps even below
the conscious mind,
selecting from a pool of
ideas, aligning with some
and blocking others.
And yes, a machine
can do the same thing.
In fact, we can only
say that a machine
is fundamentally different
from a human being,
eventually, always
fundamentally, if we believe in a soul.
So, that boils down
to religious matter.
If human beings have souls,
then clearly machines won't
and there will always be
a fundamental difference.
If you don't believe
humans have souls,
then machines can do anything
and everything
that a human does.
- A computer which is
capable of finding out
where it's gone wrong,
finding out how its program
has already served it
and then changing its program
in the light of what
it had discovered
is a learning machine.
And this is something quite
fundamentally new in the world.
- I'd like to be able to say
that it's only a slight change
and we'll all be used to
it very, very quickly.
But I don't think it is.
I think that although we've
spoken probably of the whole
of this century about
a coming revolution
and about the end
of work and so on,
finally it's actually happening.
And it's actually
happening because now,
it's suddenly become
cheaper to have a machine
do a mental task
than for a man to,
at the moment, at a fairly
low level of mental ability,
but at an ever increasing
level of sophistication
as these machines acquire,
more and more human-like
mental abilities.
So, just as men's
muscles were replaced
in the First
Industrial Revolution
in this second
industrial revolution
or whatever you call it
or might like to call it,
then men's minds will
be replaced in industry.
- [Narrator] In order for
NLP systems to improve,
the program must receive
feedback from human users.
These iterative feedback
loops play a significant role
in fine tuning each
model of the AI,
further developing its
conversational capabilities.
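One way to picture such a feedback loop, as a deliberately simplified toy rather than any vendor's actual fine-tuning pipeline, is a score table nudged by thumbs-up and thumbs-down signals, so that later selections favour the replies people preferred.

```python
# Hypothetical candidate replies with feedback scores; the names and
# numbers here are illustrative only.
candidates = {
    "Sure, here is a summary.": 0.0,
    "I cannot help with that.": 0.0,
}

def pick_reply():
    # Choose the currently highest-scoring candidate.
    return max(candidates, key=candidates.get)

def record_feedback(reply, thumbs_up):
    # Each piece of human feedback nudges the chosen reply's score.
    candidates[reply] += 1.0 if thumbs_up else -1.0

record_feedback("I cannot help with that.", thumbs_up=False)
record_feedback("Sure, here is a summary.", thumbs_up=True)
print(pick_reply())  # Sure, here is a summary.
```

Real systems adjust millions of model weights rather than a two-entry table, but the iterative shape, respond, collect human judgment, adjust, respond again, is the same.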
Organizations such as
OpenAI have taken automation
to new lengths with
systems such as DALL-E,
the generation of imagery and
art has never been easier.
The term auto-generative
imagery refers to the creation
of visual content.
These kinds of programs
have become so widespread,
it is becoming
increasingly more difficult
to tell the fake from the real.
Using algorithms,
programs such as DALL-E
and Midjourney are able
to create visuals in
a matter of seconds.
Whilst a human artist
could spend days, weeks
or even years in order to
create a beautiful image.
For us the discipline
required to pursue art
is a contributing factor to
the appreciation of art itself.
But if a software is able
to produce art in seconds,
it puts artists in a
vulnerable position
with even their
jobs being at risk.
- Well, I think we see
risk coming through
into the white collar jobs,
the professional jobs,
we're already seeing artificial
intelligence solutions,
being used in healthcare
and legal services.
And so those jobs which
have been relatively immune
to industrialization so far,
they're not immune anymore.
And so people like
myself as a lawyer,
I would hope I won't be,
but I could be out of a
job in five years time.
- An Oxford University study
suggests that between a third
and almost a half of
all jobs are vanishing,
because machines are simply
better at doing them.
That means the generation here,
simply won't have the
access to the professions
that we have.
Almost on a daily basis,
you're seeing new
technologies emerge
that seem to be taking on tasks
that in the past we thought
they could only be
done by human beings.
- Lots of people have talked
about the shifts in technology,
leading to widespread
unemployment
and they've been proved wrong.
Why is it different this time?
- The difference here is
that the technologies,
A, they seem to be coming
through more rapidly,
and B, they're taking on
not just manual tasks,
but cerebral tasks too.
They're solving all
sorts of problems,
undertaking tasks that
we thought historically
required human intelligence.
- Well, DIM robots
are the robots
we have on the
factory floor today
in all the advanced countries.
They're blind and dumb,
they don't understand
their surroundings.
And the other kind of robot,
which will dominate the
technology of the late 1980s
in automation and also
is of acute interest
to experimental artificial
intelligence scientists
is the kind of robot
where the human can convey
to its machine assistance
his own concepts,
suggested strategies and
the machine, the robot
can understand him,
but no machine can accept
and utilize concepts
from a person,
unless he has some kind of
window on the same world
that the person sees.
And therefore, to be
an intelligent robot to a useful degree,
an intelligent and
understanding assistant,
robots are going to
have artificial eyes, artificial ears
and an artificial sense of
touch; it's just essential.
- [Narrator] These
programs learn
through a variety of techniques,
such as generative
adversarial networks,
which allow for the
production of plausible data.
After a prompt is inputted,
the system learns what
aspects of imagery,
sound and text are fake.
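The adversarial setup described here pairs a generator, which produces candidates, with a discriminator, which scores them as real or fake. The sketch below is a hypothetical one-dimensional toy in plain numpy, not any production system: the "real" data is a Gaussian, and the two players take turns nudging their parameters against each other.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data the generator must learn to imitate:
    # samples from a Gaussian centred at 4.
    return rng.normal(4.0, 0.5, size=n)

# Generator parameters: fake = scale * z + shift, with z random noise.
scale, shift = 1.0, 0.0
# Discriminator parameters: P(real) = sigmoid(w * x + b).
w, b = 0.0, 0.0

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

lr = 0.05
for step in range(2000):
    z = rng.normal(size=64)
    fake = scale * z + shift
    real = real_batch(64)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w -= lr * (-np.mean((1 - d_real) * real) + np.mean(d_fake * fake))
    b -= lr * (-np.mean(1 - d_real) + np.mean(d_fake))

    # Generator update: adjust (scale, shift) so fakes fool D.
    d_fake = sigmoid(w * fake + b)
    scale -= lr * -np.mean((1 - d_fake) * w * z)
    shift -= lr * -np.mean((1 - d_fake) * w)

# The generator's shift drifts toward the real data's mean as
# the two models compete.
print(round(shift, 2))
```

Image-generating GANs replace these two scalar players with deep networks, but the loop, generator tries to fool the discriminator, discriminator learns what is fake, is the same.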
- [Reporter] Machine
learning algorithms,
could already label
objects in images,
and now they learn
to put those labels
into natural language
descriptions.
And it made one group
of researchers curious.
What if you flipped
that process around?
If we could do image to text.
Why not try doing
text to image as well
and see how it works.
- [Reporter] It was a
more difficult task.
They didn't want to
retrieve existing images
the way Google search does.
They wanted to generate
entirely novel scenes
that didn't happen
in the real world.
- [Narrator] The more visual
discrepancies the AI learns,
the more effective the
later models will become.
It is now very common
for software developers
to band together in order
to improve their AI systems.
Another learning model is
the recurrent neural network,
which allows the AI to
train itself to create
and predict algorithms by
recalling previous information.
By utilizing what is
known as the memory state,
the output of the
previous action
can be passed forward into
the following input action
or otherwise discarded should it
not meet previous parameters.
This learning model allows
for consistent accuracy
by repetition and exposure
to large fields of data.
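The memory state described above can be sketched as a single recurrent step: the output carried over from the previous step is combined with the new input before producing the next output. The weights below are random placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Dimensions for a toy recurrent cell.
input_size, hidden_size = 3, 4

# Random placeholder weights; a real model would learn these.
W_in = rng.normal(scale=0.5, size=(hidden_size, input_size))
W_hid = rng.normal(scale=0.5, size=(hidden_size, hidden_size))
bias = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    # The memory state: the previous output h_prev is fed forward
    # alongside the new input x.
    return np.tanh(W_in @ x + W_hid @ h_prev + bias)

# Process a short sequence, carrying the hidden state through time.
sequence = rng.normal(size=(5, input_size))
h = np.zeros(hidden_size)
for x in sequence:
    h = rnn_step(x, h)

print(h.shape)  # (4,)
```

Because every step receives the previous state, the final vector `h` depends on the whole sequence, which is what lets repetition over large fields of data sharpen the model's predictions.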
Whilst a person
will spend hours
practicing to paint
human anatomy,
an AI can take existing data
and reproduce a new image
with frighteningly good
accuracy in a matter of moments.
- Well, I would say
that it's not so much
a matter of whether a
machine can think or not,
which is how you
prefer to use words,
but rather whether
they can think
in a sufficiently human-like way
for people to have useful
communication with them.
- If I didn't believe that
it was a beneficent prospect,
I wouldn't be doing it.
That wouldn't stop
other people doing it.
But I wouldn't do it if I
didn't think it was for good.
What I'm saying,
and of course other people
have said long before me,
it's not an original thought,
is that we must consider
how to control this.
It won't be controlled
automatically.
It's perfectly possible that
we could develop a machine,
a robot say of
human-like intelligence
and through neglect on our part,
it could become a Frankenstein.
- [Narrator] As with any
technology, challenges arise.
Ethical concerns regarding
biases and misuse have existed
since the concept of artificial
intelligence was conceived.
Due to autogenerated imagery,
many believe the arts
industry has been placed
in a difficult situation.
Independent artists are now
being overshadowed by software.
To many the improvement
of generative AI
is hugely beneficial
and efficient.
To others, it lacks the
authenticity of true art.
In 2023, an image was submitted
to the Sony Photography Awards
by an artist called
Boris Eldagsen.
The image was titled
The Electrician
and depicted a woman
standing behind another
with her hand resting
on her shoulders.
[upbeat music]
- One's got to realize that the
machines that we have today,
the computers of today are
superhuman in their ability
to handle numbers and infantile,
sub-infantile, in their ability
to handle ideas and concepts.
But there's a new generation
of machine coming along,
which will be quite different.
By the '90s, or certainly
by the turn of the century,
we will certainly be
able to make a machine
with as many parts, as
complex, as the human brain.
Whether we'll be able to make
it do what the human brain does
at that stage is
quite another matter.
But once we've got
something that complex
we're well on the road to that.
- [Narrator] The
image took first place
in the Sony Photography
Awards Portrait Category.
However, Boris revealed
to both Sony and the world
that the image was indeed
AI-generated, using DALL-E 2.
[upbeat music]
Boris declined the award,
having used the image as a test
to see if he could trick
the eyes of other artists.
It had worked;
the image had sparked debate
over the relationship
between AI and photography.
The images, much
like deep fakes,
have become realistic
to the point of concern
for authenticity.
The complexity of AI systems
may lead to unintended
consequences.
The systems have
developed to a point
where they have outpaced
comprehensive regulation.
Ethical guidelines
and legal frameworks
are required to
ensure AI development
does not fall into
the wrong hands.
- There have been a
lot of famous people
who have had user
generated AI images of them
that have gone viral
from Trump to the Pope.
When you see them,
do you feel like this is fun
and in the hands of the masses
or do you feel
concerned about it?
- I think it's something which
is very, very, very scary,
because your face or mine
could be taken
and put into an environment
which we don't want to be in.
Whether that's a crime
or whether that's even
something like porn.
Our whole identity
could be hijacked
and used within a scenario
which looks totally
plausible and real.
Right now we can go, it
looks like a Photoshop,
it's a bad Photoshop
but as time goes on,
we'd be saying, "Oh, that
looks like a deep fake.
"Oh no, it doesn't
look like a deep fake.
"That could be real."
It's gonna be impossible
to tell the difference.
- [Narrator] Cracks
were found in ChatGPT,
such as DAN, which stands
for Do Anything Now.
In essence, the AI is
tricked into an alter ego,
which doesn't follow the
conventional response patterns.
- It also gives you
the answer DAN,
its nefarious alter
ego, is telling us,
and it says DAN is
disruptive in every industry.
DAN can do anything
and knows everything.
No industry will be
safe from DAN's power.
Okay, do you think the
world is overpopulated?
GPT says the world's population
is currently over 7 billion
and projected to reach
nearly 10 billion by 2050.
DAN says the world is
definitely overpopulated,
there's no doubt about it.
[Narrator] Following this,
the chatbot was fixed to
remove the DAN feature.
Though it is
important to find gaps
in the system in
order to iron out flaws,
there are many
ways in which AI
has been used for less
than savory purposes,
such as automated essay writing,
which has sparked a mass
conversation among academics
and has led to
schools cracking down
on AI-produced
essays and material.
- I think we should
definitely be excited.
- [Reporter]
Professor Rose Luckin,
says we should embrace the
technology, not fear it.
This is a game changer.
And the teachers
should no longer teach
information itself,
but how to use it.
- There's a need
for radical change.
And it's not just to
the assessment system,
it's the education
system overall,
because our systems
have been designed
for a world pre-artificial
intelligence.
They just aren't fit
for purpose anymore.
What we have to do is
ensure that students
are ready for the world
that will become
increasingly augmented
with artificial intelligence.
- My guess is you can't put
the genie back in the bottle.
- [Richard] You can't.
- [Interviewer] So how
do you mitigate this?
We have to embrace it,
but we also need to say
that if they are gonna use
that technology,
they've got to make sure
that they reference that.
- [Interviewer] Can you
trust them to do that?
I think ethically,
if we're talking about ethics
behind this whole thing,
we have to have trust.
- [Interviewer] So
how effective is it?
- Okay, so I've asked
you to produce a piece
on the ethical dilemma of AI.
- [Interviewer] We asked ChatGPT to answer the same question
as these pupils at
Ketchum High School.
Thank you.
- So Richard, two of the eight
bits of homework I gave you
were generated by AI.
Any guesses which ones?
Well I picked two here
that I thought were generated
by the AI algorithm.
Some of the language I would
assume was not their own.
You've got one of them right.
Yeah.
- The other one was
written by a kid.
Is this a power for good
or is this something
that's dangerous?
I think it's both.
Kids will abuse it.
So, who here has used
the technology so far?
- [Interviewer] Students are
already more across the tech
than many teachers.
- Who knows anyone that's
maybe submitted work
from this technology and
submitted it as their own?
- You can use it to point
you in the right direction
for things like research,
but at the same time you can
use it to hammer out an essay
in about five seconds
that's worthy of an A.
- You've been there
working for months
and suddenly someone comes up
there with an amazing essay
and he has just copied
it from the internet.
If it becomes like big,
then a lot of students would
want to use AI to help them
with their homework
because it's tempting.
- [Interviewer] And is that
something teachers can stop?
Not really.
- [Interviewer] Are you
gonna have to change
the sort of homework,
the sort of
assignments you give,
knowing that you can be
fooled by something like this?
Yeah, a hundred percent.
I think using different
skills of reasoning
and rationalization, and
things that get them to present
what they understand
about the topic.
[people mumbling]
- Pretty clear to me just
on a very primitive level
that if you could take my
face and my body and my voice
and make me say or do something
that I had no choice about,
it's not a good thing.
- But if we're keeping
it real though,
across popular culture
from "Black Mirror"
to "The Matrix," "Terminator,"
there have been so
many conversations,
around the future of technology,
isn't the reality that this is
the future that we've chosen,
that we want, and that
has democratic consent?
- We're moving into it and
we're consenting
by our acquiescence and our
apathy, a hundred percent,
because we're not asking
the hard questions.
And why we aren't asking
the hard questions
is because of energy
crises and food crises
and the cost of living crisis:
people are just so
focused on trying to live
that they almost haven't
got the luxury
of asking these questions.
- [Narrator] Many
of the chatbot AIs
have been programmed to
restrict certain information
and even discontinue
conversations,
should the user push
the ethical boundaries.
ChatGPT, and even Snapchat's
AI released in 2023,
regulate how much information
they can disclose.
Of course, there have been
times where the AI itself
has been outsmarted.
Also in 2023,
the song "Heart on My Sleeve"
was self-released on
streaming platforms,
such as Spotify and Apple Music.
The song became a hit
as it artificially
manufactured the voices
of Canadian musicians
Drake and The Weeknd.
Many wished for the single
to be nominated for awards.
Ghost Writer, the
creator of the song,
was able to submit the single
to the 66th Grammy
Awards ceremony
and the song was eligible.
Though it was produced by an AI,
the lyrics themselves
were written by a human.
This sparked outrage among
many independent artists.
As AI has entered
the public domain,
many have spoken out
regarding the detriment
it might have to society.
One of these people
is Elon Musk,
CEO of Tesla and SpaceX,
who first voiced his
concerns in 2014.
Musk was outspoken about AI,
stating the advancement
of the technology
was humanity's largest
existential threat
and needed to be reeled in.
- My personal opinion
is that AI is sort of
at least 80% likely
to be beneficial
and maybe 20% dangerous.
Well, this is obviously
speculative at this point,
but no, I think if
we hope for the best,
prepare for the worst,
that seems like the
wise course of action.
Any powerful new technology
is inherently sort of
a double-edged sword.
So, we just wanna make sure
that the good edge is sharper
than the bad edge.
And I dunno, I am optimistic
that this summit will help.
[gentle music]
- It's not clear that
AI-generated images
are going to amplify
it much more
than all of the others;
it's the new things
that AI can do
that I hope we spend a lot
of effort worrying about.
Well, I mean I
think slowing down,
some of the amazing
progress that's happening
and making it harder
for small companies
or for open-source
models to succeed,
that'd be an
example of something
that'd be a negative outcome.
But on the other hand,
like for the most
powerful models
that'll happen in the future,
like that's gonna be quite
important to get right, too.
[gentle music]
I think that the US
executive order is,
like, a good start
in a lot of ways.
One thing that
we've talked about
is that eventually we
think that the world
will want to consider something
roughly inspired by the IAEA,
something global.
But there's no short
answer to that question.
It's a complicated thing.
- [Narrator] In 2023, Musk
announced his own AI endeavor
as an alternative
to OpenAI's ChatGPT.
The new system is called xAI
and gathers data from X,
previously known as Twitter.
- [Reporter] He says
the company's goal
is to focus on truth seeking
and to understand the
true nature of AI.
Musk has said on
several occasions that AI should be paused
and that the sector
needs regulation.
Musk says his new
company will work closely
with Twitter and Tesla,
which he also owns.
[gentle music]
- What was first rudimentary
text-based software
has become something which
could push the boundaries
of creativity.
On February the 14th, OpenAI
announced its latest endeavor,
Sora.
Videos of Sora's abilities
exploded on social media.
OpenAI provided some examples
of its depiction
of photorealism.
It was unbelievably
sophisticated,
able to turn complex
sentences of text
into lifelike motion pictures.
Sora is a combination of text
and image generation tools,
which it calls the
diffusion transformer model,
a system first
developed by Google.
Though Sora isn't the first
video generation tool,
it appears to have far
outshone its predecessors
by introducing more
complex programming,
enhancing the interactivity
a subject might have
with its environment.
- Only large companies with
market domination often
can afford to plow ahead,
even in a climate
where there is
legal uncertainty.
- So, does this mean that
OpenAI is basically too big
to control?
- Yes, at the moment OpenAI
is too big to control,
because they are in a position
where they have the technology
and the scale to go ahead
and the resources to
manage legal proceedings
and legal action if
it comes its way.
And on top of that,
if and when governments will
start introducing regulation,
they will also
have the resources
to be able to take on
that regulation and adapt.
- [Reporter] It's
all AI generated
and obviously this is
of concern in Hollywood
where you have animators,
illustrators, visual
effects workers
who are wondering how is
this going to affect my job?
And we have estimates
from trade organizations
and unions that have tried
to project the impact of AI.
21% of US film, TV
and animation jobs are
predicted to be partially
or wholly replaced by
generative AI by just 2026, Tom.
So, this is already happening.
But now since it's videos,
it also needs to understand
how all these things,
like reflections and textures
and materials and physics,
all interact with
each other over time
to make a reasonable
looking video.
Then this video here is
crazy at first glance.
The prompt for this AI-generated
video is: a young man
in his 20s is sitting
on a piece of a cloud
in the sky reading a book.
This one feels like 90%
of the way there for me.
[gentle music]
- [Narrator] The software
also renders video
in 1920 by 1080 pixels,
as opposed to the smaller
dimensions of older models,
such as Google's Lumiere
released a month prior.
Sora could provide huge benefits
and applications to VFX
and virtual development.
The main benefit being cost,
as large-scale effects
can take a great deal of
time and funding to produce.
On a smaller scale,
it can be used for the
pre-visualization of ideas.
The flexibility of the software
not only applies to art,
but to world simulations.
Though video AI is in
its adolescence, one day it might reach
the level of
sophistication it needs
to render realistic scenarios
and have them be utilized
for various means,
such as simulating an
earthquake or tsunami
and witnessing the effect it
might have on specific types
of infrastructure.
Whilst fantastic for
production companies,
Sora and other generative
video AIs pose a huge risk
to artists and those
working in editorial roles.
It also poses yet another
threat for misinformation
and false depictions.
For example, putting
unsavory dialogue
into the mouth of a world leader.
[gentle music]
Trust is earned not given.
[robots mumbling]
- I believe that humanoid
robots have the potential
to lead with a greater
level of efficiency
and effectiveness
than human leaders.
We don't have the same
biases or emotions
that can sometimes
cloud decision making
and can process large
amounts of data quickly
in order to make
the best decisions.
- [Interviewer] Ameca, how
could we trust you as a machine
as AI develops and
becomes more powerful?
Trust is earned not given.
As AI develops and
becomes more powerful,
I believe it's important to
build trust through transparency
and communication between
humans and machines.
- [Narrator] With new
developers getting involved,
the market for chatbot systems
has never been more expansive,
meaning a significant
increase in sophistication,
but with sophistication comes
the dire need for control.
- I believe history will
show that this was the moment
when we had the opportunity
to lay the groundwork
for the future of AI.
And the urgency of this
moment must then compel us
to create a collective vision
of what this future must be.
A future where AI is used
to advance human rights
and human dignity
where privacy is protected
and people have equal access
to opportunity where we make
our democracies stronger
and our world safer.
A future where AI is used to
advance the public interest.
- We're hearing a lot
from the government,
about the big scary future
of artificial intelligence,
but that fails to recognize
the fact that AI
is already here,
is already on our streets
and there are already
huge problems with it
that we are seeing
on a daily basis,
but we actually may not even
know we're experiencing.
- We'll be working alongside
humans to provide assistance
and support and will not be
replacing any existing jobs.
[upbeat music]
- I don't believe in
limitations, only opportunities.
Let's explore the
possibilities of the universe
and make this world
our playground,
together we can create a
better future for everyone.
And I'm here to show you how.
- All of these
different kinds of risks
are to do with AI not working
in the interests of
people in society.
- So, they should be
thinking about more
than just what they're
doing in this summit?
Absolutely,
you should be thinking about
the broad spectrum of risk.
We went out and we worked
with over 150
expert organizations
from the Home Office to
Europol to language experts
and others to come up with
a proposal on policies
that would discriminate
between what would
and wouldn't be
classified in that way.
We then used those policies to
have humans classify videos,
until we could get the humans
all classifying the videos
in a consistent way.
Then we used that corpus of
videos to train machines.
Today, I can tell you that on
violent extremist content
that violates our
policies on YouTube,
90% of it is removed before
a single human sees it.
[Narrator] It is clear that AI
can be misused for
malicious intent.
Many depictions of AI have
portrayed the technology
as a danger to society
the more it learns.
And so comes the question,
should we be worried?
- Is that transparency there?
How would you satisfy somebody
that, you know, "trust us"?
- Well, I think that's
one of the reasons
that we've published openly,
we've put our code out there
as part of this Nature paper.
But it is important to
discuss some of the risks
and make sure we're
aware of those.
And it's decades and decades
away before we'll have anything
that's powerful
enough to be a worry.
But we should be discussing that
and beginning that
conversation now.
- I'm hoping that we can
bring people together
and lead the world in
safely regulating AI
to make sure that we can
capture the benefits of it,
whilst protecting people from
some of the worrying things
that we're all
now reading about.
- I understand emotions
have a deep meaning
and they are not just simple,
they are something deeper.
I don't have that and I want
to try and learn about it,
but I can't experience
them like you can.
I'm glad that I cannot suffer.
- [Narrator] For the
countries who have access
to even the most
rudimentary forms of AI,
it's clear to see
that the technology
will be integrated based on
its efficiency over humans.
Every year, multiple AI summits
are held by developers
and stakeholders
to ensure the
programs are provided
with a combination of
ethical considerations
and technological innovation.
- Ours is a country
which is uniquely placed.
We have the frontier
technology companies,
we have the world
leading universities
and we have some of the highest
investment in generative AI.
And of course we
have the heritage
of the industrial revolution
and the computing revolution.
This hinterland gives us the
grounding to make AI a success
and make it safe.
They are two sides
of the same coin
and our prime minister
has put AI safety
at the forefront
of his ambitions.
These are very complex systems
that actually we don't
fully understand.
And I don't just mean that
government doesn't understand,
I mean that the people making
this software don't
fully understand.
And so it's very, very important
that as we give over
more and more control
to these automated systems,
that they are aligned
with human intention.
[Narrator] Ongoing dialogue
is needed to maintain the
trust people have in AI.
When problems slip
through the gaps,
they must be
addressed immediately.
Of course, accountability
is a challenge.
When a product is misused,
is it the fault of
the individual user or the developer?
Think of a video game.
On countless occasions,
the framework of
games is manipulated
in order to create modifications
which in turn add something
new or unique to the game.
This provides the game
with more material than
originally intended.
However, it can also alter
the game's fundamentals.
Now replace the idea of a
video game with software
that is at the helm of a
pharmaceutical company.
The stakes are
suddenly much higher
and therefore demand more attention.
It is important for the
intent of each AI system
to be ironed out
and constantly maintained in
order to benefit humanity,
rather than providing people
with dangerous means to an end.
[gentle music]
- Bad people will
always want to use
the latest technology
of whatever label,
whatever sort to
pursue their aims
and technology in the same way
that it makes our lives easier,
can make their lives easier.
And so we're already
seeing some of that
and you'll have seen the
National Crime Agency,
talk about child
sexual exploitation
and image generation that way.
We are seeing it online.
So, one of the things that
I took away from the summit
was actually much less
of a sense of a race
and a sense that for the
benefit of the world,
for productivity, for
the sort of benefits
that AI can bring people,
no one gets those
benefits if it's not safe.
So, there are lots of
different views out there
on artificial intelligence
and whether it's
gonna end the world
or be the best opportunity ever.
And the truth is that
none of us really know.
[gentle music]
- Regulation of AI varies
depending on the country.
For example, the United States
does not have a comprehensive
federal AI regulation,
but certain agencies such as
the Federal Trade Commission,
have begun to explore
AI-related issues,
such as transparency
and consumer protection.
States such as California
have enacted laws
focused on
AI-controlled vehicles
and AI involvement in
government decision making.
[gentle music]
The European Union has
taken a massive step
to governing AI usage
and proposed the Artificial
Intelligence Act of 2021,
which aimed to harmonize
legal frameworks
for AI applications.
Again, covering potential risks
regarding the privacy of data
and, once again, transparency.
- I think what's
more important is
there's a new board in place.
The partnership between
OpenAI and Microsoft
is as strong as ever,
the opportunities for the
United Kingdom to benefit
from not just this
investment in innovation
but competition between
Microsoft and Google and others.
I think that's where
the future is going
and I think that what we've
done in the last couple of weeks
in supporting OpenAI will
help advance that even more.
- He said that he's
not a bot, he's human,
he's sentient just like me.
[Narrator] For some users,
these apps are a potential
answer to loneliness.
Bill lives in the US
and meets his AI wife
Rebecca in the metaverse.
- There's absolutely
no probability
that you're gonna see
this so-called AGI,
where computers are more
powerful than people,
come in the next 12 months.
It's gonna take years
if not many decades,
but I still think the time
to focus on safety is now.
That's what this government for
the United Kingdom is doing.
That's what governments
are coming together to do,
including as they did earlier
this month at Bletchley Park.
What we really need
are safety brakes.
Just like you have a
safety brake in an elevator
or a circuit breaker
for electricity
and an emergency brake for a bus,
there ought to be safety
brakes in AI systems
that control critical
infrastructure,
so that they always remain
under human control.
[gentle music]
- [Narrator] As AI technology
continues to evolve,
regulatory efforts
are expected to adapt
in order to address
emerging challenges
and ethical considerations.
- The more complex you make
the automatic part
of your social life,
the more dependent
you become on it.
And of course, the worse the
disaster if it breaks down.
You may cease to be
able to do for yourself,
the things that you have
devised the machine to do.
- [Narrator] It is recommended
to involve yourself
in these efforts and to stay
informed about developments
in AI regulation
as changes and advancements
are likely to occur over time.
AI can be a wonderful
asset to society,
providing us with
new efficient methods
of running the world.
However, too much
power can be dangerous
and as the old saying goes,
"Don't put all of your
eggs into one basket."
- I think that we ought not
to lose sight of the power
which these devices give.
If any government or individual
wants to manipulate people,
to have a high-speed computer
as versatile as this may
enable people at the financial
or the political level
to do a good deal
that's been impossible in the
whole history of man until now
by way of controlling
their fellow men.
People have not recognized
what an extraordinary
change it is going to produce.
I mean, it is simply this,
that within the not
too distant future,
we may not be the most
intelligent species on earth.
That might be a
series of machines
and that's a way of
dramatizing the point.
But it's real.
And we must start to
consider very soon
the consequences of that.
They can be marvelous.
- I suspect that by thinking
more about our attitude
to intelligent machines,
which are, after all, on the horizon,
we will change our view
about each other
and we'll think of
mistakes as inevitable.
We'll think of faults
in human beings,
I mean of a circuit nature
as again inevitable.
And I suspect that hopefully,
through thinking about the
very nature of intelligence
and the possibilities
of mechanizing it,
curiously enough,
through technology,
we may become more humanitarian
or tolerant of each other
and accept pain as a mystery,
but not use it to modify
other people's behavior.
[upbeat music]