Cyberwar (2016) s01e15 Episode Script

The Future of War

Scientists design machines that think for themselves...
I guess I have an aversion to the word "drone". It implies something mindless.
...with the potential to become weaponized...
The military would be crazy not to take these technologies and utilize them in ways that would help us perform better on the battlefield.
...or even hacked.
There will always be vulnerabilities. You can't design them out of your system just with sheer force of will and intelligence.
But how will it change the way we fight wars?
It is a fantasy to think that you can delegate the decision over life and death to a robot.
50 years ago, it was pure science fiction to imagine that a U.S. soldier sitting in a bunker in Nevada could remotely pilot a plane flying thousands of miles away, and then drop a Hellfire missile on an enemy target.
But this is reality in 2016.
Today, unmanned aerial vehicles, or UAVs, are a vital part of warfare.
They're in the stockpiles of more than two dozen nations, and have been used to target and kill thousands around the world.
The next generation of drone technology is even more sophisticated.
I found it here, in this idyllic little field just outside of Budapest, Hungary.
When you look at a flock of starlings in the sky, their motion is based on very simple rules.
The whole pattern of their motion is some kind of meta-level intelligence, or collective intelligence, and the same kind of thing applies to drones as well.
When I see the whole flock flying, it's always magical for me after so many years.
This is Gabor Vasarhelyi.
He's part of a team developing technology they hope will be used in agriculture or search and rescue.
They're working on drone swarms, which is to say a flock of autonomous drones in the sky, moving and making decisions as a posse of like-minded robots.
Unlike the typical drones used today, there's no human operator directing their movement individually.
Oh, now you're hearing them kinda creep up, eh? Holy...! These machines are given a set of instructions, then they figure out how to execute them as a team, solving problems in real time.
That beacon in my hands makes the drones track whoever's holding it, wherever they go.
Notice how they coordinate and keep their distance from one another automatically.
Ah! How are they communicating with one another? The drones have a local communication network.
Every single drone just sends out some info about itself, and all the drones that are close enough to receive it integrate that info into their decisions.
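To get a feel for that rule, here's a minimal sketch of how a drone might fold the broadcasts it can hear into its next move: it steers toward the hand-held beacon while pushing away from any neighbour that gets too close. This is my own illustration, not the team's actual flocking code, and the range, spacing, and step values are assumed.

```python
# A minimal sketch (not the team's actual code) of the local-communication rule
# described above: each drone broadcasts its position, every drone within radio
# range hears it, and each drone steers toward an assumed beacon while keeping
# a minimum separation from the neighbours it can hear.
import numpy as np

COMM_RANGE = 20.0   # metres a broadcast can reach (assumed)
MIN_SEP = 5.0       # desired spacing between drones (assumed)
STEP = 0.5          # metres moved per decision step (assumed)

def next_positions(positions, beacon):
    """One decision step for the whole swarm, using only local information."""
    positions = np.asarray(positions, dtype=float)
    beacon = np.asarray(beacon, dtype=float)
    updated = []
    for p in positions:
        # "Broadcasts" this drone can hear: neighbours within COMM_RANGE.
        offsets = positions - p
        dists = np.linalg.norm(offsets, axis=1)
        neighbours = (dists > 0) & (dists < COMM_RANGE)

        # Attraction toward the hand-held beacon.
        to_beacon = beacon - p
        direction = to_beacon / (np.linalg.norm(to_beacon) + 1e-9)

        # Repulsion from any neighbour that is closer than MIN_SEP.
        for off, d in zip(offsets[neighbours], dists[neighbours]):
            if d < MIN_SEP:
                direction -= off / (d * d)

        norm = np.linalg.norm(direction)
        updated.append(p + STEP * direction / (norm + 1e-9))
    return np.array(updated)

# Example: four drones converge on a beacon without piling into each other.
swarm = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
for _ in range(100):
    swarm = next_positions(swarm, beacon=[30.0, 30.0])
print(swarm)
```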
Why do you think the military's interested in this kind of technology? If you have one big aircraft and someone shoots it, then it's a lot of damage and then your whole mission is gone.
When you have 100 drones instead and someone shoots one, then one drone is taken away and the rest can do the same job.
That's how mosquitos work, for example.
Like there are many small mosquitos, everybody is just sipping a little bit of blood, but the whole species goes on.
Air Forces of the future may be stoked on the idea of drone swarms, but what about the navies? Here at a NATO research facility in Italy, scientists are essentially developing autonomous unmanned subs.
But signals travel more slowly underwater, and that makes it harder to create a submarine swarm.
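How much more slowly? A quick back-of-the-envelope comparison, using my own assumed numbers rather than NATO's, shows why acoustic links make underwater coordination so hard: sound in seawater travels roughly 200,000 times slower than a radio signal in air, so even vehicles a few kilometres apart hear each other seconds later.

```python
# A rough comparison (assumed figures, not NATO's) of acoustic vs. radio latency.
SPEED_OF_SOUND_WATER = 1_500       # m/s, typical value for seawater
SPEED_OF_LIGHT = 300_000_000       # m/s, radio propagation in air (approx.)
DISTANCE = 10_000                  # metres between two vehicles (assumed)

acoustic_delay = DISTANCE / SPEED_OF_SOUND_WATER   # about 6.7 seconds one way
radio_delay = DISTANCE / SPEED_OF_LIGHT            # about 0.03 milliseconds

print(f"acoustic: {acoustic_delay:.2f} s, radio: {radio_delay * 1000:.3f} ms")
```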
One of the things we're trying to do is create the internet for underwater robots.
So we have the internet on land, and now we're getting the Internet of Things.
Well, we're doing Internet of Things for underwater things.
So, how is the internet underwater? It sucks!
John Potter is the scientist in charge of strategic development.
He hopes that the drones they're developing here will be used to map the oceans.
So when it comes to autonomous or when it comes to, let's say, drones in the water, what is coming over the horizon? I guess I have an aversion to the word "drone".
It implies something mindless.
From the outset, there was never an option to go the "drone" route, of having a dumb device.
Because once you put it in the sea, and it dives and it's gone more than 100 metres away from you, you don't know what it's up to.
And if you ever want to see it come back you want to have something a little smarter than that.
So what would you call them, unmanned? So these vehicles are a step forward in autonomy with respect to what's typically in operation at the moment.
They're able to adaptively change what they are doing.
So you can put vehicles in and have them work as a team.
And if one vehicle sees something and it knows it has a team member with maybe a high-resolution camera or imaging system that's just over here somewhere, and it knows how far away it is, it says, "Hey, Bill, come and take a look at this, I think I found a so-and-so. You wanna check that out, see what you think of it?"
And I guess that's one of the other things people are going to wonder: could an underwater system like that be weaponized at some point?
That's not something we are really working on here.
How these autonomous abilities eventually translate into operational systems depends on lots of other folk and organizations within NATO and elsewhere.
It is something that's being considered.
But I would like to see all the stakeholders - which basically means all of society - become informed, well informed, and think carefully about this.
What is it that we actually want from autonomous systems? And that really hit me.
The people developing these systems aren't the ones who will decide how they're used in military conflict.
But whether or not you're comfortable with autonomous drone swarms, they're coming.
I'm in Portugal, heading out to a NATO research ship in the Atlantic to get a first-hand view of underwater drone testing.
I'm gonna have to climb that puppy right there, like Blackbeard or something.
This is more or less our control centre where we keep the picture of everything that we have deployed at sea.
This here is the position of the ship, this is where we are now.
You'll see here, these green points are our field of deployed assets for the current tests we're doing.
This is basically your ocean lab.
It's our ocean lab, exactly.
That's a good way to put it, it's our ocean lab.
Joao Alves is the Coordinator for Underwater Communication.
He and his crew spend weeks at sea experimenting with things like submarine communication and anti-sub warfare tech.
"Time Bandit.
" So these are the anti-submarine warfare ones? These are the ones that we employ for our multistatic anti-submarine warfare missions, yes indeed.
Now, I know these machines right now are doing nothing more than reconnaissance or surveillance, or even just mapping, or anti-submarine warfare in terms of detecting possible enemy craft. But do you think at any point these things could be weaponized? Do you think that's a possibility? We, as a science-based research centre, are interested in developing the autonomy, the capabilities of these machines.
Then, I mean, it's totally out of our scope, the usage of... We are very excited about the examples you just gave, on improving the reconnaissance, improving the mapping capabilities. Other than that, I mean, we...
You're not touching the rest? No, not at all. I mean, it's not even our interest.
Joao says he's not interested in weaponizing these drones.
But like other people working on the same autonomous machines, he doesn't decide how they're ultimately used.
That's up to the military.
Unlike the carpet bombings of the past, drone strikes are supposed to be precise, limiting civilian casualties.
That said, it's tough to know exactly how many people around the world were killed by drones in the past decade.
Estimates range from the thousands to the tens of thousands, and it's even harder to figure out how many of those killed or injured were civilians.
But it's not just the numbers that alarm some critics.
Have you seen a US president go on TV to say, "Tonight, I ordered airstrikes in Libya"? Or Pakistan or Yemen? They don't do that anymore.
Because something about drone technology and other weapons technology has enabled US presidents and politicians to basically shift what the norm is.
Naureen Shah is the director of Amnesty International USA's Security and Human Rights Program.
She's a vocal opponent of the US military's targeted killing campaigns, and worries about the autonomous technologies in development.
So the idea that this could reduce civilian casualties to you is impossible? Our concern is it actually would increase civilian casualties, increase the risk of civilian casualties.
I of course agree that if we can keep people out of harm's way, that's vital.
But if you don't have governments having to weigh the costs to their own citizens, when they decide to go to war, then you're really making it so that their stakes are a lot lower.
And that could mean that the US and other governments are just engaging a lot more in warfare than they did before, just not calling it warfare, and we already see that.
The US is using lethal force right now in Libya, Syria, Iraq, Afghanistan, Yemen, Pakistan, Somalia.
There's something kind of invisible about the use of autonomous weapons, or robots.
Something that enables policy makers to think, well, we could use lethal force in a surgical way, in a limited way, and we wouldn't even have to have a big public debate about it.
We're supposedly just at war all the time.
But at what point is this technology just inevitable? One of the things that is so difficult is that technology is fascinating for all of us.
And it's so fascinating that it creates almost a kind of glee.
It's a fixation that our generation has on the possibilities provided by technology: that it can make us better than our own selves, that somehow a robot is more selfless than a human being, doesn't have the prejudices of a human being, and can solve the fundamental gruesomeness of war.
It is a fantasy to think that you can delegate the decision over life and death to a robot and somehow that inherently makes it more humane and more precise and more lawful.
Because there's nothing that would give us any reason to believe that a robot has human empathy, that it has the ability to make a judgment about who is a civilian and who is not a civilian, and whether or not in those particular circumstances a civilian really poses a threat.
But some scientists completely disagree with Naureen, and think that it is in fact possible to program a robot to act in a more humane, precise and lawful way than a human soldier.
And here on the Georgia Tech campus, they tried to prove it.
So if you did something bad, and you felt guilty about it, you would be less likely to do it again.
We want the robot to experience... to behave in a similar way, right.
The robots... None of these robots feel emotions, okay? They don't feel anything.
They're not sentient.
But people can perceive them as feeling things.
Ron Arkin is a roboticist and professor.
He's developing technology designed to program ethics into robots.
There are issues associated with human war fighters who occasionally are careless, make mistakes, and in some cases commit atrocities.
So you really think that you can program a robot to be a better and more efficient killer than a human being, and without making the same kind of mess ups that we might see, say in friendly fire or civilian casualties? I'm not interested in making them better and more efficient killers; I'm interested in making them better protectors of non-combatants and better protectors of civilians while they are conducting missions than human war fighters are.
So my goal is to make them better adhere to international humanitarian law as embodied in the Geneva conventions and the rules of engagement.
You cannot shoot in a no-kill zone, you cannot shoot people that have surrendered, you cannot carry out summary executions.
These systems should have the right to refuse an order as well.
- Really? - Yeah.
This is not just a decision of when to fire, it's also a decision of when not to fire.
So that if someone tells it to attack something that is against international humanitarian law, and its programming tells it that this is against international humanitarian law, it should not engage that particular target.
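As a toy illustration of that "decision of when not to fire", here's a sketch of the kind of hard constraint check that would sit between an order and the trigger. It's my own hypothetical simplification, not Arkin's actual system; the rules simply mirror the ones he names above.

```python
# A toy illustration (hypothetical, not Arkin's actual software) of a hard
# constraint filter: an engagement order is checked against programmed rules,
# and the system refuses any order that breaks one of them.
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool
    has_surrendered: bool
    in_no_kill_zone: bool

def may_engage(target):
    """Return (permitted, reason), refusing any order that breaks a hard rule."""
    if target.in_no_kill_zone:
        return False, "refused: target is inside a designated no-kill zone"
    if target.has_surrendered:
        return False, "refused: target has surrendered"
    if not target.is_combatant:
        return False, "refused: target is not identified as a combatant"
    return True, "engagement permitted under programmed constraints"

# Example: an order against someone who has surrendered is refused outright.
print(may_engage(Target(is_combatant=True, has_surrendered=True, in_no_kill_zone=False)))
```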
So will these weapon systems, could they be like the Tesla self-driving car? You know, you won't get into as many accidents.
Yes, and that's the argument that's made for self-driving cars.
It's the same sort of thing.
They say that human beings are the most dangerous things on the road because we get angry, we drink, we are distracted.
It'd be better to have the robots driving us there.
Look at 9/11.
There was no reason an aircraft should've crashed into those buildings.
It's an easy task for a control system, even then, to be able to change its altitude when it recognizes it's on a collision course and avoid that particular object.
We chose not to do that.
We choose to trust human beings over and over and over again, but that's not always the best solution.
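Here's a minimal sketch of the kind of avoidance rule Arkin is describing. It's my own simplified illustration, not any real autopilot's logic: the control system projects its position a short distance ahead, and if the projection enters a restricted zone it overrides the current command with a climb.

```python
# A simplified sketch (my illustration, not a real autopilot) of an automatic
# avoidance rule: project the aircraft's straight-line path ahead, and if that
# projection enters a circular restricted zone, override with a climb command.
def will_enter_zone(pos, velocity, zone_center, zone_radius, horizon_s=30):
    """Check whether the projected path enters the no-fly zone within the horizon."""
    for t in range(horizon_s + 1):
        x = pos[0] + velocity[0] * t
        y = pos[1] + velocity[1] * t
        if (x - zone_center[0]) ** 2 + (y - zone_center[1]) ** 2 <= zone_radius ** 2:
            return True
    return False

def avoidance_command(pos, velocity, zone_center, zone_radius):
    if will_enter_zone(pos, velocity, zone_center, zone_radius):
        return "CLIMB_AND_TURN"     # override: refuse to continue on this course
    return "MAINTAIN_COURSE"

# Example: flying straight at a zone 2 km ahead at 100 m/s triggers the override.
print(avoidance_command(pos=(0, 0), velocity=(100, 0), zone_center=(2000, 0), zone_radius=500))
```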
If there's one thing I've learned over the last few years, it's that it's possible to hack virtually everything running on code - from a nuclear enrichment facility to an SUV.
But what about the drone swarms of the future? Here in Texas, there's a team of researchers that demonstrated how military drones used today are hackable.
Todd Humphreys is the director of the Radionavigation Lab.
So this is our arena.
Nets for the drones? The nets are for the FAA.
So there's a future, you think, wherein you could see a ton of these things? Dinner table-sized drones possibly, like in a swarm? Oh yeah.
- Attacking a target? - That's right.
And they have no regard for their own life, right? So these are suicide drones.
They pick their target, they go directly at it, and the kinds of close-in weapon systems and large-scale weapon systems that our US destroyers and the Navy have today, or other ships, they'll be no match for 16 of these at once.
Right, and it all starts here in Austin.
Well, we don't intend to do development of war machines here.
But I will say that our somewhat whimsical games that we'll be playing, they are going to engage our operators and our drones in scenarios that are applicable to all sorts of fields.
One of those so-called whimsical games he and his students showed me was a drone version of Capture the Flag.
But that's not all they've been up to.
Back in 2011, when the Iranians claimed to have hacked a US military drone, Todd and his team proved that it was indeed possible, and demonstrated how a simple method called spoofing could have been used to jack US military hardware.
Every drone has a few vital links.
One of them is to its ground controller, and one of them is to overhead satellites for navigation.
Spoofing attacks one of those vital links.
It basically falsifies a GPS signal, makes a forged signal, sends it over to the drone and convinces the drone that it's in a different place or at a different time, you know, because GPS gives us both time and position.
And you've actually proven this is possible? We've demonstrated it, yeah.
So we've done this, we've done it with a drone, we've done it with a 210-foot super yacht.
So back in 2011 this was possible.
- Yeah.
- How about now? Have drones kind of caught up to this? Have engineers realized that, you know, when you put this autonomous thing in the air, essentially it can be overtaken by a hostile actor? I'd like to be able to say yes, but no.
The FAA has charged a tiger team with looking into this, and they came back after two years of study and put together a set of proposals that would make commercial airliners more resilient to spoofing, give them better defenses against spoofing.
They've also looked at even smaller unmanned aircraft.
But things move slowly in the world of commercial airliners.
And as far as smaller unmanned aircraft, I think those of us who are just playing around with the small toys and such aren't really thinking so much about security at this point.
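One common defense against spoofing is a consistency check. Here's a minimal sketch under my own assumptions, not the tiger team's actual proposal: the vehicle compares where GPS says it is against where its own inertial dead-reckoning expects it to be, and flags the fix when the two diverge by more than a threshold.

```python
# A simplified sketch (an assumption, not the FAA tiger team's actual proposal)
# of one spoofing defence: cross-check GPS positions against the vehicle's own
# inertial dead-reckoning estimate and flag the fix if they drift too far apart.
import math

def spoofing_suspected(gps_track, dead_reckoning_track, threshold_m=50.0):
    """Flag the GPS fix if it disagrees with the inertial estimate by > threshold_m."""
    for (gx, gy), (dx, dy) in zip(gps_track, dead_reckoning_track):
        if math.hypot(gx - dx, gy - dy) > threshold_m:
            return True
    return False

# Example: GPS suddenly jumps 200 m away from where the inertial unit says we are.
gps = [(0, 0), (10, 0), (220, 0)]          # metres
inertial = [(0, 0), (10, 1), (21, 1)]
print(spoofing_suspected(gps, inertial))   # True
```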
So we know that hacking drones is possible, and yet people are starting to think about creating drone swarms.
Mm-hmm.
What if your drone swarm... you send one off thinking you're gonna destroy your enemy, and all of a sudden it turns right back around and comes at you because it's been hacked? Right.
I mean, do you have to prepare it to be able to destroy your own drone swarm? Absolutely.
I believe you have to have a kill switch for your own swarm, your own resources, your assets, and that that must be an ironclad kill switch.
The only way to think about security is as an arms race.
There will always be vulnerabilities, and you can't somehow design them out of your system just with sheer force of will and intelligence.
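What might make a kill switch "ironclad"? One piece of it, sketched below under my own assumptions rather than as Humphreys' design, is authentication: the swarm only honors a shutdown order that carries a valid, fresh cryptographic tag, so an adversary can neither forge the command nor replay an old one.

```python
# A minimal sketch (my illustration, not Humphreys' design) of an authenticated
# kill switch: the drone obeys a shutdown order only if it is fresh and carries
# a valid HMAC tag computed with a pre-shared operator key.
import hmac, hashlib, time

OPERATOR_KEY = b"replace-with-a-real-secret"   # assumed pre-shared key

def sign_kill_command(timestamp):
    msg = f"KILL:{timestamp}".encode()
    return hmac.new(OPERATOR_KEY, msg, hashlib.sha256).digest()

def drone_accepts_kill(timestamp, tag, max_age_s=10):
    """Obey only a fresh, correctly authenticated kill command."""
    if abs(time.time() - timestamp) > max_age_s:       # reject stale/replayed orders
        return False
    expected = sign_kill_command(timestamp)
    return hmac.compare_digest(expected, tag)

now = int(time.time())
print(drone_accepts_kill(now, sign_kill_command(now)))    # True: legitimate shutdown
print(drone_accepts_kill(now, b"forged-tag-0000000000"))  # False: forged command ignored
```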
I'm at the Pentagon in Washington to meet with Robert Work.
He's basically the number two at the Department of Defense, which is why he gets the Blackhawk helicopter treatment.
We're heading out to the US Army Research Lab at the Aberdeen Proving Ground in Maryland.
Work is leading the Third Offset Strategy, an initiative aimed at building up the military's tech capabilities to counter big-time adversaries like Russia or China.
The strategy calls for the US military to focus on robotics, miniaturization, 3D printing, and autonomous systems.
So why is the US government so interested in autonomous weapons systems and autonomous machinery for the military? Autonomy and artificial intelligence are changing our lives every day.
And the military would be crazy not to take these technologies and utilize them in ways that would help us perform better on the battlefield.
I mean, would it be crazy because other countries are going to do it too? We know that other great powers like Russia and China are investing a lot of money in autonomy and AI, and they think differently about it than we do.
You know, we think about autonomy and AI as enabling the human to be better.
Authoritarian regimes sometimes think about taking the human out of the equation and allowing the machine to make the decision, and we think that's very dangerous.
That's not our conception at all.
It's more like Iron Man, where you would use the machine as an exoskeleton to make the human stronger, allow the human to do more; an autonomous intelligence that's a part of the machine to help the human make better decisions.
Would taking humans out of the loop give your adversary an advantage? This is a question just like cyber vulnerability that keeps us up at night.
Would a network that is working at machine speed all the time be able to beat a network in which machines and humans work together? Um, and in certain instances, like I said, cyber warfare, electronic warfare, machines will always beat humans.
I mean, that will always happen.
This is a competition, and we think that the way this will go for the next multiple decades is AI and autonomy will help the human.
And you will never try to make it go all automatic.
But we have to watch, and we have to be careful, and make sure that that doesn't happen.
But not everyone thinks about the Third Offset Strategy in such an optimistic way.
In fact, some people think it's only creating a new arms race for robotic war tools that could escalate quickly.
Naureen Shah is one of those people.
What would you say to the DOD policy-makers who are actually looking into researching autonomous weapons systems? I would say ban these weapons systems, because you yourselves know what the consequences would be if other governments had them.
Look at this issue not from the perspective just of the US government, but from how to keep communities all around the world safe from unlawful use of lethal force.
If the US government is concerned about what it sees as bad actors having these weapons systems, then it shouldn't develop them.
Once you start to manufacture these weapons systems and authorize arms sales to all these countries, this technology is going to proliferate, and it will unfortunately spiral out of control.
We know that you can do it.
We know that the technology is tantalizing, but you know that if you start down this road, you're going to a very, very dangerous place, and you shouldn't go there.
At this point, it might be too late.
These war toys are on the way.
And the thing is, almost every developer of autonomous machines I met is separated from the people who actually decide how their creations will enter war.
So while researchers toss around ideas about humane battles or friendly killer robots, you get the feeling conflict isn't going to be any less horrifying using autonomous bots.
Whether it's sticks and stones, guns, missiles, or drones, people will always go to war, and it's always brutal.
Even if a robot is fighting for you.
