“I’ll be back.”
Arnold Schwarzenegger’s murderous cyborg utters those infamous words in the sci-fi classic The Terminator. True to form, the killer robot returned in the sequel Terminator 2 – but his words carry a more ominous connotation today, when killer robots are actually about to hit city streets.
Cities like San Francisco have considered deploying police robots equipped with explosives, which means we are now extending the realm of military robots from the battlefield into our daily lives. But just how dangerous are these robots to humans – and is a future where intelligent robots seek revenge on humans inevitable?
Inverse spoke with experts to determine whether the robot-human conflict of Terminator 2 is a distant threat – or pounding on our front door.
“There is a real danger to humans from certain military robots that are designed with malicious intent,” futurist Andrew Curry tells Inverse.
Reel Science is an Inverse series that reveals the real (and fake) science behind your favorite movies and shows.
Are killer robots really real?
In a word: yes. Killer robots have been used in a variety of military applications, from the controversial use of autonomous drones in the war in Afghanistan to Britain’s large-scale unmanned naval training exercises. Curry calls these military drones “flying robots.”
Most uses of killer robots have been military, but in 2016 police in Dallas used a bomb-equipped robot to kill an armed sniper. Some argue killer robots can help prevent casualties by putting machines, rather than people, in harm’s way – like the robot dogs the NYPD briefly used and then pulled from duty amid public backlash.
“A police force may decide to send a group of robots to try to neutralize a group of criminals in a critical situation, where the risk to the police is deemed too high to send human officers,” Sven Nyholm, assistant professor of philosophy at Utrecht University and author of the book Humans and Robots: Ethics, Agency, and Anthropomorphism, tells Inverse.
Curry also notes that some robots are simply used to clear mines and do not use lethal force. But activist groups like Stop Killer Robots oppose the deployment of lethal robots, citing ethical concerns.
“There is a very heated debate about this in military and security circles, and there are activist groups trying to build a code of ethics around the use of such robots against humans,” explains Curry.
Are “intelligent” killer robots in our future?
Right now, most killer robots aren’t particularly “intelligent” – and many, such as remotely piloted drones, require humans to control them. But as AI technology advances and becomes integrated with robotics, more and more people are sounding the alarm about intelligent robots – like the ones we see in the Terminator franchise – threatening humans. Other AI experts worry about intelligent robots replicating existing biases in law enforcement, such as disproportionately targeting people with darker skin.
“Machines don’t see us as people, just another piece of code to process and sort through,” writes Stop Killer Robots on their website.
But experts say there’s a big leap between the killer robots we see in law enforcement today and the sophisticated futuristic robots of Terminator 2, which can mimic humans – to some degree.
“I think it’s important to make a distinction between whether robots will (soon) be super-intelligent in general, on the one hand, and whether they will become extremely capable with respect to a task or smaller set of tasks, on the other,” says Nyholm.
So-called general AI – in which robots can flexibly perform a range of tasks across different functions – is still difficult to achieve. Instead, an AI may well perform a narrowly defined task, like a military robot searching for hidden explosives. So the same robot that can kill humans efficiently would be unable to hold a conversation like a person.
“It might be possible – in the near future – to build super-intelligent robots with respect to certain narrow tasks/sets of tasks,” says Nyholm.
However, Nyholm thinks AI doesn’t need to be smart to harm us. Robots can hurt us accidentally, by taking actions no human would take – even when they aren’t intended to harm humans – or by design, as with military robots.
“It is to be expected that AI technologies that generally work very well – and overall benefit human beings – will sometimes underperform and harm human beings,” says Nyholm.
Will robots inevitably collide with humans?
Although the reprogrammed Terminator of the sequel protects young John Connor, the Terminators are generally tools of the super-intelligent AI Skynet, sent to destroy humanity. But experts don’t think a robot uprising is in the cards in reality.
“In terms of civilian robots going rogue, it’s a bit of a deep myth of industrial civilization, going back to Frankenstein. It’s less likely to happen,” Curry says.
Nyholm agrees, placing the general premise of Terminator 2 firmly in the realm of science fiction.
“What’s less likely – and should be considered science fiction – is that there would be a full-scale clash between an army of robots on one side and human beings on the other, where the robots are all assumed to be on the same team,” adds Nyholm.
But that doesn’t mean robots aren’t a threat to humans – far from it. In fact, robots are already unintentionally hurting people. Curry points to self-driving cars that crash because they can’t process road conditions. Nyholm adds that a robot cop might accidentally hurt one of the officers it’s assisting instead of the criminals it’s supposed to catch.
Even scarier, Nyholm says a robot-human war could happen – but not in the way Terminator 2 depicts. Instead, a technologically developed nation like the United States could send robots to fight humans in a country with fewer technological resources.
“So on one side of an armed conflict there would be mostly robots on the front line while on the other side there would be mostly humans on the front line,” says Nyholm.
Terminator 2: Judgment Day is now streaming on HBO Max.