Terminator Conundrum: Could AI technology overstep its boundaries in the field?

We’ve all seen movies like “Terminator” and “I, Robot,” where artificial intelligence is portrayed as unreliable and dangerous when manipulated, yet we are now one step closer to releasing autonomous machines into warfare in place of humans. The United States Department of Defense aims to create autonomous fighter jets that would fight alongside human pilots. These jets would be able to identify enemy targets carrying weapons and could reduce the risk to soldiers’ lives. However, these jets, intended to be fully independent machines, pose a threat precisely because they rely so heavily on artificial intelligence.

“It could not turn itself on and just fly off. It had to be told by humans where to go and what to look for. But once aloft, it decided on its own how to execute its orders,” Matthew Rosenberg and John Markoff wrote in their New York Times article, “The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own.”

These robotic jets are given the power to kill, yet there is no guarantee that they will not make mistakes. They are given a power that even people cannot master. Americans have already seen instances in which civilians who were never the intended targets were killed. According to CBS News, Mary Knowlton, a librarian participating in one of the Punta Gorda Police Department’s “shoot-don’t shoot” exercises, was fatally shot on Aug. 10, 2016. We cannot fully trust human intelligence, let alone an artificial one.

One of the other issues with creating independent machines lies in the uncertainty of who holds responsibility for the robots’ actions. On Feb. 14, 2016, Google experienced its first accident in which its autonomous vehicle was at fault. According to the LA Times, the car was in self-driving mode when it collided with a transit bus; had it not been, the accident might have been avoided. In this situation, is the driver at fault because he or she owns the vehicle and turned on the self-driving mode? Or is the creator of the vehicle at fault, since it was the artificial intelligence that made the judgment to turn, resulting in the collision?

“The accident is more proof that robot car technology is not ready for auto pilot and a human driver needs to be able to take over when something goes wrong,” John M. Simpson, privacy project director at Consumer Watchdog, told the LA Times.

This is a clear example of why autonomous machines should not be completely independent. Although this was the only accident in which Google’s autonomous vehicles were found at fault, that one accident could have cost human lives. If autonomous vehicles are not yet ready to make better judgments than people, how can we trust that autonomous weapons will? We can’t.

Although accidents with self-driving cars will not always cost lives, mistakes by autonomous weapons will, without a doubt. The artificial intelligence that the Pentagon wants to invest in could put innocent lives at risk. The real question here is: How can we protect our country if we’re not in control?