A robot and a human hand extend toward each other, their index fingers touching.
Design by Emma Sortor.

John Badham’s 1983 blockbuster film, “WarGames,” served as an early warning about the growing connections between technology and warfare. Its vision of a world where artificial intelligence wields an alarming level of power offers a glimpse into the dangers of entrusting AI with control over military technology. Fast forward four decades to 2023, and this once seemingly distant future has crept ominously close to reality. 

The mid-2010s marked the emergence of a new type of arms race between global superpowers, one centered on AI rather than physical weaponry. Unmanned aerial vehicles and autonomous drones, for instance, are programmed to perform surveillance and can execute entire combat missions and drone strikes with minimal human input. The U.S. Air Force’s experimental XQ-58A Valkyrie, an unmanned combat aircraft, completed its first AI-piloted flight just last month. These technologies are quickly beginning to outperform humans, as when an AI algorithm handily defeated a human F-16 pilot in simulated dogfighting trials back in August 2020. 

Many experts have expressed concerns about the moral implications of these developments. Mary Wareham, one of the leading activists in the fight to restrict AI weaponry, expressed such sentiments in an interview with the Center for Public Integrity. Wareham’s primary argument is that machines lack compassion and are therefore unable to weigh difficult ethical alternatives; using AI-controlled machines to kill crosses a moral threshold. The deployment of these autonomous weapons could lead to unintended casualties, as AI is entrusted with making life-and-death decisions without proper oversight. 

These AI-powered military systems are especially vulnerable to cyberattacks as well. Data poisoning and hacking allow malicious actors to infiltrate and disrupt AI algorithms, rendering them useless or even turning them against their operators. Such was the case in 2015, when security researchers remotely hijacked a Jeep Cherokee’s digital systems. After tapping into the system, they were able to remotely disable the car’s brakes, accelerate the vehicle or even bring it to a complete standstill on the highway. The demonstration prompted Fiat Chrysler to recall 1.4 million vehicles, shedding light on the grave danger to civilians if AI-controlled systems are compromised. 

The rapid progression of AI in this sector has brought many of its developers to a moral tipping point. Many CEOs of leading AI companies have come forward, voicing their growing concerns over the risks the technology could pose. It was these very concerns that prompted over 33,000 technology leaders and researchers to sign an open letter calling for a six-month moratorium on AI development.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the letter reads. “Not even their creators … can understand, predict, or reliably control (them).”

The situation grew more unsettling in May of this year, when Dr. Geoffrey Hinton, widely known as “the Godfather of AI,” publicly announced his departure from Google. His decision was frightening for many. Upon resigning, Hinton warned of the threat AI may pose.

“The alarm bell I’m ringing has to do with the existential threat of (AI) taking control,” Hinton said. “I used to think it was a long way off, but now I think it’s serious and fairly close.” 

His words serve as a reminder of the urgency and magnitude of the risks posed by the rapid advancement of AI in the military realm. 

It is important that developers tread carefully in the coming months. The artificial neural networks behind these algorithms are built from ever-deeper layers of processing, layers that grow more intertwined and harder to interpret with each generation. The result has been escalating cases of “black box” AI: deep learning models so complex that even their creators cannot explain how they reach a decision. This makes code correction and troubleshooting nearly impossible, a sobering reality when considered in the context of military AI. 
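As a loose illustration of why such networks resist inspection, consider a toy feedforward network sketched below. The layer sizes are hypothetical and not drawn from any real system; the point is that even a small model entangles its inputs across tens of thousands of weights, none of which corresponds to a human-readable rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes only: an input layer, three hidden layers
# and two outputs. Real military systems would be far larger.
layer_sizes = [64, 128, 128, 128, 2]

# One randomly initialized weight matrix per pair of adjacent layers.
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """Pass an input through every layer; each ReLU layer mixes all prior features."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)
    return x

total_params = sum(w.size for w in weights)
print(total_params)  # → 41216: no single weight explains the final decision
```

Inspecting any one of those 41,216 numbers in isolation tells an auditor nothing about why the network favored one output over the other, which is the crux of the “black box” problem.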

If left unregulated, experts fear, we may risk losing control entirely. As the stakes rise, the consequences of error could be catastrophic. The path forward requires careful deliberation, oversight and algorithmic transparency. As in “WarGames,” society has indeed found itself wrapped up in a strange game, one in which the only winning move may be not to play. 

Tate Moyer is an Opinion Columnist from Los Angeles, California. She writes about the influence of digital culture and technology, and can be reached at moyert@umich.edu.