ChatGPT Sparks U.S. Debate Over Military Use of AI

June 2023
By Michael Klare and Chris Rostampour

The release of ChatGPT and other “generative” artificial intelligence (AI) systems has triggered intense debate in the U.S. Congress and among the public over the benefits and risks of commercializing these powerful but error-prone technologies.

Sam Altman of OpenAI testified before a U.S. Senate subcommittee in May about the promise and dangers of ChatGPT, a “generative” artificial intelligence system. (Photo by Win McNamee/Getty Images)

Proponents argue that by rapidly utilizing such systems, the United States will acquire a significant economic and military advantage over China and other competing powers. But many experts warn that the premature release of such potent but untested technologies could have catastrophic consequences, and they contend that the systems should be constrained by rules and regulations.

Generative AI systems employ sophisticated algorithms to convert vast amounts of raw data into texts, images, and other content that seem to be produced by humans. The technology is thought to have widespread applications in industrial, business, and military operations.
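At toy scale, the underlying idea can be sketched in a few lines of code: a statistical model learns which words follow which in a body of raw text, then produces new text by sampling those learned transitions. Systems such as ChatGPT use vastly larger neural networks trained on far more data, but the basic pattern of raw data in, plausible text out is the same. The corpus string below is invented purely for illustration.

```python
import random
from collections import defaultdict

# Invented illustrative "raw data"; real systems train on enormous corpora.
corpus = "the committee heard testimony on the risks and benefits of artificial intelligence"

# Build a bigram model: for each word, record which words follow it in the raw text.
model = defaultdict(list)
words = corpus.split()
for current, following in zip(words, words[1:]):
    model[current].append(following)

# Generate new text by walking the learned transitions.
random.seed(1)
word = "the"
output = [word]
for _ in range(8):
    followers = model.get(word)
    if not followers:  # stop if the current word never had a successor
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))
```

Even this crude model produces word sequences it was never shown verbatim, which is also why such systems can generate fluent but false output: they model plausibility, not truth.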

The potentially disruptive consequences of exploiting generative AI technologies for commercial and geopolitical advantage, and the accompanying need for new laws, provoked heated discussion at a May 16 hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.

The lead witness, Sam Altman of OpenAI, the San Francisco startup responsible for ChatGPT, highlighted the technology’s great promise, but also warned of its inherent defects, such as a tendency to produce false or misleading results. “If this technology goes wrong, it can go quite wrong,” he told the subcommittee. “[W]e want to work with the government to prevent that from happening.”

Many analysts are particularly worried about the hasty application of advanced AI in the military realm, where the consequences of things going wrong could prove especially catastrophic.

Lawmakers and senior Pentagon officials who seek to field these technologies as rapidly as possible argue that doing so will give the United States a distinct advantage over China and other rivals. Even in these circles, however, concerns have emerged about premature military applications: although officials believe that adopting AI will enhance U.S. combat capabilities, many worry about the potential for accidents, enemy interference, and other dangerous unintended outcomes, and so favor cautious, regulated utilization of AI.

“To stay ahead of our potential adversaries…we need to identify key technologies and integrate them into our systems and processes faster than they can,” Sen. Joe Manchin (D-W.Va.), chair of the Senate Armed Services cybersecurity subcommittee, said during an April 19 hearing on AI use by the Defense Department. But, he added, “the applications deployed must be more secure and trusted, meaning we [must adopt] more rigorous policy enforcement mechanisms to prevent misuse or unintended use.”

Witnesses at the April 19 hearing, including officials of leading defense contractors making use of AI, emphasized the need to adopt AI swiftly to avoid being overtaken by China and Russia in this critically important area. But under questioning, they acknowledged that the hasty application of AI to military systems entailed significant risk.

Josh Lospinoso of Shift5, for example, warned that the data used in training the algorithms employed in generative AI systems “can be altered by nefarious actors” and also are vulnerable to “spoofing” by potential adversaries. “We need to think clearly about shoring up those security vulnerabilities in our AI algorithms before we deploy these broadly,” he said.
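Lospinoso’s point about altered training data can be made concrete with a toy example. The sketch below is hypothetical and drawn from no actual military system; the “benign”/“hostile” framing and every number in it are invented. It trains a minimal nearest-centroid classifier twice, once on clean data and once after an adversary has flipped a fraction of the training labels, and shows the poisoned model misclassifying an input the clean model handled correctly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: two well-separated clusters of 2D feature vectors.
benign = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
hostile = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(100, 2))
X = np.vstack([benign, hostile])
y = np.array([0] * 100 + [1] * 100)  # 0 = benign, 1 = hostile

def centroid_classifier(X, y):
    """Learn one centroid per class; predict the nearer class."""
    c0 = X[y == 0].mean(axis=0)
    c1 = X[y == 1].mean(axis=0)
    return lambda p: int(np.linalg.norm(p - c1) < np.linalg.norm(p - c0))

clean_model = centroid_classifier(X, y)

# Poisoning: a "nefarious actor" relabels 30 benign training points as hostile,
# dragging the learned "hostile" centroid toward the benign cluster.
y_poisoned = y.copy()
y_poisoned[rng.choice(100, size=30, replace=False)] = 1
poisoned_model = centroid_classifier(X, y_poisoned)

probe = np.array([1.8, 1.8])  # sits much nearer the benign cluster
print("clean model:   ", clean_model(probe))    # 0 (benign)
print("poisoned model:", poisoned_model(probe))  # 1 (hostile)
```

Corrupting 15 percent of the labels is enough to flip the model’s verdict on an input that the clean model classified correctly, which is the kind of vulnerability, along with input spoofing at inference time, that Lospinoso argues must be shored up, through measures such as data provenance tracking and validation, before such systems are broadly deployed.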

Manchin indicated that, for these and other reasons, Congress must acquire a better understanding of the risks posed by the Pentagon’s utilization of AI and develop appropriate guardrails. He asked for the witnesses’ help in “looking at how we would write legislation not to repeat the mistakes of the past,” alluding to congressional failure to impose such controls on social media and the internet.

Judging by his comments and those of his colleagues, any legislation that emerges from Manchin’s subcommittee is likely to incorporate measures intended to ensure the reliability of the data used in training complex algorithms and to prevent unauthorized access to these systems by hostile actors.

Other lawmakers have sought to write legislation aimed at preventing another danger arising from the hasty and misguided application of AI by the military: the possibility that AI-enabled autonomous weapons systems, or “robot generals,” will someday acquire the capacity to launch nuclear weapons.

The Defense Department currently is modernizing its nuclear command, control, and communications systems, including through the widespread integration of advanced AI systems. Some analysts fear that this process will dilute human control over nuclear launch decision-making. (See ACT, April 2020.)

To ensure that machines never replace humans in this momentous role, a bipartisan group of legislators introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act on April 26. If enacted, the law would prohibit the use of federal funds to “use an autonomous weapons system that is not subject to meaningful human control…to launch a nuclear weapon; or…to select or engage targets for the purposes of launching a nuclear weapon.”

In the House, the legislation was introduced by Ted Lieu (D-Calif.) and co-sponsored by Don Beyer (D-Va.) and Ken Buck (R-Colo.). A companion bill was introduced in the Senate on May 1 by Edward Markey (D-Mass.) and co-sponsors Jeff Merkley (D-Ore.), Bernie Sanders (I-Vt.), and Elizabeth Warren (D-Mass.).

Lieu said that passage of the bill “will ensure that no matter what happens in the future, a human being has control over the employment of a nuclear weapon, not a robot. AI can never be a substitute for human judgment when it comes to launching nuclear weapons.”