ChatGPT Sparks U.S. Debate Over Military Use of AI


June 2023
By Michael Klare and Chris Rostampour

The release of ChatGPT and other “generative” artificial intelligence (AI) systems has triggered intense debate in the U.S. Congress and among the public over the benefits and risks of commercializing these powerful but error-prone technologies.

Sam Altman of OpenAI testified before a U.S. Senate subcommittee in May about the promise and dangers of ChatGPT, a “generative” artificial intelligence system. (Photo by Win McNamee/Getty Images)

Proponents argue that by rapidly adopting such systems, the United States will gain a significant economic and military advantage over China and other competing powers. But many experts warn that the premature release of such potent but untested technologies could lead to catastrophic consequences and that the systems should therefore be constrained by rules and regulations.

Generative AI systems employ sophisticated algorithms to convert vast amounts of raw data into text, images, and other content that seems to be produced by humans. The technology is thought to have widespread applications in industrial, business, and military operations.

The potentially disruptive consequences of exploiting generative AI technologies for commercial and geopolitical advantage and the accompanying need for new laws in the area provoked heated discussion at a hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16.

The lead witness, Sam Altman of OpenAI, the San Francisco startup responsible for ChatGPT, highlighted the technology’s great promise, but also warned of its inherent defects, such as a tendency to produce false or misleading results. “If this technology goes wrong, it can go quite wrong,” he told the subcommittee. “[W]e want to work with the government to prevent that from happening.”

Many analysts are particularly worried about the hasty application of advanced AI in the military realm, where the consequences of things going wrong could prove especially catastrophic.

Lawmakers and senior Pentagon officials who seek to apply these technologies as rapidly as possible argue that doing so will give the United States a distinct advantage over China and other rivals. Yet even in these circles, concerns have emerged over the premature military application of AI. Although officials believe that military use of AI will enhance U.S. combat capabilities, many worry about the potential for accidents, enemy interference, and other dangerous unintended outcomes and therefore favor cautious, regulated adoption of the technology.

“To stay ahead of our potential adversaries…we need to identify key technologies and integrate them into our systems and processes faster than they can,” Sen. Joe Manchin (D-W.Va.), chair of the Senate Armed Services cybersecurity subcommittee, said during an April 19 hearing on AI use by the Defense Department. But, he added, “the applications deployed must be more secure and trusted, meaning we [must adopt] more rigorous policy enforcement mechanisms to prevent misuse or unintended use.”

Witnesses at the April 19 hearing, including officials of leading defense contractors making use of AI, emphasized the need to adopt AI swiftly to avoid being overtaken by China and Russia in this critically important area. But under questioning, they acknowledged that the hasty application of AI to military systems entailed significant risk.

Josh Lospinoso of Shift5, for example, warned that the data used in training the algorithms employed in generative AI systems “can be altered by nefarious actors” and also are vulnerable to “spoofing” by potential adversaries. “We need to think clearly about shoring up those security vulnerabilities in our AI algorithms before we deploy these broadly,” he said.

Manchin indicated that, for these and other reasons, Congress must acquire a better understanding of the risks posed by the Pentagon’s utilization of AI and develop appropriate guardrails. He asked for the witnesses’ help in “looking at how we would write legislation not to repeat the mistakes of the past,” alluding to congressional failure to impose such controls on social media and the internet.

Judging by his comments and those of his colleagues, any legislation that emerges from Manchin’s subcommittee is likely to incorporate measures intended to ensure the reliability of the data used in training complex algorithms and to prevent unauthorized access to these systems by hostile actors.

Other lawmakers have sought to write legislation aimed at preventing another danger arising from the hasty and misguided application of AI by the military: the possibility that AI-enabled autonomous weapons systems, or “robot generals,” will someday acquire the capacity to launch nuclear weapons.

The Defense Department currently is modernizing its nuclear command, control, and communications systems, including through the widespread integration of advanced AI systems. Some analysts fear that this process will dilute human control over nuclear launch decision-making. (See ACT, April 2020.)

To ensure that machines never replace humans in this momentous role, a bipartisan group of legislators introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act on April 26. If enacted, the law would prohibit the use of federal funds to “use an autonomous weapons system that is not subject to meaningful human control…to launch a nuclear weapon; or…to select or engage targets for the purposes of launching a nuclear weapon.”

In the House, the legislation was introduced by Ted Lieu (D-Calif.) and co-sponsored by Don Beyer (D-Va.) and Ken Buck (R-Colo.). A companion bill was introduced in the Senate on May 1 by Edward Markey (D-Mass.) and co-sponsors Jeff Merkley (D-Ore.), Bernie Sanders (I-Vt.), and Elizabeth Warren (D-Mass.).

Lieu said that passage of the bill “will ensure that no matter what happens in the future, a human being has control over the employment of a nuclear weapon, not a robot. AI can never be a substitute for human judgment when it comes to launching nuclear weapons.”


Dueling Views on AI, Autonomous Weapons


April 2023
By Michael Klare

The international debate over controlling artificial intelligence (AI) and autonomous weapons systems, often called “killer robots” by critics, is heating up, with the contending approaches generally falling into two camps.

In February, Bonnie Jenkins, U.S. undersecretary of state for arms control and international security, outlined a U.S. proposal on the responsible military use of AI and autonomous weapons systems. She is pictured here during a session of the UN Conference on Disarmament in Geneva. (Photo by Fabrice Coffrini/AFP via Getty Images)

One approach, favored by the United States, the United Kingdom, and many of their allies, calls for the adoption of voluntary codes of conduct to govern the military use of AI and autonomous systems in warfare. Another approach, advocated by Austria, most Latin American countries, and members of the Non-Aligned Movement, supports the adoption of a legally binding, international ban on autonomous weapons or a set of restrictions on their deployment and use.

These contending views were brought into sharp focus at three recent international meetings that considered proposals for regulating the military use of AI and autonomous weapons systems.

One of these meetings, held in Belén, Costa Rica, on Feb. 23–24, was attended by government officials from nearly every country in Latin America and the Caribbean, as well as officials from the United States and 12 other countries. Civil society organizations, including the Campaign to Stop Killer Robots, were also strongly represented.

After hearing from government officials and civil society representatives about the risks posed by the deployment of autonomous weapons systems, the Latin American officials adopted a statement, entitled the Belén Communiqué, calling for further international efforts “to promote the urgent negotiation of an international legally-binding instrument with prohibitions and regulations with regard to autonomy in weapons systems.”

Days earlier, on Feb. 15–16, a second gathering, the Responsible AI in the Military Domain summit, had convened in The Hague. Sponsored by South Korea and the Netherlands, the event favored an alternative approach. Rather than advocating a ban on the military use of AI and autonomous weapons systems, summit participants, including the United States, called for voluntary measures allowing for their use in a safe, responsible manner.

This approach was outlined in a keynote address by Jenkins. She argued that the utilization of AI by the military could have positive outcomes in aiding combat operations and enhancing compliance with international humanitarian law. But she acknowledged that because its use also poses a risk of malfunction and unintended consequences, it must be subject to rigorous controls and oversight.

“AI capabilities will increase accuracy and precision in the use of force, which will also help strengthen implementation of international humanitarian law’s protections for civilians,” Jenkins said. “But we must do this safely and responsibly.”

Underscoring this view, Jenkins released a U.S.-crafted “political declaration” on the responsible military use of AI and autonomous weapons systems. Drawing on guidelines issued by the U.S. Department of Defense in revised directive 3000.09, “Autonomy in Weapons Systems” (see ACT, March 2023), the declaration calls on states to employ AI and autonomous systems in a safe and principled manner.

“[The] military use of AI can and should be ethical, responsible, and enhance international security,” the declaration affirms. To advance this outcome, states are urged to adopt best practices in the design, development, testing, and fielding of AI-enabled systems, including measures to ensure compliance with international humanitarian law and to “minimize unintended bias in military AI capabilities.” The declaration states that such endeavors are entirely voluntary and subject only to domestic laws, where they exist.

The competing approaches were given an extensive airing at a meeting of the group of governmental experts (GGE) convened under the auspices of the Convention on Certain Conventional Weapons (CCW) in Geneva on March 6–10.

For several years, the GGE has been considering proposals for an additional protocol to the CCW that would prohibit or strictly regulate the deployment of autonomous weapons systems. As the GGE operates by consensus and some states-parties to the CCW, including Russia and the United States, oppose such a measure, the group has been unable to forward a draft protocol to the full CCW membership. Nevertheless, GGE meetings have provided an important forum for proponents of contending approaches to articulate and defend their positions, and the March meeting was no exception.

The United States, joined by Australia, Canada, Japan, and the UK, submitted a draft proposal that draws heavily on the political declaration released by Jenkins. It asserts that the use of autonomous weapons systems should be deemed lawful as long as the states using them have taken effective measures to ensure that their use will not result in violations of international humanitarian law. If employed in such a manner, the joint proposal states, “these technologies could be used to improve the protection of civilians.”

An entirely different approach was put forward in papers submitted by Austria, Pakistan, member states of the Non-Aligned Movement, and representatives of civil society. These participants disputed the notion that autonomous weapons systems can be employed in accordance with humanitarian law and prove useful in protecting civilians. They said such systems pose an inescapable risk of causing battlefield calamities and harming civilians unnecessarily.

For states that adhere to this view, nothing is acceptable short of a complete ban on autonomous weapons systems or a set of binding regulations that would severely circumscribe their use. As noted in the Austrian paper, autonomous weapons systems that are not under effective human control at all times and that “select and engage persons as targets in a manner that violates the dignity and worth of the human person” must be considered unacceptable and must be prohibited.

Given the wide gap between these two contending approaches, it is unlikely that a common strategy will be devised at the next GGE meeting, scheduled for Geneva in May, and so no draft proposal for a legally binding ban or set of regulatory controls on autonomous weapons systems is likely to be submitted to the CCW states-parties when they next meet.

A stalemate on the issue gives autonomous weapons developers time to hone new technologies and commercialize them. It also could lead to a dual approach to controlling such devices, with some states adopting voluntary rules and others pursuing the adoption of a legally binding measure outside the CCW process. One possible venue for the latter option is the UN General Assembly, where a majority vote, rather than a consensus decision, would be required for passage.


Pentagon Seeks to Facilitate Autonomous Weapons Deployment


March 2023
By Michael Klare

The U.S. Defense Department released an updated version of its directive on developing and fielding autonomous weapons systems that seems designed to facilitate the integration of such devices into the military arsenal.

The Sea Hunter, a prototype submarine-hunting drone ship that can cross the open seas without a human crew for months at a time, is among the autonomous weapons systems being tested by the U.S. Navy. (U.S. Navy photo)

The original version of directive 3000.09, “Autonomy in Weapons Systems,” was published in 2012. Since then, the Pentagon has made considerable progress in using artificial intelligence (AI) to endow unmanned combat platforms with the capacity to operate autonomously and now seems keen to accelerate their deployment.

The new version of the directive was released on Jan. 25 and appears intended to make it easier to advance such efforts by clarifying the review process that proposed autonomous weapons systems must undergo before winning approval for battlefield use.

“Given the dramatic advances in technology happening all around us, the update to our autonomy in weapon systems directive will help ensure we remain the global leader of not only developing and deploying new systems, but also safety,” said Deputy Secretary of Defense Kathleen Hicks in announcing the new version.

When the original version was released 10 years ago, the development of autonomous weapons was just getting under way, and few domestic or international rules governed their use. Accordingly, that version broke new ground just by establishing policies for autonomous weapons systems testing, assessment, and employment.

Chief among these instructions was the mandate that proposed autonomous weapons “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” In consonance with this edict, the directive decreed that any proposed system be subjected to a rigorous review process intended to test its compliance with that overarching principle and to ensure that the system’s software was free of any glitches that might hamper its performance or cause it to act in an improper manner.

The meaning of “appropriate levels of human judgment” was not defined in the 2012 version, but its promulgation has allowed senior U.S. officials to insist over the years that the United States is not building self-governing lethal devices, or “killer robots” as they are termed by opponents.

In 2012, those requirements seemed a reasonable basis for regulating the development of proposed autonomous weapons systems. But much has occurred since then, including a revolt by Google workers against the company’s involvement in military-related AI research. (See ACT, July/August 2018.) In addition, there have been efforts by some states-parties to the Convention on Certain Conventional Weapons to impose an international ban on lethal autonomous weapons systems. (See ACT, January/February 2022.)

Such developments have fueled concerns within academia, industry, and the military about the ethical implications of weaponizing AI. Questions have also arisen about the reliability of weapons systems using AI, especially given the propensity of many AI-empowered devices to exhibit racial and gender biases in their operation or to behave in unpredictable, unexplainable, and sometimes perilous ways.

To overcome these concerns, the Defense Department in February 2020 adopted a set of ethical principles governing AI use, including one requirement that the department take “deliberate steps to minimize unintended bias in AI capabilities” and another mandating that AI-empowered systems possess “the ability to detect and avoid unintended consequences.” (See ACT, May 2020.) With these principles in place, the Pentagon then undertook to revise the directive.

At first reading, the new version appears remarkably similar to the first. The overarching policy remains the same, that proposed autonomous weapons systems must allow their operators “to exercise appropriate levels of human judgment over the use of force,” while again omitting any clarification of the term “appropriate levels of human judgment.” As with the original directive, the new text mandates a high-level review of proposed weapons systems and specifies the criteria for surviving that review.

But on closer reading, significant differences emerge. The new version incorporates the ethical principles adopted by the Defense Department in 2020 and decrees that the use of AI capabilities in autonomous weapons systems “will be consistent with” those principles. It also establishes a working group to oversee the review process and ensure that proposed systems comply with the directive’s requirements.

The new text might lead to the conclusion that the Pentagon stiffened the requirements for deploying autonomous weapons systems, which in some sense is true, given the inclusion of the ethical principles. Another conclusion is equally valid: that by clarifying the requirements for receiving high-level approval and better organizing the bureaucratic machinery for such reviews, it lays out a road map for succeeding at this process and thus facilitates autonomous weapons systems development.

This interpretation is suggested by the statement that full compliance with the directive’s requirements will “provide sufficient confidence” that such devices will work as intended, an expression appearing six times in the new text and nowhere in the original. The message, it would seem, is that weapons designers can proceed with development of autonomous weapons systems and ensure their approval for deployment so long as they methodically check off the directive’s requirements, a process facilitated by a flow chart incorporated into the new version.


U.S. Plan for "Responsible Military Use of AI" Constructive but Inadequate

For Immediate Release: Feb. 16, 2023

Media Contacts: Michael Klare, senior visiting fellow, [email protected]; Shannon Bugos, senior policy analyst, [email protected]

WASHINGTON, DC— Today, the United States proposed a "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy" during a conference on the issue in Europe.

While a positive signal, the declaration ultimately proves an inadequate response to the militarization of AI and the risks posed by lethal autonomous weapons, according to experts at the independent, nongovernmental Arms Control Association (ACA).

“The motivation for the U.S. framework stems from the deliberations at the expert group meetings convened by the Convention on Certain Conventional Weapons (CCW), where a significant number of states have voiced support for a binding international ban on autonomous weapons capable of killing humans," notes Shannon Bugos, a senior policy analyst at ACA.

In October 2022, the United States joined a diverse, cross-regional group of United Nations member states, led by Austria, on a joint declaration that expressed concern about “new technological applications, such as those related to autonomy in weapons systems.”

"However, the United States and other states with technologically advanced militaries have resisted negotiations on a legally binding instrument to regulate behavior at the CCW, which operates by consensus,” Bugos notes. “Many other states–including Austria, Argentina, Brazil, Mexico, New Zealand, and Spain–have proposed negotiations on a legally binding, enforceable agreement to ban lethal autonomous weapons altogether.”

Michael T. Klare, a senior fellow with ACA, concluded that "The U.S. principles on responsible behavior, however comprehensive and commendable, do not make up formal rules or regulations, and are therefore not readily enforceable. This means that any state (including the United States) can endorse the declaration and claim to be abiding by its principles, but then violate them with impunity.”

Klare is the author of the new ACA report, Assessing the Dangers: Emerging Military Technologies and Nuclear (In)Stability, which examines the risks posed by new military technologies, including AI and autonomous weapons. The report also provides a framework strategy for curtailing the indiscriminate weaponization of emerging technologies.

"Principles are nice in theory but will not adequately protect us from the deployment and use of autonomous weapons systems capable of killing humans, possibly in an abusive and indiscriminate manner," Klare argues.

"Given the risks posed by autonomous weapons systems and AI, we continue to urge the United States to act more responsibly and call upon all governments represented at the CCW to support the initiation of negotiations on autonomous weapons, and to help craft an outcome ensuring continued human control over weapons of war and decisions to employ lethal force," said Daryl G. Kimball, executive director of the Arms Control Association.

 
