Michael Klare

Diplomatic Debate Over Autonomous Weapons Heats Up


April 2024
By Michael T. Klare

Diplomatic activity concerning the regulation of autonomous weapons systems is accelerating. The United States convened a conference on the subject in March, Austria has scheduled one for April, and the UN General Assembly plans a debate on the topic at its fall meeting.

Alexander Kmentt, Austria’s director of disarmament, arms control, and nonproliferation, briefs Vienna-based diplomats in March about his government’s plan for an April conference on autonomous weapons systems. (Photo courtesy of Alexander Kmentt)

The quickening diplomacy reflects growing worldwide concern that the faulty or unsupervised use of artificial intelligence (AI) and autonomous weapons in combat could result in unintended atrocities or conflict escalation, as well as differing opinions over how best to prevent such perils.

The intensifying concern over the deployment of autonomous weapons is perhaps best exemplified by the lopsided Dec. 22 vote on UN General Assembly Resolution 78/241, calling for a rigorous study of the topic. Some 152 states voted in favor of the resolution, with only Belarus, India, Mali, and Russia voting no. Another 12 states abstained.

Acknowledging unease over “the possible negative consequences and impact of autonomous weapon systems on global security and regional and international stability,” the resolution calls for a comprehensive review of the subject at the next UN General Assembly, scheduled to begin Sept. 10. To ensure that such an assessment is conducted in a thoroughly informed manner, the resolution directs the secretary-general to prepare a comprehensive report on the issue, incorporating the views of all key stakeholders.

Although there is widespread agreement about the potential risks posed by autonomous weapons systems, especially when they are deployed without adequate human oversight, there is considerable international debate over the best way to regulate them. Some nations, led by the United States, advocate the adoption of voluntary constraints. Another group, led by Austria, favors a legally binding prohibition on the deployment of fully autonomous weapons systems. To promote their contending perspectives, these key actors decided to organize separate international meetings.

The first of these dueling assemblies was convened by the U.S. State Department on March 19-20 at the University of Maryland. Without much fanfare, the plenary brought together some 150 participants from nearly all of the 52 countries that have signed the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” The declaration is a set of voluntary constraints on the use of autonomous weapons systems first released by the State Department in February 2023 and then rereleased, with slightly altered language, last November. (See ACT, April 2023.)

The declaration affirms that autonomous weapons systems can play positive as well as negative roles in warfare. It also asserts that states must adopt strict guidelines on their use in order to prevent negative outcomes. For example, the declaration posits that states “should take appropriate steps, such as legal reviews, to ensure that their military AI capabilities will be used consistent with their respective obligations under international law.” But this measure and the others enunciated in the declaration are purely voluntary, entailing no legal obligation for signatory states and carrying no penalties if they fail to comply.

Nevertheless, organizers of the U.S. event insisted that by convening representatives of signatory states and sharing experiences, they are helping to bolster international norms against the misuse of autonomous weapons systems. “We look forward to continuing to share lessons learned and best practices to build our collective capacities to implement these responsible measures,” Assistant Secretary of State Mallory Stewart told Arms Control Today. She said that participating states agreed to form working groups to discuss implementation of specific measures in the political declaration and that the entire group will meet again in annual plenaries such as the one held in Maryland.

By contrast, the assembly being organized by Austria, officially called the Vienna Conference on Autonomous Weapons Systems and the Challenge of Regulation, will consider legally binding measures along with voluntary ones.

To be held April 29-30, it will include representation from governmental and nongovernmental entities. Its aim, according to the official announcement, is “to increase international awareness of the topic of [autonomous weapons systems] and their legal, moral, ethical, and security policy challenges,” as well as to “build momentum…for the creation of an international legal and normative framework.”

Alexander Kmentt, director of disarmament, arms control, and nonproliferation at the Austrian Foreign Ministry, said the Vienna meeting is aimed particularly at stimulating international interest in UN General Assembly deliberations on autonomous weapons systems.

In addition to awareness-raising and momentum-building for the future regulation of autonomous weapons systems, the conference is linked to the report that UN Secretary-General António Guterres has been mandated to produce, Kmentt told Arms Control Today. The conference agenda is designed to achieve this outcome by soliciting “relevant substantive input by experts” and “by stimulating states to submit their views to the [secretary-general] as input for this report,” he added.

The groups assembled by the United States and Austria share many concerns about the battlefield deployment of autonomous weapons systems but differ over the best approach to regulating them. Those differences are sure to become more pronounced as states prepare for the General Assembly’s review.


UN to Address Autonomous Weapons Systems


December 2023
By Michael T. Klare

The First Committee of the UN General Assembly, which is responsible for international security and disarmament affairs, has adopted a draft resolution calling for the secretary-general to conduct a comprehensive study of lethal autonomous weapons systems.

Austrian diplomat Alexander Kmentt says that in calling for a study of lethal autonomous weapons systems, the First Committee of the UN General Assembly is hoping to lay the groundwork for regulating these systems. (Photo by Alex Halada/AFP via Getty Images)

The measure was approved on Oct. 12 by an overwhelming 164-5 vote, suggesting that it will be adopted by the full assembly before it adjourns in December. Eight UN member states abstained.

The committee action marked the first time that the UN has addressed the issue of lethal autonomous weapons systems, which are governed by artificial intelligence (AI) rather than human operators.

In conducting the study, the secretary-general is instructed to consult the views of member states and civil society “on ways to address the related challenges and concerns they raise [regarding the use of autonomous weapons] from humanitarian, legal, security, technological and ethical perspectives.”

A final report is to be readied for the 2024 session of the General Assembly, where further action on these systems is expected.

“The objective is obviously to move forward on regulating autonomous weapons systems,” Alexander Kmentt, director of disarmament, arms control, and nonproliferation in the Austrian Foreign Affairs Ministry, told Arms Control Today in an email. “The resolution makes it clear that the overwhelming majority of states wants to address this issue with urgency.” Austria was one of the lead sponsors of the proposed measure.

In calling for the study, the resolution notes that considerable disquiet has arisen among UN member states over the ethical, legal, and humanitarian implications of deploying machines with the capacity to take human lives. Concerns also have emerged over the “impact of autonomous weapon systems on global security and regional and international stability,” the resolution states. In seeking the views of member states and civil society on the use of such systems, the secretary-general is specifically instructed to solicit feedback on those concerns.

Although the resolution would not impose any specific limitations on the use of these systems, as some governments and civil society organizations have demanded, it demonstrates the desire of many states to create options for more vigorous UN action on the topic.

Until now, international efforts to control the development and deployment of autonomous weapons systems have centered largely on negotiations in Geneva to ban such systems under the Convention on Certain Conventional Weapons (CCW). That treaty is designed to prohibit or restrict the use of munitions that cause unnecessary or unjustifiable suffering to combatants or indiscriminately affect civilians.

Civil society organizations, including the International Committee of the Red Cross and the Campaign to Stop Killer Robots, have joined with representatives of Austria, Brazil, Chile, Mexico, and numerous other governments to press for the adoption of an “additional protocol” under the CCW restricting the use of lethal autonomous weapons systems or banning them altogether. But because decisions at meetings of the treaty’s states-parties are made by consensus, Russian and U.S. opposition to binding measures in this area has stymied these efforts. (See ACT, April 2023.)

In light of this impasse, proponents of a ban or restrictions on these systems have turned to the General Assembly as a potential arena for achieving progress on the issue because decisions there are made by majority vote, not consensus, and support for such measures appears to be strong, given the lopsided vote in favor of the Oct. 12 resolution.

“Unfortunately, some states seem intent on continuing discussions in Geneva but not to allow progress towards negotiations of a legally binding instrument,” Kmentt observed. “Even if we can’t reflect any substantive progress in the discussions in Geneva, UN member states now have this other avenue to clearly reflect and express what they think ought to be done on this extremely crucial issue.”

Kmentt also noted that the resolution calls for a wider discussion of lethal autonomous weapons systems and the risks they pose than has been conducted at the negotiations in Geneva. “Humanity is about to cross a major threshold of profound importance when the decision over life and death is no longer taken by humans but made on the basis of pre-programmed algorithms, [raising] fundamental ethical issues,” he wrote in his email. “The resolution and the mandated report will hopefully broaden the international debate.”


Pentagon Struggles to Exploit Advances in AI


December 2023
By Michael T. Klare

The U.S. Defense Department has announced several initiatives designed to accelerate the military’s appropriation of private sector advances in artificial intelligence (AI) while still adhering to its commitments regarding the responsible and ethical utilization of these technologies.

U.S. Deputy Defense Secretary Kathleen Hicks arrives at a classified briefing on artificial intelligence for the Senate at the U.S. Capitol Building in July. (Photo by Anna Moneymaker/Getty Images)

Senior Pentagon officials are keen to exploit recent progress in AI in order to gain a combat advantage over China and Russia, considered the most capable potential U.S. adversaries.

But they recognize that the large language models powering ChatGPT and other generative AI programs have been found to produce false or misleading outcomes, termed “hallucinations” by computer experts, making those models unsuitable for battlefield use. Overcoming this technical challenge while allowing for the rapid utilization of the new technologies has become a major Pentagon priority. The Defense Department took one step toward that goal on Nov. 2 with the release of an updated “Data, Analytics, and Artificial Intelligence Adoption Strategy,” which will govern the military’s use of AI and related technology in the years ahead.

Pentagon officials said the strategy, which updates earlier versions from 2018 and 2020, is needed to take advantage of the enormous advances in AI achieved by private firms over the past few years while complying with the department’s stated principles on the safe, ethical use of AI.

“We’ve worked tirelessly for over a decade to be a global leader in the fast and responsible development and use of AI technologies in the military sphere,” Deputy Defense Secretary Kathleen Hicks told a Nov. 2 briefing on the new strategy. Nevertheless, she said, “safety is critical because unsafe systems are ineffective systems.”

Although the new strategy claims to balance the two overarching objectives of speed and safety in utilizing the new technologies, the overwhelming emphasis is on speed. “The latest advancements in data, analytics, and artificial intelligence technologies enable leaders to make better decisions faster, from the boardroom to the battlefield,” the strategy states. “Therefore, accelerating the adoption of these technologies presents an unprecedented opportunity to equip leaders at all levels of the Department with the data they need.”

The emphasis on speed is undergirded by what appears to be an arms racing mindset. “[China] and other strategic competitors…have widely communicated their intentions to field AI for military advantage,” the strategy asserts. “Accelerating adoption of data, analytics, and AI technologies will enable enduring decision advantage, allowing [Defense Department] leaders to…deploy continuous advancements in technological capabilities to creatively address complex national security challenges in this decisive decade.”

To ensure that the U.S. military will continue to lead China and other competitors in applying AI to warfare, the updated strategy calls for the decentralization of AI product acquisition and utilization by defense agencies and the military services. Rather than having all decisions regarding the procurement of AI software be made by a central office in the Pentagon, they can now be made by designated officials at the command or agency level, as long as these officials abide by safety and ethical guidelines now being developed by a new Pentagon group called Task Force Lima.

Such decentralization will accelerate the military’s utilization of commercial advances in AI by allowing for local initiative and reducing the risk of bureaucratic inertia at the top, explained the Pentagon’s chief digital and AI officer, Craig Martell, at the Nov. 2 press briefing.

“Our view now,” he said, is to “let any component use whichever [AI program] pipeline they need, as long as they’re abiding by the patterns of behavior that we need them to abide by.”

But some senior Pentagon officials acknowledge that decentralization on this scale will diminish their ability to ensure that products acquired for military use meet the department’s standards for safety and ethics.

“Candidly, most commercially available systems enabled by large language models aren’t yet technically mature enough to comply with our ethical AI principles, which is required for responsible operational use,” Hicks said. But she insisted that they could be made compliant over time through rigorous testing, examination, and oversight.

Overall responsibility for ensuring compliance with the department’s safety and ethical standards has been assigned to Task Force Lima, a team of some 400 specialists working under Martell’s supervision.

The task force was established to “develop, evaluate, recommend, and monitor the implementation of generative AI technologies across [the Defense Department] to ensure the department is able to design, deploy, and use generative AI technologies responsibly and securely,” Hicks said on Aug. 2 when announcing its launch.

As she and other senior officials explained, the task force’s primary initial mission will be to formulate the guidelines within which the various military commands can employ commercial AI tools for military use.

Navy Capt. Manuel Xavier Lugo, the task force commander, said the project will examine various generative AI models “in order for us to find the actual areas of [potential] employment of the technology so that we can go ahead and then start writing specific frameworks and guardrails for those particular areas of employment.”

 


Biden Issues Executive Order on AI Safety


December 2023
By Michael T. Klare

Responding to growing public anxiety over the potential dangers posed by the expanding use of artificial intelligence (AI), President Joe Biden issued an executive order on Oct. 30 intended to ensure the “safe, secure, and trustworthy” application of the powerful technology.

With Vice President Kamala Harris (R) looking on, U.S. President Joe Biden signs an executive order on advancing the safe, secure, and trustworthy development and use of artificial intelligence at the White House on Oct. 30. (Photo by Brendan Smialowski/AFP via Getty Images)

The order followed the public release of ChatGPT and other generative AI programs that are able to create text, images, and computer code comparable to that produced by humans. On occasion, these programs have suffused those materials with false and fabricated content, provoking widespread unease about their safety and reliability.

Other AI-enabled products used to identify possible criminal suspects also have been shown to produce inaccurate outcomes, raising concerns about racial and gender biases introduced when the systems were being “trained” by computer technicians.

To address such anxieties, the executive order mandates a wide variety of measures intended to bolster governmental oversight of the computer technology industry and to better protect workers, consumers, and minority groups against the misuse of AI. Most of these measures apply to domestic industries and institutions, but some have a significant bearing on national security and arms control.

One of the order’s most consequential measures is a requirement that major tech firms such as Google, Microsoft, and OpenAI notify the federal government when developing any “foundational model”—a complex AI program such as the one powering ChatGPT—“that poses a serious risk to national security, national economic security, or national public health.” They must also share the results of all “red team” tests conducted by those firms, exercises designed to probe newly developed AI products and identify any hidden flaws or weaknesses.

Although the Oct. 30 order does not empower the government to block the commercialization of programs found to be deeply flawed in these tests, it might deter major institutional clients, including the U.S. Defense Department, from procuring such products, thereby prompting industry to place greater emphasis on safety and reliability.

Along similar lines, the order calls on the National Institute of Standards and Technology to establish rigorous standards for red-team testing of major AI programs before their release to the public. Compliance is not obligatory, but such standards are likely to be widely adopted within the industry. The same standards also will be applied by the departments of Energy and Homeland Security in addressing potential AI system contributions to “chemical, biological, radiological, nuclear, and cybersecurity risks.”

More closely related to national security and arms control is a measure intended to prevent the use of AI in engineering dangerous biological materials, a significant concern for those who fear the utilization of AI in the production of new, more potent biological weapons. Under the Biden order, strong new standards will be established for biological synthesis screening, and any agency that conducts life science research will have to abide by them as a condition of future federal funding.

Several other key provisions bear on national security in one way or another, but in recognition of the issue’s complexity, the order defers full consideration of AI’s impact on these issues to a separate national security memorandum to be developed by the White House National Security Council staff in the coming months. Once completed, this document will dictate how the U.S. military and intelligence communities “use AI safely, ethically, and effectively in their missions.”


Pentagon Plans Mass Autonomous Weapons Deployment


October 2023
By Michael T. Klare

The United States is unable to rely exclusively on existing human-operated weapons systems to prevail in a future war with China and will need to field vast numbers of autonomous weapons systems controlled by artificial intelligence (AI) to meet the challenge, according to Deputy Secretary of Defense Kathleen Hicks.

An XQ-58 Valkyrie aircraft launches for a test mission Aug. 22 at Eglin Air Force Base, Fla. According to the U.S. Air Force, the mission successfully tested components that greatly reduce the risk of large-scale crewed and uncrewed autonomous systems. (U.S. Air Force photo by 2nd Lt. Rebecca Abordo)

To ensure that sufficient numbers of these platforms, including drone ships, planes, and ground vehicles, will soon be available for battlefield use, Hicks on Aug. 28 announced a new Pentagon initiative, dubbed “Replicator,” to field “multiple thousands” of such systems “within the next 18 to 24 months.”

“Replicator is meant to help us overcome [China’s] biggest advantage, which is mass. More ships. More missiles. More people,” she said in a speech to the National Defense Industrial Association. By deploying thousands of autonomous weapons systems, the United States will “counter the [People’s Liberation Army’s] mass with mass of our own…[with] platforms that are small, smart, cheap, and many,” she said.

The new initiative, Hicks explained, represents a shift from the Pentagon’s historic emphasis on the acquisition of giant vessels and other major platforms that are “large, exquisite, expensive, and few.” Such systems are still needed, but must be augmented by hordes of “attritable,” or expendable, autonomous weapons, she said.

Pressed by reporters to provide more details about the new approach, Hicks gave a second speech about Replicator on Sept. 6 at the Defense News conference.

“Let me give you a window into the possibilities of all-domain, attritable autonomy,” she began, referring to technology she identified as ADA2. “Imagine distributed pods of self-propelled ADA2 systems afloat…packed with sensors aplenty…. Imagine constellations of ADA2 systems in orbit, flung into space scores at a time…. Imagine flocks of ADA2 systems flying at all sorts of altitudes, doing a range of missions.”

Some of these systems, Hicks said, will be designed for surveillance and intelligence gathering alone, and others will be armed in some fashion and designed for combat missions. She cautioned that, at least initially, this would not entail entirely new weapons projects, but rather the acceleration of programs already under development by the various military services.

Of these, the project that is furthest along in development and most likely to be designated a program of record, or established budget item, is the Air Force’s “collaborative combat aircraft.” Envisioned as a high-performance combat drone with substantial autonomous capabilities, this aircraft is intended to accompany manned aircraft on high-risk missions in contested airspace over or near Chinese or Russian territory.

The Air Force has been testing a project model, the XQ-58A Valkyrie, at Eglin Air Force Base in Florida. Built by Kratos, a San Diego-based maker of unmanned aircraft, the Valkyrie has been flown autonomously in simulated combat missions while under close human supervision. Future tests, scheduled for later this year, will involve increasing degrees of autonomous operation.

The Air Force requested $392 million for development of the collaborative combat aircraft in its fiscal year 2024 budget submission and expects to spend an additional $5.4 billion on its development over the next four years. No plans have yet been announced for serial production of the proposed aircraft, but this is one experimental project that might be accelerated under the Replicator initiative.

Other projects that are likely to receive Pentagon attention are the Navy’s plans for procurement of both large and medium-sized unmanned surface vessels. According to the Navy, these vessels will be used to help locate enemy ships and submarines for attack by manned vessels. (See ACT, May 2021.) Development of the vessels has proceeded slowly even though they were deemed a major service priority. Some $757 million was requested for their development during fiscal years 2022-2024, with no funding for procurement of operational vessels.

The slow, steady approach of the Air Force and Navy regarding autonomous weapons systems development and the similar approach being pursued by the Army conflict with Hicks’ pledge to field thousands of such devices by 2025. Without congressional approval of billions of dollars in additional spending and the adoption of a more rapid development timeline, industry commentators said it is difficult to imagine how these existing programs can be readied for combat in such a short time.

Many analysts worry that the software needed to drive these proposed autonomous weapons systems is not yet fully developed and, if rushed into use, could lead to catastrophic accidents. Trying to design the software for the collaborative combat aircraft while the aircraft itself has yet to be constructed is “dangerous,” said Brett Darcey, vice president of Shield AI, which makes aerial drones. Even when the software is designed, he added, “we must still test it enough to make sure that we trust it” and that it works seamlessly with the drone aircraft. “These things have to arrive at the same time, and we’re still years away there.”

Such doubts fuel concerns within the arms control and human rights communities that the rapid deployment of autonomous weapons systems, as proposed by Hicks, could lead to the loss of human control over battlefield operations and to unintended attacks on civilians. “You’re stepping over a moral line by outsourcing killing to machines, by allowing computer sensors rather than humans to take human life,” Mary Wareham of Human Rights Watch told The New York Times on Aug. 27. Her organization is pushing for international limits on autonomous weapons systems.

 


 
