"I find hope in the work of long-established groups such as the Arms Control Association...[and] I find hope in younger anti-nuclear activists and the movement around the world to formally ban the bomb."
Guterres Calls for Autonomous Weapons Controls
October 2024
By Michael T. Klare and Xiaodon Liang
UN Secretary-General António Guterres reaffirmed the need for urgent action to preserve human control over the use of force in a new report that he said reflects “widespread recognition” by UN member-states of the potential harmful effects of lethal autonomous weapons systems.
Released on Aug. 6, the report summarizes the views of 73 states and 33 civil society organizations, including Human Rights Watch and the Arms Control Association, on the challenges posed by autonomous weapons systems. It was mandated by a UN General Assembly resolution passed in December. (See ACT, December 2023.)
“There is widespread concern that those weapons systems have the potential to change warfare significantly and may strain or even erode existing legal frameworks,” Guterres said in the conclusion section of the report.
He also found “widespread recognition” that human control is essential to ensure accountability, responsibility, and compliance with international law and a “strong sense that time is running out for the international community to take preventive action on this issue.”
Guterres stressed “the need to act urgently to preserve human control over the use of force” and called for the conclusion by 2026 of a “legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight and that cannot be used in compliance with international humanitarian law.”
“The autonomous targeting of humans by machines is a moral line that must not be crossed,” he said.
Although Guterres emphasized common ground, there are signs of continued international division over potential responses to the lethal autonomous weapons challenge. In a section of the report summarizing state submissions, Guterres highlighted differences between states that favored continuing discussion of such responses solely within the framework of the Convention on Certain Conventional Weapons (CCW) and states that preferred advancing discussions toward a legally binding treaty in other venues, such as the UN General Assembly.
In its analysis of the report, the Stop Killer Robots advocacy organization said that, of the 58 submissions from states or groups of states, 47 states expressed support for some form of prohibition or regulation of lethal autonomous weapons systems. But many prominent countries remain opposed to or are undecided on a potential treaty regulating these weapons, including China, India, Israel, Japan, Russia, the United Kingdom, and the United States, it said.
Many of the positions identified by Guterres also were debated at two recent international gatherings on lethal autonomous weapons systems.
The first of these was a session of the CCW group of governmental experts held in Geneva on Aug. 26-30. It was convened in accordance with a 2023 decision by the CCW states-parties instructing the experts group to extend its work until 2026 “to further consider and formulate, by consensus, a set of elements of an instrument…to address emerging technologies in the area of lethal autonomous weapon systems.”
At the Geneva meeting, delegations debated elements of a legally binding measure to ban or restrict the use of lethal autonomous weapons systems. Much of the debate concerned terminology, particularly the “characterization,” or definition, of the systems and the degree of human oversight required to prevent such devices from violating international humanitarian law.
Many delegations expressed support for legally binding restrictions on the deployment of these systems, but several states, notably India, Israel, Russia, and the United States, opposed overly restrictive language on their use. Because decisions at the experts group meetings are made by consensus, very little was agreed on at the August session, prompting many delegations to express support for considering a binding instrument in the UN General Assembly.
The consensus procedure at the experts group “should not be an excuse to defeat the purpose of regulating the issue of [lethal autonomous weapons systems] through delays or procrastinations,” the Brazilian delegation said. Instead, “all avenues should be explored in order to achieve the best possible instrument, with the largest number of adherents, and that is to be able to be put into force the soonest.”
The debate continued Sept. 9-10 in Seoul at the REAIM (Responsible AI in the Military Domain) summit. Organized by South Korea, the meeting drew representatives from 90 countries, as well as academics, scientists, and other specialists. This was the second REAIM summit, following an initial event at The Hague in February 2023.
The REAIM summit process differs from the CCW experts group meetings because it addresses the full range of artificial intelligence (AI) applications to military operations, not just the use of AI in lethal autonomous weapons systems, and because it entertains only nonbinding, voluntary measures. (See ACT, April 2023.)
At the summit, panelists considered the risks and benefits of employing AI in the military domain, with some speakers arguing that AI could be used to reduce, as well as increase, the risks of humanitarian harm. Panel discussions also focused on possible strategies for minimizing the negative consequences of applying AI to military use, especially with respect to the proliferation or unintended use of weapons of mass destruction.
Following a ministerial roundtable on the second day of the summit, 61 countries signed a “blueprint for action” to guide future efforts in this field.
Noting that AI can be used for benign as well as malicious purposes, the blueprint calls on states “to establish national strategies, principles, standards and norms, policies and frameworks and legislation as appropriate to ensure responsible AI applications in the military domain.”
Such measures could include, for example, “appropriate safeguards to reduce the risk of malfunctions or unintended consequences, including from data, algorithmic, and other biases.”
Despite the nonbinding nature of the blueprint and its relatively unrestrictive proposals, 30 countries, including China, did not sign the document. Chinese Foreign Ministry spokesperson Mao Ning did not explain Beijing’s reasons for abstaining, but said that “China will remain open and constructive in working with other parties” on the issue of AI regulation.
The next major venue for addressing international constraints on the development and deployment of lethal autonomous weapons systems will be the First Committee of the UN General Assembly in October in New York.