U.S., Russia Oppose UN Resolutions on Military Use of AI
December 2025
By Michael T. Klare
Members of the UN General Assembly’s First Committee voted overwhelmingly to approve two resolutions calling for greater international scrutiny of the risks posed by the military use of AI, with Russia and the United States in notable opposition.

The two resolutions adopted Nov. 6 constitute pioneering efforts by UN member states to better comprehend the consequences of using AI for military purposes, especially in the nuclear realm.
One resolution, on “artificial intelligence in the military domain and its implications for international peace and security,” first introduced in 2024, addresses the larger picture of AI weaponization; it was approved 166-5 with five abstentions. The other, on “possible risks of integration of artificial intelligence into command, control and communication systems of nuclear weapons,” focuses on that particular aspect of the problem; it was approved 115-8 with 44 abstentions.
The measures have been submitted to the full General Assembly, which is almost certain to approve them by the end of the year.
The adoption of these proposals by the First Committee, which is responsible for security and disarmament affairs in the General Assembly, reflects growing international concern over the exploitation of AI for military purposes. (See ACT, November 2025.) Many experts have warned that without effective safeguards, AI-enabled systems could bypass or eliminate human control over the use of force, causing substantial death and destruction and possibly triggering the use of nuclear weapons. (See ACT, June 2024.) This, in turn, has sparked calls for the imposition of international controls on the use of AI in combat systems.
“Humanity’s fate cannot be left to an algorithm,” UN Secretary-General António Guterres told a Sept. 24 session of the UN Security Council devoted to AI and international security. “Humans must always retain authority over life-and-death decisions,” Guterres added.
The two resolutions incorporate these concerns and call on member states to work together in identifying the dangers posed by the military use of AI and in devising safeguards to avert those perils.
Resolution A/80/46 urges states to “pursue national, regional, subregional and global efforts to address the opportunities and challenges, including from humanitarian, legal, security, technological and ethical perspectives, related to the application of artificial intelligence in the military domain.” It also authorizes the secretary-general to organize a three-day gathering of member states in 2026 to exchange views on this topic and to consider next steps to address the dangers involved. The UN Office for Disarmament Affairs is charged with preparing a summary of these deliberations for consideration at the next General Assembly meeting, in fall 2026.
The United States voted in favor of a similar resolution in 2024, but this time voted “no,” along with Burundi, Israel, North Korea, and Russia. Explaining its vote, the U.S. delegation claimed that the resolution “risks starting down the unwelcome and unhelpful path of creating a global governance regime designed to institute centralized control over a critical technology.”
This outlook is consistent with President Donald Trump’s call for U.S. victory in what he has termed a “race” to achieve “global dominance in artificial intelligence.” It also reflects claims by leaders of the commercial tech industry that international controls on AI, however mild, pose a threat to their unbridled development of advanced AI models.
Consistent with this outlook, the United States also voted “no” on the second resolution, A/80/56, concerning the potential dangers arising from the integration of AI into nuclear command, control, and communications (NC3) systems.
This measure represents the General Assembly’s first attempt to address these risks. Many experts, including former military officials, have warned that the unrestrained integration of AI into NC3 systems could result in the “poisoning” of nuclear decision-making by false or corrupted data, leading to hasty or misguided nuclear launch decisions. (See ACT, September 2025.)
The resolution seeks to diminish this risk by encouraging member states to jointly explore the unique dangers created by the integration of AI into NC3 systems. It also calls on the nuclear-armed states to take immediate steps to ensure that humans, not machines, exercise ultimate control over the use of nuclear weapons.