
AI Commission Warns of Escalatory Dangers


March 2021
By Michael T. Klare

For the past two years, the National Security Commission on Artificial Intelligence (NSCAI), established by Congress, has been laboring to develop strategies for the rapid integration of artificial intelligence (AI) into U.S. military operations.

Inside the National Security Agency (NSA) and U.S. Cyber Command Integrated Cyber Center and Joint Operations Center. (Photo credit: NSA)

On Mar. 1, the commission is poised to deliver its final report to Congress and the White House. From the very start, the effort has been framed as essential to ensuring U.S. leadership in what is viewed as a competitive struggle with potential adversaries, presumably China and Russia, to weaponize advances in AI.

According to its charter, embedded in the National Defense Authorization Act of 2019, the NSCAI was enjoined to consider the “means and methods for the United States to maintain a technological advantage in artificial intelligence, machine learning, and other associated technologies related to national security and defense.”

To allow the public a final opportunity to weigh in on its findings, the NSCAI released a draft of its final report at the beginning of January and discussed it at a virtual plenary meeting Jan. 25. Three main themes emerge from the draft report and the public comments of the commissioners: (1) AI constitutes a “breakthrough” technology that will transform all aspects of human endeavor, including warfare; (2) the United States risks losing out to China and Russia in the competitive struggle to harness AI for military purposes, putting the nation’s security at risk; and (3) as a consequence, the federal government must play a far more assertive role in mobilizing the nation’s scientific and technical talent to accelerate the utilization of AI by the military.

The report exudes a distinct Cold War character in the degree to which it portrays AI as the determining factor in the outcome of future conflicts. Whereas the U.S.-Soviet Cold War centered on competition in nuclear-armed ballistic missiles, the NSCAI warns that a potential adversary—in this case, China—could overtake the United States in mastering the application of AI for military purposes.

“In the future, warfare will pit algorithm against algorithm,” the report states. “The sources of battlefield advantage will shift from traditional factors like force size and levels of armaments, to factors like superior data collection and assimilation, connectivity, computing power, algorithms, and system security.”

Although the United States enjoys some advantages in this new mode of warfare, the report argues, it risks losing out to China over the long run. “China is already an AI peer, and it is more technically advanced in some applications,” it asserts. “Within the next decade, China could surpass the United States as the world’s AI superpower.”

To prevent this from happening, the NSCAI report argues that the United States must accelerate its efforts to exploit advances in computer science for military purposes. As most of the nation’s computing expertise is concentrated in academia and the private sector, much of the report is devoted to proposals for harnessing that talent. But the report also addresses several issues of deep concern to the arms control community, notably autonomous weapons and nuclear escalation.

Claiming that autonomy will play a critical role in future military operations and that Russia and China, unlike the United States, cannot be relied on to follow ethical standards in the use of autonomous weapon systems on the battlefield, the commission rules out U.S. adherence to any binding international prohibition on the deployment of such systems.

In contrast to those in the human rights and arms control community who warn that fully autonomous weapons cannot be trusted to comply with the laws of war and international humanitarian law, the final report affirms that “properly designed and tested AI-enabled and autonomous weapon systems have been and can continue to be used in ways which are consistent” with international humanitarian law. A treaty banning the use of such systems, the NSCAI report contends, would deny the United States and its allies the benefit of employing such systems in future conflicts while having zero impact on its adversaries, as “commitments from states such as Russia or China likely would be empty ones.”

But in one area—the use of AI in battlefield decision-making—the report does express concern about the implications of its rapid weaponization.

“While the [c]ommission believes that properly designed, tested, and utilized AI-enabled and autonomous weapon systems will bring substantial military and even humanitarian benefit,” it states, “the unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability.”

This section of the report echoes points made by a group of arms control experts in an informal dialogue with commission representatives organized by the Arms Control Association on Nov. 24, 2020.

On that occasion, the arms control experts argued that excessive reliance on AI-enabled command-and-control systems in the heat of battle could result in precipitous escalatory actions, possibly leading to the early and unintended use of nuclear weapons. That danger would be compounded, they noted, if commanders on both sides relied on such systems for combat decision-making and the resulting velocity of battle exceeded the human capacity to comprehend the action and avert bad outcomes.

To prevent this from happening, the arms control experts insisted on the importance of retaining human control over all decisions involving nuclear weapons and called for the insertion of automated “tripwires” in advanced command-and-control systems to disallow escalatory moves without human approval.

Recognizing that these escalatory dangers are just as likely to arise from the automation of Chinese and Russian command-and-control systems, the arms control experts proposed that the United States and Russia address these risks in their future strategic security dialogues and that similar talks be conducted with China.

All of these recommendations were incorporated into the commission’s final report in one form or another.

Advisory panel pushes the U.S. military to accelerate work on AI-enabled systems but also calls for restraint measures.

U.S. Emerging Technologies Gain Support


January/February 2021

Reflecting a bipartisan consensus, U.S. lawmakers have authorized the Defense Department to accelerate the weaponization of emerging technologies, especially artificial intelligence (AI) and robotic, autonomous, and hypersonic weapons systems. The 2021 National Defense Authorization Act (NDAA) was enacted after Congress overrode a Dec. 23 presidential veto of the bill.

Marine Corps Gen. Michael Groen leads the Pentagon's Joint Artificial Intelligence Center, which saw its status upgraded by the 2021 defense authorization bill. (Photo: Cuong Le/U.S. Marine Corps)

To speed the utilization of AI by the military, for example, the act upgrades the status of the Pentagon’s Joint Artificial Intelligence Center by bringing it under the deputy secretary of defense and by investing the center’s director with authority for acquisition decisions. The act also calls on the Air Force to speed development of its Low-Cost Attritable Aircraft Technology program, intended to create an armed drone, or Skyborg, that can accompany piloted aircraft on high-risk missions over enemy territory. “Attritable,” in this case, means unmanned aircraft that can be attrited, or sacrificed in large numbers, to help defend piloted aircraft.

The omnibus appropriations bill, passed in late December, did not accede to all of the spending measures in the NDAA but did allocate substantial sums for the continuing development of cutting-edge systems, including $136 million for ground robotics, $259 million for large and medium-sized unmanned surface vehicles, and $1.2 billion for hypersonic missiles.—MICHAEL T. KLARE


A Strategy for Reducing the Escalatory Dangers of Emerging Technologies


December 2020
By Michael T. Klare

Throughout human history, military forces have sought to exploit innovations in science and technology to achieve success on the battlefield, very often fielding new technologies before societies could weigh the risks of doing so and impose controls on their use.

U.S. Air Force Chief of Staff Gen. David Goldfein speaks to the Air Force Association's Air, Space and Cyber Conference in September 2019. As great-power competition in cyberwarfare pushes the technology forward, there are risks that potential escalatory consequences are being ignored. (Photo: Wayne Clark/U.S. Air Force)

During World War I, for example, Germany exploited advances in chemical production to develop asphyxiating gases for military use, provoking widespread public outrage and prompting postwar efforts to ban such munitions. During World War II, the United States exploited advances in nuclear science to create the atomic weapons used against Japan, again generating public outrage and prompting postwar control efforts. Today, rapid advances in a range of scientific and technological fields—artificial intelligence (AI), robotics, cyberspace, remote sensing, and microelectronics—are again being exploited for military use. And, as before, control efforts are lagging far behind the process of weaponization.

As in those historical examples, the current lack of progress in devising control measures reflects the challenge of grappling with new and unfamiliar technologies. Some of these innovations, such as AI and cyberoperations, are said to pose a particularly severe challenge to arms control because they cannot be as easily quantified and monitored as other weapons limited by arms control agreements, such as intercontinental ballistic missiles (ICBMs). This lack of progress, however, also reflects a failure to grasp the unique ways in which the weaponization of cutting-edge technologies can imperil international peace and stability. To avoid a new round of global catastrophes, it is essential to identify the distinctive risks posed by the military use of destabilizing technologies and overcome the obstacles to their effective control.

Before considering such measures, it is necessary to examine the geopolitical and strategic setting in which the weaponization of these technologies is taking place, as well as the novel ways in which this process endangers international stability.

The Pursuit of Technological Superiority

The level of risk associated with the military exploitation of cutting-edge technologies cannot be separated from the geopolitical context in which this process is occurring, given that the principal enablers of such weaponization—China, Russia, and the United States—perceive themselves to be engaged in a competitive struggle for military advantage at a time when war among them is deemed entirely possible. Under these conditions, all three countries are enhancing their capacity for what the Pentagon calls “high end” warfare: all-out combat between the modern, well-equipped forces of major powers, combat that is expected to make use of every advance in military technology.

The U.S. military leadership first described this evolving environment in its National Defense Strategy of February 2018. “We face an ever more lethal and disruptive battlefield, combined across domains, and conducted at increasing speed and reach,” it stated. “The security environment is also affected by rapid technological advancements and the changing character of war. The drive to develop new technologies is relentless…and moving at accelerating speed.”1

If the United States is to retain its technological edge and prevail in future wars, the leadership asserted, it must master these new technologies and incorporate them into its major military systems.

A very similar outlook regarding the strategic environment is embedded in Chinese and Russian military doctrines. In language strikingly similar to that of the U.S. strategy, but in mirror image, China’s July 2019 white paper on national defense asserts that the United States “has provoked and intensified competition among major countries, significantly increased its defense expenditure, pushed for additional capacity in nuclear, outer space, cyber, and missile defense, and undermined global strategic stability.” If Chinese forces are to prevail in this environment, it states, “greater efforts have to be invested in military modernization to meet national security demands.”2 Russian doctrine makes similar claims and places equal emphasis on the utilization of emerging technologies to ensure success on the battlefield.3

The modernization and enhancement of front-line conventional forces are common themes in the military doctrines of all three countries, but so also is the modernization of strategic nuclear forces. All three are engaged in costly upgrades to their nuclear delivery systems, in some cases involving the replacement of older ICBMs, bombers, and missile-carrying nuclear submarines with newer, more capable versions. More worrisome still, all three are developing nuclear warheads for use in nonstrategic scenarios, for example, to defeat an overpowering conventional assault by an adversary. This is an explicit goal of the Nuclear Posture Review adopted by the Trump administration in February 2018,4 and it is believed to figure in Russian military doctrine as well. China is less transparent about its nuclear weapons policies but is known to have developed nuclear warheads for its medium- and intermediate-range ballistic missiles designed for use against U.S. and allied forces in the Asia-Pacific region.

The Eroding Nuclear Firebreak

In light of these developments, many analysts believe that the barriers to nuclear weapons use have been substantially eroded in recent years. Most of these obstacles were erected during the Cold War era, when leaders of the United States and the Soviet Union came to realize that any nuclear conflict between them would result in their mutual annihilation, impelling them to devise a variety of measures intended to prevent a conventional war from escalating across the “firebreak” separating non-nuclear from nuclear combat. These measures included the “hotline” agreement of 1963; successive limitations on the size of each other’s nuclear arsenals, beginning with the Strategic Arms Limitation Talks agreement in 1972; and the Intermediate-Range Nuclear Forces Treaty of 1987. In the language of the time, these measures were designed to preserve “strategic stability” by eliminating the risk of accidental, inadvertent, or unintended escalation across the nuclear firebreak.

In today’s strategic environment, however, analysts fear that strategic stability is being undermined by changes in the nuclear doctrines of the major powers and by the introduction of increasingly capable non-nuclear weapons. These developments include, on one hand, the adoption of policies envisioning the use of “tactical” or “nonstrategic” nuclear arms in response to overwhelming non-nuclear attack by an adversary, and, on the other, the deployment of sophisticated cyber and conventional weapons thought capable of locating and destroying an adversary's nuclear combat capabilities, especially its nuclear command, control, communications, and intelligence (C3I) systems. Also contributing to this environment of instability, analysts warn, is the dissolution of the arms control regime established by the two superpowers during the Cold War era and the emergence of India and Pakistan as major nuclear weapons powers.5

None of these countries would deliberately choose to initiate a nuclear exchange, recognizing that the costs in terms of homeland devastation would be prohibitive. Yet, they have adopted military doctrines that emphasize non-nuclear attacks on their adversary’s critical military assets—radars, missile batteries, command centers, and so on—at the very onset of a conflict. In most cases, these assets are primarily intended for conventional operations, but some also house nuclear C3I facilities or perform dual-use functions, both conventional and nuclear—a situation described by James M. Acton as “entanglement.” If these dual-use or co-located facilities come under attack, the target state might conclude this was the prelude to a nuclear strike and decide to launch its own nuclear munitions before they could be destroyed by its adversary’s incoming weapons. “Entanglement,” says Acton, “could lead to escalation because both sides in a U.S.-Chinese or U.S.-Russian conflict could have strong incentives to attack the adversary’s dual-use C3I capabilities to undermine its non-nuclear operations.”6

With all these countries fielding ever more capable conventional weapons and embracing nuclear policies that authorize the use of nuclear weapons in response to severe non-nuclear threats, the risk of such scenarios is bound to increase under any circumstances. Worse still, these dangers are being further amplified by the utilization of emerging technologies for military use. Such technologies pose an added threat to the durability of the nuclear firebreak by multiplying the types of non-nuclear attacks that can be launched on critical enemy assets and by increasing the vulnerability of nuclear C3I systems to non-nuclear attack.

The Risk of Nuclear Escalation

The pathways by which militarized emerging technologies could increase the risk of nuclear escalation fall into four broad areas.

First, increasingly capable air and naval autonomous weapons systems equipped with advanced sensors and AI processors could be deployed in self-directed “swarms” to find and destroy key enemy assets, such as surface ships and submarines, air defense radars, anti-air and anti-ship missiles, and major C3I facilities. To an adversary, such attacks could be interpreted as the prelude to a nuclear first strike, especially if they result in the destruction of nuclear C3I systems co-located with non-nuclear C3I facilities, prompting it to launch its own nuclear weapons for fear of losing them to enemy weapons.7

A Russian MiG-31 aircraft carries a Kinzhal hypersonic missile over Moscow's Victory Day parade in 2018. High-speed weapons like this, capable of carrying conventional or nuclear warheads, risk escalating conflicts as decision makers have little time to assess an ambiguous threat. (Photo: Kremlin.ru)

Second, multiple strikes by hypersonic missiles could be used early in a conflict to destroy key enemy assets like those described above, again leading the target state to fear that a nuclear strike is imminent and prompting it to launch its own nuclear arms. This danger is multiplied by the fact that the flight time of hypersonic missiles is extremely brief and that many of the weapons now being developed by the major powers are designed to carry either a nuclear or a conventional warhead. That ambiguity leaves a target country in doubt as to an attacker’s ultimate intentions, especially if key C3I facilities are degraded, preventing senior leaders from determining the nature of the attack and inclining them to assume the worst.8

Third, just before or at the very onset of a conflict, a belligerent could launch a cyberattack on its adversary’s early-warning and C3I systems, hoping thereby to degrade that country’s ability to resist a full-scale assault by conventional forces. Because many of these systems are also used to warn of a nuclear attack and to communicate with nuclear as well as conventional forces, the target country’s leaders might conclude they are facing an imminent nuclear attack and order the immediate launch of their own nuclear weapons.9

Fourth, as the speed and complexity of warfare increase, the major powers are coming to rely ever more heavily on AI-empowered machines to sort through sensor data on enemy movements, calculate enemy intentions, and select optimal responses. This increases the danger that humans will cede key combat decision-making tasks to machines that lack the capacity to gauge social and political context and are vulnerable to hacking, spoofing, and other failures, possibly leading them to propose extreme military responses to ambiguous signals and thereby causing inadvertent escalation. With machines controlling the action on both sides, this danger can only grow worse.10

These are some of the major pathways to escalation that are being created by the weaponization of emerging technologies, but other pathways of a similar nature have been identified in the academic literature and are likely to arise as these technologies are pressed into military service.11

How to Control Destabilizing Technologies

Until now, efforts to control the military use of emerging technologies have largely focused on three aspects of the problem: eliminating the danger that autonomous weapons systems will prove incapable of distinguishing between combatants and noncombatants in contested urban areas, leading to unnecessary harm to the latter; ensuring that cyberspace is not used for attacks on critical military and civilian infrastructure; and guaranteeing the reliability, safety, and unbiased nature of AI-empowered military systems.

These endeavors, each valuable in its own way, have resulted in some important if modest gains. Efforts to curb the deployment of autonomous weapons systems, also called “killer robots,” have yet to result in the adoption of a legally binding international ban on such munitions under the auspices of the Convention on Certain Conventional Weapons (CCW); however, some two dozen states are now calling for negotiations leading to such a ban outside of the CCW framework.12 UN discussions on rules governing cyberspace have produced agreement on certain bedrock principles of noninterference in a state’s critical cyberinfrastructure, but no binding obligations.13 Finally, concerns over the military use of AI have spurred the U.S. Department of Defense to adopt a set of principles for the ethical and responsible use of AI-empowered systems,14 but other countries have yet to follow suit, and it is unclear how the Pentagon principles will be implemented.

None of these measures, valuable as they are, addresses the additive role of emerging technologies in increasing the risk of nuclear escalation. If this, the most critical aspect of the military-technological revolution, is to be brought under effective international control, a more targeted set of measures will be required. These must focus specifically on those applications of emerging technologies that increase the risk that a conventional conflict results in the accidental or unintended use of nuclear weapons by one side or another.

A focused strategy must span a variety of technologies and will require many components, so it cannot be encompassed in a single agreement. Rather, what is needed is a framework strategy aimed at restricting the military use of those technologies deemed most threatening to strategic stability. Recognizing that implementing all the components of such a strategy will prove difficult in the current political environment, the framework must envision a succession of steps aimed at imposing increasingly specific and meaningful restrictions on destabilizing technologies. Drawing on the toolbox of measures developed by arms control practitioners over decades of experience and experimentation, as well as proposals advanced by other experts in the field,15 such a strategy should be composed of the following elements, in an approximate order of implementation.

Awareness-Building. This would include efforts to highlight the additive risks to nuclear stability posed by the weaponization of emerging technologies. Some important research has already been conducted on these dangers, but more work is needed to identify the escalatory risks inherent in the weaponization of emerging technologies and to make the results widely known.

Additional effort is needed to bring these findings to the attention of policymakers. An important start to such endeavors was made by the German Foreign Ministry in November 2020 with its virtual conference titled “Capturing Technology, Rethinking Arms Control.” At the conclusion of this event, the foreign ministers of the Czech Republic, Finland, Germany, the Netherlands, and Sweden issued a joint proclamation expressing their concern over the “mounting risks for international peace and stability created by the potential misuse of new technologies.”16 More such events, involving a wider spectrum of nations, would help raise awareness of these dangers. In the United States, for example, Congress should be encouraged to hold hearings on the destabilizing impacts of certain emerging technologies.

Track 2 and Track 1.5 Diplomacy. Government officials from China, Russia, and the United States are barely speaking to each other about strategic nuclear matters, let alone about the dangers posed by the weaponization of emerging technologies. In the absence of such official discourse, it is imperative that scientists, technicians, and arms control experts from these countries meet in neutral settings to assess the additive risks to nuclear stability posed by the weaponization of these technologies and to devise practical measures for their regulation and control. Building on the experience of the Pugwash organization in assembling arms control experts from many nations, such meetings could, for example, evaluate measures for controlling or limiting the deployment of hypersonic missiles or the use of cyberspace for attacks on enemy C3I systems.

Ideally, such Track 2 (nongovernmental) consultations can be followed by Track 1.5 meetings, in which government advisers and former government officials also participate, lending them greater authority and helping to ensure that any proposals developed at such gatherings will be given consideration at higher levels and form the basis for future formal arrangements.

Strategic Stability Talks. Before governments can even begin to consider formal arrangements to curb the deployment of destabilizing technologies, senior officials must become more familiar with the nature of these technologies and the significant risks they pose; even more essential, officials on all sides must come to understand how their adversaries view these risks. The best way to do this, many experts agree, is to convene a series of “strategic stability talks,” composed of government officials, military officers, and technical experts, who together can build on the work begun under Track 2 and Track 1.5 diplomacy by further assessing the dangers posed by the weaponization of destabilizing technologies and devising measures to restrict or control the technologies in question.

Some preliminary efforts of this sort have occurred under the auspices of the strategic security dialogue conducted by U.S. and Russian officials in recent years, albeit without achieving any concrete results.17 With a new, more arms control-friendly administration about to take office in Washington, one can hope that these talks will resume in a more serious and productive atmosphere, resulting in a thorough discussion of the mutual risks posed by the weaponization of emerging technologies and leading over time to concrete proposals for their regulation and control. Proposals have also been made to expand these bilateral talks to include Chinese participants or to organize a separate strategic security dialogue between the United States and China. Hopefully, this too can now be undertaken.

Unilateral Measures. Given the current state of international affairs, it could prove difficult for the United States and Russia, the United States and China, or all three to agree on formal measures for the control of especially destabilizing technologies. Yet, it may be possible for these states to adopt unilateral measures in the hope that they will induce parallel steps by their adversaries and eventually lead to binding bilateral and multilateral agreements. Experts in the field have suggested several areas where this would be desirable and practical. In the cyberspace realm, for example, Acton has called on governments to adopt a “risk-averse” policy under which they insert barriers against the inadvertent or precipitous initiation of attacks on an enemy’s nuclear C3I.18 A similar approach could be extended to AI-empowered command decision-support systems to limit the risk of mutual interference and inadvertent escalation.

Greater effort could also be made by the Pentagon and the military organizations of other states to adopt, refine, and enforce guidelines for the safe and ethical utilization of AI for military purposes. The Defense Department took an important first step in this direction with its February 2020 announcement of a set of principles governing the military use of AI, but additional measures are needed to ensure that these principles are fully implemented. The militaries of other countries should also adopt principles of this sort and ensure full compliance with them. As part of these endeavors, military contractors must be made aware of their obligation to comply with such principles and military officers will have to be trained to use AI-empowered weapons in a safe and ethical manner.

Bilateral and Multilateral Arrangements. Once the leaders of the major powers come to appreciate the escalatory risks posed by the weaponization of emerging technologies, it may be possible for them to reach accord on bilateral and multilateral arrangements intended to minimize these risks. Such accords could begin with nonbinding agreements of various sorts and, as trust grows, could be followed by binding treaties and arrangements. To help build trust, moreover, the major powers could engage in confidence-building measures of various sorts, such as exchanges of information on ethical standards and protocols for delegating decision-making authority to machines.19

As an example of a useful first step, the leaders of the major nuclear powers could jointly pledge to eschew cyberattacks against each other’s nuclear C3I systems. “While such an agreement would not be verifiable in the traditional sense,” says Acton, it would be “enforceable” in that each state would possess the ability to detect and retaliate against such an intrusion.20 Some analysts have also proposed that states agree to abide by a code of conduct governing the military use of AI, incorporating many of the principles contained in the Defense Department’s roster of principles. In particular, such measures should require that humans retain ultimate control over all instruments of war, including autonomous weapons systems and computer-assisted combat decision-support devices.21

If the major powers are prepared to discuss binding restrictions on the military use of destabilizing technologies, certain measures should take priority. The first would be an agreement or agreements prohibiting attacks on the nuclear C3I systems of another state by cyberspace means or via missile strikes, especially hypersonic strikes. Another top priority would be measures aimed at preventing swarm attacks by autonomous weapons on another state’s missile submarines, mobile ICBMs, and other second-strike retaliatory systems. Strict limitations should also be imposed on the use of automated decision-support systems with the capacity to inform or initiate major battlefield decisions, including a requirement that humans exercise ultimate control over such devices. In negotiating these accords, the progress made in earlier stages of this progression, from Track 2 and Track 1.5 diplomacy to strategic stability talks and nonbinding measures, will help policymakers devise practical agreements to achieve these ends.

Without the adoption of measures such as these, cutting-edge technologies will be converted into military systems at an ever-increasing tempo, and the dangers to world security will grow apace. These perils are inseparable from the larger context of mutual antagonisms and arms racing among the major powers: the weaponization of emerging technologies is being rushed because these states seek every possible advantage in any war that might arise among them, and only a relaxation in these great-power tensions will make it possible to address the full spectrum of nuclear dangers. A more thorough understanding of the distinctive threats to strategic stability posed by certain destabilizing technologies and the imposition of restraints on their military use would go a long way toward reducing the risks of Armageddon.

 

ENDNOTES

1. “Summary of the 2018 National Defense Strategy of the United States,” U.S. Department of Defense, n.d., https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf (emphasis in original).

2. State Council Information Office of the People’s Republic of China, “China’s National Defense in the New Era,” July 2019, http://english.www.gov.cn/atts/stream/files/5d3943eec6d0a15c923d2036.

3. See Roger McDermott, “Russia’s Military Scientists and Future Warfare,” Eurasia Daily Monitor, June 5, 2019, https://jamestown.org/program/russias-military-scientists-and-future-warfare/.

4. U.S. Department of Defense, “Nuclear Posture Review 2018,” February 2018, https://media.defense.gov/2018/Feb/02/2001872886/-1/-1/1/2018-NUCLEAR-POSTURE-REVIEW-FINAL-REPORT.PDF.

5. See Steven E. Miller, “A Nuclear World Transformed: The Rise of Multilateral Disorder,” Dædalus, Vol. 149, No. 2 (Spring 2020), pp. 17–36.

6. See James M. Acton, “Escalation Through Entanglement,” International Security, Vol. 43, No. 1 (Summer 2018), pp. 56–99.

7. Michael T. Klare, “Autonomous Weapons Systems and the Laws of War,” Arms Control Today, March 2019, pp. 6–12.

8. Michael T. Klare, “An ‘Arms Race in Speed’: Hypersonic Weapons and the Changing Calculus of Battle,” Arms Control Today, June 2019, pp. 6–13.

9. Michael T. Klare, “Cyber Battles, Nuclear Outcomes? Dangerous New Pathways to Escalation,” Arms Control Today, November 2019, pp. 6–13.

10. Michael T. Klare, “‘Skynet’ Revisited: The Dangerous Allure of Nuclear Command Automation,” Arms Control Today, April 2020, pp. 10–15.

11. See, for example, Christopher F. Chyba, “New Technologies & Strategic Stability,” Dædalus, Vol. 149, No. 2 (Spring 2020), pp. 150–170.

12. See Human Rights Watch (HRW), “Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control,” August 10, 2020, https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and.

13. See UN General Assembly (UNGA) Report A/70/174, Report of the Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, July 22, 2015, which was adopted by the full UNGA in Resolution 70/237 of December 23, 2015, Developments in the Field of Information and Telecommunications in the Context of International Security.

14. U.S. Department of Defense, “DOD Adopts Ethical Principles for Artificial Intelligence,” press release, February 24, 2020, https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.

15. For a review of the arms control “toolbox” and other proposals for controlling destabilizing technologies, see Jon Brook Wolfsthal, “Why Arms Control?” Dædalus, Vol. 149, No. 2 (Spring 2020), pp. 101–115. See also Giacomo Persi Paoli, Kerstin Vignard, David Danks, and Paul Meyer, Modernizing Arms Control: Exploring Responses to the Use of AI in Military Decision-Making (Geneva, Switzerland: UN Institute for Disarmament Research, 2020).

16. “Minister’s Declaration at the Occasion of the Conference ‘Capturing Technology, Rethinking Arms Control,’” November 6, 2020, https://rethinkingarmscontrol.de/wp-content/uploads/2020/11/Ministerial-Declaration-RAC2020.pdf.

17. Kingston Reif and Shannon Bugos, “No Progress Toward Extending New START,” Arms Control Today, July/August 2020, pp. 31–32.

18. James M. Acton, “Cyber Warfare & Inadvertent Escalation,” Dædalus, Vol. 149, No. 2 (Spring 2020), pp. 143–144.

19. See Michael C. Horowitz, Lauren Kahn, and Casey Mahoney, “The Future of Military Applications of Artificial Intelligence: A Role for Confidence-Building Measures?” Orbis, Fall 2020, pp. 527–543.

20. Acton, “Cyber Warfare & Inadvertent Escalation,” p. 145.

21. See, for example, Vincent Boulanin, Kolja Brockmann, and Luke Richards, Responsible Artificial Intelligence Research and Innovation for International Peace and Security (Stockholm: Stockholm International Peace Research Institute, 2020).


Michael T. Klare is a professor emeritus of peace and world security studies at Hampshire College and senior visiting fellow at the Arms Control Association. This article follows his four-part “Arms Control Tomorrow” series published in Arms Control Today in 2019 and 2020.

Reducing the risks of militarized emerging technologies will require technologically advanced nations to adopt a sequential, multipart framework strategy.

Esper Envisions ‘Killer Robot’ Navy


November 2020
By Michael T. Klare

The U.S. Navy of the future will be composed of as many unmanned, robotic ships as conventional vessels with human crews, Defense Secretary Mark Esper announced in an Oct. 6 address. Outlining his vision of “Battle Force 2045,” the Navy’s projected fleet a quarter-century from now, he said the naval lineup will consist of about 500 combat ships, of which up to 240 will be unmanned surface and subsurface vessels.

The prototype autonomous ship Sea Hunter is moored at Pearl Harbor in 2018. U.S. Defense Secretary Mark Esper recently announced a long-term vision for the Navy deploying a 500-ship fleet, nearly half of which could be autonomous ships. (Photo: Nathan Laird/U.S. Navy)

These robot ships “will perform a wide range of missions, from resupply to surveillance, to mine-laying and missile strikes,” said Esper in remarks at the Center for Strategic and Budgetary Assessments. They will do so, moreover, “at an affordable cost in terms both of sailors and dollars.”

Traditionally, U.S. aircraft carriers and their accompanying ensemble of cruisers and destroyers, manned by large crews of sailors and fliers, have symbolized U.S. military might, but such large capital ships have become increasingly costly to build and operate. Furthermore, in this new era of great-power competition and tension, carrier-centric flotillas are becoming dangerously vulnerable to enemy anti-ship missiles. To address these challenges, the Navy envisions a force comprising small numbers of large manned vessels accompanied by large numbers of small unmanned ships. Such a fleet, it is argued, will be far less costly than one composed exclusively of manned vessels and can be deployed in highly contested areas with less concern about the loss of any individual ship.

To make this dream possible, the Navy plans to invest billions of dollars in the development and procurement of three types of unmanned warships: a Medium Unmanned Surface Vessel (MUSV), a Large Unmanned Surface Vessel (LUSV), and an Extra-Large Unmanned Undersea Vessel (XLUUV). The MUSV is intended as a combat-ready variant of the Sea Hunter prototype first put to sea in 2016. The LUSV, thought to be a militarized version of a commercial oil rig servicing vessel, is being developed by the Pentagon’s Strategic Capabilities Office. The XLUUV, derived from the Echo Voyager diesel-electric submersible, is being built by Boeing. In its budget request for fiscal year 2021, the Defense Department requested $580 million for development work on all three systems. It expects to spend $4.2 billion over the next five years to complete development work and begin procurement of combat-ready vessels.

The Navy hopes to save money in this mammoth undertaking by using commercial technology when designing the hulls and propulsion systems for these new types of warships. But it still faces a formidable challenge in equipping the ships with automated command-and-control systems that would allow them to operate autonomously for long periods and carry out complex military functions with little or no human oversight.

The artificial intelligence systems needed to make this possible have yet to be perfected, and many analysts worry that, in a highly contested environment with extensive electronic jamming, such ships could “go rogue” and initiate combat operations that have not been authorized by human commanders, with unforeseen but dangerous consequences. (See ACT, March 2018.)

Autonomous ships could someday compose half of the U.S. Navy, raising concerns over adequate human oversight.

U.S., Russia Boost Shows of Force


July/August 2020
By Michael Klare

As tensions between the United States and Russia have intensified, both nations have engaged in airborne “show of force” operations intended to demonstrate their intent to resist intimidation and defend their territories. Such operations can prove hazardous when the aircraft of one antagonist come perilously close to those of another, a phenomenon that has occurred on numerous occasions over the past few years. The recent maneuvers, however, appear to have raised the stakes, as the two rivals have increased their use of nuclear-capable aircraft in such operations and have staged them in militarily sensitive areas.

A U.S. F-22 aircraft accompanies a Russian Tu-95 "Bear" bomber during an intercept near Alaska on June 16. (Photo: North American Aerospace Defense Command)

The pace and extent of recent air operations have exceeded anything since the end of the Cold War. The United States has flown a number of missions near Russia, in some cases sending strategic bombers to areas for the first time. These include (1) two missions in March and June by U.S. B-2 stealth bombers above the Arctic Circle in exercises intended to demonstrate NATO’s ability to attack Russian military forces located on the Kola Peninsula in Russia’s far north; (2) a first-time U.S. B-1B bomber flight on May 21 over the Sea of Okhotsk, a bay-like body of water surrounded on three sides by Russia’s far eastern territory; (3) a May 29 flight by two B-1B bombers across Ukrainian-controlled airspace for the first time, coming close to Russian-controlled airspace over Crimea; (4) a June 15 mission by two U.S. B-52 bombers over the Baltic Sea in support of a NATO exercise then under way, coming close to Russian airspace and prompting menacing flights by Russian interceptors in the area; and (5) a June 18 flight by two U.S. B-52 bombers over the Sea of Okhotsk, a first appearance there by that type of aircraft, again prompting Russia to scramble fighter aircraft to escort the U.S. bombers away from the area.

For its part, Russia conducted a March 12 flight of two nuclear-capable Tu-160 “Blackjack” bombers over Atlantic waters near Scotland, Ireland, and France from their base on the Kola Peninsula in Russia’s far north, prompting France and the United Kingdom to scramble interceptor aircraft. In addition, nuclear-capable Tu-95 “Bear” bombers, accompanied by Su-35 fighter jets, flew twice in June within a few dozen miles of the Alaskan coastline before being escorted away by U.S. fighter aircraft.

In conducting these operations, U.S. and Russian military leaders appear to be delivering two messages to their counterparts. First, despite any perceived reductions in military readiness caused by the coronavirus pandemic, they are fully prepared to conduct all-out combat operations against the other. Second, any such engagements could include a nuclear component at an early stage of the fighting.

“We have the capability and capacity to provide long-range fires anywhere, anytime, and can bring overwhelming firepower, even during the pandemic,” said Gen. Timothy Ray, commander of the U.S. Air Force Global Strike Command, the unit responsible for deploying nuclear bombers on long-range missions of this sort. Without saying as much, Russia has behaved in a similar manner. From his post as commander of U.S. air forces in Europe, Gen. Jeffrey Harrigian observed, “Russia has not scaled back air operations in Europe since the start of the coronavirus pandemic, and the number of intercepts of Russian aircraft [by NATO forces] has remained roughly stable.”

Leaders on both sides have been more reticent when it comes to the nuclear implications of these maneuvers, but there is no doubt that such considerations are on their minds. Ray’s talk of “overwhelming firepower” and “long-range fires” could be interpreted as referring to highly destructive conventional weapons, but when the aircraft involved are primarily intended for delivering nuclear weapons, it can have another meaning altogether.

Equally suggestive is Harrigian’s comment, made in conjunction with the B-52 flights over the Baltic Sea on June 15, that “long-range strategic missions to the Baltic region are a visible demonstration of our capability to extend deterrence globally,” again signaling to Moscow that any NATO-Russian engagement in the Baltic region could escalate swiftly to the nuclear level.

Russian generals have not uttered similar statements, but the dispatch of Tu-95 bombers to within a few dozen miles of Alaska, which houses several major U.S. military installations, is a loud enough message in itself.

Although receiving scant attention in the U.S. and international press, these maneuvers represent a dangerous escalation of U.S.-Russian military interactions and could set the stage for an incident involving armed combat between aircraft of the opposing sides. That alone could precipitate a major crisis and possible escalation. Just as worrisome are the strategic implications of these operations, which suggest a commitment to the early use of nuclear weapons in future major-power engagements.

The nuclear adversaries have recently increased flights of strategic bombers near each other’s borders.
