
Autonomous Weapons Systems and the Laws of War

Reducing human oversight of weapons systems offers attractive advantages to world military powers, but it also raises unsettling moral, ethical, and legal concerns.


March 2019
By Michael T. Klare

It may have been the strangest christening in the history of modern shipbuilding. In April 2016, the U.S. Navy and the Defense Advanced Research Projects Agency (DARPA) celebrated the initial launch of Sea Hunter, a sleek, 132-foot-long trimaran that one observer aptly described as “a Klingon bird of prey.” More unusual than its appearance, however, is the size of its permanent crew: zero.

The Anti-Submarine Warfare Continuous Trail Unmanned Vehicle (ACTUV) was christened Sea Hunter April 7, 2016, in Portland, Ore. The Defense Advanced Research Projects Agency ordered the prototype vessel as part of the agency's efforts to develop autonomous, unmanned weapon systems. (Photo: DARPA)

Originally designated by DARPA as the Anti-Submarine Warfare Continuous Trail Unmanned Vehicle (ACTUV), Sea Hunter is designed to travel the oceans for months at a time with no onboard crew, searching for enemy submarines and reporting their locations and other findings to remote human operators. If this concept proves viable (Sea Hunter recently completed a round trip from San Diego, Calif., to Pearl Harbor, Hawaii, with no crew), swarms of ACTUVs may be deployed worldwide, some capable of attacking submarines on their own, in accordance with sophisticated algorithms.

The launching of Sea Hunter and the development of software and hardware allowing it to operate autonomously on the high seas for long stretches of time are the product of a sustained drive by senior Navy and Pentagon officials to reimagine the future of naval operations. Rather than deploy combat fleets composed of large, well-equipped, and extremely expensive major warships, the Navy will move toward deploying smaller numbers of crewed vessels accompanied by large numbers of unmanned ships. “ACTUV represents a new vision of naval surface warfare that trades small numbers of very capable, high-value assets for large numbers of commoditized, simpler platforms that are more capable in the aggregate,” said Fred Kennedy, director of DARPA’s Tactical Technology Office. “The U.S. military has talked about the strategic importance of replacing ‘king’ and ‘queen’ pieces on the maritime chessboard with lots of ‘pawns,’ and ACTUV is a first step toward doing exactly that.”

The Navy is not alone in exploring future battle formations involving various combinations of crewed systems and swarms of autonomous and semiautonomous robotic weapons. The Air Force is testing software to enable fighter pilots to guide accompanying unmanned aircraft toward enemy positions, whereupon the drones will seek and destroy air defense radars and other key targets on their own. The Army is testing an unarmed robotic ground vehicle, the Squad Multipurpose Equipment Transport (SMET), and has undertaken development of a Robotic Combat Vehicle (RCV). These systems, once fielded, would accompany ground troops and crewed vehicles in combat, reducing U.S. soldiers’ exposure to enemy fire. Similar endeavors are under way in China, Russia, and a score of other countries.1

For advocates of such scenarios, the development and deployment of autonomous weapons systems, or “killer robots,” as they are often called, offer undeniable advantages in combat. Comparatively cheap and able to operate 24 hours a day without tiring, the robotic warriors could help reduce U.S. casualties. When equipped with advanced sensors and artificial intelligence (AI), moreover, autonomous weapons could be trained to operate in coordinated swarms, or “wolfpacks,” overwhelming enemy defenders and affording a speedy U.S. victory. “Imagine anti-submarine warfare wolfpacks,” said former Deputy Secretary of Defense Robert Work at the christening of Sea Hunter. “Imagine mine warfare flotillas, distributed surface-warfare action groups, deception vessels, electronic warfare vessels”—all unmanned and operating autonomously.

Although the rapid deployment of such systems appears highly desirable to Work and other proponents of robotic systems, their development has generated considerable alarm among diplomats, human rights campaigners, arms control advocates, and others who fear that deploying fully autonomous weapons in battle would severely reduce human oversight of combat operations, possibly resulting in violations of the laws of war, and could weaken barriers that restrain escalation from conventional to nuclear war. For example, would the Army’s proposed RCV be able to distinguish between enemy combatants and civilian bystanders in a crowded urban battle space, as required by international law? Might a wolfpack of sub hunters, hot on the trail of an enemy submarine carrying nuclear-armed ballistic missiles, provoke the captain of that vessel to launch its weapons to avoid losing them to a presumptive U.S. pre-emptive strike?

These and other such questions have sparked a far-ranging inquiry into the legality, morality, and wisdom of deploying fully autonomous weapons systems. This debate has gained momentum as the United States, Russia, and several other countries have accelerated their development of such weapons, each claiming they must do so to prevent their adversaries from gaining an advantage in these new modes of warfare. Concerned by these developments, some governments and a coalition of nongovernmental organizations, under the banner of the Campaign to Stop Killer Robots, have sought to ban their deployment altogether.

Ever-Increasing Degrees of Autonomy

Autonomous weapons systems are lethal devices that have been empowered by their human creators to survey their surroundings, identify potential enemy targets, and independently choose to attack those targets on the basis of sophisticated algorithms. Such systems require the integration of several core elements: a mobile combat platform, such as a drone aircraft, ship, or ground vehicle; sensors of various types to scrutinize the platform’s surroundings; processing systems to classify objects discovered by the sensors; and algorithms directing the platform to initiate attack when an allowable target is detected. The U.S. Department of Defense describes an autonomous weapons system as a “weapons system that, once activated, can select and engage targets without further intervention by a human operator.”2
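To make that definition concrete, the decision logic it implies can be sketched in a few lines of code. The fragment below is a purely conceptual illustration of the sense-classify-decide cycle described above; every name in it is hypothetical, it is not drawn from any actual weapons system, and it omits all of the hard parts (sensor fusion, uncertainty, rules of engagement).

```python
# Conceptual sketch only: the "select and engage without further intervention"
# loop implied by the definition above. All names are hypothetical.

ALLOWED_TARGET_CLASSES = {"radar_emitter", "armored_vehicle"}  # pre-approved target types
CONFIDENCE_THRESHOLD = 0.95  # below this, the system defers to a human

def control_loop(sensors, classifier, platform, operator_link):
    """One pass of a notional autonomous engagement cycle."""
    for detection in sensors.scan():                        # survey the surroundings
        label, confidence = classifier.classify(detection)  # identify the object
        if label in ALLOWED_TARGET_CLASSES and confidence >= CONFIDENCE_THRESHOLD:
            platform.engage(detection)                      # act without further human input
        else:
            operator_link.refer(detection, label, confidence)  # keep a human in the loop
```

Where a given system sits on the autonomy spectrum discussed below depends largely on how much of this loop is allowed to run without reaching the final branch, that is, without referral to a human operator.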

The U.S. Army is testing the Squad Multipurpose Equipment Transport vehicle, designed to unburden infantry personnel from carrying supplies. Future versions may feature more autonomy and front-line capabilities. (Image: U.S. Army)

Few weapons in active service presently exhibit all of these characteristics. Many militaries employ close-in naval defense weapons such as the U.S. Phalanx gun system that can fire autonomously when a ship is under attack by enemy planes or missiles. Yet, such systems cannot search for and strike enemy assets on their own, and human operators are always present to assume control if needed.3 Many air-to-air and air-to-ground missiles are able to attack human-selected targets, such as planes or tanks, but cannot hover or loiter to identify potential threats. One of the few systems to possess this capability is Israel’s Harpy airborne anti-radiation drone, which can loiter for several hours over a designated area to search for and destroy enemy radars.4

Autonomy, then, is a matter of degree, with machines receiving ever-increasing capacity to assess their surroundings and decide what to strike and when. As described by the U.S. Congressional Research Service, autonomy is “the level of independence that humans grant a system to execute a given task.” Autonomy “refers to a spectrum of automation in which independent decision-making can be tailored for a specific mission.” Put differently, autonomy refers to the degree to which humans are taken “out of the loop” of decision-making, with AI-empowered machines assuming ever-greater responsibility for critical combat decisions.

This emphasis on the “spectrum of automation” is important because, for the most part, nations have yet to deploy fully autonomous weapons systems on the battlefield. Under prevailing U.S. policy, as enshrined in a November 2012 Defense Department directive, “autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” Yet, the United States, like other countries, evidently is developing and testing weapons that would allow for ever-diminishing degrees of human control over their future use.

The U.S. Army has devised a long-term strategy for the development of robotic and autonomous systems (RAS) and their integration into the combat force. To start, the Army envisions an evolutionary process under which it will first deploy unarmed, unmanned utility vehicles and trucks, followed by the introduction of armed robotic vehicles with ever-increasing degrees of autonomy. “The process to improve RAS autonomy,” the Army explained in 2017, “takes a progressive approach that begins with tethered systems, followed by wireless remote control, teleoperation, semi-autonomous functions, and then fully autonomous systems.”5

Toward this end, the Army is proceeding to acquire the SMET, an unmanned vehicle designed to carry infantry combat supplies for up to 60 miles over a 72-hour period. In May 2018, the Army announced that it would begin field-testing four prototype SMET systems, with an eye to procuring one such design in large numbers. It will then undertake development of an RCV for performing dangerous missions at the front edge of the battlefield.6

Similarly, the U.S. Navy is pursuing prototype systems such as Sea Hunter and the software allowing them to operate autonomously for extended periods. DARPA is also testing unmanned underwater vehicles (UUVs)—miniature submarines that could operate for long periods of time, searching for enemy vessels and attacking them under certain predefined conditions. The Air Force is developing advanced combat drones capable of operating autonomously if communications with human operators are lost when flying in high-threat areas.

Other nations also are pursuing these technologies. Russia, for example, has unveiled several unmanned ground vehicles, including the Uran-9 small robotic tank and the Vikhr heavy tank; each can carry an assortment of guns and missiles and operate with some degree of autonomy. China reportedly is working on a range of autonomous and semiautonomous unmanned air-, ground-, and sea-based systems. Both countries have announced plans to invest in these systems with ever-increasing autonomy as time goes on.

An Arms Race in Autonomy?

In developing and deploying these weapons systems, the United States and other countries appear to be motivated largely by the aspirations of their own military forces, which see various compelling reasons for acquiring robotic weapons. For the U.S. Navy, it is evident that cost and vulnerability calculations are driving the push to acquire UUVs and unmanned surface vessels. Naval analysts believe that it might be possible to acquire hundreds of robotic vessels for the price of just one modern destroyer, and large capital ships are bound to be prime targets for enemy forces in any future military clash, whereas a swarm of robot ships would be more difficult to target and losing even a dozen of them would have little effect on the outcome of combat.7

The Army appears to be thinking along similar lines, seeking to substitute robots for dismounted soldiers and crewed vehicles in highly exposed front-line engagements.

These institutional considerations, however, are not the only drivers for developing autonomous weapons systems. Military planners around the world are fully aware of the robotic ambitions of their competitors and are determined to prevail in what might be called an “autonomy race.” For example, the U.S. Army’s 2017 Robotic and Autonomous Systems Strategy states, “Because enemies will attempt to avoid our strengths, disrupt advanced capabilities, emulate technological advantages, and expand efforts beyond physical battlegrounds…the Army must continuously assess RAS efforts and adapt.” Likewise, senior Russian officials, including President Vladimir Putin, have emphasized the importance of achieving pre-eminence in AI and autonomous weapons systems.

Arms racing behavior is a perennial concern for the great powers, because efforts by competing states to gain a technological advantage over their rivals, or to avoid falling behind, often lead to excessive and destabilizing arms buildups. A race in autonomy poses a particular danger because the consequences of investing machines with increased intelligence and decision-making authority are largely unknown and could prove catastrophic. In their haste to match the presumed progress of likely adversaries, states might field robotic weapons with considerable autonomy well before their abilities and limitations have been fully determined, resulting in unintended fatalities or uncontrolled escalation.

Supposedly, those risks would be minimized by maintaining some degree of human control over all such machines, but the race to field increasingly capable robotic weapons could result in ever-diminishing oversight. “Despite [the Defense Department’s] insistence that a ‘man in the loop’ capability will always be part of RAS systems,” the CRS noted in 2018, “it is possible, if not likely, that the U.S. military could feel compelled to develop…fully autonomous weapon systems in response to comparable enemy ground systems or other advanced threat systems that make any sort of ‘man in the loop’ role impractical.”8

Assessing the Risks

Given the likelihood that China, Russia, the United States, and other nations will deploy increasingly autonomous robotic weapons in the years ahead, policymakers must identify and weigh the potential risks of such deployments. These include not only the potential for accident and unintended escalation, as would be the case with any new weapons that are unleashed on the battlefield, but also a wide array of moral, ethical, and legal concerns arising from the diminishing role of humans in life-and-death decision-making.

The potential dangers associated with the deployment of AI-empowered robotic weapons begin with the fact that much of the technology involved is new and untested under the conditions of actual combat, where unpredictable outcomes are the norm. For example, it is one thing to test self-driving cars under controlled conditions with human oversight; it is another to let such vehicles loose on busy highways. If that self-driving vehicle is covered with armor, equipped with a gun, and released on a modern battlefield, no algorithm governing its actions, however well “trained,” can anticipate all the hazards and mutations of combat. In war, accidents and mishaps, some potentially catastrophic, are almost inevitable.

Extensive testing of AI image-classification algorithms has shown that such systems can easily be fooled by slight deviations from standardized representations—in one experiment, a turtle was repeatedly identified as a rifle9—and are vulnerable to trickery, or “spoofing,” as well as hacking by adversaries.
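The fragility described here is easy to reproduce with the fast gradient sign method, one standard way of generating adversarial examples (the turtle experiment cited above used a more elaborate 3D technique). The PyTorch sketch below is a generic illustration using placeholder inputs, not a reconstruction of that experiment; against a trained classifier, perturbations of this kind routinely change the predicted label.

```python
# Minimal fast gradient sign method (FGSM) sketch: nudge the input a small
# amount in the direction that increases the classifier's loss.
# Placeholder model and data; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input image
true_label = torch.tensor([3])

loss = F.cross_entropy(model(image), true_label)
loss.backward()

epsilon = 0.03  # perturbation budget, small enough to be hard to see
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```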

Former Navy Secretary Richard Danzig, who has studied the dangers of employing untested technologies on the battlefield, has been particularly outspoken in cautioning against the premature deployment of AI-empowered weaponry. “Unfortunately, the uncertainties surrounding the use and interaction of new military technologies are not subject to confident calculation or control,” he wrote in 2018.10

This danger is all the more acute because, on the current path, autonomous weapons systems will be accorded ever-greater authority to make decisions on the use of lethal force in battle. Although U.S. authorities insist that human operators will always be involved when life-and-death decisions are made by armed robots, the trajectory of the technology is toward an ever-diminishing human role in that capacity, pointing eventually to a time when humans are removed from such decisions entirely. This could occur as a deliberate decision, such as when a drone is set free to attack targets fitting a specified appearance (“adult male armed with gun”), or as a conditional matter, as when drones are commanded to fire at their discretion if they lose contact with human controllers. A human operator is involved in some sense, by launching the drones on those missions, but no human is ordering the specific lethal attack.

Maintaining Ethical Norms

This poses obvious challenges because virtually all human ethical and religious systems view the taking of a human life, whether in warfare or not, as an act of supreme moral consequence requiring some valid justification. Humans, however imperfect, are expected to abide by this principle, and most societies punish those who fail to do so. Faced with the horrors of war, humans have sought to limit the conduct of belligerents in wartime, aiming to prevent cruel and excessive violence. Beginning with the Hague Convention of 1899 and in subsequent agreements forged in Geneva after World War I, international jurists have devised a range of rules, collectively known as the laws of war, proscribing certain behaviors in armed conflict, such as the use of poisonous gas. Following World War II and revelations of the Holocaust, diplomats adopted additional protocols to the Hague and Geneva conventions intended to better define the obligations of belligerents in sparing civilians from the ravages of war, measures generally known as international humanitarian law.

So long as humans remain in control of weapons, in theory they can be held accountable under the laws of war and international humanitarian law for any violations committed when using those devices. What happens when a machine makes the decision to take a life and questions arise over the legitimacy of that action? Who is accountable for any crimes found to occur, and how can a chain of responsibility be determined?

These questions arise with particular significance regarding two key aspects of international humanitarian law: the requirements of distinction and proportionality in the use of force against hostile groups interspersed with civilian communities. Distinction requires warring parties to discriminate between military and civilian objects and personnel during the course of combat and to spare the latter from harm to the greatest extent possible. Proportionality requires militaries to apply no more force than needed to achieve the intended objective, while sparing civilian personnel and property from unnecessary collateral damage.11

Jody Williams (left), a Nobel Peace Laureate, and Noel Sharkey, the chair of the International Committee for Robot Arms Control, called for a ban on fully autonomous weapons in Parliament Square in London on April 23, 2013. The Campaign to Stop Killer Robots is calling for a pre-emptive ban on lethal robot weapons that could attack targets without human intervention. (Photo: Oli Scarff/Getty Images)

These principles pose a particular challenge to fully autonomous weapons systems because they require a capacity to make fine distinctions in the heat of battle. It may be relatively easy in a large tank-on-tank battle, for example, to distinguish military from civilian vehicles; but in many recent conflicts, enemy combatants have armed ordinary pickup trucks and covered them with tarpaulins, making them almost indistinguishable from civilian vehicles. Perhaps a hardened veteran could spot the difference, but an intelligent robot? Unlikely. Similarly, how does one gauge proportionality when attempting to attack enemy snipers firing from civilian-occupied tenement buildings? For robots, this could prove an insurmountable challenge.

Advocates and critics of autonomous weaponry disagree over whether such systems can be equipped with algorithms sufficiently adept to distinguish between targets to satisfy the laws of war. “Humans possess the unique capacity to identify with other human beings and are thus equipped to understand the nuances of unforeseen behavior in ways that machines, which must be programmed in advance, simply cannot,” analysts from Human Rights Watch (HRW) and the International Human Rights Clinic of Harvard Law School wrote in 2016.12

Another danger arises from the speed with which automated systems operate, along with plans for deploying autonomous weapons systems in coordinated groups, or swarms. The Pentagon envisions a time when large numbers of drone ships and aircraft are released to search for enemy missile-launching submarines and other critical assets, including mobile ballistic missile launchers. At present, U.S. adversaries rely on those missile systems to serve as an invulnerable second-strike deterrent to a U.S. disarming first strike. Should Russia or China ever perceive that swarming U.S. drones threaten the survival of their second-strike systems, those countries could feel pressured to launch their missiles when such swarms are detected, lest they lose their missiles to a feared U.S. first strike.

Strategies for Control

Ambassador Amandeep Singh Gill (center), chair of the Group of Governmental Experts on Lethal Autonomous Weapons Systems, speaks at a press conference in Geneva August 27, 2018. The group was established by the Convention on Certain Conventional Weapons to evaluate the risks of autonomous weapons systems and to develop regulatory strategies. (Photo: Violaine Martin/United Nations)

Since it first became evident that strides in AI would permit the deployment of increasingly autonomous weapons systems and that the major powers were seeking to exploit those breakthroughs for military advantage, analysts in the arms control and human rights communities, joined by sympathetic diplomats and others, have sought to devise strategies for regulating such systems or banning them entirely.

As part of that effort, parties to the Convention on Certain Conventional Weapons (CCW), a 1980 treaty that restricts or prohibits the use of particular types of weapons that are deemed to cause unnecessary suffering to combatants or to harm civilians indiscriminately, established a group of governmental experts to assess the dangers posed by fully autonomous weapons systems and to consider possible control mechanisms. Some governments also have sought to address these questions independently, while elements of civil society have entered the fray.

Out of this process, some clear strategies for limiting these systems have emerged. The first and most unequivocal would be the adoption under the CCW of a legally binding international ban on the development, deployment, or use of fully autonomous weapons systems. Such a ban could come in the form of a new CCW protocol, a tool used to address weapon types not envisioned in the original treaty, as happened with a 1995 ban on blinding laser weapons and a 1996 measure restricting the use of mines, booby traps, and other such devices.13 Two dozen states, backed by civil society groups such as the Campaign to Stop Killer Robots, have called for negotiating an additional CCW protocol banning fully autonomous weapons systems altogether.

Proponents of such a measure say it is the only way to avoid inevitable violations of international humanitarian law and that a total ban would help prevent the unintended escalation of conflict. Opponents argue that autonomous weapons systems can be made intelligent enough to overcome concerns regarding international humanitarian law, so no barriers should be placed on their continued development. As deliberations by CCW member states are governed by consensus, a few states with advanced robotic projects, notably Russia, the United Kingdom, and the United States, have so far blocked consideration of such a protocol.

Another proposal, advanced by representatives of France and Germany at the experts’ meetings, is the adoption of a political declaration affirming the principle of human control over weapons of war, accompanied by a nonbinding code of conduct. Such a measure, possibly in the form of a UN General Assembly resolution, would require human responsibility over fully autonomous weapons systems at all times to ensure compliance with the laws of war and international humanitarian law and would entail certain assurances to this end. The code could establish accountability for states committing any misdeeds with fully autonomous weapons systems in battle and require that such weapons retain enough human oversight that a malfunctioning device can be disabled. States could be required to subject proposed robotic systems to predeployment testing, in a thoroughly transparent fashion, to ensure they were compliant with these constraints.14

Those who favor a legally binding ban under the CCW claim this alternative would fail to halt the arms race in fully autonomous weapons systems and would allow some states to field weapons with dangerous and unpredictable capabilities. Others say a total ban may not be achievable and argue that a nonbinding measure of this sort is the best option available.

Yet another approach gaining attention is a concentrated focus on the ethical dimensions of fielding fully autonomous weapons systems. This outlook holds that international law and common standards of ethical practice ordain that only humans possess the moral capacity to justify taking another human’s life and that machines can never be vested with that power. Proponents of this approach point to the Martens clause of the Hague Convention of 1899, also inscribed in Additional Protocol I of the Geneva Conventions, stating that even when not covered by other laws and treaties, civilians and combatants “remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of human conscience.” Opponents of fully autonomous weapons systems claim that such weapons, by removing humans from life-and-death decision-making, inherently contradict the principles of humanity and the dictates of human conscience and so should be banned. Reflecting awareness of this issue, the Defense Department has reportedly begun to develop a set of guiding principles for the “safe, ethical, and responsible use” of AI and autonomous weapons systems by the military services.

Today, very few truly autonomous robotic weapons are in active combat use, but many countries are developing and testing a wide range of machines possessing high degrees of autonomy. Nations are determined to field these weapons quickly, lest their competitors outpace them in an arms race in autonomy. Diplomats and policymakers must seize this moment before fully autonomous weapons systems become widely deployed to weigh the advantages of a total ban and consider other measures to ensure they will never be used to commit unlawful acts or trigger catastrophic escalation.
 

ENDNOTES

1. For a summary of such efforts, see Congressional Research Service (CRS), “U.S. Ground Forces Robotics and Autonomous Systems (RAS) and Artificial Intelligence: Considerations for Congress,” R45392, November 20, 2018.

2. U.S. Department of Defense, “Autonomy in Weapons Systems,” directive no. 3000.09 (November 21, 2012).

3. For more information on the Aegis Combat System, see Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton, 2018).

4. For more information on the Harpy drone, see ibid.

5. U.S. Army Training and Doctrine Command, “The U.S. Army Robotic and Autonomous Systems Strategy,” March 2017, p. 3, https://www.tradoc.army.mil/Portals/14/Documents/RAS_Strategy.pdf.

6. Mark Mazzara, “Army Ground Robotics Overview: OSD Joint Technology Exchange Group,” April 24, 2018, https://jteg.ncms.org/wp-content/uploads/2018/04/02-PM-FP-Robotics-Overview-JTEG.pdf. See James Langford, “Lockheed Wins Army Contract for Self-Driving Military Convoy Systems,” Washington Examiner, July 30, 2018.

7. See David B. Larter, “U.S. Navy Moves Toward Unleashing Killer Robot Ships on the World’s Oceans,” Defense News, January 15, 2019.

8. CRS, “U.S. Ground Forces Robotics and Autonomous Systems (RAS) and Artificial Intelligence.”

9. Anish Athalye et al., “Fooling Neural Networks in the Physical World With 3D Adversarial Objects,” LabSix, October 31, 2017, https://www.labsix.org/physical-objects-that-fool-neural-nets/.

10. Richard Danzig, “Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority,” Center for a New American Security, June 2018, p. 5, https://s3.amazonaws.com/files.cnas.org/documents/CNASReport-Technology-Roulette-DoSproof2v2.pdf.

11. See CRS, “Lethal Autonomous Weapon Systems: Issues for Congress,” R44466, April 14, 2016.

12. Human Rights Watch and International Human Rights Clinic, “Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban,” December 2016, p. 5, https://www.hrw.org/sites/default/files/report_pdf/arms1216_web.pdf.

13. UN Office at Geneva, “The Convention on Certain Conventional Weapons,” n.d., https://www.unog.ch/80256EE600585943/(httpPages)/4F0DEF093B4860B4C1257180004B1B30 (accessed February 9, 2019).

14. See Group of Governmental Experts Related to Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS), “Emerging Commonalities, Conclusions and Recommendations,” August 2018, https://www.unog.ch/unog/website/assets.nsf/7a4a66408b19932180256ee8003f6114/eb4ec9367d3b63b1c12582fd0057a9a4/$FILE/GGE%20LAWS%20August_EC,%20C%20and%20Rs_final.pdf

 


Michael T. Klare is a professor emeritus of peace and world security studies at Hampshire College and senior visiting fellow at the Arms Control Association. This is the second in the “Arms Control Tomorrow” series, in which he considers disruptive emerging technologies and their implications for war-fighting and arms control. This installment provides an assessment of autonomous weapons systems development and prospects, the dangers they pose, and possible strategies for their control.

Posted: March 4, 2019

AI Arms Race Gains Speed

The United States and other major military powers are committed to seeking a competitive advantage in artificial intelligence.


March 2019
By Michael T. Klare

The U.S. Defense Department plans to apply artificial intelligence (AI) to virtually every aspect of its operations “to ensure an enduring competitive military advantage against those who threaten our security and safety,” according to a strategy document released Feb. 12.

A U.S. Air Force crew operates a Predator drone from a Middle Eastern air base. The Defense Department is seeking to use artificial intelligence to analyze drone-collected imagery. (Photo: John Moore/Getty Images)

In the unclassified version of its Artificial Intelligence Strategy, the Defense Department outlines many areas in which Pentagon officials believe AI can enhance the effectiveness of U.S. forces, particularly in battlefield logistics, equipment maintenance, target acquisition, and combat decision-making. In general, the emphasis is on relieving combat soldiers of onerous and time-consuming tasks, such as hauling heavy equipment and poring over drone-supplied video feeds in search of enemy combatants. “We will prioritize the fielding of AI systems that augment the capabilities of our personnel by offloading tedious cognitive or physical tasks and introduce new ways of working,” according to the paper (see ACT, this issue).

Underlying the Defense Department’s strategy is its belief that U.S. rivals are speeding ahead with AI initiatives of their own, requiring a redoubled U.S. effort to avoid being left behind in a rapidly emerging AI arms race. “Other nations, particularly China and Russia, are making significant investments in AI for military purposes,” the new strategy asserts. “These investments threaten to erode our technological and operational advantages.” It is imperative, the strategy document adds, that the United States “adopt AI to maintain its strategic position [and] prevail on future battlefields.” The notion that the United States must seize the lead in AI development or lose a strategic advantage appears to be a driving force in the Defense Department’s push to devise and deploy new AI-empowered technologies.

That the United States and its rivals are now engaged in an AI arms race was given further reinforcement with the Feb. 6 release of “Understanding China’s AI Strategy,” a report by Gregory Allen of the Center for a New American Security. China’s leaders, Allen argues, believe that AI mastery will prove essential for economic and military power in the decades ahead and that China must acquire self-sufficiency in this field. “China’s leadership,” Allen says, “believes that China should pursue global leadership in AI technology and reduce its vulnerable dependence on imports of international technology.”

China’s leaders are aware, Allen notes, that their drive to attain global leadership in AI applications will provoke alarm in Washington and fuel the emerging AI arms race. Nevertheless, China views “increased military usage of AI as inevitable” and is accelerating its efforts to devise and deploy advanced AI-empowered systems. Allen cites Chinese Maj. Gen. Ding Xiangrong of the Central Military Commission, who asserted China’s intent to “narrow the gap between the Chinese military and global advanced powers” by taking advantage of the “ongoing military revolution…centered on information technology and intelligent technology.”

Recent developments in Russia suggest a similar mind-set. Russian President Vladimir Putin issued a Jan. 15 directive to craft a national AI strategy intended to better coordinate domestic efforts in the field and accelerate the development of AI technologies. As in China, mastery of AI is said by top Russian officials to be essential for Russia’s future economic and military predominance. These moves by Russia and China will only add to the impression of a burgeoning AI arms race.

Posted: March 4, 2019

Pentagon Seeks ‘Ethical Principles’ for AI Use



Hoping to encourage artificial intelligence (AI) experts to support U.S. military programs, the U.S. Defense Department is pursuing plans to develop “ethical principles” for AI use in warfare, Defense One first reported in January. Defense Department leaders asked the Defense Innovation Board, an advisory group that includes Silicon Valley executives, to deliver a set of recommendations in June.

The effort to develop principles follows the expression of concerns by AI specialists over how their expertise would be used in defense programs. In May 2018, for example, more than 4,000 Google employees signed a petition urging the company to discontinue its work on Project Maven, a Pentagon-funded AI effort to evaluate drone footage of suspected terrorists and their hideouts. The employees expressed concerns that their work in the civilian sector would be used in a military manner.

Google subsequently announced that it would not renew the Maven contract and promised never to develop AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

Google’s actions have raised concerns at the Defense Department, where senior officials plan to enlist top U.S. software engineers in the design of AI-enhanced weapons and other military systems.

The Defense Innovation Board, an independent federal advisory committee established in 2016 to assist the secretary of defense, is chaired by Eric Schmidt, former executive chairman of Alphabet, Google’s parent company. The board has begun a series of public and private meetings around the country with scientists, academics, legal experts, and others to collect a range of views on the subject.—MICHAEL T. KLARE

Posted: March 4, 2019

Russia Blocks Move on Killer Robots Ban


 

Parties to the Convention on Certain Conventional Weapons (CCW), meeting Nov. 21–23 in Geneva, failed to advance consideration of lethal autonomous weapons systems to a higher level of international discussion, mainly due to opposition from Russia and a few other countries. For the past several years, a group of governmental experts, largely drawn from CCW signatory states’ delegations, has been considering the implications of such weapons, particularly with respect to their potential violations of the laws of war and international humanitarian law. The experts group has also weighed the possibility of negotiating within the CCW framework a binding prohibition on the development and use of fully autonomous weapons.

Several dozen states, along with nongovernmental organizations, have sought negotiations on a ban on so-called killer robots, but face opposition from countries such as Israel, Russia, and the United States. (See ACT, September 2018.) Ban advocates had hoped that the Geneva talks would open the way for negotiations this year, but Russia blocked the required consensus. Instead, delegates decided that the experts group will meet this year for further discussions. Ban supporters, expressing disappointment, said they will continue working within the CCW while seeking other options. “Russia demonstrated conclusively that the CCW is unlikely to make any meaningful progress on this issue,” said Stephen Goose of the Campaign to Stop Killer Robots.—MICHAEL KLARE

Posted: January 8, 2019

The Challenges of Emerging Technologies

Tackling the arms control implications of emerging technologies is becoming a matter of ever-increasing urgency as the pace of their development accelerates and their potential applications to warfare multiply.


December 2018
By Michael T. Klare

Every generation or so, it seems, humans develop new technologies that alter the nature of warfare and pose fresh challenges for those seeking to reduce the frequency, destructiveness, and sheer misery of violent conflict.

More than 800 service members and civilians took part in Cyber Shield 18, an Army National Guard training exercise at Camp Atterbury, Indiana, from May 6 to 18. (Photo: Staff Sgt. Jeremiah Runser/U.S. Army Cyber Command)

During World War I, advances in chemical processing were utilized to develop poisonous gases for battlefield use, causing massive casualties; after the war, horrified publics pushed diplomats to sign the Geneva Protocol of 1925, prohibiting the use in war of asphyxiating, poisonous, and other lethal gases. World War II witnessed the tragic application of nuclear technology to warfare, and much of postwar diplomacy entailed efforts to prevent the proliferation and use of atomic munitions.

Today, a whole new array of technologies—artificial intelligence (AI), robotics, hypersonics, and cybertechnology, among others—is being applied to military use, with potentially far-ranging consequences. Although the risks and ramifications of these weapons are not yet widely recognized, policymakers will be compelled to address the dangers posed by innovative weapons technologies and to devise international arrangements to regulate or curb their use. Although some early efforts have been undertaken in this direction, most notably, in attempting to prohibit the deployment of fully autonomous weapons systems, far more work is needed to gauge the impacts of these technologies and to forge new or revised control mechanisms as deemed appropriate.

Tackling the arms control implications of emerging technologies now is becoming a matter of ever-increasing urgency as the pace of their development is accelerating and their potential applications to warfare are multiplying. Many analysts believe that the utilization of AI and robotics will utterly revolutionize warfare, much as the introduction of tanks, airplanes, and nuclear weapons transformed the battlefields of each world war. “We are in the midst of an ever accelerating and expanding global revolution in [AI] and machine learning, with enormous implications for future economic and military competitiveness,” declared former U.S. Deputy Secretary of Defense Robert Work, a prominent advocate for Pentagon utilization of the new technologies.1

The Department of Defense is spending billions of dollars on AI, robotics, and other cutting-edge technologies, contending that the United States must maintain leadership in the development and utilization of those technologies lest its rivals use them to secure a future military advantage. China and Russia are assumed to be spending equivalent sums, indicating the initiation of a vigorous arms race in emerging technologies. “Our adversaries are presenting us today with a renewed challenge of a sophisticated, evolving threat,” Michael Griffin, U.S. undersecretary of defense for research and engineering, told Congress in April. “We are in turn preparing to meet that challenge and to restore the technical overmatch of the United States armed forces that we have traditionally held.”2

In accordance with this dynamic, the United States and its rivals are pursuing multiple weapons systems employing various combinations of AI, autonomy, and other emerging technologies. These include, for example, unmanned aerial vehicles (UAVs) and unmanned surface and subsurface naval vessels capable of being assembled in swarms, or “wolfpacks,” to locate enemy assets such as tanks, missile launchers, and submarines and, if communications are lost with their human operators, to decide to strike them on their own. The Defense Department also has funded the development of two advanced weapons systems employing hypersonic technology: a hypersonic air-launched cruise missile and the Tactical Boost Glide (TBG) system, encompassing a hypersonic rocket for initial momentum and an unpowered payload that glides to its destination. In the cyberspace realm, a variety of offensive and retaliatory cyberweapons are being developed by the U.S. Cyber Command for use against hostile states found to be using cyberspace to endanger U.S. national security.

The introduction of these and other such weapons on future battlefields will transform every aspect of combat and raise a host of challenges for advocates of responsible arms control. The use of fully autonomous weapons in combat, for example, automatically raises questions about the military’s ability to comply with the laws of war and international humanitarian law, which require belligerents to distinguish between enemy combatants and civilian bystanders. It is on this basis that opponents of such systems are seeking to negotiate a binding international ban on their deployment.

In his annual state of the nation address on March 1, Russian President Vladimir Putin announced a “new generation of missiles.” Large video screens displayed an artist’s rendering of the nuclear-capable Avangard hypersonic glide vehicle being developed by Russia. The Russian news agency Tass subsequently reported the system is expected to be ready to enter service in 2019. (Photo: C-SPAN)

 

Even more worrisome, some of the weapons now in development, such as unmanned anti-submarine wolfpacks and the TBG system, could theoretically endanger the current equilibrium in nuclear relations among the major powers, which rests on the threat of assured retaliation by invulnerable second-strike forces, by opening or seeming to open various first-strike options. Warfare in cyberspace could also threaten nuclear stability by exposing critical early-warning and communications systems to paralyzing attacks and prompting anxious leaders to authorize the early launch of nuclear weapons.

These are only some of the challenges to global security and arms control that are likely to be posed by the weaponization of new technologies. Observers of these developments, including many who have studied them closely, warn that the development and weaponization of AI and other emerging technologies is occurring faster than efforts to understand their impacts or devise appropriate safeguards. “Unfortunately,” said former U.S. Secretary of the Navy Richard Danzig, “the uncertainties surrounding the use and interaction of new military technologies are not subject to confident calculation or control.”3 Given the enormity of the risks involved, this lack of attention and oversight must be overcome.

Mapping out the implications of the new technologies for warfare and arms control and devising effective mechanisms for their control is a mammoth undertaking that requires the efforts of many analysts and policymakers around the world. This piece, an overview of the issues, is the first in a series for Arms Control Today (ACT) that will assess some of the most disruptive emerging technologies and their war-fighting and arms control implications. Future installments will look in greater depth at four especially problematic technologies: AI, autonomous weaponry, hypersonics, and cyberwarfare. These four have been chosen for close examination because, at this time, they appear to be the furthest along in terms of conversion into military systems and pose immediate challenges for international peace and stability.

Artificial Intelligence

AI is a generic term used to describe a variety of techniques for investing machines with an ability to monitor their surroundings in the physical world or cyberspace and to take independent action in response to various stimuli. To invest machines with these capacities, engineers have developed complex algorithms, or computer-based sets of rules, to govern their operations. An AI-empowered aerial drone, for example, could be equipped with sensors to distinguish enemy tanks from other vehicles on a crowded battlefield and, when some are spotted, choose on its own to fire at them with its onboard missiles. AI can also be employed in cyberspace, for example to watch for enemy cyberattacks and counter them with a barrage of counterstrikes. In the future, AI-invested machines may be empowered to determine if a nuclear attack is underway and, if so, initiate a retaliatory strike.4 In this sense, AI is an “omni-use” technology, with multiple implications for war-fighting and arms control.5

U.S. Army Brigadier General Joseph P. McGee, Army Cyber Command deputy commanding general for operations, talks with audience members about global cyber operations at the 2017 Association of the U.S. Army annual meeting in Washington. (Photo: U.S. Army Cyber Command)

Many analysts believe that AI will revolutionize warfare by allowing military commanders to bolster or, in some cases, replace their personnel with a wide variety of “smart” machines. Intelligent systems are prized for the speed with which they can detect a potential threat and their ability to calculate the best course of action to neutralize that peril. As warfare among the major powers grows increasingly rapid and multidimensional, including in the cyberspace and outer space domains, commanders may choose to place ever-greater reliance on intelligent machines for monitoring enemy actions and initiating appropriate countermeasures. This could provide an advantage on the battlefield, where rapid and informed action could prove the key to success, but also raises numerous concerns, especially regarding nuclear “crisis stability.”

Analysts worry that machines will accelerate the pace of fighting beyond human comprehension and possibly take actions that result in the unintended escalation of hostilities, even leading to use of nuclear weapons. Not only are AI-equipped machines vulnerable to error and sabotage, they lack an ability to assess the context of events and may initiate inappropriate or unjustified escalatory steps that occur too rapidly for humans to correct. “Even if everything functioned properly, policymakers could nevertheless effectively lose the ability to control escalation as the speed of action on the battlefield begins to eclipse their speed of decision-making,” writes Paul Scharre, who is director of the technology and national security program at the Center for a New American Security.6

As AI-equipped machines assume an ever-growing number and range of military functions, policymakers will have to determine what safeguards are needed to prevent unintended, possibly catastrophic consequences of the sort suggested by Scharre and many others. Conceivably, AI could bolster nuclear stability by providing enhanced intelligence about enemy intentions and reducing the risk of misperception and miscalculation; such options also deserve attention. In the near term, however, control efforts will largely be focused on one particular application of AI: fully autonomous weapons systems.

Autonomous Weapons Systems

Autonomous weapons systems, sometimes called lethal autonomous weapons systems, or “killer robots,” combine AI and drone technology in machines equipped to identify, track, and attack enemy assets on their own. As defined by the U.S. Defense Department, such a device is “a weapons system that, once activated, can select and engage targets without further intervention by a human operator.”7

The Chinese-made Wing Loong II unmanned aerial vehicle (UAV) is displayed during the November 2017 Dubai Airshow. (Photo: Karim Sahib/AFP/Getty Images)

Some such systems have already been put to military use. The Navy’s Aegis air defense system, for example, is empowered to track enemy planes and missiles within a certain radius of a ship at sea and, if it identifies an imminent threat, to fire missiles against it. Similarly, Israel’s Harpy UAV can search for enemy radar systems over a designated area and, when it locates one, strike it on its own. Many other such munitions are now in development, including undersea drones intended for anti-submarine warfare and entire fleets of UAVs designed for use in “swarms,” or flocks of armed drones that twist and turn above the battlefield in coordinated maneuvers that are difficult to follow.8

The deployment of fully autonomous weapons systems poses numerous challenges to international security and arms control, beginning with a potentially insuperable threat to the laws of war and international humanitarian law. Under these norms, armed belligerents are obligated to distinguish between enemy combatants and civilians on the battlefield and to avoid unnecessary harm to the latter. In addition, any civilian casualties that do occur in battle should not be disproportionate to the military necessity of attacking that position. Opponents of lethal autonomous weapons systems argue that only humans possess the necessary judgment to make such fine distinctions in the heat of battle and that machines will never be made intelligent enough to do so and thus should be banned from deployment.9

At this point, some 25 countries have endorsed steps to enact such a ban in the form of a protocol to the Convention on Certain Conventional Weapons (CCW). Several other nations, including the United States and Russia, oppose a ban on lethal autonomous weapons systems, saying they can be made compliant with international humanitarian law.10

Looking further into the future, autonomous weapons systems could pose a potential threat to nuclear stability by investing their owners with a capacity to detect, track, and destroy enemy submarines and mobile missile launchers. Today’s stability, which can be seen as an uneasy nuclear balance of terror, rests on the belief that each major power possesses at least some devastating second-strike, or retaliatory, capability, whether mobile launchers for intercontinental ballistic missiles (ICBMs), submarine-launched ballistic missiles (SLBMs), or both, that are immune to real-time detection and safe from a first strike. Yet, a nuclear-armed belligerent might someday undermine the deterrence equation by employing undersea drones to pursue and destroy enemy ballistic missile submarines along with swarms of UAVs to hunt and attack enemy mobile ICBM launchers.

Even the mere existence of such weapons could jeopardize stability by encouraging an opponent in a crisis to launch a nuclear first strike rather than risk losing its deterrent capability to an enemy attack. Such an environment would erode the underlying logic of today’s strategic nuclear arms control measures, that is, the preservation of deterrence and stability with ever-diminishing numbers of warheads and launchers, and would require new or revised approaches to war prevention and disarmament.11

Hypersonic Weapons

Proposed hypersonic weapons, which can travel at more than five times the speed of sound, or more than 5,000 kilometers per hour, generally fall into two categories: hypersonic glide vehicles and hypersonic cruise missiles, either of which could be armed with nuclear or conventional warheads. With hypersonic glide vehicle systems, a rocket carries the unpowered glide vehicle into space, where it detaches and flies to its target by gliding along the upper atmosphere. Hypersonic cruise missiles are self-powered, typically relying on advanced air-breathing engines, known as scramjets, to sustain extraordinary speed and maneuverability.
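As a rough check on the speed figure cited above, Mach 5 converts directly to kilometers per hour; the exact value depends on altitude, since the speed of sound falls with temperature. The sea-level and high-altitude figures below are standard approximations, not drawn from the article.

```latex
% Back-of-envelope conversion of the Mach 5 threshold (approximate speeds of sound).
\[
v_{\text{sea level}} \approx 5 \times 340~\mathrm{m/s} = 1700~\mathrm{m/s} \approx 6100~\mathrm{km/h}
\]
\[
v_{\text{high altitude}} \approx 5 \times 295~\mathrm{m/s} \approx 1475~\mathrm{m/s} \approx 5300~\mathrm{km/h}
\]
```

Either way, the result comfortably exceeds the 5,000 kilometers per hour benchmark used in the text.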

No such munitions currently exist, but China, Russia, and the United States are developing hypersonic weapons of various types. The U.S. Defense Department, for example, is testing the components of a hypersonic glide vehicle system under its Tactical Boost Glide project and recently awarded a $928 million contract to Lockheed Martin Corp. for the full-scale development of a hypersonic air-launched cruise missile, tentatively called the Hypersonic Conventional Strike Weapon.12 Russia, for its part, is developing a hypersonic glide vehicle it calls the Avangard, which it claims will be ready for deployment by the end of 2019, and China in August announced a successful test of the Starry Sky-2 hypersonic glide vehicle described as capable of carrying a nuclear weapon.13

Whether armed with conventional or nuclear warheads, hypersonic weapons pose a variety of challenges to international stability and arms control. At the heart of such concerns is these weapons’ exceptional speed and agility. Anti-missile systems that may work against existing threats might not be able to track and engage hypersonic vehicles, potentially allowing an aggressor to contemplate first-strike disarming attacks on nuclear or conventional forces while impelling vulnerable defenders to adopt a launch-on-warning policy.14 Some analysts warn that the mere acquisition of such weapons could “increase the expectation of a disarming attack.” Such expectations “encourage the threatened nations to take such actions as devolution of command-and-control of strategic forces, wider dispersion of such forces, a launch-on-warning posture, or a policy of preemption during a crisis.” In short, “hypersonic threats encourage hair-trigger tactics that would increase crisis instability.”15

The development of hypersonic weaponry poses a significant threat to the core principle of assured retaliation, on which today’s nuclear strategies and arms control measures largely rest. Overcoming that danger will require commitments on the part of the major powers jointly to consider the risks posed by such weapons and what steps might be necessary to curb their destabilizing effects.

The development of hypersonic munitions also introduces added problems of proliferation. Although the bulk of research on such weapons is now being conducted by China, Russia, and the United States, other nations are exploring the technologies involved and could eventually produce such munitions on their own. In a world of widely disseminated hypersonic weapons, vulnerable states would fear being attacked with little or no warning time, possibly impelling them to conduct pre-emptive strikes on enemy capabilities or to commence hostilities at the earliest indication of an incoming missile. Accordingly, the adoption of fresh nonproliferation measures also belongs on the agenda of major world leaders.16

Cyberattack

Secure operations in cyberspace, the global web of information streams tied to the internet, have become essential for the continued functioning of the international economy and much else besides. An extraordinary tool for many purposes, the internet is also vulnerable to attack by hostile intruders, whether to spread misinformation, disrupt vital infrastructure, or steal valuable data. Most of those malicious activities are conducted by individuals or groups of individuals seeking to enrich themselves or sway public opinion. It is increasingly evident, however, that governmental bodies, often working in conjunction with some of those individuals, are employing cyberweapons to weaken their enemies by sowing distrust or sabotaging key institutions or to bolster their own defenses by stealing militarily relevant technological know-how.

Moreover, in the event of a crisis or approaching hostilities, cyberattacks could be launched on an adversary’s early-warning, communications, and command and control systems, significantly impairing its response capabilities.17 For all these reasons, cybersecurity, or the protection of cyberspace from malicious attack, has become a major national security priority.18

Cybersecurity, as perceived by U.S. leaders, can take two forms: defensive action aimed at protecting one’s own information infrastructure against attack; and offensive action intended to punish, or retaliate against, an attacker by severely disrupting its systems, or to deter such attack by holding out the prospect of such punishment. The U.S. Cyber Command, elevated by President Donald Trump in August 2017 to a full-fledged Unified Combatant Command, is empowered to conduct both types of operations.

In many respects, then, the cyber domain is coming to resemble the strategic nuclear realm, with notions of defense, deterrence, and assured retaliation initially devised for nuclear scenarios now being applied to conflict in cyberspace. Although battles in this domain are said to fall below the threshold of armed combat (so long, of course, as no one is killed as a result), it is not difficult to conceive of skirmishes in cyberspace that erupt into violent conflict, for example, if cyberattacks result in the collapse of critical infrastructure, such as the electric grid or the banking system.

Considered solely as a domain of its own, cyberspace is a fertile area for the introduction of regulatory measures that might be said to resemble arms control, although applying to cyberweapons rather than to nuclear or conventional munitions. This is not a new challenge but one that has grown more pressing as the technology advances.19 At what point, for example, might it be worthwhile to impose formal impediments to the cyber equivalent of a disarming first strike, a digital attack that would paralyze a rival’s key information systems? A group of governmental experts convened by the UN General Assembly to investigate the adoption of norms and rules for international behavior in cyberspace failed to reach agreement on measures that would satisfy all major powers.20

More importantly, it is essential to consider how combat in cyberspace might spill over into the physical world, triggering armed combat and possibly hastening the pace of escalation. This danger was brought into bold relief in February 2018, when the Defense Department released its latest Nuclear Posture Review report, spelling out the Trump administration’s approach to nuclear weapons and their use. Stating that an enemy cyberattack on U.S. strategic command and control systems could pose a critical threat to U.S. national security, the new policy holds out the prospect of a nuclear response to such attacks. The United States, it affirmed, would only consider using nuclear weapons in “extreme circumstances,” which could include attacks “on U.S. or allied nuclear forces, their command and control, or warning and attack assessment capabilities.”21

The policy of other states in this regard is not so clearly stated, but similar protocols undoubtedly exist. Accordingly, management of this spillover effect from cyber- to conventional or even nuclear conflict will become a major concern of international policymakers in the years to come.

The Evolving Arms Control Agenda

To be sure, policymakers and arms control advocates will have their hands full in the coming months and years just preserving existing accords and patching them up where needed. At present, several key agreements, including the 1987 Intermediate-Range Nuclear Forces Treaty and the 2015 Iran nuclear accord, are at significant risk, and there are serious doubts as to whether the United States and Russia will extend the 2010 New Strategic Arms Reduction Treaty before it expires in February 2021. Addressing these and other critical concerns will occupy much of the energy of key figures in the field for some time to come.

As time goes on, however, policymakers will be compelled to devote ever-increasing attention to the military and arms control implications of the technologies identified above and others that may emerge in the years ahead. Diplomatically, these issues logically could be addressed bilaterally, such as through the currently stalled U.S.-Russian nuclear stability talks, and when appropriate in various multilateral forums.

Developing all the needed responses to the new technologies will take time and considerable effort, involving the contributions of many individuals and organizations. Some of this work is already underway, in part due to a special grant program on new threats to nuclear security initiated by the Carnegie Corporation of New York.22 Far more attention to these challenges will be needed in the years ahead. Possible approaches for regulating the military use of these four technologies will be explored in more detail in subsequent issues of ACT, but here are some preliminary thoughts on what will be needed.

To begin, it will be essential to consider how the new technologies affect existing arms control and nonproliferation measures and ask what modifications, if any, are needed to ensure their continued validity in the face of unforeseen challenges. The introduction of hypersonic delivery systems, for example, could alter the mutual force calculations underlying existing strategic nuclear arms limitation agreements and require additional protocols to any future iteration of those accords. At the same time, research should be conducted on the possible contribution of AI technologies to the strengthening of existing measures, such as the nuclear Nonproliferation Treaty, which rely on the constant monitoring of participating states’ military and military-related activities.

As the weaponization of the pivotal technologies proceeds, it will also be useful to consider how existing agreements might be used as the basis for added measures intended to control entirely novel types of munitions. As indicated earlier, the CCW can be used as a framework on which to adopt additional measures in the form of protocols controlling or banning the use of armaments, such as autonomous weapons systems, not imagined at the time of the treaty’s initial signing in 1980. Some analysts have suggested that the Missile Technology Control Regime could be used as a model for a mechanism intended to prevent the proliferation of hypersonic weapons technology.23

Finally, as the above discussion suggests, it will be necessary to devise entirely new approaches to arms control that are designed to overcome dangers of an unprecedented sort. Addressing the weaponization of AI, for example, will prove exceedingly difficult because regulating something as inherently insubstantial as algorithms will defy the precise labeling and stockpile oversight features of most existing control measures. Many of the other systems described above, including autonomous and hypersonic weapons, span the divide between conventional and nuclear munitions and raise an entirely different set of regulatory problems.

Addressing these challenges will not be easy, but just as previous generations of policymakers found ways of controlling new and dangerous technologies, so too will current and future generations contrive novel solutions to new perils.

 

ENDNOTES

1. Robert O. Work, “Preface,” in Artificial Intelligence: What Every Policymaker Needs to Know, ed. Paul Scharre and Michael C. Horowitz (Washington, DC: Center for a New American Security, June 2018), p. 2.

2. Paul McLeary, “USAF Announces Major New Hypersonic Weapon Contract,” Breaking Defense, April 18, 2018, https://breakingdefense.com/2018/04/usaf-announces-major-new-hypersonic-weapon-contract/.

3. Richard Danzig, Technology Roulette: Managing Loss of Control as Militaries Pursue Technological Superiority (Washington, DC: Center for a New American Security, 2018), p. 5.

4. For discussion of such scenarios, see Edward Geist and Andrew J. Lohn, How Might Artificial Intelligence Affect the Risk of Nuclear War? (Santa Monica, CA: RAND Corp., 2018), https://www.rand.org/pubs/perspectives/PE296.html.

5. For a thorough briefing on artificial intelligence and its military applications, see Daniel S. Hoadley and Nathan J. Lucas, “Artificial Intelligence and National Security,” CRS Report for Congress, R45178, April 26, 2018, https://fas.org/sgp/crs/natsec/R45178.pdf.

6. Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton, 2018), p. 305.

7. U.S. Department of Defense, “Autonomy in Weapon Systems,” no. 3000.09, November 21, 2012, http://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/300009p.pdf (directive).

8. For background on these systems, see Scharre, Army of None.

9. For a thorough explication of this position, see Human Rights Watch and International Human Rights Clinic, “Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban,” 2016, https://www.hrw.org/sites/default/files/report_pdf/arms1216_web.pdf.

10. See “U.S., Russia Impede Steps to Ban ‘Killer Robots,’” Arms Control Today, October 2018, pp. 31–33.

11. For discussion of this risk, see Geist and Lohn, How Might Artificial Intelligence Affect the Risk of Nuclear War?

12. Paul McLeary, “USAF Announces Major New Hypersonic Weapon Contract,” Breaking Defense, April 18, 2018, https://breakingdefense.com/2018/04/usaf-announces-major-new-hypersonic-weapon-contract/.

13. For more information on the Avangard system, see Dave Majumdar, “We Now Know How Russia's New Avangard Hypersonic Boost-Glide Weapon Will Launch,” The National Interest, March 20, 2018, https://nationalinterest.org/blog/the-buzz/we-now-know-how-russias-new-avangard-hypersonic-boost-glide-25003.

14. Kingston Rief, “Hypersonic Advances Spark Concern,” Arms Control Today, January/February 2018, pp. 29–30.

15. Richard H. Speier et al., Hypersonic Missile Nonproliferation: Hindering the Spread of a New Class of Weapons (Santa Monica, CA: RAND Corp., 2017), p. xiii.

16. For a discussion of possible measures of this sort, see ibid., pp. 35–46.

17. Andrew Futter, “The Dangers of Using Cyberattacks to Counter Nuclear Threats,” Arms Control Today, July/August 2016.

18. For a thorough briefing on cyberwarfare and cybersecurity, see Chris Jaikaran, “Cybersecurity: Selected Issues for the 115th Congress,” CRS Report for Congress, R45127, March 9, 2018, https://fas.org/sgp/crs/misc/R45127.pdf.

19. David Elliott, “Weighing the Case for a Convention to Limit Cyberwarfare,” Arms Control Today, November 2009.

20. See Elaine Korzak, “UN GGE on Cybersecurity: The End of an Era?” The Diplomat, July 31, 2017, https://thediplomat.com/2017/07/un-gge-on-cybersecurity-have-china-and-russia-just-made-cyberspace-less-safe/.

21. Office of the Secretary of Defense, “Nuclear Posture Review,” February 2018, p. 21, https://media.defense.gov/2018/Feb/02/2001872886/-1/-1/1/2018-NUCLEAR-POSTURE-REVIEW-FINAL-REPORT.PDF.

22. Celeste Ford, “Eight Grants to Address Emerging Threats in Nuclear Security,” September 25, 2017, https://www.carnegie.org/news/articles/eight-grants-address-emerging-threats-nuclear-security/.

23. Speier et al., Hypersonic Missile Nonproliferation, pp. 42–44.

 


Michael T. Klare is a professor emeritus of peace and world security studies at Hampshire College and senior visiting fellow at the Arms Control Association. This is the first in a series he is writing for Arms Control Today on the most disruptive emerging technologies and their implications for war-fighting and arms control.

 

 

U.S., Russia Impede Steps to Ban ‘Killer Robots’

The impasse reflects the tensions over advancing technologies for systems capable of autonomously identifying and attacking targets.


October 2018
By Michael Klare

The latest effort toward imposing binding international restrictions on so-called killer robots was thwarted by the United States and Russia, pushing off the divisive issue to a November meeting of states-parties to the Convention on Certain Conventional Weapons (CCW).

Participants at a Geneva meeting in August on lethal autonomous weapons systems, held under the auspices of the Convention on Certain Conventional Weapons, called for future talks after failing to reach consensus on imposing international restrictions. (Photo: United Nations Office at Geneva)

For five years, officials representing member-states of the CCW, a 1980 treaty that seeks to outlaw the use of especially injurious or pernicious weapons, have been investigating whether to adopt a ban on lethal autonomous weapons systems. In late August, a group of governmental experts established by the CCW met to assess the issue, but it failed to reach consensus at a Geneva meeting and called instead for further discussions.

The impasse reflects the tensions over an advancing set of technologies, including artificial intelligence and robotics, that will make possible systems capable of identifying targets and attacking them without human intervention.

Opponents insist that such weapons can never be made intelligent enough to comply with the laws of war and international humanitarian law. Advocates say autonomous weapons, as they develop, can play a useful role in warfare without violating those laws.

Concern over the potential battlefield use of fully autonomous weapons systems has been growing rapidly in recent years as the pace of their development has accelerated and the legal and humanitarian consequences of using them in combat have become more apparent. Such systems typically combine advanced sensors and kill mechanisms with unmanned ships, planes, or ground vehicles.

Theoretically, fully autonomous weapons of this sort can be programmed to search within a predesignated area for certain types of threats—tanks, radars, ships, aircraft, and individual combatants—and engage them with onboard guns, bombs, and missiles on their own if communications are lost with their human operators. This prospect has raised the question whether these weapons, if used in a fully autonomous manner, will be able to distinguish between legitimate targets, such as armed combatants, and noncombatant civilians trapped in the zone of battle. Likewise, will they be able to distinguish between enemy combatants still posing a threat and those no longer capable of fighting because of injury or illness?

Humans possess the innate capacity to make such distinctions on a split-second basis, but many analysts doubt that machines can ever be programmed to make such fine distinctions and therefore argue that fully autonomous weapons should be banned from use.

Under the terms of the CCW, the 120 signatory states, which include China, Russia, and the United States, can negotiate additional protocols prohibiting or restricting certain specific classes of weapons. So far, five such protocols have been adopted, including measures restricting landmines and incendiary weapons and banning blinding lasers.

Starting in 2014, some member states have sought to initiate negotiations leading to a similar protocol that would ban the development and use of fully autonomous lethal weapons. Others were resistant to moving directly toward negotiations but agreed to a high-level investigation of the issue. For that purpose, CCW member states established the experts group, composed largely of officials from those states, to assess the implications of fielding autonomous weapons and whether starting negotiations on a protocol was justified.

In the discussions that followed, several distinctive positions emerged. About two dozen countries, including Argentina, Austria, Brazil, Chile, China, Egypt, and Mexico, advocated for a legally binding prohibition on use of such weapons. A number of civil society organizations, loosely allied through the Campaign to Stop Killer Robots, also urged such a measure.

Another group of states, led by France and Germany, opposes a legally binding measure but supports a political declaration stating the necessity of maintaining human control over the use of deadly force.

Wherever they stand on the issue of a binding measure, nearly every country represented in the experts group at the August meeting expressed opposition to the deployment of fully autonomous weapons. Nevertheless, a small group of countries, including Israel, Russia, South Korea, and the United States, rejected a legal prohibition and a political declaration, saying more research and discussion is necessary.

For the United States, the resistance to a declaration or binding measure on autonomous weapons can be read as instinctive hostility toward any international measure that might constrain U.S. freedom of maneuver, a stance visible in the Trump administration’s animosity toward other multilateral agreements, such as the Iran nuclear deal.

Further, U.S. opposition stems from another impulse: many senior U.S. officials believe that leadership in advanced technology, especially artificial intelligence, cyberoperations, hypersonics, and robotics, is essential for ensuring U.S. success in a geopolitical contest with China and Russia. “Long-term strategic competition, not terrorism, is now the primary focus of U.S. national security,” Defense Secretary Jim Mattis told the Senate Armed Services Committee on April 26.

“Our military remains capable, but our competitive edge has eroded in every domain of warfare,” he said. To reclaim that edge, the United States must restore its advantage in all areas of military competency, including through “research into advanced autonomous systems, artificial intelligence, and hypersonics.”

U.S. policy requires that a human operator remain “in the loop,” making the decision to fire before a weapons system, such as a missile-carrying drone, attacks a target.

Still, the determination to ensure U.S. dominance in artificial intelligence and robotics virtually guaranteed U.S. opposition to any outcome of the experts group that may hinder progress in developing military applications of machine autonomy. “We believe it is premature to enter into negotiations on a legally binding instrument, a political declaration, a code of conduct, or other similar instrument, and we cannot support a mandate to enter into such negotiations,” Joshua Dorosin, deputy legal adviser at the State Department, said at the experts group meeting Aug. 29.

Because decisions of the group are made by consensus, U.S. opposition, mirrored by Russia and a few other countries, prevented it from reaching any conclusion at its meeting other than a recommendation to keep talking.

Follow-up steps will be determined by CCW states-parties. They are due to meet in Geneva on Nov. 21–23, although it is unlikely they will reach consensus on anything beyond continuing discussions.

Member organizations of the Campaign to Stop Killer Robots are lobbying participating delegations to act more vigorously and to consider a variety of other pathways to banning the development of fully autonomous weapons systems, perhaps outside the CCW framework.
