Army of None: Autonomous Weapons and the Future of War


November 2018
Reviewed by Michael Klare

Army of None: Autonomous Weapons and the Future of War
By Paul Scharre
W.W. Norton, 2018, 436 pp.

What are the consequences—military, political, moral, and legal—of giving machines the capacity to select targets and destroy them without direct human guidance? This is the profound question Paul Scharre addresses in his informative and thought-provoking book, Army of None: Autonomous Weapons and the Future of War.

Part a primer on the fast-advancing technology of artificial intelligence (AI) and part a soul-searching reflection on its potential application to the conduct of war, Scharre’s book is essential reading for anyone seeking to grasp the monumental changes occurring in the realm of military technology.

Scharre, a former Army Ranger who now heads the Technology and National Security Program at the Center for a New American Security, begins by describing the essence of autonomy in warfare and the technologies bringing it closer to widespread use. Autonomy is the ability of machines to perform a task or function on their own, without human supervision. Autonomous weapons, by extension, are machines with the capacity to perform military operations in this manner; as he puts it, they are defined “by their ability to complete the engagement cycle—searching for, deciding to engage, and engaging targets—on their own.”

Some weapons possessing this capacity are already in service. These include, for example, the Aegis Combat System, a complex array of radars, missiles, and computers intended to protect U.S. warships against enemy planes and missiles. When set in the “auto-special” mode, Aegis can automatically launch its missiles against what it has identified as hostile planes or missiles. Human operators have the ability to abort a strike, but otherwise Aegis operates on its own. The Israeli Harpy armed drone is another example. Once launched, it can loiter over a designated area, hunt for enemy radars, and strike them of its own volition.

According to Scharre, these are just harbingers of what he expects will be a massive intrusion of autonomous weapons systems on the battlefield of the future, with such devices assuming ever-greater responsibility for life-and-death decisions. The U.S. military, along with those of many other countries, is pouring vast sums into the development of such systems, ensuring that future commanders will have many more autonomous weapons, with an expanding range of capabilities, to incorporate into their arsenals. “As machine intelligence advances,” he writes, “militaries will be able to create ever more autonomous robots capable of carrying out more complex missions in more challenging environments independent from human control.”

The Development Race

Military leaders are keen to increase the role of autonomous weapons systems for the same reason they are likely to embrace any major technological innovation: they anticipate autonomy will offer an advantage in combat. Such systems, it is widely believed, will assist in the identification and elimination of enemy assets, whether personnel or equipment, in high-intensity engagements. “Robots have many battlefield advantages over traditional human-inhabited vehicles,” Scharre asserts. Unmanned vehicles “can be made smaller, lighter, faster, and more maneuverable.” They can stay on the battlefield longer without rest and “can take more risk, opening up tactical opportunities for dangerous or even suicidal missions without risking [friendly] human lives.”

The U.S. Army showcased “maneuver robotics and autonomous systems” at a live-fire event August 22, 2017, at Fort Benning, Ga. (U.S. Army photo/Patrick Albright, Maneuver Center of Excellence, Fort Benning Public Affairs)

Aside from this obvious lure, several other factors are driving the race to develop and deploy autonomous weapons systems. These include the incredible progress being made on AI in the civilian sphere, visions of a future battlespace in which the speed and complexity of action exceed human comprehension and the capacity to respond, and fears of an arms race in AI and autonomous weapons systems, with the United States potentially being left behind.

AI is crucial to the further development of autonomous weapons systems as it invests them with a cognitive ability until now only exercised by humans, including the capacity to distinguish potential targets from background clutter and independently choose to eradicate the threat. In a striking shift from current military procurement practices, most of the critical advances in AI and machine learning are coming not from the established arms industry but from startups located in technology centers such as Silicon Valley and Cambridge, Massachusetts.

These tech firms are responsible for the image-identification systems and other technologies that make self-driving cars feasible; and once you have built a self-driving car, it is not a great leap to make a self-driving, self-firing tank or plane. Needless to say, this has military planners almost giddy imagining all the possible battlefield applications.

Pentagon officials are especially keen to explore these potentialities because they envision a future battlespace characterized by extremely fast-paced, intense combat among well-equipped adversaries. This reflects the onward march of technology and the assessment, spelled out in the December 2017 “National Security Strategy of the United States of America,” that we have entered a new phase of history in which geopolitical competition with China and Russia has replaced terrorism as the principal threat to U.S. security.

Any future military engagement with one or both of these countries would, it is widely assumed, entail the simultaneous use of countless planes, missiles, tanks, and ships in a highly contested battle zone. In such encounters, human operators may not be able to keep track of and destroy all potential targets within their portion of the battlespace, and so the temptation to let machines assume those critical tasks can only grow.

This leads to the next key factor, a fear of an arms race in AI, with U.S. adversaries conceivably jumping ahead in the burgeoning contest to deploy autonomous weapons systems on the battlefield. As Scharre ruefully indicates, the United States is not the only country that possesses the tech centers capable of generating AI advances and of applying them to military use. In fact, other countries, including China, Russia, the United Kingdom, and Israel, are moving swiftly in this direction.

Just how far they have proceeded is a matter of considerable speculation, with some analysts claiming that China and Russia, in particular, have achieved great strides. Yet, even if those assessments are exaggerated, as may be the case, they are sufficient to undergird Pentagon assertions that more must be done to ensure U.S. leadership in the AI and autonomous weapons field. At this point, Scharre observes, “the main rationale for building fully autonomous weapons seems to be the assumption that others might do so,” putting the United States at a disadvantage. This, he writes, risks becoming a “self-fulfilling prophecy.”

Moral and Legal Dimensions

As Scharre makes very clear, the deployment and use of fully autonomous weapons systems on the battlefield will entail a revolutionary transformation in the conduct of warfare, with machines conceivably being granted the ability to decide on their own to take human life. There is, of course, some uncertainty as to how much autonomy future weapons will be granted and whether they will ever be fully “untethered” from human supervision.

The U.S. Navy is continuing research on the Sea Hunter, a prototype for what could become a new class of surface-warfare vessels able to travel thousands of miles over open seas for months at a time without a single crew member aboard. The experimental anti-submarine drone warship was developed by the Pentagon’s Defense Advanced Research Projects Agency (DARPA). (Photo: DARPA)

As Scharre demonstrates, however, the technology to empower killing machines with an ability to operate independently is emerging rapidly, and the use of this technology in warfare appears almost inevitable. This raises important moral and legal questions: moral in the sense that investing machines with the capacity to take a human life potentially absolves their operators of responsibility for any injustices that might occur, and legal in that the use of lethal autonomous weapons could violate international humanitarian law.

In addressing the moral dimensions, Scharre draws on his extensive experience as a U.S. Army Ranger in Iraq and Afghanistan. In one of his most arresting passages, he describes an incident in which he and some fellow soldiers, while positioned atop a mountain ridge on the Afghan-Pakistani border, observed a girl perhaps five or six years old, herding goats nearby. In the stillness of the mountain air, they could hear her talking on a radio—a clear indication she was scouting their position for a Taliban force hiding nearby. Under the rules of war, Scharre explains, the young girl was an enemy combatant, putting his unit at risk, and so could have been shot. Yet, he chose not to, acting out of an innate moral impulse. “My fellow soldiers and I knew killing her would be morally wrong. We didn’t even discuss it.” Could machines ever be trained to make this distinction? Scharre is highly doubtful.

War is an ugly, brutal activity; and humans, despite numerous efforts over the centuries, have failed to prevent its regular recurrence. Yet, humans have sought to impose some limits on killing, believing that basic morality or religious principle forbids bloodletting of certain kinds, such as the killing of unarmed civilians or wounded enemy soldiers. Efforts have been made to formalize these natural inhibitions in law or religious scripture, but it has often proved difficult to inscribe precisely what is deemed acceptable and what is not. Yet, as Scharre notes, there are situations in which it is self-evident to humans that certain behaviors should not be allowed to occur. However smart the machines are made, he argues, they are never likely to acquire the capacity to make such judgments in the heat of battle and so will always require some human oversight.

This same conundrum applies to the legal dimensions. International humanitarian law, as codified in the Geneva Conventions of 1949 and their Additional Protocols, requires that parties to a war distinguish between enemy combatants and civilians when conducting combat operations and not deliberately target the latter. It also requires that any civilian casualties that do occur in battle not be disproportionate to the military advantage anticipated from attacking that position. Opponents of the deployment of lethal autonomous weapons systems argue that only humans possess the judgment needed to make such fine distinctions in the heat of battle, that machines will never acquire that judgment, and that such weapons should therefore be banned entirely.1

Scharre says it is theoretically possible to design machines smart enough to comply with international humanitarian law, but he acknowledges that the risk of misjudgment will always be present when machines make life-and-death decisions, a risk that, in his view, rules out their use without human supervision.

The Risk of Escalation

Most of Scharre’s discussion concerns the potential use of lethal autonomous weapons systems on the conventional battlefield, with robot tanks and planes fighting alongside human-occupied combat systems. His principal concern in these settings is that the robots will behave like rogue soldiers, failing to distinguish between civilians and combatants in heavily contested urban battlegrounds or even firing on friendly forces, mistaking them for the enemy. Scharre is also aware of the danger that greater autonomy will further boost the speed of future engagements and reduce human oversight of the fighting, possibly increasing the danger of unintended escalation, including nuclear escalation.

Two aspects of increased autonomy appear to have particular relevance for nuclear escalation and arms control: the temptation to endow machines with greater authority to make launch decisions for intercontinental ballistic missiles (ICBMs) or other nuclear munitions in the event of a major great-power crisis, and the potential use of AI-empowered systems to suss out the location of ballistic missile submarines and mobile ICBM launchers, hence boosting the risk of a first-strike attack in such a situation.

An illustration of the Israeli-produced Harpy unmanned aerial vehicle, a “fire and forget” autonomous system that detects, identifies, and attacks enemy radars. The manufacturer, Israel Aerospace Industries, says the Harpy provides a “continuous, persistent lethal threat to enemy air defense systems.” (Illustration: Israel Aerospace Industries)

As the speed of military engagements accelerates, Scharre writes, it will become ever more difficult for humans to keep track of all the combat systems, enemy and friendly, on the battlefield, increasing the temptation to give machines more control over maneuvering and firing decisions. Highly intelligent machines could relieve commanders of this pressure by monitoring all that is occurring and taking action when deemed necessary, in accordance with previously programmed protocols, to ensure a successful outcome.

Nevertheless, machines can make mistakes, and the protocols may not account for unexpected turns on the battlefield, leading to erroneous and disastrous outcomes, such as a decision to employ nuclear munitions. Just as worrisome, machines would never know when to slow the pace of fighting to allow negotiations or even to call a halt. “Unlike humans,” Scharre writes, “autonomous weapons would have no ability to understand the consequences of their actions, no ability to step back from the brink of war.”

Another worry is that dramatic advances in AI-driven image identification will be combined with improved drone technology to create autonomous systems capable of searching for, and conceivably destroying, ground-based mobile missile launchers and submerged submarines carrying ballistic missiles. Most major nuclear powers rely on mobile missile systems to ensure their ability to retaliate in the event of an enemy first strike, thereby bolstering deterrence of just such an attack. With existing technology, it is nearly impossible to track an adversary’s ground-based mobile launchers and missile-carrying submarines in real time, which puts a completely disarming first strike out of reach.

Some analysts, including Scharre, worry that future AI-powered drones (ships, aircraft, and submersibles) will possess the capacity to achieve such monitoring, making a first strike of this sort theoretically possible. Indeed, Scharre describes several projects now underway, such as the Pentagon’s Sea Hunter vessel, that could lead in this direction. Even if such systems do not prove entirely reliable, their future deployment could lead national leaders to fear an enemy first strike in a crisis and so launch their own weapons before they can be destroyed. Alternatively, the other party, fearing precisely such a response, may fire first to avoid such an outcome.

As Scharre laments, policymakers have devoted far too little attention to these potentially escalatory consequences of fielding increasingly capable autonomous weapons systems. Although the record of attempts to control emerging technologies through international agreements is decidedly mixed, he argues that some constraints are essential to ensure continued human supervision of critical battlefield decisions. Humans, he concludes, act as an essential “fail-safe” to prevent catastrophic outcomes.

No one who reads Army of None carefully can come away without concluding that the global battlespace is being transformed in multiple ways by the introduction of AI-powered autonomous weapons systems and that the pace of transformation is bound to increase as these machines become ever more capable.

As Scharre persuasively demonstrates in this important new book, progress in autonomous weaponry is occurring much faster than attempts to understand or regulate such devices. Unless there is a concerted effort to grapple with the potential impacts of these new technologies and develop appropriate safeguards, we could face a future in which machines make momentous decisions we come to regret.

ENDNOTE

1. For a comprehensive summary of these arguments, see Human Rights Watch, Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban, December 9, 2016, https://www.hrw.org/report/2016/12/09/making-case/dangers-killer-robots-and-need-preemptive-ban.


Michael Klare is a professor emeritus of peace and world security studies at Hampshire College and senior visiting fellow at the Arms Control Association.