DOCUMENT: Ethics and Autonomous Weapon Systems: An Ethical Basis for Human Control?

July/August 2018

The following is the executive summary of a paper submitted by the International Committee of the Red Cross to the April 9–13 meeting in Geneva of the Convention on Conventional Weapons Group of Governmental Experts. The group is considering issues involving so-called killer robots, including calls from some countries and advocacy groups for a ban on such weapons.

In the view of the International Committee of the Red Cross (ICRC), human control must be maintained over weapon systems and the use of force to ensure compliance with international law and to satisfy ethical concerns, and States must work urgently to establish limits on autonomy in weapon systems.

In August 2017, the ICRC convened a round-table meeting with independent experts to explore the ethical issues raised by autonomous weapon systems and the ethical dimension of the requirement for human control. This report summarizes discussions and highlights the ICRC’s main conclusions.

The fundamental ethical question is whether the principles of humanity and the dictates of the public conscience can allow human decision-making on the use of force to be effectively replaced by computer-controlled processes, and life-and-death decisions to be ceded to machines.

It is clear that ethical decisions by States, and by society at large, have preceded and motivated the development of new international legal constraints in warfare, including constraints on weapons that cause unacceptable harm. In international humanitarian law, notions of humanity and public conscience are drawn from the Martens Clause. As a potential marker of the public conscience, opinion polls to date suggest a general opposition to autonomous weapon systems, with autonomous systems eliciting a stronger negative response than remote-controlled ones.

Ethical issues are at the heart of the debate about the acceptability of autonomous weapon systems. Anxiety about the loss of human control over weapon systems and the use of force takes the debate beyond questions of the compatibility of autonomous weapon systems with our laws to fundamental questions of their acceptability to our values. A prominent aspect of the ethical debate has been a focus on autonomous weapon systems that are designed to kill or injure humans, rather than those that destroy or damage objects, which are already employed to a limited extent.

The primary ethical argument for autonomous weapon systems has been results-oriented: their potential precision and reliability might enable better respect for both international law and human ethical values, resulting in fewer adverse humanitarian consequences. As with other weapons, whether those benefits materialized would depend on both the weapons' design-dependent effects and the way they were used. A secondary argument is that they would help fulfil the duty of militaries to protect their own forces, a quality not unique to autonomous weapon systems.

While there are concerns regarding the technical capacity of autonomous weapon systems to function within legal and ethical constraints, the enduring ethical arguments against these weapons are those that transcend context, whether during armed conflict or in peacetime, and transcend technology, whether simple or sophisticated.

The importance of retaining human agency and intent in decisions to use force is one of the central ethical arguments for limits on autonomy in weapon systems. Many take the view that decisions to kill, injure, and destroy must not be delegated to machines and that humans must remain present in this decision-making process sufficiently to preserve a direct link between the intention of the human and the eventual operation of the weapon system.

Closely linked are concerns about a loss of human dignity. In other words, it matters not only whether a person is killed or injured but how they are killed or injured, including the process by which these decisions are made. It is argued that, if human agency is lacking to the extent that these decisions have effectively and functionally been delegated to machines, this undermines the human dignity of the combatants targeted and of the civilians put at risk as a consequence of legitimate attacks on military targets.

The need for human agency is also linked to moral responsibility and accountability for decisions to use force. These are human responsibilities, both ethical and legal, which cannot be transferred to inanimate machines or computer algorithms.

Predictability and reliability in using an autonomous weapon system are ways of connecting human agency and intent to the eventual consequences of an attack. However, as weapons that self-initiate attacks, all autonomous weapon systems raise questions about predictability, owing to varying degrees of uncertainty as to exactly when, where, and/or why a resulting attack will take place. The application of artificial intelligence and machine learning to targeting functions raises fundamental questions of inherent unpredictability.

Context also affects ethical assessments. Constraints on the timeframe of operation and scope of movement over an area are key factors, as are the task for which the weapon is used and the operating environment. However, perhaps the most important factor is the type of target, since core ethical concerns about human agency, human dignity, and moral responsibility are most acute in relation to anti-personnel autonomous weapon systems that target humans directly.

From the ICRC’s perspective, ethical considerations parallel the requirement for a minimum level of human control over weapon systems and the use of force to ensure legal compliance. From an ethical viewpoint, “meaningful”, “effective” or “appropriate” human control would be the type and degree of control that preserves human agency and upholds moral responsibility in decisions to use force. This requires a sufficiently direct and close connection to be maintained between the human intent of the user and the eventual consequences of the operation of the weapon system in a specific attack.

Ethical and legal considerations may demand some similar constraints on autonomy in weapon systems, so that meaningful human control is maintained—in particular, with respect to: human supervision and the ability to intervene and deactivate; technical requirements for predictability and reliability (including in the algorithms used); and operational constraints on the task for which the weapon is used, the type of target, the operating environment, the timeframe of operation and the scope of movement over an area.

However, the combined and interconnected ethical concerns about loss of human agency in decisions to use force, diffusion of moral responsibility and loss of human dignity could have the most far-reaching consequences, perhaps precluding the development and use of anti-personnel autonomous weapon systems, and even limiting the applications of anti-materiel systems, depending on the risks that destroying materiel targets present for human life.

U.S. Statement on Lethal Autonomous Weapons Systems

The following is an excerpt of the U.S. statement on lethal autonomous weapons systems (LAWS) presented April 9 to the Convention on Conventional Weapons Group of Governmental Experts.

It is clear that many governments, including that of the United States, are still trying to understand more fully the ways that autonomy will be used by their societies, including by their militaries. There remains a lack of common understanding on various issues related to LAWS, including their characteristics and elements.

The United States believes that [international humanitarian law (IHL)] provides a robust and appropriate framework for the regulation of all weapons—including those with autonomous functions—in relation to armed conflict, and any development or use of LAWS must be fully consistent with IHL, including the principles of military necessity, humanity, distinction, and proportionality. For this reason, the United States places great importance on the weapon review process in the development and acquisition of new weapon systems. This is a critical measure in ensuring that weapon systems can dependably be used in a manner that is consistent with IHL.

The United States also continues to believe that advances in autonomy and machine learning can facilitate and enhance the implementation of IHL, including the principles of distinction and proportionality. One of our goals is to understand more fully how this technology can continue to be used to reduce the risk to civilians and friendly forces in armed conflict.

The issues presented by LAWS are complex and evolving, as new technologies and their applications continue to be developed. We therefore must be cautious not to make hasty judgments about the value or likely effects of emerging or future technologies. As history shows, our views of new technologies may change over time as we find new uses and ways to benefit from advances in technology. We therefore do not support the negotiation of a political or legally binding document. Rather, we believe we should continue to proceed collectively—with deliberation and patience.