Google Renounces AI Work on Weapons

July/August 2018
By Trushaa Castelino

Google decided not to renew its Defense Department contract for Project Maven, a Pentagon program that seeks to advance artificial intelligence (AI) capabilities that could be used to improve targeting of drone strikes.

Google CEO Sundar Pichai delivers the keynote address at the 2018 Google I/O conference May 8 in Mountain View, Calif. (Photo: Justin Sullivan/Getty Images)

The decision, in response to internal protest against the project, is the latest chapter in the growing debate about the development and use of autonomous weapons systems, sometimes called “killer robots.”

In April, about 4,000 Google employees signed a letter petitioning CEO Sundar Pichai to end the company’s involvement in Project Maven, arguing that “Google should not be in the business of war.” The letter was followed by a dozen resignations and a second petition signed by more than 1,000 academics.

As part of Project Maven, Google was responsible for using AI to help analyze footage captured by U.S. military drones, a process heavily dependent on computer vision for detecting and identifying objects. Expected to free up human analysts for other tasks, Google’s AI work could help set the stage for fully autonomous drone strikes.
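
The article does not detail Maven’s internals, but the kind of pipeline it describes, running an object detector over video frames to flag items for human analysts, can be sketched with off-the-shelf tools. The following Python sketch uses a generic COCO-pretrained detector from torchvision purely as a stand-in; the model choice, threshold, and detect_objects helper are illustrative assumptions, not Google’s actual system.

```python
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# A generic COCO-pretrained detector, used here only as a stand-in.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(video_path, score_threshold=0.8):
    """Yield (frame_index, boxes, labels, scores) for confident detections."""
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # OpenCV decodes frames as BGR; the model expects RGB tensors in [0, 1].
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            detections = model([to_tensor(rgb)])[0]
        keep = detections["scores"] > score_threshold
        yield (frame_index, detections["boxes"][keep],
               detections["labels"][keep], detections["scores"][keep])
        frame_index += 1
    capture.release()
```

In a workflow like the one described, the flagged frames would be routed to human analysts for review; the automation lies in narrowing what a person must look at, which is what raises the concern about eventually removing the person altogether.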

Drones are a contentious component of U.S. military action in war zones such as Afghanistan and Iraq, as well as in locations outside formal U.S. war zones, such as Yemen. Critics cite a lack of transparency in U.S. policy and actions, as well as evidence that civilian casualties from drone strikes exceed reported figures.

What distinguishes the rise of weaponized computer vision technology is a shift toward weapons systems capable of acting, and in some cases permitted to act, on their own, without sufficient legal or political restraints governing their use. Military officials have said the United States will not use lethal weapons systems that remove human judgment from the decision loop, but they have not foreclosed use of such a capability at some unspecified future time.

Forays into machine learning and other technological advances have to be “navigated carefully and deliberately,” Pichai acknowledged in his keynote address at the 2018 Google I/O developers conference on May 8, adding that Google feels “a deep sense of responsibility to get this right.” A Google executive reportedly told employees on June 1 that the company would not seek to extend its work on Project Maven.

Google declared its ethical principles for the pursuit and development of machine learning technology in a June 7 blog post. AI developed by Google, Pichai explained, would be socially beneficial, avoid creating or reinforcing bias, be built and tested for safety, be accountable to people, and uphold high standards of scientific excellence, and the company would make the technology available only for uses consistent with those principles.

The post pledged that Google would not pursue technologies that are “likely to cause overall harm” or that “cause or directly facilitate injury to people,” while maintaining that the company would continue working with militaries and governments in other ways. Like the earlier boycott of the Korea Advanced Institute of Science and Technology by academics and researchers over reports of its collaboration with a major arms company, the episode showed that Google’s workforce can influence the company’s priorities.

It is unclear whether Google’s new principles are adequate to hold the company accountable and prevent it from pursuing other programs similar to Project Maven, but many activists are encouraged.

Mary Wareham of Human Rights Watch, global coordinator of the Campaign to Stop Killer Robots, an international coalition working to pre-emptively ban fully autonomous weapons systems, has said the campaign “welcomes Google’s pledge.” The campaign has nudged Google to express public support for a treaty to ban fully autonomous weapons systems and to invite other technology companies such as Amazon, Microsoft, and Oracle to do so as well.

Multilateral attempts to grapple with military systems lacking meaningful human control are underway. In 2016, states-parties to the Convention on Certain Conventional Weapons established a group of governmental experts tasked with examining emerging technologies such as lethal autonomous weapons systems. This year, the group met in Geneva on April 9–13, discussing, among other issues, the widening gap between technological advances and international legal constraints.

“Technological developments that remove or reduce direct human control over weapon systems are threatening to outpace international deliberations,” said the International Committee of the Red Cross in its statement to the experts group, urging states to act with more urgency.

Following the meeting, the Campaign to Stop Killer Robots reported a total of 26 nations calling for a ban on lethal autonomous weapons systems, even as five countries—France, Israel, Russia, the United Kingdom, and the United States—explicitly rejected negotiating new international law on these systems.

Google has been ambiguous about where it would draw the line, but the momentum from its announcement is a win for treaty advocates. Amazon employees this month wrote to CEO Jeff Bezos criticizing an Amazon facial recognition program intended for use by governments.

The experts group is scheduled to meet Aug. 27–31. Google will continue working on Project Maven until its contract expires next year.