By Michael T. Klare
Only a handful of global technology firms have adopted explicit policies to prevent their products from being used in lethal autonomous weapons systems, also called “killer robots,” according to a survey published in August. Such weapons have become highly controversial because they are gaining a capacity to identify and attack targets without human supervision.
Just seven of the 50 companies surveyed in 12 nations were rated as following best practices for ensuring their technology would not be used in these systems. The survey, called “Don’t Be Evil?,” was conducted by PAX, a Dutch advocacy group.
“Don’t Be Evil” was once the official motto of Google, where thousands of workers signed an open letter in April 2018 calling on the company to cancel its involvement with Project Maven, a Pentagon-funded initiative aimed at harnessing artificial intelligence (AI) to interpret video images, a capability that could enable lethal attacks by autonomous weapons systems. Google’s management chose not to renew the contract when it expired in June of that year, promising that the company would not help develop AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
The PAX survey queried major firms known to be developing technologies relevant to autonomous weaponry, such as AI software and systems integration, pattern recognition, aerial drones and swarming, and ground robotics. The list included many household names (Amazon, Google, IBM, and Microsoft) as well as lesser-known firms involved in specific facets of tech development (Anduril, Clarifai, and Palantir). The companies were asked to describe their policies on the development of these weapons systems in a questionnaire submitted by PAX; some responded, while others declined.
Using survey responses and open-source literature, PAX analysts placed each company into one of three categories: Best Practices (firms that explained their policies for preventing the use of their technology in lethal autonomous weapons systems); Medium Concern (firms known to be working on military applications of their technologies that either declined to answer the survey or answered it but claimed their military work did not encompass these systems); and High Concern (firms working on military applications of relevant technologies that refused to answer the survey).
Only seven companies, including Google and General Robotics, were rated Best Practices; another 22, including Apple, Facebook, and IBM, were placed in the Medium Concern category; and 21, including Amazon, Intel, and Microsoft, were placed in the High Concern category.
With no international agreement in place to constrain the development and deployment of lethal autonomous weapons systems, a greater burden falls on executives of the major tech firms to establish and enforce ethical principles governing the military applications of their products. Although officials at some tech firms, such as Google, have expressed reservations about working with the military on projects related to these systems, their counterparts at other companies have professed a willingness to work on such devices for the Defense Department or the militaries of other countries. This inconsistency could lead to an unregulated environment in which the introduction of these systems proceeds apace. By publishing its survey, PAX hopes to encourage greater transparency and self-restraint on the part of key tech firms.
“Companies working on these technologies…need to have policies that make clear how and when they draw the line regarding the military application of their technology,” said the PAX report.