Pentagon Seeks ‘Ethical Principles’ for AI Use

Hoping to encourage artificial intelligence (AI) experts to support U.S. military programs, the U.S. Defense Department plans to develop “ethical principles” for AI use in warfare, Defense One first reported in January. Defense Department leaders asked the Defense Innovation Board, an advisory group that includes Silicon Valley executives, to deliver a set of recommendations in June.

The effort to develop principles follows concerns raised by AI specialists over how their expertise would be used in defense programs. In May 2018, for example, more than 4,000 Google employees signed a petition urging the company to discontinue its work on Project Maven, a Pentagon-funded AI effort to analyze drone footage of suspected terrorists and their hideouts. The employees objected that their civilian-sector work would be put to military use.

Google subsequently announced that it would not renew the Maven contract and promised never to develop AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

Google’s actions have raised concerns at the Defense Department, where senior officials plan to enlist top U.S. software engineers in the design of AI-enhanced weapons and other military systems.

The Defense Innovation Board, an independent federal advisory committee established in 2016 to assist the secretary of defense, is chaired by Eric Schmidt, former executive chairman of Alphabet, Google’s parent company. The board has begun a series of public and private meetings around the country with scientists, academics, legal experts, and others to collect a range of views on the subject.—MICHAEL T. KLARE