"[Arms Control Today] has become indispensable! I think it is the combination of the critical period we are in and the quality of the product. I found myself reading the May issue from cover to cover."
Pentagon Labels AI Company Supply Chain Risk
April 2026
The U.S. Department of Defense designated AI company Anthropic a supply chain risk March 4, in a move the firm described as punishment in an ongoing contract dispute.
The designation will prevent the department and its contractors from using Anthropic products, including its AI model, Claude, and associated tools, in their operations.
The company’s chief executive officer, Dario Amodei, said in a Feb. 26 statement that two issues remained outstanding in contract renegotiations with the Pentagon: the company’s insistence that its products not be used for domestic mass surveillance or for fully autonomous weapons systems.
The Pentagon insisted that its contract with Anthropic permits “any lawful use.”
Anthropic’s “true objective is unmistakable: to seize veto power over the operational decisions of the United States military,” said Defense Secretary Pete Hegseth in a Feb. 27 social media post.
President Donald Trump said on the same day in a social media post that he had ordered the entire federal government to cease using Anthropic products within six months.
A judge in the Northern District of California granted Anthropic a preliminary injunction March 26 in a lawsuit that the firm had filed against the Defense Department. The lawsuit described the Pentagon’s designation as “unprecedented and unlawful.”
Competitor firm OpenAI announced late Feb. 27 that it had reached agreement with the Pentagon on terms of use for its products.
Anthropic’s Claude overtook OpenAI’s ChatGPT in mobile app downloads for the first time following the Pentagon decision, according to mobile analytics firm Appfigures.—XIAODON LIANG