AI Plays Major Role in the War on Iran
May 2026
By Michael Klare
Artificial intelligence has played a major role in selecting targets for attack during Operation Epic Fury, the U.S. air and missile campaign against Iran that began Feb. 28. Senior U.S. officials said that the Pentagon relied on an AI-powered data-fusion and decision-support program, the Maven Smart System, to identify top-priority targets and help choose the weapons used in attacking them.

According to an April 8 White House accounting, the U.S. military struck more than 13,000 targets in Iran during the first 38 days of the war, including more than 2,000 command-and-control targets, 1,500 air defense targets, and 1,450 industrial base targets. The White House claimed that all were legitimate military targets, but The New York Times and other news media reported visual evidence of the destruction of civilian facilities. This led critics to charge that overreliance on AI resulted in targeting errors and unnecessary civilian casualties.
As employed in Iran, the Maven Smart System, produced by Palantir Technologies Inc., collected information on enemy positions from radar signals, satellite and drone imagery, electronic communications, and other sources and combined it into a “common operating picture” of the battlefield, Cameron Stanley, the Pentagon’s chief digital and AI officer, told Palantir’s AIPCon 9 conference March 12. He said that Maven was also used to identify the friendly unit best positioned to strike any given target and to generate courses of action for U.S. strike units to follow, encompassing attack vectors, munitions to be employed, and other pertinent data.
“Instead of having eight or nine systems” for commanders to consult when making combat decisions, Maven fuses everything “into a single visualization tool” for use in decision-making, Stanley explained. Once so equipped, commanders can identify a target and then select a strike package to engage it, simply by clicking on a screen, he said. “So, we’ve gone from identifying the target to now coming up with a course of action to now actioning that target, all from one system. This is revolutionary.”
Stanley said that Maven has become increasingly sophisticated over time. When initiated in 2017 by the Pentagon’s Algorithmic Warfare Cross-Functional Team (later the Joint Artificial Intelligence Center), Project Maven was designed to use computer vision and machine learning to sort through drone footage of Middle Eastern battlegrounds and identify potential militant hideouts for possible attack. In the years since then, Palantir has steadily improved the technology, now renamed the Maven Smart System, enabling it to collect and collate data from multiple sources and to recommend possible combat moves.
In late 2024, Anthropic’s Claude large language model was integrated with the Maven technology to provide military users with enhanced targeting options. Commanders can now use the combined system to generate target lists sorted by category, such as radar stations, missile batteries, communications nodes, and senior commanders, and rank them by strategic importance. Once a target has been attacked, the system can review damage assessment reports and automatically produce new target lists—all in a matter of minutes.
Anthropic has since been barred from providing services to the U.S. military because of its refusal to support work on autonomous weapons systems and domestic surveillance operations; other AI companies, including OpenAI, have been recruited to assume its role.
For U.S. military officials, Maven’s speed in selecting and reselecting targets represents a distinct combat advantage, allowing U.S. forces to disable Iranian combat capabilities swiftly and continuously, preventing their reconstitution.
“Our war fighters are leveraging a variety of advanced AI tools,” said Adm. Brad Cooper, commander of U.S. Central Command, in a March 11 video briefing. “These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react,” Cooper said.
But this very speed in target selection is what worries many observers, given the risk that sites will be chosen for attack without adequate human oversight. Although human officers supposedly review every target before a strike order is issued, there is a growing danger that “humans may rely too much on the system,” and fail to double-check its recommendations, Nilza Amaral, head of research at Chatham House’s Global Governance and Security Centre, told The National, an Abu Dhabi-based newspaper.
With humans granted ever-diminishing time in which to review AI-derived targeting decisions, the risk of error naturally increases, many analysts say. Whether AI played a role in the Feb. 28 U.S. cruise missile strike on the Shajareh Tayyebeh girls’ elementary school in Minab, Iran, which killed more than 170 people, most of them children, is unclear. According to a preliminary assessment by Central Command, U.S. intelligence maps failed to indicate that the school facility, once part of a military base, had long ago been converted to civilian use; the site had been added to an AI-generated target list without adequate human supervision. The New York Times reported March 11 that “officials said the error was unlikely to have been the result of new technology” but rather “human error in wartime.”
Although human error, a failure to update military intelligence maps, has been deemed the most likely cause of the Shajareh Tayyebeh missile strike, many observers warn that growing reliance on AI-powered targeting systems will foster “automation bias,” the tendency to accept machine recommendations uncritically, resulting in further tragedies of this sort.
“There’s a concern that targeting [approval] could end up just being a mere formality because of the automation bias, where people are just relying on what the machine is telling them,” Amaral said.