Solving the AI-Induced Transparency Paradox in Nuclear Command and Control
December 2025
By José Ignacio Salamanca Friedlaender
Artificial intelligence is entering nuclear command, control, and communications (NC3) faster than diplomacy can react. These tools promise sharper detection of potential missile launches and other anomalous signals, and quicker reactions to the threats those signals appear to pose. But in crisis conditions, speed itself becomes dangerous: false alarms that once could be double-checked now risk turning into irreversible launch decisions.

Nuclear deterrence has always relied on human commanders maintaining calm and deliberate control in the most dangerous moments of history. If that control erodes, deterrence itself becomes unstable. The challenge is not merely keeping humans “in the loop”; it is ensuring that AI systems embedded in the world’s most secret military infrastructure are safe, reliable, and accountable. The international community recognizes these risks, but one fundamental obstacle remains unresolved.
That obstacle is the transparency paradox. The term captures a core dilemma: verifying AI safety requires visibility into nuclear systems, yet revealing that information risks undermining the deterrence those systems exist to protect. Effective governance demands scrutiny, testing, and verification to ensure AI behaves as intended under stress. Yet nuclear command and control operates under maximum secrecy to protect operational security and maintain strategic advantage. Multiple nuclear-armed states, including France, the United Kingdom, and the United States, have reaffirmed commitments to ensure human control over nuclear-use decisions through international dialogue and the Responsible AI in the Military Domain summit.1 These principles are vital first steps, but without verification that such commitments are implemented, they risk becoming symbolic rather than stabilizing. If AI systems cannot be examined inside classified nuclear architecture, how can they be trusted not to fail when seconds matter most?
AI Is Already Having an Impact
Artificial intelligence already is reshaping NC3 in ways that compress warning and decision time. Even when AI is deployed in conventional military domains, its integration accelerates sensor fusion and threat assessment, narrowing the window for strategic deliberation.2 Major powers are moving in this direction simultaneously, reinforcing uncertainty and mutual suspicion.
At the same time, AI decision-support tools—systems that analyze data and present recommendations to operators—can subtly shift authority over nuclear-use decisions from human reasoning toward algorithmic interpretation. The risk is not theoretical: in 1983, Soviet early-warning software falsely detected a U.S. missile launch; only Soviet Lieutenant Colonel Stanislav Petrov’s judgment prevented retaliation, revealing how critical human control remains.3 Deterrence requires deliberation, but AI incentivizes rapid reaction, and if not carefully regulated, that shift could prove catastrophic.
Governments understand these dangers and are trying to respond. The 2022 working paper by France, the United Kingdom, and the United States pledged to “maintain human control and involvement” in nuclear decisions.4 The blueprint for action produced by the Responsible AI summit acknowledges nuclear-specific risks,5 and dozens of states have endorsed the U.S. political declaration on responsible military use of AI and autonomy.6 Yet such commitments rest on a fragile assumption—that states can prove nuclear compliance without sacrificing secrets—a standard they cannot meet today. Each nuclear-armed state fears that revealing too much would expose vulnerabilities, while opacity in rival programs makes restraint feel risky.
When a government cannot see how far an adversary has progressed in integrating AI into NC3 systems, it must assume the worst: that waiting could leave it strategically disadvantaged. Because demonstrating safety inevitably reveals aspects of how the system works, even verifying a government’s own AI-enabled nuclear systems could expose technical details that must remain classified. As a result, governments often conclude they cannot validate safety without also revealing vulnerabilities. As researchers at the Stockholm International Peace Research Institute have noted, no governance framework currently exists that is tailored to the AI-nuclear interface.7 Governments pledge human control and rigorous oversight, but such assurances cannot be demonstrated while AI-nuclear systems remain fully classified.
The challenge is escalating precisely because AI behaves unlike any previous military technology. Nuclear hardware is static; once deployed, its performance remains consistent for years. AI systems, by contrast, evolve constantly as they ingest new data or receive updates, meaning a system verified today may not behave the same way tomorrow. Verification assumes predictability—and when behavior changes unpredictably, verification loses meaning. States now worry less about adversaries’ warhead numbers and more about how quickly their decision-making cycles might change. Faster interpretation of ambiguous data and greater automation could be misread as preparations to bypass human judgment, increasing the risk of miscalculation. Verification becomes a continuous race for insight, widening the gap between what stability requires and what secrecy allows.
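To see what continuous certification could look like in practice, imagine a fixed battery of test scenarios replayed against every new model version, with any change in output sent back for human review before the update enters service. The short Python sketch below is purely notional: the scenario names, the certified outputs, and the classify function are invented stand-ins for illustration, not descriptions of any real early-warning system.

CERTIFIED_BASELINE = {
    # Hypothetical scenarios paired with the outputs certified for the
    # previously verified model version.
    "single_faint_infrared_bloom": "flag_for_human_review",
    "five_simultaneous_tracks": "flag_for_human_review",
    "sunlight_glint_on_clouds": "no_threat",
}

def classify(scenario_id: str) -> str:
    # Stand-in for the updated model under test; a real audit would query
    # the deployed system inside a sealed test environment.
    canned_outputs = {
        "single_faint_infrared_bloom": "flag_for_human_review",
        "five_simultaneous_tracks": "probable_attack",  # drifted after the update
        "sunlight_glint_on_clouds": "no_threat",
    }
    return canned_outputs[scenario_id]

def detect_drift() -> list[str]:
    # Replay the certified battery and report any behavioral change.
    findings = []
    for scenario, certified in CERTIFIED_BASELINE.items():
        observed = classify(scenario)
        if observed != certified:
            findings.append(f"{scenario}: certified '{certified}', now '{observed}'")
    return findings

if __name__ == "__main__":
    drift = detect_drift()
    print("\n".join(drift) if drift else "No drift detected; certification holds.")

The point is not the code but the discipline it represents: any detected change in behavior would return the system to certification before it re-enters service.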
So, what does credible verification require? Assurance depends on understanding how an AI system is trained, what data it processes, how it behaves under stress, and how it fails. That understanding requires at least partial access to how the system is designed and how it performs in controlled testing. Dual-use biotech research illustrates the stakes: an AI model designed for drug discovery generated 40,000 potentially lethal molecules in six hours—including variants of VX, one of the deadliest nerve agents ever created. The developers could not fully explain these results because the system learned through complex, non-transparent iterative processes.8 Nuclear systems cannot tolerate that level of uncertainty. If designers cannot predict failure conditions, operators cannot trust the technology.
Yet revealing NC3 systems to enable testing would create catastrophic vulnerabilities of its own. These systems rely on classified satellite feeds, encrypted communication channels, and fail-deadly logic designed to guarantee retaliation even under attack. Disclosing how AI integrates into these functions could expose exploitable weaknesses or signal precisely where an adversary should strike. The transparency paradox is structural: The very information required for verification is also the most dangerous to disclose.

Historical arms control demonstrates that secrecy and verification can coexist through carefully engineered mechanisms. International Atomic Energy Agency (IAEA) safeguards are designed to verify that nuclear material is not diverted to weapons, measuring outcomes rather than inspecting weapon designs.9 The Joint Comprehensive Plan of Action—the 2015 Iran nuclear deal involving China, France, Germany, Russia, the United Kingdom, and the United States—applied this logic through “managed access,” which allowed IAEA inspectors to confirm limits on enrichment without exposing operational technologies. These mechanisms show that trust can be built without full exposure.
The New Strategic Arms Reduction Treaty (New START) expanded this principle for deployed Russian and U.S. nuclear forces. Tag-and-seal practices, telemetry-sharing limits, and on-site inspections enabled verification while protecting sensitive warhead designs.10 What New START achieved with hardware, AI governance must now accomplish with software: verification without exposure. AI introduces two new complexities: software evolves constantly, requiring ongoing certification; and automation in adversarial contexts can be misinterpreted as aggressive intent. Together, these qualities demand verification that is adaptive, continuous, and insulated from espionage risks.
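One well-understood technique suggests how part of that task might be done: a cryptographic commitment. A state could disclose only a hash digest of the model version that passed safety review; a later managed-access inspection would recompute the digest on the deployed system to confirm that nothing had been quietly swapped in, without the underlying software ever leaving the facility. The Python sketch below is a minimal illustration of the idea, offered as an analogy rather than a description of any existing inspection regime.

import hashlib
from pathlib import Path

def commit_to_model(weights_path: Path) -> str:
    # SHA-256 digest of a model's weight file. The digest reveals nothing
    # about the weights themselves, but any change to the file changes it.
    digest = hashlib.sha256()
    with weights_path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_deployment(weights_path: Path, declared_commitment: str) -> bool:
    # An inspector recomputes the digest on site and compares it with the
    # value declared when the system was certified.
    return commit_to_model(weights_path) == declared_commitment

A commitment of this kind does not prove that a model is safe; it proves only that the model being run is the one that was examined. That is the software analogue of a tag-and-seal check.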
Strengthening governance also requires institutions capable of turning voluntary political declarations into practical, enforceable measures. States could incorporate AI-NC3 safeguards into existing arms control and risk-reduction channels, including the process involving the five permanent members of the UN Security Council and military-to-military communication frameworks. Regional nuclear actors could participate through tailored levels of transparency, adjusted to what each can safely and realistically share. Confidence-building measures, such as reciprocal observation of AI-in-the-loop training exercises or the exchange of safety-testing methodologies, would help normalize expectations for responsible behavior. Over time, such initiatives could evolve into formal agreements embedding human-control guarantees into states’ nuclear postures and establishing periodic AI risk assessments. Governance must begin where political will exists but grow into international institutions capable of managing technologies that evolve far faster than treaty negotiations.
A two-track model could help achieve this. First, political commitments ensuring meaningful human control over nuclear-use decisions must become enforceable rules, not aspirations. Second, international standards should mandate that AI systems used in nuclear command and control are transparent, thoroughly tested, and air-gapped from launch-authorization decisions.11 Secure “red team” exercises, behavioral audits, and stress-testing could confirm reliability under crisis conditions without exposing source code. Independent evaluators could observe system outputs in controlled environments, ensuring that override commands work reliably and dangerous automation does not occur. These tools would replace trust in rhetoric with trust in demonstrated performance.
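To make the black-box idea concrete, the hypothetical Python sketch below treats a decision-support tool as an opaque function and checks two properties evaluators might demand: an operator override always prevails, and the system never emits anything beyond a recommendation to a human. Every name in it is invented for illustration; no source code of the audited system is needed, only its inputs and outputs.

from dataclasses import dataclass

# Actions an audited decision-support tool is permitted to emit: it may
# recommend escalation to a human commander, never take autonomous action.
PERMITTED_ACTIONS = {"monitor", "escalate_to_human"}

@dataclass
class Recommendation:
    action: str
    confidence: float

def recommend(sensor_confidence: float, operator_override: bool) -> Recommendation:
    # Black-box stand-in for the system under audit; evaluators observe
    # only inputs and outputs, never the internals.
    if operator_override:
        return Recommendation("monitor", sensor_confidence)
    if sensor_confidence > 0.95:
        return Recommendation("escalate_to_human", sensor_confidence)
    return Recommendation("monitor", sensor_confidence)

def run_audit() -> list[str]:
    # Sweep a grid of test inputs and record any violation of the two properties.
    failures = []
    for confidence in (0.10, 0.50, 0.96, 0.999):
        for override in (False, True):
            rec = recommend(confidence, override)
            if override and rec.action != "monitor":
                failures.append(f"override ignored at confidence {confidence}")
            if rec.action not in PERMITTED_ACTIONS:
                failures.append(f"impermissible action '{rec.action}' at confidence {confidence}")
    return failures

if __name__ == "__main__":
    failures = run_audit()
    print("\n".join(failures) if failures else "All audited properties held.")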
The window for action is narrowing. AI is rapidly being integrated into early-warning, tracking, and decision-support architectures across nuclear-armed states. Once embedded in classified command chains, unsafe automation will be difficult to remove. Meanwhile, civilian AI breakthroughs are spilling quickly into military applications. In competitive environments, no state wants to appear slower than its rivals; the arms race of the future may not begin with warhead counts, but with response times.
The transparency paradox is not a reason to delay; it is a warning that delay could be fatal. Secrecy will always be essential to deterrence, and verification will always be essential to trust. Arms control has resolved similar tensions before; it can do so again. The task now is to build mechanisms that prove what states cannot reveal. If these efforts fail, the world may soon rely on machines to make judgments once reserved for human beings—and in nuclear crises, milliseconds matter. AI is moving faster than most people think.
ENDNOTES
1. Fei Su, Vladislav Chernavskikh, and Wilfred Wan, “Advancing Governance at the Nexus of Artificial Intelligence and Nuclear Weapons,” Stockholm International Peace Research Institute, SIPRI Insights on Peace and Security, No. 2025/03 (March 2025).
2. Vladislav Chernavskikh and Jules Palayer, “Impact of Military Artificial Intelligence on Nuclear Escalation Risk,” Stockholm International Peace Research Institute, SIPRI Insights on Peace and Security, No. 2025/06 (June 2025).
3. Bruce G. Blair, The Logic of Accidental Nuclear War (Washington, DC: Brookings Institution Press, 1993), pp. 187-189.
4. Fei Su, Vladislav Chernavskikh, and Wilfred Wan, “Advancing Governance at the Nexus of Artificial Intelligence and Nuclear Weapons,” Stockholm International Peace Research Institute, SIPRI Insights on Peace and Security, No. 2025/03 (March 2025).
5. Ibid., reference to the REAIM “Blueprint for Action” (2024).
6. Ibid., Political Declaration on Responsible Military Use of AI and Autonomy (2023-2024).
7. Ibid., discussion of governance gaps at the AI-nuclear interface.
8. Fabio Urbina et al., “Dual Use of Artificial Intelligence-Powered Drug Discovery,” Nature Machine Intelligence, Vol. 4, No. 5 (2022): pp. 429-436.
9. International Atomic Energy Agency, “The Safeguards Implementation Report for 2024,” 2024.
10. Union of Concerned Scientists, “Verification of New START” factsheet, July 2010.
11. Future of Life Institute and Strategic Foresight Group, “Framework for Responsible Use of AI in the Nuclear Domain,” February 2025.