Reducing Strategic Risks of Advanced Computing Technologies

January/February 2023
By Lindsay Rand

In the past few years, U.S. policymakers have struggled to craft policies that embrace the benefits of advanced computing technologies and enable competitive innovation while mitigating risks from their widespread applications. Even as a U.S.-Chinese technology competition looms, policymakers must recognize the arms-racing risks to strategic stability and pursue policies, even if unilateral, to resolve the ambiguity around computing technologies that are deployed in strategic settings.

U.S. technicians test the operational capabilities of a swarm of 40 drones at the U.S. National Training Center at Fort Irwin, Calif., in 2019. (U.S. Army Photo by Pv2 James Newsome)

Although computing technologies have been a security focus since World War II, a strategic shift toward technology competition with China and a rapidly accelerating pace of innovation are challenging prior U.S. governance strategies. U.S. policymakers faced a narrower version of this problem in the 1970s, when they and their UK counterparts agreed to restrict the flow of high-end computers to Eastern Bloc countries over national security concerns but disagreed on whether to control trade in low-end computers.1

In the context of a renewed competitive environment, the United States again faces the challenge of crafting policies that accelerate domestic research and development, so as to compete for and maximize the economic and strategic benefits of these technologies, while identifying and curtailing the national security risks of their deployment and proliferation. Yet, policies based on controlling exports of computing technologies, as leveraged in the 1970s, will be less effective today given a broader network of private sector actors and a wider set of hardware and software computing technologies. The development of drone swarm technologies, for example, has highlighted the risks of advanced computing applications.2 Swarms are fleets of drones that are networked using a variety of advanced computing technologies to perform synchronized maneuvers. Militaries are interested in swarms because they could overwhelm an adversary’s offensive or defensive systems or support expanded, persistent intelligence operations.3 Swarms also have civilian applications across many industries, including emergency response and agriculture, and many technologies developed for private sector applications are repurposed for military operations.
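To make the coordination idea concrete, the following is a minimal sketch of the kind of decentralized consensus update that underlies synchronized swarm maneuvers. The drone count, network topology, and update rule are illustrative assumptions, not a description of any fielded system:

```python
# Illustrative sketch of decentralized swarm coordination: each drone
# repeatedly averages its heading with those of the drones it can
# communicate with (a simple consensus update), so the fleet converges
# on a common heading without any central controller.

def consensus_step(headings, neighbors):
    """One synchronized update: each drone nudges its heading to the
    mean of its own heading and its neighbors' headings."""
    updated = []
    for i, heading in enumerate(headings):
        peers = [headings[j] for j in neighbors[i]] + [heading]
        updated.append(sum(peers) / len(peers))
    return updated

# Four drones with divergent headings (degrees), linked in a ring network.
headings = [0.0, 90.0, 180.0, 270.0]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

for _ in range(50):
    headings = consensus_step(headings, neighbors)

# After repeated updates, the spread across drones shrinks toward zero
# and every drone ends up near the fleet's average heading.
spread = max(headings) - min(headings)
```

Even this toy version shows why verification is hard: the "capability" lives in software update rules and network links, not in any physical component that can be counted or inspected.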

Without adequate vetting and testing of how the various advanced computing elements will perform in a strategic environment, however, prematurely deployed swarms with defects or unverified components could produce significant consequences. They could lead to escalation through adversary interference, through unintended and unsupervised activities that provoke adversaries, or through faulty deployment that is misinterpreted as malicious intent.4 Wider proliferation of swarm-applicable computing technologies also increases the likelihood that swarming will be leveraged for nefarious purposes, such as easier delivery of chemical, biological, radiological, or nuclear weapons.5

If the history of Cold War competition bears any resemblance to the challenges that can be expected in the digital age, Washington will face a pacing problem: competing to assert leadership in the field while slowing innovation enough to establish more effective guidelines for the legitimate use of, access to, and R&D on advanced computing technologies.

Biden administration policies clearly indicate an intent to compete for technological leadership. Given the precedent set by the Cold War, an era of accelerated innovation should be expected. In supporting rapid innovation, however, policymakers could miss an important chance to preempt risky applications and their broad proliferation unless guardrails for safer use, such as clearer definitions, metrics, and testing requirements for military applications, are soon identified and implemented.


Advanced computing technology is a broad category that encompasses systems and techniques used to improve computation hardware and software. Artificial intelligence (AI) is perhaps the best-known category of advanced computing technology and signals a remarkable improvement in computing software proficiency. Key hardware technologies, including quantum computers and exascale supercomputers, have likewise increased the brute force of computing power.

In terms of policymaking and governance, improvements in advanced computing hardware are more compatible with existing forms of regulations and controls. Hardware components necessarily consist of tangible elements that can be tested and evaluated in observable ways and are more easily defined and controlled in agreements. One of the key challenges with managing advanced computing technologies is the fact that many are software based, meaning no physical objects associated with the innovations can be tracked, verified, or monitored. This means policymakers must seek new governance options.

Although each hardware or software technology could uniquely improve computing speed or complexity, there is also an element of amplification in the interaction between advanced computing technologies. For example, hardware improvements could allow for more powerful software capabilities or software improvements could further maximize utility under constraints of existing hardware.6 Additionally, significant improvements in either category could catalyze R&D breakthroughs across other branches of advanced computing technologies.7 This interconnection indicates that domino effects in advanced computing R&D, as well as deployment, are feasible and likely, emphasizing the importance of early policy efforts to clearly define and regulate different types of advanced computing technologies.

Momentum in Recent Policies

Recently published strategies and policies on advanced computing technologies indicate that the Biden administration faces competing pressures in managing the new technologies. On May 4, 2022, President Joe Biden signed two presidential directives on quantum computing and the broader category of quantum information sciences.8 Together, they send mixed signals to private sector stakeholders and governmental agencies about the national strategy for supporting and directing R&D on quantum information sciences and technology.9

The first directive calls for bolstering domestic quantum R&D capacity by enhancing the National Quantum Initiative Advisory Committee, which provides independent assessments and recommendations on the national quantum program, and by declaring the importance of U.S. leadership in quantum information sciences and quantum technology applications.10 The second directive, which recognizes the potential risks that quantum technologies pose to cybersecurity, calls for efforts to minimize such vulnerabilities by bolstering cryptography standards and increasing awareness of risks and new security requirements across agencies.11 The two directives present a complex narrative, legitimizing concern about quantum technologies while endorsing arms-racing-style competition with near-peer technology competitors for leadership.

In October, the White House Office of Science and Technology Policy released a white paper titled “Blueprint for an AI Bill of Rights” that echoes a similar underlying strategy but with a different policy scope.12 The document declares its purpose to be “to protect the American public” in an age of AI and proposes methods to ensure safe, effective systems; protections against algorithmic discrimination; and data privacy. It also calls for procedures to provide notice when an automated system is being used and an explanation of the scope and mechanisms of its operation, as well as access to human alternatives to the automated system and human consideration and fallback mechanisms if an automated mechanism errs.

Although the document assumes a greater regulatory role than any quantum policy to date, it has been criticized for not explicitly acknowledging the need for caution in the use of AI in any specific circumstances and for its absence of legal force. Without cautioning careful evaluation of use-case suitability, the document could be interpreted as condoning broader, even indiscriminate application of AI technologies as long as minimal guidelines are met.13 This is despite widely recognized and legitimate concerns about the technology, including the discrimination implicit in algorithms applied for surveillance and the exacerbation of crisis instability and escalation risks in defense applications.14 Further, because it lacks legal authority, the document serves only as a recommendation.

As these policies continue to be published, it has become apparent that the Biden administration will prioritize competing for international leadership on advanced computing by fostering rapid technology innovation and limiting regulatory policies. By minimizing regulatory constraints, the administration is likely hoping to reduce any friction that could impede innovation in the United States or disincentivize private sector investment and R&D.15 Although this decision may support strategic competition, it necessarily limits risk reduction to recommendations and reactive measures and makes clear that the risks of deploying these technologies will be treated as symptoms to be managed rather than prevented through more rigorous evaluation and testing.

If the dangers of unrestrained technology competition were recognized and acknowledged as arms racing, could risk reduction efforts be improved by governmental adoption of more proactive policies to serve as guardrails for innovation? Importantly, the U.S. decision to compete for international leadership, and thus to promote the relatively unfettered development of advanced computing technologies, introduces risks to strategic stability that may have been undervalued when the technologies are leveraged for military applications. Understanding the impact of these flawed policies that signal an intent to engage in arms racing, and devising more constructive ones, should be an urgent U.S. priority.

Strategic Stability Risks

In the national security domain, advanced computing technologies are referred to as enabling technologies, a term indicating that they are not weapons themselves but that they enable strategic operations and have broader applications than traditional military technologies.16 Even as enabling technologies, however, advanced computing technologies can create destabilizing risks in three primary ways.

First, they increase offensive cybercapability by allowing for data mining or longer, more persistent engagement, as in the case of AI, or more brute force, as in the case of quantum computing. Numerous articles have been written identifying the strategic stability implications of cybercapabilities, including in their application to critical infrastructure, nuclear command and control, and military operation domains.17 In the cyber application, a drastic, asymmetric advantage gained by one country establishing a clear lead in advanced computing technologies would have significant consequences for the offense-defense balance. In creating a perceived imbalance in military capabilities between adversaries, such an advantage could impose new crisis escalation risks or further incentivize arms-racing dynamics.18

Second, advanced computing technologies increase data processing power, which has been called the weaponization of data. In this context, an increased ability to survey “big data” could enable a country to determine strategically significant operational trends, identify vulnerabilities, or detect the asset locations of its adversaries.19 For example, advanced computing technologies may allow a country to harness big data to improve detection, tracking, and targeting of the mobile nuclear weapons delivery systems that constitute a nuclear-armed state’s second-strike capability.20

The extent to which data can fundamentally disrupt the conditions for strategic stability and mutual vulnerability, however, is still not clear. An important caveat applies to this risk factor: there is a fine line between recognizing the significance of the ability to analyze large amounts of data and avoiding outsized expectations that ignore the feasibility and practical constraints that may impede deployment in practice. A country’s perception that an adversary has these capabilities could be destabilizing on its own, but assessing the extent to which better data processing power actually could render certain assets vulnerable provides greater clarity on the magnitude of this risk.

Finally, advanced computing technologies shorten decision-making time by accelerating the pace of conflict scenarios. Although the first two risk areas primarily highlight the possibility of deliberate deterrence failure by changing the actual or perceived balance of mutual vulnerability required for deterrence, increasing the speed of combat is most often associated with inadvertent deterrence failure scenarios. Specifically, advanced computing technologies are likely to increase crisis instability by shrinking decision-making time and forcing humans to rely on fully or partially automated decisions in crises. This risk has been discussed extensively in the context of lethal autonomous weapons systems.21

One risk of advanced computing technologies is that they shorten decision-making time by accelerating the pace of conflict scenarios. In December, a Ukrainian soldier loaded a mortar launcher before firing on Russian positions in eastern Ukraine. (Photo by SAMEER AL-DOUMY/AFP via Getty Images)

Beyond these three categories, an additional strategic stability impact arises from the signaling, hype, and investment in these advanced computing technologies. In the policy sphere, the perception of capability is almost as important as the actual capabilities at hand given the ambiguity of these technologies, especially software-based capabilities. For example, there is no way for a country to verify the quality and scope of AI or computing mechanisms that another country is using to augment its system, and thus policymakers and strategists must rely on perception. Because of this ambiguity, a country’s failure to send clear signals to its adversary could incentivize technology buildup and heighten arms-racing instability. Even among domestic stakeholders, hype, or the exaggerated perceptions of a technology’s potential, can lead policymakers to adopt different strategies than they would if they knew a technology’s true limitations.

Biden’s recent policy announcements are particularly risky for this exact reason. When these sorts of policy statements are not accompanied by greater specificity about the purpose of the reported strategies, how the United States intends to leverage its leadership, and what is even meant by leadership, such policies could be interpreted as a readiness to engage in a technology arms race and a willingness to accept crisis and arms-racing instability.

Governance Challenges

Even in a geopolitical climate that would be favorable toward arms control policies or cooperative regulations on advanced computing technologies, dual-use applications and definitional ambiguity would pose major obstacles.

The term “dual use” refers to the fact that advanced computing technologies have both military and civilian applications. Beyond military systems, advanced computing technologies could be used in nearly any industry, a fact evidenced by the heavy flow of venture capital investment and the large volume of start-up companies geared toward specific applications. This means that technology-based regulations or agreements could affect a far wider circle of private sector technology developers and users than those strictly in the defense industrial base. In a competitive environment, policymakers must navigate private sector interests carefully because overly restrictive regulations could dampen innovation and harm economic security.

Issues also arise from definitional ambiguity given the nascent stage of development of the new technologies.22 Software-based advanced computing technologies pose particularly pernicious definitional challenges because of their ambiguous nature, but distinguishing across different types of advanced computing hardware is also challenging at early R&D stages, as exemplified by quantum technologies. Metrics for evaluating the capability of a certain technology and testing procedures to ensure that the technology operates as expected increase transparency around the capabilities and limitations of a technology, but are often elusive for new technologies.
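Though the article stops short of specifying them, such metrics and testing procedures might look, in miniature, like the following sketch of a pre-acquisition acceptance check. The metric names and thresholds are hypothetical, invented purely for illustration:

```python
# Hypothetical pre-acquisition acceptance test of the kind the article
# recommends: a candidate system is scored against explicit, published
# metrics, and any requirement it misses is flagged before deployment.
# The metric names and thresholds below are invented for illustration.

REQUIREMENTS = {
    "detection_accuracy_min": 0.95,   # fraction of test targets identified
    "false_alarm_rate_max": 0.01,     # fraction of spurious detections
    "decision_latency_ms_max": 50.0,  # worst-case response time
}

def evaluate(measured):
    """Return the names of the requirements a measured system fails."""
    failures = []
    if measured["detection_accuracy"] < REQUIREMENTS["detection_accuracy_min"]:
        failures.append("detection_accuracy")
    if measured["false_alarm_rate"] > REQUIREMENTS["false_alarm_rate_max"]:
        failures.append("false_alarm_rate")
    if measured["decision_latency_ms"] > REQUIREMENTS["decision_latency_ms_max"]:
        failures.append("decision_latency")
    return failures

# A candidate that is accurate and fast but trips the false-alarm threshold.
candidate = {
    "detection_accuracy": 0.97,
    "false_alarm_rate": 0.03,
    "decision_latency_ms": 42.0,
}
failures = evaluate(candidate)  # flags "false_alarm_rate"
```

The hard policy work lies not in the check itself but in agreeing on which metrics matter for a given strategic use case and how to measure them, which is exactly where definitional ambiguity bites.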

Given these governance challenges, and in the context of a national strategy promoting technology competition with adversaries that makes traditional agreements to restrict certain military applications unlikely, policymakers should prioritize risk reduction policies to minimize disruption to strategic stability. This includes unilateral efforts, as well as cooperation with international allies, to produce clear definitions for each type of advanced computing technology, metrics for evaluating performance, and procedures to test functionality in strategic environments. Although documents such as the AI bill of rights blueprint provide guidance for technology innovation, they will not effectively reduce strategic risks without definitions to scope the technology, metrics to evaluate performance, and testing procedures to identify any risks to be mitigated before acquisition or deployment.

The best immediate policy option is for the United States to pursue, unilaterally, metrics and rigorous testing procedures that increase transparency and reduce risks in the strategic environment. Even without formal international agreements, rigorous standards that must be met before acquisition and deployment in strategic environments would mitigate the risk of unintended escalation that could be perceived as the fault of the United States. Such standards also could help reduce hype and mitigate arms-racing risks by providing greater clarity on the computing technologies leveraged in a given domain. Finally, a better understanding of a technology’s performance will improve U.S. policymaking. For example, understanding the limitations of missile defense early on helped shape policy rhetoric around the technology, even if it could not curb acquisition demand.

The power and reach of risk reduction governance mechanisms can be enhanced through U.S. policymaker engagement with broader networks at home and abroad. As the swarming example illustrates, many advanced computing technologies will be developed by the private sector for alternative civilian purposes. U.S. policymakers involved in military acquisition processes should ensure that private sector innovators are aware of the operational risks to which computing technologies will be exposed in strategic environments that may differ from those in civilian environments. Likewise, U.S. policymakers should engage with allies that historically have helped facilitate technology risk reduction measures, such as the UK-U.S. partnership to limit computing technology exports to Eastern Bloc countries.

The exascale-class HPE Cray EX supercomputer at Oak Ridge National Laboratory in Tennessee. (Photo courtesy of Oak Ridge National Laboratory)

In domestic outreach, policymakers must engage relevant federal agencies and the private sector. On interagency cooperation, policymakers need to weigh economic and security concerns among various governmental stakeholders to identify applications where use-case-oriented testing could reduce strategic risk without creating an obstacle to innovation. Balancing these objectives and ensuring compliance with the metrics and evaluation protocols that are developed will require working to increase trust and understanding with private sector technology developers and users. To some extent, this was undertaken in 2018 when the U.S. Department of Commerce requested comments from the public on the criteria for defining and identifying emerging technologies, but the degree to which the views of private sector stakeholders were considered is not clear.23

Although definitions and testing procedures should be crafted to fit the application needs of the United States, U.S. policymakers should work with allied countries to facilitate dialogue on standards. The United States has a mixed post-Cold War record of cooperating with allies on emerging dual-use technology R&D, but a new series of cooperative agreements on quantum information sciences with Australia, Denmark, Finland, France, Sweden, Switzerland, and the United Kingdom suggests that U.S. policymakers view strategic research partnerships as increasingly important for advanced computing technologies.24 These types of agreements, based in R&D, can help propagate definitions and protocols abroad.

With time and network outreach, unilateral risk reduction measures eventually could have a broader reach. Globalized academic networks and private markets mean that definitions and standards adopted by the United States may permeate naturally to other countries. Especially if technical experts view the definitions and testing procedures as opportunities to validate their own technologies, U.S. adversaries and competitors may even find strategic benefits in adopting their own risk reduction measures. These efforts could lay the groundwork for eventual cooperation when geopolitical tensions cool or, at the very least, could provide a starting point for Track 2 dialogue. Furthermore, once better definitions, metrics, and testing procedures are in place, U.S. policymakers can use the increased transparency eventually to develop better policies to restrict use or access and to guide necessary R&D.

The Need for Action

Ultimately, many of the challenges associated with regulating advanced computing technologies in the digital age are not so dissimilar from those faced in the Cold War era. If the history of nuclear weapons is any indication, a reactive policy approach could repeat the decades-long arms reduction and risk reduction process that took years to yield real results and has now ground to a halt for political reasons. Policymakers would be wise to avoid this mistake and instead create space for more proactive governance of advanced computing technologies by establishing unilateral risk reduction measures and laying the groundwork now for eventual agreements.

As it stands, the current U.S. approach of prioritizing competition underestimates the risks of arms racing and the disruptions to strategic stability that advanced computing technologies may provoke. Although an environment of strategic competition will create an impetus for rapid innovation, policymakers would be wise to view better standards in strategic deployments as guardrails to protect against escalation and risks rather than road bumps or detours that fundamentally will impede U.S. innovation.


1. Frank Cain, “Computers and the Cold War: United States Restrictions on the Export of Computers to the Soviet Union and Communist China,” Journal of Contemporary History, Vol. 40, No. 1 (2005): 131–147.

2. Yongkun Zhou, Bin Rao, and Wei Wang, “UAV Swarm Intelligence: Recent Advances and Future Trends,” IEEE Access, Vol. 8 (2020): 183856–183874.

3. Zachary Kallenborn, “InfoSwarms: Drone Swarms and Information Warfare,” Parameters, Vol. 52, No. 2 (Summer 2022): 87–102.

4. James Johnson, “Artificial Intelligence, Drone Swarming and Escalation Risks in Future Warfare,” The RUSI Journal, Vol. 165, No. 2 (2020): 26–36. See also Jürgen Altmann and Frank Sauer, “Autonomous Weapon Systems and Strategic Stability,” Survival, Vol. 59, No. 5 (2017): 117–142.

5. Zachary Kallenborn and Philipp Bleek, “Swarming Destruction: Drone Swarms and Chemical, Biological, Radiological, and Nuclear Weapons,” The Nonproliferation Review, Vol. 25, Nos. 5–6 (2018): 523–543.

6. Max Levy, “Machine Learning Gets a Quantum Speedup,” Quanta, February 4, 2022.

7. Vedran Dunjko and Hans Briegel, “Machine Learning & Artificial Intelligence in the Quantum Domain: A Review of Recent Progress,” Reports on Progress in Physics, Vol. 81, No. 7 (2018): 074001.

8. Patricia Moloney Figliola, “Quantum Information Science: Applications, Global Research and Development, and Policy Considerations,” CRS Report, R45409 (November 1, 2019).

9. The White House, “Fact Sheet: President Biden Announces Two Presidential Directives Advancing Quantum Technologies,” May 4, 2022.

10. Exec. Order No. 14073, 87 Fed. Reg. 27909 (May 9, 2022).

11. The White House, “National Security Memorandum on Promoting United States Leadership in Quantum Computing While Mitigating Risks to Vulnerable Cryptographic Systems,” National Security Memorandum 10, May 4, 2022.

12. White House Office of Science and Technology Policy, “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” October 2022.

13. Khari Johnson, “Biden’s AI Bill of Rights Is Toothless Against Big Tech,” Wired, October 4, 2022.

14. Alex Engler, “The AI Bill of Rights Makes Uneven Progress on Algorithmic Protections,” Lawfare, October 7, 2022.

15. Larry Downes, “How Should the Biden Administration Approach Tech Regulation? With Great Care,” MIT Sloan Management Review, January 19, 2021.

16. Michael Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power,” Texas National Security Review, Vol. 1, No. 3 (May 2018): 37–57.

17. See Jacquelyn Schneider, “The Capability/Vulnerability Paradox and Military Revolutions: Implications for Computing, Cyber, and the Onset of War,” Journal of Strategic Studies, Vol. 42, No. 6 (2019): 841–863.

18. Rebecca Slayton, “What Is the Cyber Offense-Defense Balance?” International Security, Vol. 41, No. 3 (Winter 2016/17): 72–109.

19. Damien Van Puyvelde, Stephen Coulthart, and M. Shahriar Hossain, “Beyond the Buzzword: Big Data and National Security Decision-Making,” International Affairs, Vol. 93, No. 6 (2017): 1397–1416.

20. Natasha Bajema, “Will AI Steal Submarines’ Stealth?” IEEE Spectrum, July 16, 2022. See also Paul Bracken, “The Hunt for Mobile Missiles: Nuclear Weapons, AI, and the New Arms Race,” Foreign Policy Research Institute, 2020.

21. Michael Horowitz, “When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence, and Stability,” Journal of Strategic Studies, Vol. 42, No. 6 (2019): 764–788.

22. Matt O’Shaughnessy, “One of the Biggest Problems in Regulating AI Is Agreeing on a Definition,” Carnegie Endowment for International Peace, October 6, 2022,

23. U.S. Department of Commerce, “Review of Controls for Certain Emerging Technologies,” 83 Fed. Reg. 58201 (November 19, 2018).

24. See “U.S. and France Sign Statement of Cooperation for Quantum Technology,” Quantum Computing Report, December 3, 2022.


Lindsay Rand is a doctoral candidate in public policy at the University of Maryland and a Stanton pre-doctoral fellow in the Nuclear Policy Program at the Carnegie Endowment for International Peace.