
Debunking the Missile Defense Agency’s 'Endgame Success' Argument

George N. Lewis and Lisbeth Gronlund

The Pentagon’s Missile Defense Agency (MDA) has become increasingly averse to providing detailed reports to Congress or the public on the progress of U.S. missile defense programs. It has also recently decided to classify more information about missile defense intercept tests. Given this reduction in independent oversight, it is especially important to determine the extent to which information provided by the MDA is credible and trustworthy. Unfortunately, close examination of statements by MDA officials, who have been arguing that the test record for hit-to-kill missile defenses shows that such systems will work, demonstrates that the Pentagon has been less than forthright about its successes and failures.

Specifically, Lieutenant General Ronald Kadish, director of MDA, testified to Congress on June 25 that many missile defense test failures were due to quality-control problems that prevented the interceptor from reaching the “endgame”—the difficult final phase of the intercept attempt that begins when the kill vehicle is released from its booster and attempts to detect, home in on, and destroy its target. That is, according to Kadish, many intercepts fail in their early stages, during the less technologically challenging phases of the test. Kadish argued that, in tests in which the endgame is reached, the interceptors actually have a very high success rate of 88 percent. Moreover, in testimony June 14, 2001, he argued that this high “endgame success” rate shows that “the feasibility of missile defense and the availability of technologies to do this mission should not be in question.”

This argument is wrong for several reasons:

  • Inaccurate statistics: The numbers Kadish uses are incorrect; he undercounts the number of endgame failures. Kadish claims that, of the 25 missile defense tests in which the interceptor reached the endgame, the target was hit 22 times, for a success rate of 88 percent. This formulation, however, omits six additional endgame failures by incorrectly assessing them as failures prior to the endgame. The true endgame success rate is only 71 percent (22 of 31).
  • Midcourse and terminal defenses inappropriately lumped together: Kadish essentially mixes apples and oranges by combining test data for terminal and midcourse missile defenses. Doing so is inappropriate since these two types of defenses operate quite differently. All the midcourse systems are designed to operate above the atmosphere against medium- to long-range missiles (the Theater High Altitude Area Defense is also capable of operating in the upper reaches of the atmosphere), and all use a kill vehicle that is released from a booster rocket, uses infrared sensors for homing, and maneuvers using divert thrusters. In contrast, the terminal defenses tested only operate within the atmosphere against shorter-range missiles. They do not use a kill vehicle but an interceptor that is a single-stage missile; they use a radar for homing instead of an infrared sensor; and they maneuver using atmospheric forces rather than divert thrusters. The endgame success rate of the midcourse intercepts is only 61 percent (11 of 18).
  • Endgame success rate is not higher than pre-endgame success rate: Because a successful intercept requires that all successive phases of the test be successful, the “partial” success rate for any one phase of the intercept attempt will be higher than the overall success rate. For both midcourse and terminal systems, the endgame success rate is actually slightly lower than the success rate prior to the endgame. Of the 27 midcourse tests, 18 (67 percent) successfully reached the endgame. Of these 18, only 11 (61 percent) actually hit their targets. Thus, on a percentage basis, more tests failed during the endgame than before.
  • Endgame success rate is irrelevant: There is no reason to consider the endgame success rate rather than the overall success rate because quality control errors can and have occurred in all phases of the tests. Taking into account failures that occur both prior to and during the endgame, the overall success rate for midcourse systems drops to only 41 percent (11 of 27).
  • Intercept tests do not adequately simulate real world usage: All of the hit-to-kill tests conducted to date have—as MDA itself notes—included numerous “limitations” and “artificialities,” so even a perfect test record would say little about the ability of the system to perform under realistic operational conditions. Contrary to Kadish’s June 2001 statement, the feasibility of missile defense and the availability of needed technologies remain in question.
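The relationship described above — that any single phase's "partial" success rate must exceed the overall rate, because the phase rates multiply — can be checked with a short script. This is purely illustrative arithmetic using the midcourse tallies cited in the text:

```python
# Midcourse tally cited in the text: 27 tests in all.
total_tests = 27
reached_endgame = 18   # tests that got through the pre-endgame phases
hit_target = 11        # tests that also succeeded in the endgame

pre_endgame_rate = reached_endgame / total_tests   # 18/27, about 67 percent
endgame_rate = hit_target / reached_endgame        # 11/18, about 61 percent
overall_rate = hit_target / total_tests            # 11/27, about 41 percent

# The overall rate is the product of the successive phase rates, so each
# phase's "partial" rate is necessarily higher than the overall rate.
assert abs(overall_rate - pre_endgame_rate * endgame_rate) < 1e-12

print(f"pre-endgame: {pre_endgame_rate:.0%}, "
      f"endgame: {endgame_rate:.0%}, overall: {overall_rate:.0%}")
# prints "pre-endgame: 67%, endgame: 61%, overall: 41%"
```

Note that the endgame rate (61 percent) is lower than the pre-endgame rate (67 percent), which is the opposite of what Kadish's argument implies.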

    Regardless of how they are tabulated, the test results do not indicate anything meaningful about the technical feasibility of the missile defense systems under development. The MDA analysis that Kadish presented to Congress is based on misrepresenting the results of past tests, and its conclusions are misleading. This analysis raises serious questions about the recent MDA decision to classify information about its future intercept tests because further secrecy will make it nearly impossible for independent analysts to check MDA’s claims. If Congress and the public are to have a realistic understanding of the system’s capabilities, MDA programs must be subject to continuing and increased congressional and independent oversight.

    A Closer Look at the Numbers

    During his June 25 testimony, General Kadish explained how MDA obtained its 88 percent success rate in the endgame. A slide accompanying his presentation included five categories of system tests, dating back to the first hit-to-kill missile defense test, conducted in 1983 under the Strategic Defense Initiative:

    (1) Thirteen tests of ground-based midcourse defenses conducted as of Kadish’s June 25 testimony: the four homing overlay experiment (HOE) tests and the two exoatmospheric reentry vehicle interceptor subsystem (ERIS) tests, which were predecessors to the current ground-based midcourse system; the Delta 180 experiment, in which two satellites were maneuvered to collide with each other; and the six ground-based national missile defense system intercept tests. (Kadish’s figures could not include the October 14 ground-based midcourse test, which was a success.)

    (2) Two tests of the sea-based midcourse defense system (formerly known as Navy Theater Wide) conducted in 2002, using the light-weight exoatmospheric projectile (LEAP) kill vehicle.

    (3) Four exoatmospheric LEAP tests carried out between 1992 and 1995, two ground-based and two sea-based; and the two exoatmospheric tests of the Theater High Altitude Area Defense (THAAD), a ground-based system designed to intercept short- and medium-range ballistic missiles.

    (4) Six high-endoatmospheric THAAD tests.

    (5) Fourteen tests of the Patriot Advanced Capability-3 (PAC-3) and its developmental predecessors against ballistic missile targets in their terminal stage.

    Kadish claims that of these 41 tests, 25 reached the endgame and 22 of those were successes, leading to an 88 percent endgame success rate. There are two problems with these figures. The first problem is a minor one: the figures completely omit one failed PAC-3 test. Thus, while Kadish includes a total of 41 tests, of which 14 were tests of terminal defenses, the authors’ analysis includes a total of 42 tests, with 15 tests of terminal defenses. The second problem is much more significant. In addition to the one failed midcourse test acknowledged by Kadish to have reached the endgame, six additional failed tests were endgame failures but were not counted as such by Kadish:

  • HOE, first intercept attempt, February 7, 1983: The kill vehicle’s failure to hit the target was attributed to problems in the interceptor’s infrared sensor cooling system that caused the sensor to be warmer than expected and produced noise saturating the kill vehicle’s flight computer. This endgame failure is similar in nature to the one midcourse endgame failure acknowledged by Kadish: the January 18, 2000, national missile defense test, in which the sensor cooling system also failed. (In the three subsequent Homing Overlay tests, the sensor’s detection threshold was raised in order to eliminate the noise, and the target was also heated.)1

  • HOE, second intercept attempt, May 28, 1983: Although the kill vehicle completed the flight sequence required to intercept the target, the intercept did not occur, Army officials said shortly after the test.2 The interceptor reportedly demonstrated successful homing but missed due to a “random” failure in the guidance electronics.3

  • HOE, third intercept attempt, December 16, 1983: In this test, the kill vehicle successfully demonstrated its ability to home on the target, but a software error in its onboard computer prevented it from converting homing data into steering commands, causing the kill vehicle to miss.4

  • ERIS, second intercept attempt, March 13, 1992: The ERIS failed to hit the target, which was accompanied by a single balloon “decoy,” reportedly missing by “several meters.”5 The decoy and target were separated by about 20 meters and the kill vehicle flew between them.6 The miss was apparently a result of two factors: a greater than anticipated separation between the decoy and target and a later than expected detection (by about 0.2 second) of the target relative to the decoy. According to the ERIS project manager, mission planners had allowed 0.8 seconds for the kill vehicle to maneuver, but at least 0.9 seconds were actually needed.7 This test illustrates that even small deviations from a carefully scripted intercept test can lead to failure.

  • LEAP, fourth intercept test, March 28, 1995: The LEAP failed to hit the target, apparently because a battery failed.8 The intercept proceeded normally up to the point at which the kill vehicle was ejected from the missile, and the LEAP apparently saw the target before it was ejected from the missile. The LEAP had no electrical power after release, however, and missed the target by 167 meters.9

    It could be argued that this is not an endgame failure since the kill vehicle had no power after release. However, the kill vehicle was apparently released in the right place for detecting and intercepting the target, failing to do so because a vital component of the kill vehicle malfunctioned. In a sense, this failure is no different than an error by the seeker or a divert thruster, and the test must therefore be counted as an endgame failure.

  • THAAD, third intercept attempt (high-endoatmospheric test), July 15, 1996: Program officials said the kill vehicle came within “a matter of yards of the target.”10 The failure was reportedly caused by a problem with either the seeker electronics or a contaminated dewar in the infrared seeker.11

    In addition to these six endgame failures, four additional failed intercepts clearly entered the endgame but failed because either the target or interceptor was incorrectly positioned (the second and third LEAP and first THAAD intercept tests) or because the kill vehicle did not receive expected information about the target (first LEAP test). We do not count these as having reached the endgame, however, because the failure to hit the target was caused by problems not directly associated with the kill vehicle and its technology. These tests nonetheless provide further examples of how a deviation from the preplanned “script” of the test will lead to failure.

    [Table: for each category of system tested—Ground-Based Midcourse; Sea-Based Midcourse; LEAP and THAAD (Exo); THAAD (High Endo); and the Midcourse Subtotal—the table lists the total tests, the number that reached the endgame, and the number that hit the target, giving both Kadish’s figures and, where they differ, the authors’ figures.]

    This table shows the numbers used by Kadish and, where they differ, the authors’ numbers. Kadish cites 25 total tests as reaching the endgame, 22 of which hit their target. He therefore says that missile defense programs have an 88 percent endgame success rate. In fact, 31 tests reached the endgame, meaning that the success rate is actually 71 percent. Moreover, analysis of only the midcourse endgames yields a success rate of only 61 percent, and the overall success rate for midcourse systems is 41 percent. (Note that this final figure would be 43 percent after the successful October 14 test of the ground-based midcourse system.)
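Each rate quoted in this comparison can be reproduced directly from the tallies stated above (a sketch for checking the arithmetic, not part of the MDA or authors' analysis):

```python
# Hits are 22 in both tallies; the tallies differ on tests reaching the endgame.
kadish_endgame = 22 / 25       # Kadish counts only 25 tests as reaching the endgame
corrected_endgame = 22 / 31    # including the six additional endgame failures
midcourse_endgame = 11 / 18    # midcourse tests only
midcourse_overall = 11 / 27    # all midcourse tests, all phases
updated_overall = 12 / 28      # adding the successful October 14 midcourse test

for label, rate in [("Kadish endgame rate", kadish_endgame),
                    ("corrected endgame rate", corrected_endgame),
                    ("midcourse endgame rate", midcourse_endgame),
                    ("midcourse overall rate", midcourse_overall),
                    ("with October 14 test", updated_overall)]:
    print(f"{label}: {rate:.0%}")
```

Running this reproduces the figures in the text: 88 percent, 71 percent, 61 percent, 41 percent, and 43 percent, respectively.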

    1. David A. Fulghum, “Army Officials Deny Rigging SDI Test,” Aviation Week and Space Technology, August 30, 1993, p. 25.
    2. “Army Evaluates Homing Vehicle Test Failure,” Aviation Week and Space Technology, June 13, 1983, p. 119.
    3. Clarence A. Robinson Jr., “BMD Homing Interceptor Destroys Reentry Vehicle,” Aviation Week and Space Technology, June 18, 1984, p. 19.
    4. Ibid., p. 20.
    5. Vincent Kiernan and Debra Polsky, “SDI Interceptor Fails to Hit Target,” Defense News, March 23, 1992, p. 8.
    6. David Wright communication with authors, based on meeting with Lockheed officials, April 3, 1992.
    7. “SDI Experimental Interceptor Misses Dummy Warhead in Final Flight Test,” Aviation Week and Space Technology, March 23, 1992, p. 21.
    8. Director, Operational Test and Evaluation, “Navy Theater Wide (NTW) Defense,” Fiscal Year 1998 Annual Report.
    9. Senate testimony of Lieutenant General Malcolm O’Neill, June 27, 1995.
    10. Joseph C. Anselmo, “THAAD Fails Third Intercept,” Aviation Week and Space Technology, July 22, 1996, p. 31.
    11. Director, Operational Test and Evaluation, “Theater High Altitude Area Defense (THAAD),” Fiscal Year 1998 Annual Report.

    George N. Lewis is associate director of the Security Studies Program at the Massachusetts Institute of Technology (MIT). Lisbeth Gronlund is senior scientist and co-director of the Global Security Program at the Union of Concerned Scientists and a senior research associate at MIT’s Security Studies Program. An expanded version of this article is available at