Abstract
Prevailing wisdom suggests that innovation dramatically enhances the effectiveness of a state's armed forces. But self-defeating innovation is more likely to occur when a military service's growing security commitments outstrip shrinking resources. This wide commitment-resource gap pressures the service to make desperate gambles on new capabilities to meet overly ambitious goals while cannibalizing traditional capabilities before beliefs about the effectiveness of new ones are justified. Doing so increases the chances that when wartime comes, the service will discover that the new capability cannot alone accomplish assigned missions, and that neglecting traditional capabilities produces vulnerabilities that the enemy can exploit. To probe this argument's causal logic, a case study examines British armor innovation in the interwar period and its impact on the British Army's poor performance in the North African campaign during World War II. The findings suggest that placing big bets on new capabilities comes with significant risks because what is lost in an innovation process may be as important as what is created. The perils of innovation deserve attention, not just its promises.
Conventional wisdom suggests that innovation consistently improves military power. Militaries that oppose it invite defeat, but those that innovate secure victory. Innovation is considered a sign of organizational health because the ever-changing character of war constantly threatens to render existing capabilities obsolete. Conversely, misfortune comes to those who allow the march of historical change to overtake them. The notion that innovation and better military performance go hand in hand is thus intuitive. It is also wrong.
In popular imagination, for example, the German blitzkrieg was a revolutionary innovation in World War II that restored the possibility of decisive victory, which had eluded European armies since the Franco-Prussian War. What is less known is that the British also innovated in armored warfare yet performed poorly on the battlefield. While the German Army mechanized the combined-arms tactics it had developed at the end of World War I, the British deployed armored brigades composed almost entirely of tanks and expected them to fight with virtually no help from supporting arms.
What is puzzling about this example is not the presence or absence of innovation—both armies innovated new forms of armored warfare—but instead why some innovations enhance combat effectiveness while others do not. The idea that innovation is a gamble is not novel, but too often analysts focus only on beneficial changes. They overlook harmful innovation in military organizations, implying that the gamble is always worth making. This article seeks to restore the atmosphere of risk inherent to innovation and explain why its perils deserve as much attention as its promises. To do so, I develop a theoretical framework that relates patterns of peacetime innovation to its impact on wartime effectiveness—the ability of a military service to accomplish its assigned missions at acceptable cost.1
My central claim is that innovation is more likely to weaken a service's effectiveness when growing security commitments outstrip shrinking resources. This wide commitment-resource gap exerts pressures to innovate in ways that cannibalize traditional capabilities before beliefs about the effectiveness of new ones are justified. When wartime comes, not only has the service lost proficiency in those older capabilities, but the new capability underdelivers, thereby creating vulnerabilities for the enemy to exploit.
Studying harmful innovation is crucial for both scholarship and contemporary policy challenges. Scholars study military innovation primarily because of its promise to improve effectiveness. But whether peacetime innovation increases military power is usually an assumed relationship rather than a studied one. There is a bias in case selection: scholars almost exclusively study power-enhancing innovation and ask why it occurred. Explaining the adoption of new ways of war, however, says little about whether the change is beneficial or harmful.
For defense policy, this article cautions against overly relying on military innovation to bridge wide commitment-resource gaps. The United States is in an era of military modernization in which military and civilian leaders must make important decisions about future platforms and systems that will shape U.S. military power for a long time. At the same time, the armed services operate with relatively constrained resources compared with their expansive commitments. The confluence of these trends creates pressure to make big bets on new capabilities and take risks in shedding traditional ones. My theory and findings suggest, however, that it is precisely this type of environment that encourages miscalculation.
This article proceeds in eight sections. First, I review the existing literature on military innovation, highlighting the curious absence of studies that systematically examine the downside risks of innovation. Next, I define military innovation as used in this article. The third section proposes a theory of harmful military innovation. In the fourth section, I introduce the puzzling case of British armor innovation after World War I and the British Army's subsequent combat ineffectiveness, and I discuss the research design. Sections five and six illustrate the theory, tracing British armor innovation in the interwar period and performance in the Desert War during World War II. I then evaluate alternative explanations in section seven, before concluding with avenues for future research and implications for scholarship and defense policy.
Innovation and the Promise of Military Power
In popular discourse, the word “innovation” connotes desirable progress. The same is true in research on international relations and military innovation. Theories of international relations assume that innovation enhances a state's power in the international system by changing the unit cost of military power such that a given supply of resources is converted more efficiently into wartime effectiveness. Robert Gilpin argues that military innovation gives a “particular society a monopoly of superior armament or technique and dramatically decreases the cost of extending the area of domination.”2 John Mearsheimer similarly observes that great powers “prize innovation” because it offers “new ways to gain advantage over opponents.”3 Assuming then that innovation bestows a competitive edge, “contending states imitate the military innovations contrived by the country of greatest capability and ingenuity.”4
Theories of military innovation reflect this optimistic view. In his influential review of the literature, Adam Grissom identifies a “tacit definition of military innovation that is, approximately, ‘a change in operational praxis that produces a significant increase in military effectiveness’ as measured by battlefield results.” He finds that “only reforms that produce greater military effectiveness are studied as innovations, and few would consider studying counterproductive policies as innovations.”5 Some researchers have made Grissom's tacit definition a formal one, treating effectiveness as a defining feature of military innovation.6
Equating peacetime innovation with greater military effectiveness is puzzling, however, because scholars recognize that the two are not synonyms. Barry Posen categorizes military doctrine as either innovative or stagnant, but he recognizes that “neither … should be valued a priori.” He suggests that instead of stagnation, “stability might be a better choice of terms, as it is less loaded [italics in the original].”7 Historians Allan Millett and Williamson Murray caution that during peacetime innovation “wrong choices and irrelevant investments will occur and will be hard to correct.”8 Others warn that “it is entirely possible that a military innovation may make a military less effective,” and that “not all innovations should be welcomed.”9
Nonetheless, virtually all theories of military innovation are built and tested on cases of performance-enhancing innovations. Posen's influential study of the Battle of France and the Battle of Britain finds that it was the military services that innovated before the war that achieved political-military integration.10 In Winning the Next War, another agenda-setting work, Stephen Rosen ignores “innovations that were put into practice but were clearly mistaken” because “despite an extensive and intensive search,” he finds that mistakes made by the U.S. military “all appear to have been the result of failures to innovate, rather than inappropriate innovations.”11
When innovation improves performance, it is appropriate to merely explain its presence or absence, as existing theories aim to do. But this approach cannot fully explain whether, when, and how innovation affects military power. Moreover, equating innovation and effectiveness wrongly implies that resistance to innovation is always an undesirable military pathology. Military organizations are conservative for a reason: there are countless solutions to complex problems, but many of them could produce catastrophic results. Therefore, assuming improved performance strips the concept of innovation of its most interesting and dangerous attribute—it is a gamble that costly changes are worth making.
Military Innovation and Its Risks
I define military innovation in this article as the process of creating a new capability—a new institutionalized technique of organized violence intended to convert a service's resources into success in future missions.12 For instance, an air wing designed for strategic bombing will be organized, equipped, and trained to operate in a way that is distinct from close air support. Capabilities are embedded in the service's organization and equipment (i.e., force structure) and a relatively ordered and consistent way of using these components in combat (i.e., doctrine). Capabilities reflect the service's preferred methods of using military force in response to particular historical modes of warfare.
Following common practice, I limit the scope of military innovation to major changes made in peacetime at the service level. Most studies distinguish between peacetime innovation and wartime adaptation because their learning environments are different: performance feedback from combat is unavailable in the fog of peace.13 I also focus on “major” military innovations, which Michael Horowitz defines as “a major change in the conduct of warfare” that involves “shifts in the core competencies of military organizations, or shifts in the tasks that the average soldiers perform.”14 The unit of analysis is therefore an innovating service.
The ostensible purpose of innovation is to enhance military effectiveness. For the economist Joseph Schumpeter, innovation is a new production function that changes the rate of converting a fixed quantity of factors into products.15 In a similar fashion, military innovation tries to improve the efficiency of converting allocated resources—money and personnel—into mission success by creating a new capability. Ideally, armed forces field capabilities that maximize their chances of accomplishing assigned missions at a minimal cost in resources.
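Schumpeter's analogy can be stated compactly. The sketch below is an illustrative formalization in my own notation; it does not reproduce a formula from Schumpeter or from this article. Let a service's expected mission effectiveness be E = f(m, p), where m is allocated money and p is personnel. Innovation is an attempt to replace the conversion function:

\[
E = f(m, p) \;\longrightarrow\; E' = f'(m, p), \qquad f'(m, p) > f(m, p) \text{ for the same } m, p,
\]

so that an unchanged allocation of resources yields greater effectiveness. Whether the new function actually dominates the old one is, of course, unknowable before combat.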
But innovation's promise of resource efficiency and mission effectiveness comes with risks. The first risk is that creating a new capability is a step into the unknown without the benefit of experience, hindsight, or relevant skills.16 The second risk relates to the destruction of old capabilities in the process of creating new ones, or what Schumpeter calls “creative destruction.”17 As military organizations innovate, they are “down-grading or abandoning older concepts of operation and possibly of a formerly dominant weapon.”18 In other words, “a military service destroys or thoroughly redirects an important part of itself.”19 When a traditional capability is destroyed, it may be recoverable, but creative destruction inevitably involves opportunity costs. Destroying traditional capabilities is risky because they often are battle-tested methods of generating military power. After all, effective organizations survive and succeed in part by maintaining their existing “infrastructure”—those unglamorous and old investments.20
For innovation to improve military effectiveness, it must create more combat power than it destroys. A military service ideally calibrates its balance of capabilities such that a new capability's marginal benefit equals or exceeds the marginal cost of losing traditional ones. But the optimal balance between creation and destruction is unknown. If the organization invests too heavily in the new capability, the costs to long-established capabilities can weaken the service's overall combat performance. Destructive changes are not adequately compensated by the creative developments allegedly taking their place. But if the organization invests too little, it forgoes potential gains in wartime effectiveness. Innovation is therefore an exercise in risk management, a balancing act between the promises of a new capability and the perils of losing older ones.
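The calibration problem can be put schematically in the same illustrative notation (again mine, not the article's):

\[
\Delta E \;=\; B_{\text{new}} - C_{\text{lost}},
\]

where B_new is the marginal combat power created by the new capability and C_lost is the marginal combat power destroyed along with traditional capabilities. Innovation improves overall effectiveness only if \(\Delta E > 0\). Because both terms must be estimated in peacetime without combat feedback, the service cannot verify in advance where the optimum lies: overinvesting in the new capability pushes C_lost above B_new, while underinvesting forgoes attainable gains.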
A Theory of Harmful Innovation
My central claim is that harmful innovation is more likely to occur when military services, faced with growing security commitments that outstrip shrinking resources, make desperate gambles on new capabilities to meet overly ambitious goals while cannibalizing older capabilities. The military service treats innovation as a silver bullet and endorses destroying traditional capabilities before innovation advocates can justify their beliefs about the new one's effectiveness. The service later discovers that the new capability alone may not accomplish assigned missions, that the enemy can exploit vulnerabilities produced by the loss of traditional capabilities, and that the service likely must restore traditional capabilities as a backstop to shore up its combat power. Figure 1 summarizes the causal logic.
Figure 1. A Theory of Harmful Innovation
commitment-resource gaps and the “wicked mismatch”
Achieving an economically solvent alignment between commitments and resources is a perennial concern of statecraft. The journalist Walter Lippmann popularized the idea that “foreign policy consists in bringing into balance, with a comfortable surplus of power in reserve, the nation's commitments and the nation's power.”21 But when available means are insufficient to achieve desired political ends, there is overstretch or overcommitment. My theory emphasizes how the confluence of expanding commitments and shrinking resources—what I call a “wicked mismatch”—can shape innovation processes in harmful ways. The term is drawn from “wicked problems,” of which a key characteristic is that “proposed ‘solutions’ often turn out to be worse than the symptoms.”22
Security commitments refer to the mission burdens assigned to a military service. Some commitments are written down in treaties or domestic legislation, whereas others are declared in speeches announcing a vital interest or policy doctrine.23 The service uses these commitments to set appropriate benchmarks for the size, shape, and types of its forces, to which the state allocates money and personnel. These resources maintain or expand force structures, training regimens, military bases, administration, and operations. A military service also worries about whether it has enough personnel with the requisite skill and training to accomplish assigned missions. The service invests these resources into capabilities.24
A commitment-resource gap develops when a service's mission burdens grow, its allocation of money and personnel shrinks, or both. Mission burdens can grow in scope, intensity, or time. Mission scope widens when the state acquires new territories and bases to defend, makes or expands security guarantees to allies and partners, or adds entirely new tasks to a service's mission set. Mission intensity deepens when a mission becomes more difficult to accomplish because of threats such as a competitor's relative military strength, changes in an adversary's military strategy, or shifts in the technological landscape. Finally, time pressure mounts as war appears likelier and more imminent because of diplomatic crises, militarized disputes, or alarming intelligence assessments, demanding higher levels of military readiness from the service.25
A gap can also open when the state reduces the service's allocated money or personnel. It might redirect money to other investments, social spending, or private consumption through tax cuts. Or civilian leaders might manipulate budget levels, provoking greater interservice competition over budget allocations.26 Innovation scholars have also highlighted historical episodes in which civilian leaders shortened conscription time.27 Moreover, the quantity and quality of personnel eligible for military service vary with a population's age distribution and the national system of military recruitment.28
If the state reduces resources or expands commitments, all else being equal, the service's capabilities can be weakened or outstripped and, by implication, so can its military effectiveness. One solution is retrenchment, which can take the form of territorial withdrawal, diplomatic accommodation, appeasement, arms control, or increasing reliance on allies.29 Retrenchment, however, may weaken the state's security posture, embolden rivals, and afford domestic political opponents the opportunity to criticize incumbent leaders for reducing the credibility of the country's commitments, betraying allies, or being soft toward a security threat. Another remedy is a military buildup—allocating a larger portion of national resources to the service.30 But constituents might prefer more spending on butter and less on guns, or policymakers might believe that a military buildup would destabilize the economy.
In contrast to these alternatives, innovation promises to restore the service's effectiveness by increasing efficiency without reducing commitments or expanding resources.31 If political leaders reject both retrenchment and a military buildup, the affected service has an incentive to innovate. Whether it does so is outside the scope of this theory, but the size of a commitment-resource gap has important implications for whether innovation, if it occurs, is likely to succeed.
I expect that harmful innovation is more likely in a wicked mismatch, when a service's commitments are increasing and its resources are decreasing. In a wicked mismatch, a service's traditional capabilities not only are threatened by severe resource scarcity for the foreseeable future but also are rendered ineffective by the ambition of future missions. Expectations about the future are bleak. The service is therefore, I argue, in a professional and bureaucratic crisis. Officers doubt the service can perform assigned missions successfully, fear that the security of the state is at risk, and worry about the service's status and continuing relevance to national security. The service also wants to rectify this situation because it is overstretching its operational capacity by offering a semblance of meeting commitments, and business as usual offers little prospect of enhancing its political standing, contribution to national defense, and associated budget justifications.
Military innovation becomes a desperate, high-payoff, low-probability gamble to resolve the wicked mismatch by placing large bets on a new capability and cannibalizing traditional ones to do so. This strategy expands the range of possible outcomes: the new capability may significantly increase mission effectiveness and resource efficiency, but the service could perform even worse by neglecting traditional capabilities.32 Such innovative gambles are surprising because the standard intuition in military innovation studies is that bureaucratic organizations in general, and military hierarchies in particular, are prone to stasis and resist dramatic changes that disrupt their standard operating procedures.33 But a wicked mismatch generates pressures to adopt risk-seeking preferences.34
Factors widely considered to be conducive to military innovation can, when taken to extremes, cause harm. Innovation scholars argue that shrinking resources or expanding commitments can align bureaucracy behind innovation, but I propose that extremely wide commitment-resource gaps significantly increase the probability that innovation will be too radical and ultimately self-defeating. Although similar behavior might result from either severe resource scarcity or ambitious commitments, the gap's size, not its drivers, ultimately triggers harmful innovation. The size of a commitment-resource gap, however, is in large part a matter of political and professional judgment and thus hard to measure objectively.35 After all, armed services perennially complain about resource scarcity. A wicked mismatch therefore serves an analytical purpose because an extremely wide gap is especially obvious when resources shrink and commitments grow at the same time.
flawed innovation process
A wicked mismatch can produce flaws in the innovation process, with three particularly dangerous and potentially interrelated characteristics: radicalism, wishful thinking, and rushed development.
First, a wicked mismatch can elicit radical proposals to substitute a new capability for traditional ones. In professional military organizations, officers regularly search for new solutions that could increase effectiveness and improve efficiency.36 What makes these proposals different is their radicalism—the degree of creative destruction—and their ready audience. The new capability promises to do much more with much less if it heavily cannibalizes traditional ones. Proponents of the new capability suggest that older capabilities are obsolete and cannot meet future mission requirements, and they further suggest that the military service should divest from old capabilities and transfer resources to create new ones. Such proposals should fare poorly in hierarchical and conservative military bureaucracies because of their organizational predilection for current operating procedures, but the crisis produced by a wicked mismatch opens an opportunity for radicalism to gain a wider audience. For example, the U.S. Army in the 1950s—facing global commitments in Europe and Asia amid shrinking personnel and money—adopted the pentomic division proposal, specializing in strategic mobility and limited nuclear warfare to the detriment of its conventional capabilities.37
A second flaw in the innovation process is that a wicked mismatch incentivizes wishful thinking that exaggerates the rewards of innovation and downplays its risks. As the service experiments with the new capability, concerns almost inevitably arise. Political constraints might preclude its future use, the underlying technology might be premature, enemy countermeasures might negate its intended effects, or the loss of traditional capabilities might significantly weaken the service. But there is an organizational imperative to justify the service's continued relevance.38 The service may therefore disregard plausible criticisms and ignore contemporary evidence that innovation's promises may be oversold.39 Desperation motivates generous interpretations of the limited data about the new capability based on deductive logic that arrives at favorable conclusions, rather than prolonged empirical testing.
Wishful thinking can overemphasize the promises of innovation. For example, in the late 1940s, the U.S. Air Force innovated an air-atomic blitz capability in which an unescorted fleet of intercontinental bombers could rapidly deliver most, if not all, of the U.S. nuclear stockpile and destroy the Soviets' capacity and will to fight. But the air force dismissed several plausible criticisms of the new capability. First, it assumed that political and moral constraints would not preclude nuclear use in the next war. Second, it overlooked shortcomings in several key elements needed to ensure the success of the new capability: the intelligence to identify critical targets deep within the Soviet Union, the bombing accuracy to destroy them, and the fighter escorts to reach them. Consequently, the air force disproportionately invested in Strategic Air Command and cannibalized important capabilities in air superiority and close air support.40
Wishful thinking can also de-emphasize the perils of creative destruction. The new capability will allegedly cover vulnerabilities opened by the loss of traditional methods. Officers may interpret experimental data using a one-size-fits-all approach to problem solving. For instance, if a capability allows the army to win a major war, then it should also be effective at fighting small wars. Another example of wishful thinking occurred before World War II, when it was thought that strategic bombers could operate effectively in independent flying formations without support from escort fighters.
Third, desperation can accelerate and rush the innovation process, reducing the quality of vetting. In standard accounts, a military innovation process unfolds as a protracted, even decades-long struggle between innovators and conservatives. Naval aviators championing the aircraft carrier, for example, criticized battleship admirals for wanting to preserve outmoded ways of war. But predictions about future warfare are often wrong, and the military's enduring quest for short, decisive battles is arguably misguided.41 Conservatives—a better term is “maintainers”—can therefore serve as a healthy check on magical thinking. Prolonged intellectual and bureaucratic tugs-of-war serve a virtuous purpose because “debate and resistance are required to separate the truly good from the merely new among innovations.”42
In contrast to prolonged debate, acceleration increases the risk of implementing inferior procedures—what organization theorists call a “competency trap.”43 This is particularly dangerous when undertaking major changes in the highly complex and difficult conduct of modern warfare. Radical change is not inherently harmful, but it requires time and resources to properly assess and manage its higher degree of risk. As military technology and operations become more complex, organizations must grapple with “rogue outcomes” and develop appropriate information practices.44 Similar friction emerges in organization and doctrine. The larger the magnitude of change, the greater the likelihood that complications and countervailing problems will arise, both of which take time and careful consideration to address.
combat effectiveness
When conducted under the pressures of a wicked mismatch, an innovation process animated by radicalism, wishful thinking, and rushed development is more likely to undermine combat performance. Combat effectiveness is ultimately about producing favorable outcomes, which vary among different missions.45 Effectiveness also involves achieving results at acceptable costs (i.e., in terms of casualties and losses in matériel), as determined by the political stakes and commanders' intents. Therefore, innovation harms military performance insofar as it prevents fielded combat forces from achieving mission objectives, or from doing so at acceptable levels of cost in lives and equipment.
The drivers of ineffectiveness are twofold: the intended effect is not achieved, and unintended effects are harmful and consequential. First, combat forces associated with the new capability find that their force structure and doctrine fail to deliver the promised decisive effects. This could be because the underlying technology is premature or the enemy deploys predictable countermeasures. The point is not the specific problems, but rather that the service willfully ignores these well-anticipated concerns in its desperate search for a silver bullet. Second, traditional capabilities atrophy, and the service cannot rely on these either. The innovation process overlooks foreseeable vulnerabilities that will emerge if the service does not maintain its traditional methods—the very capabilities developed to prevent these weaknesses.
A final indicator of harmful innovation is that, over time, disappointing combat results prompt the service to unlearn or abandon the new capability and restore traditional ways of war. It attempts to reverse creative destruction. Reverting to older methods reflects an effort to shore up combat power after the promises of military innovation are unmet and its perils materialize.
This evaluative framework examines just one causal pathway between military innovation and combat effectiveness. Innovation can also be self-defeating because a service prepares for the wrong mission by misreading the nature and context of the next conflict. Innovation can also diffuse across the international system, giving imitators an unexpected competitive advantage. Or cost overruns might dwarf the innovation's promised efficiency gains.46 These alternatives are all valid ways to assess an innovation's impact on performance, but I focus narrowly on intended effects and immediate unintended consequences. In doing so, I sidestep the challenge of evaluating the impact of extenuating circumstances and comparing near-term versus long-term effects.
The Puzzle of British Performance in the Desert War, 1941–1942
I demonstrate the plausibility of the theory using a case study of British Army innovation in armored warfare during the interwar period and its subsequent performance in the Desert War. From February 1941 to July 1942, British and Commonwealth forces suffered a string of defeats against the German Army and its Italian allies in North Africa, but then notably improved their battlefield performance and achieved three victories (two at El Alamein and one at Alam el Halfa) in subsequent months. The theory suggests that the British Army's ineffectiveness and its subsequent improvement in combat occurred in large part because of harmful innovation in peacetime and that innovation's reversal in wartime.
One of the most enduring images of World War II remains Germany's rapid victory in the Battle of France. According to standard treatments in political science and strategic studies, the German Army was effective because it innovated blitzkrieg, whereas its British counterpart failed to innovate and thus performed poorly.47 Both interpretations need correction. The word blitzkrieg conjures images of fast tanks overrunning the opponent while infantry follow behind to mop up. But the blitzkrieg, understood as a revolutionary, fully mechanized German approach to mobile warfare, is a misleading myth.48 In reality, rather than being a drastic departure from the past, German tactics and operations in World War II were an extension of the infiltration tactics that the German Army had developed to restore mobile warfare on the Western Front.49 German innovation in armored warfare involved mechanizing and motorizing combined-arms organization and doctrine and, as such, is synonymous with modern concepts of maneuver warfare.50
The blitzkrieg myth complements another misleading narrative: that British armor innovation failed because conservative army leaders suppressed a small group of prophetic tank enthusiasts, which resulted in weak army performance on the armor-dominated battlefields of World War II. But in reality, Britain's interwar army favored reform and mechanization. Inspired by futuristic visions of mechanical warfare, the British Army relied too heavily on armor organized in tank-only brigades as the decisive arm on the battlefield.51 Any praise of such a blitzkrieg and condemnation of British resistance to similar ideas must assume that this radical vision would work—but the Germans never tried it and the British Army in many respects did.52
The case of British armor innovation and performance in the Desert War is a well-suited illustration of the theory, for three reasons. First, it is a “hard case” because the case outcome is surprising from the perspective of previously established theory, yet it matches the expectations of a new argument.53 The narrative of British ineffective conservatism and German effective radicalism is attractive because it conforms to the dubious assumption that innovation consistently improves military effectiveness. In contrast, my theory of harmful innovation counterintuitively expects British ineffectiveness to stem in part from prewar innovation.
Correcting the understanding of the British armor case also sheds light on German armored warfare—a critical case for military innovation studies. My theory does not identify causal mechanisms for beneficial innovation. That is, the absence of a wicked mismatch does not guarantee beneficial innovation even if resources exceed commitments. But if a wicked mismatch imposes harmful pressures on an innovation process, its absence removes certain constraints on that process.54 An important dimension of effective German innovation in the interwar period was the army's ambitious rearmament programs, which reduced demands for radical innovation to bridge its commitment-resource gap.55
Second, analyzing British battlefield performance in the Desert War from February 1941 to November 1942 offers some control over five other important determinants of military effectiveness: balance of numerical strength, balance of qualitative superiority, regime type, the adversary's leadership and military prowess, and prewar preparations.56 The first three factors—all commonly cited sources of effectiveness—suggest that British forces were the favored belligerent. At the outset of almost every battle, British forces had more infantry, tanks, and artillery. British armaments in North Africa were also at least qualitatively equivalent to German tanks and artillery.57 To be sure, the Germans had superior anti-tank guns, but the British did not use their superiority in field artillery to negate this advantage.58 Finally, some theories of military effectiveness argue that democratic regimes produce better armies because their meritocratic systems promote higher-quality commanders and liberal values cultivate tactical initiative.59 But Britain was the relatively democratic belligerent, not Germany.
Analyzing British performance over time also helps put in perspective the relative significance of German leadership and general military prowess. Because Erwin Rommel was primarily in command for much of the period under study, his celebrated leadership of Axis forces in North Africa cannot alone explain variation in British effectiveness. And even though German forces had inherent advantages—a long legacy of combat effectiveness and ideologically motivated cohesion—these too were relatively constant throughout the Desert War, so again cannot by themselves explain variation in battlefield results.60
The British Army also had more opportunity to prepare for the mission. Before the war, the army trained and prepared to fight in the desert under none other than Maj. Gen. Percy Hobart, its leading armor innovator at the time. In fact, desert warfare embodied everything that British armor innovators dreamed of: a featureless landscape allowing fluid offensives carried out by fast tanks.61 In contrast, German armor organization and doctrine were developed with the narrower topographies of Europe in mind.62 Battlefield results are not monocausal outcomes, but the case selection strategy weakens confounding factors and increases the likelihood that peacetime innovation played a consequential role in army ineffectiveness.
Third, the case of British armor innovation is data-rich, making it suitable for process tracing. I draw evidence from a variety of sources, including official government documents, internal army memoranda, army publications, published memoirs, and official histories. I examine objective indicators (i.e., foreign policy decisions, service budgets, and troop levels) and subjective perceptions (i.e., how service leaders describe strategic challenges) to identify whether the interwar British Army operated under wicked mismatch pressures. I also process trace the interwar debates among innovators and maintainers, looking for “mechanistic evidence” of radicalism, wishful thinking, and a rushed process.63 I then analyze the wartime effectiveness of the new capability, the loss of traditional ones, and the army's attempts to improve performance under fire in the Desert War.
British Innovation in Armored Warfare, 1919–1939
The British Army's size and expenditure experienced unrelenting downward pressure for virtually the entire interwar period, despite a heavy mission burden and growing international threats to its security commitments. To resolve this wicked mismatch, the army innovated all-tank mobile assaults that allegedly improved its effectiveness in assigned missions while also economizing the army's limited budget and personnel. The innovation process was characterized by radicalism, wishful thinking, and rushed development, all of which downplayed criticisms that contemporary tanks were mechanically unreliable and that armored assaults were susceptible to anti-tank countermeasures if they lacked the support traditionally provided by other arms.
commitment-resource gap: wicked mismatch
The British Army faced the challenge of bridging a wicked mismatch between expanding commitments and shrinking resources—a gap that persisted for most of the interwar period.64 The army had expansive imperial, domestic, and continental obligations at this time.65 It had to police and defend an empire at its territorial zenith, having grown from one-fifth of the world's landmass before World War I to one-quarter of the globe.66 At home, the army had to contain an Irish insurgency movement and quell what were perceived to be coordinated labor strikes that posed a political challenge to the government.67 Finally, the army had continuing obligations in Europe, most significant of which was upholding the Locarno Pact to guarantee the common borders between Belgium, France, and Germany.
To meet these wide-ranging security commitments, the army had fewer soldiers and less money than before World War I. From a high of 3.8 million regulars in 1918, army strength rapidly dropped to 217,477 by 1922 and remained below 200,000 for most of the interwar period. In comparison, there were 247,250 army regulars in 1913.68 India and the Dominions—which significantly contributed to the war effort—were unwilling or unable to assist in imperial emergencies. They took responsibility for local defense, but Britain was responsible for imperial defense as a whole.69
The army's personnel shortage was exacerbated by budget levels being held as low as possible for almost two decades. In August 1919, the cabinet decided that the defense departments should base their budget requests on the assumption that “the British Empire will not be engaged in any great war during the next ten years, and no Expeditionary Force is required for this purpose.”70 Beginning in 1928, this Ten Year Rule was renewed daily. To produce surpluses and pay off wartime debts, the treasury tried to cut spending to 1913–1914 defense estimates, with the army consistently in the weakest position among the armed services. From the 1922–1923 to the 1925–1926 defense estimates, almost all real cuts came from the army, whose net estimates fell by 25 percent.71 Even after the cabinet revoked the Ten Year Rule in 1932, army spending remained low because the government allocated most rearmament resources to the Royal Air Force and Royal Navy.72 Two interlocking beliefs guided Britain's reluctance to rearm and build a field army. First, Britons would “never again” fight a drawn-out war in Europe because another war meant the end of empire if not civilization.73 Second, full-scale rearmament would usher in an egalitarian socialist state, and financial stability was considered the fourth arm of defense alongside the land, naval, and air arms.74
Each successive Chief of the Imperial General Staff (CIGS) drew attention to the wicked mismatch and pressed civilian leaders to either reduce the army's commitments or increase its resources. CIGS Field Marshal Sir Henry Wilson (1918–1922) wrote to the secretary of state for war: “I cannot too strongly press on the Government the danger, the extreme danger, of His Majesty's army being spread all over the world, strong nowhere, weak everywhere, and with no reserve to save a dangerous situation or to avert coming danger.”75 Wilson's successor, Gen. Sir Rudolph Lambert (1922–1926), the Earl of Cavan, recorded that “the whole of my four years as C.I.G.S. was a period of [army] retrenchment … a struggle for existence.”76 The next CIGS, Field Marshal Sir George Milne (1926–1933), described how the army was operating at full capacity, with fewer infantry battalions than before World War I, while trying to match its Locarno obligations and respond to unrest in China, the swaraj movement in India, policing Palestine and Iraq, and an anti-British Egypt.77 “The Army is pared to the bone,” Milne warned, and “our army is so small that it is incapable of fulfilling our international obligations.”78
british innovation of armored maneuver
To resolve the wicked mismatch, the army found an innovative solution in a new capability that I call “armored maneuver.” The idea was that a mobile force, consisting almost entirely of tanks, could maneuver on the future battlefield with impunity and land a decisive blow against the enemy's rear areas. This new capability allegedly solved the army's wicked mismatch by improving combat effectiveness in great wars, small wars, and internal security, while requiring fewer men and less money than the army's existing force structure. But experimentation with prototype forces yielded worrying results, and critics raised plausible concerns about enemy countermeasures and mechanical unreliability. Nonetheless, the army remained wedded to a futuristic vision of armored warfare.
radicalism. British armor innovators shared an overarching idea of armored warfare as mobile all-tank operations with little need for supporting infantry and conventionally towed artillery.79 Maj. Gen. J. F. C. Fuller and Capt. Basil Liddell Hart were the key spokespeople for armored maneuver, though other figures in the Royal Tank Corps—namely, George Lindsay, Charles Broad, and Percy Hobart—were the implementers. Together, they argued that tanks were the optimal combination of protection, mobility, and offensive power.80 As such, armored maneuver promised to be a panacea for the army's wide-ranging security commitments and a substitute for the traditional capabilities associated with the combined-arms offensives of the Western Front. Armor radicalism demanded a high degree of creative destruction.
Armored maneuver allegedly increased the army's effectiveness in all its assigned missions, whether it be great wars, small wars, or internal security.81 In the next great war in Europe, armored maneuver would prevent another bloody Western Front. Battles would begin with an armored clash for “tank supremacy” in which infantry, artillery, and horsed cavalry would play “the part of interested spectators” and “do next to nothing.”82 Fast tanks would exploit into the enemy's rear and paralyze the enemy's communication and command centers, plunging the opposing army into psychological disarray. The traditional arms would come into play only after the battle had been decided: as armored forces moved forward “by a series of bounds,” the traditional arms would occupy conquered territory and garrison “a chain of fortified depots” established behind the advancing tank forces.83
In small wars, the main challenges were that military garrisons were usually located far away from disturbances and rebels had growing access to small arms. But tanks could allegedly travel far without relying on supply lines, do so quickly across various terrains, and counter small arms fire.84 Mechanization functionally reduced the size of empire. Finally, for policing and internal security, tanks dispensing nonlethal chemical gases offered a discriminating and less escalatory way to disperse riots.85
Armored maneuver could allegedly do these things at lower cost than the traditional capabilities developed in World War I. In the final year on the Western Front, the British Army was integrating infantry, artillery, and armor capabilities, with the aid of aerial spotting and surprise, to penetrate German defensive positions held in depth and to do so with acceptable losses.86 Heavy counter-battery artillery fire was followed by a tank-supported infantry advance under cover of a creeping artillery barrage that included both high-explosive shells and smoke shells to suppress enemy resistance.87 This became the standard way of war enshrined in British doctrine after 1919.88
In contrast, innovators touted armored maneuver as an efficient substitute for the difficult and demanding tasks associated with implementing combined-arms principles.89 Fuller and Liddell Hart proposed a “new model army” in which mechanical vehicles performed all primary land combat functions. “The tank is likely to swallow the infantryman, the field artilleryman, the engineer and signaller,” Liddell Hart wrote, “while mechanical cavalry will supersede the horseman.”90 Moreover, light tanks, fast cruiser tanks, and heavy tanks could allegedly coordinate better than the current arrangement of inter-arm cooperation.91 Conversely, hitching tanks to slower elements such as infantry would be “tantamount to yoking a tractor to a draught-horse” and having them “operate together under fire” would be “equally absurd.”92
By reducing troop requirements and mechanizing the remainder, a remodeled army could “produce, within the limits of the money available, a military organization of the highest efficiency and with powers of efficient development along the economic line.”93 With stagnant army budget estimates for the foreseeable future, “new mechanised units” were touted “in place of, not in addition to the old infantry and cavalry units,” or else there would be “no real reduction of cost, nor modern efficiency.”94 Substitution was not only economical; it would also improve the army's combat power. In a great war, a new model division had equivalent fighting value to four or more current divisions and to “almost any number of present-day divisions” if fighting a small war.95
wishful thinking. The British Army experimented with armored maneuver during two training seasons in 1927 and 1928, which featured the world's first fully mechanized combat brigade—the Experimental Mechanized Force, later renamed the Experimental Armored Force. These experiments should have tempered excessive faith in the promises of armored maneuver, but they did not.
First, the experimental force failed to achieve assigned mission objectives despite being organized according to armored-maneuver principles. Col. George Lindsay, inspector of the Royal Tank Corps, actively lobbied for armor-centric formations to economize personnel and money, as opposed to a prototype all-arms mechanized division. CIGS Milne sided with Lindsay (“Colonel Lindsay is on the right lines and we have now to decide how to translate his ideas into action”) and offered command to Fuller (though he declined).96 The 1927 and 1928 training seasons culminated in large exercises that pitted the experimental force against a more traditional opponent. In 1927, opposed by an infantry division and a horsed cavalry brigade, the Experimental Mechanized Force failed to take a high-ground location. In 1928, the same infantry division, augmented by a tank company, an armored car company, a cavalry regiment, and an artillery brigade, successfully stalemated a combined force of the Second Cavalry Brigade and the Experimental Armored Force.
Second, given the tight resources, the mechanized formations lacked adequate and appropriate equipment, which limited the reliability of experimental data. Milne recognized that the army wanted “to make certain experiments and we have not had the money to do what we really intended.”97 The prototype units lacked suitable, reliable, and streamlined vehicles to conduct the desired exercises, and they struggled to field them in adequate numbers.98
Third, the exercises were designed to highlight the vulnerabilities of armor. Maj. Gen. Sir John Burnett-Stuart served as the director of the maneuvers. He openly admitted that the armored force's 1928 exercises were “deliberately planned to bring out its limitations rather than to make a display of its powers.”99
Criticism of armored maneuver centered on enemy anti-tank countermeasures and unreliable tank mobility.100 Postexercise assessments repeatedly emphasized the need for greater supporting fire in any tank attack on enemy defenses because armored maneuver was vulnerable to enemy anti-tank weapons and artillery.101 Similarly, a general staff memorandum on army training criticized the failure to secure proper fire support to suppress enemy anti-tank fire before assaulting a position, violating the “correct principles” established during World War I.102 It warned that mechanized forces must not be allowed to “upset all our preconceived notions of war.”103 Tank mobility was also a perennial issue. Burnett-Stuart cautioned that tanks could not traverse all terrain and that their mobility was still in the developmental stage.104 The experimental force lost many of its medium tanks from breakdowns even on short trips.105
These criticisms were highly plausible. In World War I, British tanks were indeed vulnerable to German countermeasures in the form of field guns, anti-tank rifles, armor-piercing machine gun ammunition, and minefields. In the interwar period, British tanks could not survive a direct hit by the shell of even a small-caliber, high-velocity gun (technology that was already available).106 Moreover, much like the tanks in World War I, interwar models had trouble traversing difficult terrain and often broke down.107 When the War Office dispatched an armored “Mobile Force” to Egypt during the Abyssinian crisis, it had the newest light tanks, yet struggled with so many broken tracks that it was nicknamed the “Mobile Farce.”108
Instead of altering the radical trajectory of British armor innovation, however, the experiments were read as confirming the theory of armored maneuver. Rather than conceding that the traditional capabilities of infantry cooperation and indirect artillery fire support were necessary, armor innovators argued that tank mobility sufficed as a form of protection. First, exercise umpires allegedly overestimated the effectiveness of anti-tank weapons. The representative white and green flags used to fortify defenses in the exercises were “cheap to provide and easy to wave” but “an effective weapon, complete with tractor and ammunition trailer” was “an expensive item,” such that no “infantry division could be provided enough to form the immense circular screen that would be necessary for its protection.”109
Second, European armies in the next great war would allegedly be smaller than in the last one, and therefore would have exposed flanks.110 Even if anti-tank weapons were lethal, an armored force could use its incredible mobility to maneuver around obstructions, which in turn precluded the need for infantry cooperation to establish bridgeheads and clear localities.111 A few years later, when crystal sets (a rudimentary form of radio) made possible the tactical control of a mobile force, Liddell Hart declared the dawn of new “anti-anti-tank gun” tactics with which a “few scattered guns” could “easily be overrun by a tank force in its onward surge.”112
Finally, critics simply needed greater faith in the principle that “he who applies a novel device by a novel method has oftenest attained revolutionary results in history.”113 Liddell Hart attributed the mechanized force's defeat in the 1927 training season to the commander's imprudent fear of enemy attack. The problem was not the enemy, but that the mechanized force was too concerned about security; its maneuvers were not bold enough.114 The 1928 exercises revealed nothing that reasoning—“the cheapest form of experiment”—had not already made self-evident: “that the present composition of the force is fundamentally unsuitable” and the “obvious truth that armoured and unarmoured vehicles do not coalesce.”115 The solution, Liddell Hart reiterated, was an all-tank force with streamlined vehicles.116 Whereas concerns about anti-tank weapons and unreliable mobility arose from the army's own practical experiences in World War I, armored maneuver was based primarily on deductive logic and theoretical leaps into an uncertain future.
rushed development. Armor innovators quickly succeeded in entrenching their mechanizing agenda. Top army officers endorsed the ideas behind armored maneuver and empowered known armor radicals to design and train mechanized formations. The historian David French observes that “by the end of the 1920s the British had virtually abandoned the attempt to create permanent, all-arms formations incorporating a balance of tanks, infantry, and supporting arms.”117 Attempts to temper armor radicalism in the 1930s failed; and armored maneuver principles guided how British armor organization and doctrine developed in the lead-up to World War II.
In September 1927, Milne praised the Experimental Mechanized Force because he was “perfectly certain that we are working on absolutely the right lines.” At the outbreak of war, a mobile force designed to operate across hundreds of miles could deliver “a swinging blow to come around the flank” and “carry out big operations and big turning movements.” Normally, this force would remain entirely armored because infantry became a liability in combat.118 Milne later proposed to the Army Council a future armored brigade with essentially the same blueprint as Liddell Hart's all-tank force.119 He also tasked Col. Charles Broad, a known supporter of armored maneuver, with compiling primers on armored warfare. These envisioned tank brigades achieving decisive victory with numerically inferior forces composed of light tanks for reconnaissance and medium tanks for striking, but they excluded other arms.120 When Milne's successor, Field Marshal Sir Archibald Montgomery-Massingberd, permanently established the First Tank Brigade in 1933, it adhered to an all-tank conception.121 The director of staff duties observed that it could “safely be said that the general consensus of Army opinion was in agreement,” that the armored brigade was a commander's “most powerful offensive agent.”122
The 1934 trials with the First Tank Brigade represented the last serious attempt to temper armor radicalism. Hobart, now inspector of the Royal Tank Corps and commander of the First Tank Brigade, preferred an independent tank brigade concept, in which tanks would carry out deep penetrations with only the smallest attachments, avoiding logistical problems and the difficulties of coordinating different arms. But Lindsay pushed for a mobile division concept that incorporated the tank brigade into an all-arms mechanized division. The two agreed to temporarily form a Mobile Force composed of the First Tank Brigade, Seventh Infantry Brigade, a mechanized field artillery brigade, and other supporting arms—an armored division in all but name.123 The Mobile Force was defeated by the unmechanized First Infantry Division, which prepared significant defensive arrangements and used motorized units to block the Mobile Force's retreat with mines and anti-tank guns. Again, a more traditional force defeated the more innovative one. The Royal Tank Corps blamed the Mobile Force's poor performance on Lindsay's command, not on armored maneuver, and, as a result, Hobart's independent tank brigade concept eclipsed Lindsay's mobile division.124
Thereafter, the army designed its armored division for armored maneuver carried out by its main striking element: all-tank armored brigades.125 Tanks and infantry would be organized separately, cooperating only in particular operations and only at the divisional level. As the armored division evolved over the latter half of the 1930s, the already small representation of supporting arms shrank further. In the final prewar model, the division contained only one infantry battalion, whereas four battalions eventually became standard in World War II.126 The Royal Tank Corps “dominated” the armored division, “committed to a machine-age vision that tanks by themselves could win battles.”127
Variation in British Army Effectiveness in the Desert War, 1941–1942
British armored maneuver and German combined-arms maneuver came head-to-head in the Desert War. Three months after Italy invaded Egypt in September 1940, Britain's Western Desert Force launched Operation Compass, a counterattack that resulted in a complete rout of the Italian Tenth Army as it retreated westward toward Tripolitania. In February 1941, the Afrika Korps under Lt. Gen. Erwin Rommel's command arrived in North Africa to make sure that Tripoli was not abandoned without a fight. Over the next two years, the Desert War unfolded across a 1,200-mile stretch of land between Tripoli in the west and Alexandria in the east.
British military performance for the first sixteen months was poor—the army repeatedly failed to achieve mission objectives at acceptable costs—but then noticeably improved at the First Battle of El Alamein, the Battle of Alam el Halfa, and the Second Battle of El Alamein. My theory of harmful innovation expects, and the evidence shows, that British armor innovation helped undermine military effectiveness. When the principles of armored maneuver held sway, British forces were ineffective, but as British commanders gradually unlearned armored maneuver and restored traditional capabilities—specifically those associated with the infantry-artillery team developed on the Western Front—performance improved.
British Army Ineffectiveness in the Desert War, 1941–1942
From March 1941 to June 1942, British forces suffered a string of defeats as depicted in figure 2. Rommel's first offensive (March 28–May 30, 1941) reversed Italian territorial losses from Operation Compass and pushed the British out of Libya, except for the garrison at the port city of Tobruk. The British and Commonwealth allies tried to relieve the siege of Tobruk three times. Operation Brevity (May 15–16, 1941) and Operation Battleaxe (June 15–17, 1941) failed to reach Tobruk, and British armor suffered shocking losses. By the third attempt, Operation Crusader (November 18–December 30, 1941), the Western Desert Force had expanded into the Eighth Army, which finally relieved Tobruk with overwhelming matériel superiority. But again, the British paid an unacceptable cost in armored forces, while Rommel and his staff were satisfied with their army's performance.128 Shortly thereafter, Rommel launched his second offensive and again chased British and Commonwealth forces eastward across Libya. The offensive slowed just west of Tobruk, around Gazala. During the subsequent Battle of Gazala (May 26–June 21, 1942), Rommel's divisions again forced the Eighth Army into retreat, but this time seized Tobruk and pushed onward into Egypt.
Figure 2. Major Operations of the Desert War, 1941–1942
British Army ineffectiveness can be traced to the radicalism of its armor innovation: the new capability failed to deliver on its promises, but the army could not rely on its traditional capabilities either. The central principle of armored maneuver was tank primacy—the mistaken idea that tanks would be war-winning weapons if they were unencumbered by the complicated tasks of cooperating with infantry and artillery. But British armored divisions conducting armored maneuver found that their tank numbers fell at an astonishing rate for the very reasons raised by interwar critics. Tanks' mechanical unreliability was a persistent problem throughout the conflict, as it had been in the interwar period.129 The chief culprit, however, was the German use of anti-tank guns—a plausible countermeasure that armor innovators downplayed by appealing to tank mobility and surprise attacks.
British campaign plans expected tanks to search for and destroy German panzer forces in decisive tank battles that would determine any land operation's outcome.130 But German combined-arms maneuver eschewed tank-on-tank engagements and instead emphasized anti-tank guns, as was done in World War I. German tactics pushed anti-tank guns forward to prepare the way for panzer regiments and to cover their flanks in combat. Rommel drew British armor onto anti-tank guns while reserving his own armor for maneuver against more vulnerable targets such as supply columns, dismounted infantry, or a formation's headquarters.131
When confronted with these tactics, British armored divisions struggled to overcome enemy defenses because they lacked traditional capabilities. Traditionally, infantry spotting and artillery fire would be used to suppress enemy defenses, but the armored division's artillery and infantry components operated haphazardly and independently.132 The artillery lacked a standard technique to support fast tank forces, and mobile infantry battalions did not know how to cooperate with tanks.133 For example, although the Eighth Army's order of battle showed a decisive field artillery advantage in Operation Crusader, panzer divisions typically enjoyed a local superiority in artillery support against British armor attacks.134 After the Battle of Gazala, the chief of staff of the Middle East Headquarters criticized the handling of British armor, which “fought without its vital motor infantry component.”135 With little fire support from other arms, British tanks repeatedly charged German anti-tank gun screens to their own demise. When the British tried to work around the German flank, they were lured onto German guns.136
Commanders on both sides eventually recognized the causal relationship between British armored maneuver, the loss of traditional capabilities, and military ineffectiveness. From Rommel's perspective, “the British armoured divisions—in contrast to our own—were ‘pure in race,’ that is to say, they consisted of armour throughout.”137 Similarly, Lt. Gen. Sir Henry Maitland Wilson, who commanded the Western Desert Force, sought to “check a pernicious doctrine … that tank units were capable of winning an action without the assistance of the other arms.”138 Gen. Sir Claude Auchinleck, commander in chief in the Middle East and commander of the Eighth Army, bemoaned “the idea that the Royal Armoured Corps was an army within an army” and emphasized the need to “restore the proper balance of the three arms and so secure their better co-operation on the battlefield.”139
The root problem was innovative deviation from combined-arms organization and tactics. According to one German staff officer, the German panzer division was “a highly flexible formation of all arms, which always relied on artillery in attack or defense,” whereas the British forces “failed to make adequate use of their powerful field artillery, which should have been taught to eliminate our anti-tank guns.”140 Maj. Gen. William Gott, commander of the Seventh Armored Division, attributed German strength and British weakness to the way that a German soldier “[in] every phase of battle … co-ordinates the action of his anti-tank guns, Field Artillery and Infantry with his tanks.”141 And Maj. Gen. Sir Bernard Freyberg, commander of the New Zealand Division, concluded that British failures through 1942 were not for want of a good tank but for want of artillery support for British armor.142
The heavy reliance on armored maneuver also weakened the infantry divisions' effectiveness in defensive and offensive engagements. Since armor was deemed the principal anti-tank weapon, infantry commanders expected, and demanded, that fast tanks be stationed nearby to defend them against panzers. This strategy warped campaign plans. For instance, in Operation Crusader, the Eighth Army suboptimally dispersed its overwhelming number of tanks and positioned them to guarantee protection to infantry divisions that refused to move until the armored battle was underway.143 But later experience demonstrated that an infantry division with adequate artillery could repel a panzer attack and even inflict heavy damage.144
The British Army had also lost the traditional capabilities needed to attack a position held in depth.145 Infantry mounted night attacks and successfully seized their objectives by daybreak, but supporting arms got lost in the night, were held up by enemy posts that the infantry had bypassed, or never departed from the starting line. Shorn of supporting arms, entire infantry brigades were destroyed by inevitable German counterattacks at dawn. Infantry commanders expected too much from tanks, whereas tank commanders were not trained to cooperate with infantry.146
Improvement in British Army Effectiveness, 1942
After the Battle of Gazala, the Axis forces' momentum weakened near a defensive line prepared by the Eighth Army that ran south from El Alamein. It was here that British forces began to show signs of improvement. From July 1942 to the end of the year, the Eighth Army mounted three successful operations, achieving mission objectives at acceptable cost. At the First Battle of El Alamein (July 1–27, 1942), the Eighth Army successfully repelled Axis advances, though its counterattacks failed to make headway against enemy defenses. Rommel again tried to break through British defenses in the Battle of Alam el Halfa (August 30–September 5, 1942) but was similarly repulsed. This time, instead of an immediate counterattack, British forces reorganized and retrained for almost two months before initiating the Second Battle of El Alamein (October 23–November 4, 1942), which was the British Army's first truly effective offensive against German forces.
What were the British doing that they had not done before? The evidence suggests that British military effectiveness improved as the army reversed innovation, unlearned armored maneuver, and restored traditional capabilities: an infantry-artillery team supported by tank forces. British commanders first restored traditional capabilities on the defense. The Eighth Army learned to coordinate and concentrate artillery fire to peel apart the all-arms organization of attacking panzer divisions. Infantry learned to defend themselves as they received more anti-tank weapons, and to act as forward infantry observers for artillery fire. And British armor learned to lure the enemy into combined-arms fire delivered by artillery, infantry, and tanks in hull-down position, as opposed to charging forward against attacking panzer formations.
At the First Battle of El Alamein, Rommel tried to outflank the Eighth Army's position and force it back to the Suez Canal. On the first day, an infantry brigade, supported by nine heavy tanks and artillery, blunted the attack at Deir el Shein. The next day, Rommel redirected his attack, but coordinated artillery fire pinned down his forces. By the third day, attacking forces dug in and transitioned to the defensive, and eventually withdrew. After a month's rest, at the Battle of Alam el Halfa, Rommel tried once more to swing south of the British defensive line. British armor and anti-tank gunners hid in the folds of Alam el Halfa ridge and fired on the advancing panzers once they were within 300 yards, followed by heavy concentrated fire from over 100 field guns.147 After two days, Rommel was forced to withdraw.
During the two-month lull between Alam el Halfa and the Second Battle of El Alamein, the Eighth Army restored traditional offensive capabilities. Lt. Gen. Bernard Montgomery was in command, having replaced Auchinleck after the First Battle of El Alamein. He reorganized and retrained the infantry, armor, and artillery to carry out coordinated set-piece battles that were fit for the Western Front.148 Each infantry division and its components underwent full-scale rehearsals to form a bridgehead: the infantry assault, artillery support, minefield gapping, and cooperation with heavy infantry tanks and the Royal Air Force. British armored divisions practiced coordinating tank, artillery, and machine-gun fire to fight as a division rather than as independent armored brigades. Finally, the Eighth Army returned authority to divisional artillery commanders, reintroduced counter-battery and creeping barrage methods developed in World War I, and adopted a new standardized technique of defensive fire against alternating impromptu targets.149
Unlike previous offensives, the Second Battle of El Alamein exhibited the qualities that were a hallmark of British operations in World War I. It was a rehearsed infantry-artillery assault, supported by heavy tanks, against fixed defenses and enemy garrisons, designed to destroy the enemy's offensive power through attrition. The battle opened with a counter-battery barrage that destroyed up to half the enemy's anti-tank guns, followed by creeping barrages to suppress enemy fire and guide the infantry forward. The infantry returned to the “bite-and-hold” tactics that were common on the Western Front. And through robust battle drills and proven consolidation techniques, the infantry defended themselves against counterattacks even by enemy panzer divisions. After the Second Battle of El Alamein, a flood of reports promoted a return to the 1918 practice of coordinating massed fires from field artillery in support of infantry advances.150
The Second Battle of El Alamein was Britain's first permanent land victory in World War II, and it was achieved at expected costs. Remarkably, Montgomery predicted that the battle would last ten to twelve days and instructed medical services to prepare for 13,000 casualties. From opening salvo to Rommel's official retreat, the battle lasted twelve days and the Eighth Army suffered 13,500 dead, wounded, or missing.151 Lingering vestiges of armored maneuver continued to hamper the armored divisions, but by restoring the traditional infantry-artillery team—reversing the radical innovation of armored maneuver—the British Army improved its performance.152
In sum, wartime evidence reveals that innovation can have varying effects. German innovation in armored warfare improved effectiveness, but British innovation did not. Moreover, reversing innovation improved British combat power.
Evaluating Alternative Explanations of Harmful Innovation
Research on military innovation has not systematically explained harmful innovation, but some existing intuitions about technology and culture could plausibly privilege bad innovation trajectories and screen out better pathways.
First, a given technology's characteristics could mislead innovation efforts away from its optimal employment. Disruptive technology, for instance, can improve performance in a dimension of combat that is undervalued by a given service, which increases the likelihood that the service uses the technology in suboptimal ways.153 But the German Army also innovated with tank technology, did so differently, and to better effect. In fact, it was the British Army that undertook more “disruptive” innovation, which is commonly assumed to be a superior mode of competition, whereas the German Army incorporated tanks into its traditional operational concepts.
Second, organizational culture could either prevent innovation altogether or channel innovation efforts in harmful directions. A service's culture is a “set of basic assumptions, values, norms, beliefs, and formal knowledge that shape collective understandings,” which in turn defines “what is a problem and what is possible.”154 British Army culture thus might have obstructed innovation in armored warfare. Elizabeth Kier makes this case, for example, when she argues that British Army culture valued drills and ceremonial duties befitting of a gentleman-officer, rather than professional skills and technological expertise.155 But the bulk of the officer corps, including every CIGS, accepted mechanization as the primary way that the army could win quickly while avoiding casualties—internal disagreements centered on the pace of reform.156 Critics also suggest that the cavalry's regimental commitment to horses was a major obstacle to mechanization efforts.157 But once the army decided to mechanize the cavalry, most regimental officers were “determined to make a success of it as the only way of ensuring the future of their regiments.”158
Another approach to organizational culture argues that armed services tend to develop new capabilities that align with preferred mission goals and methods, which can misalign with effectiveness.159 The British regimental system, for instance, could have prevented inter-arm cooperation.160 But by the late nineteenth century, the War Office professionalized the regimental system and broke down regimental parochialism through compulsory training, promotion exams, and overseas duties.161 Or perhaps the army's cultural identity, being rooted in imperial garrisoning, could have prioritized frontier warfare, which contradicted the skills needed for conventional warfare.162 But the general staff stubbornly prepared for continental involvement; and the army resisted training specifically for small wars and preferred training for conventional military operations.163 Disruptive technology and organizational culture are thus plausible alternative explanations, but cannot in themselves account for the British Army's harmful innovation.
Conclusion
Military innovation is more dangerous than is generally acknowledged. Prevailing wisdom suggests that innovation improves military power, and that the more disruptive the change, the more effective the resulting combat forces. In contrast, this article has argued that under the stress of expanding commitments and shrinking resources, the affected military service is incentivized to make desperate gambles on new and relatively untested capabilities, wish away problems that may arise from cannibalizing traditional capabilities, and rush the innovation process. When the resulting force structure and doctrine are used in combat, however, the military service is likely to discover that it has overspecialized in the new capability to its own detriment. To improve performance, the service may try to downgrade the centrality of the new capability and restore traditional capabilities that remain surprisingly relevant and necessary.
Evidence from British armor innovation shows the plausibility of this argument. Facing a wicked mismatch in which ambitious commitments outstripped austere resources, the British Army developed armored maneuver and siphoned resources away from traditional capabilities, placing a big bet on a radical vision of future warfare while ignoring plausible vulnerabilities. But in the Desert War, British armor radicalism did not deliver on its promises; the enemy exploited vulnerabilities left open by the loss of traditional capabilities associated with the infantry and artillery arms, and commanders returned to the older methods of the infantry-artillery team as a backstop to shore up combat power.
My theory of harmful innovation identifies only one set of conditions that generate adverse pressures to innovate in self-defeating ways. Innovation is inherently complicated and laden with idiosyncratic processes that vary with environmental factors, across organizations, and with the personal predilections of influential individuals. As such, for reasons other than extreme commitment-resource gaps, military organizations pursuing innovation might be tempted to overhype new capabilities and underestimate the cost of losing traditional ones. Nonetheless, the proposed theory may offer generalizable explanations beyond the British armor case. For instance, it may provide important insights into the U.S. Air Force's innovation of an air-atomic blitz capability in the late 1940s and the U.S. Army's pentomic division in the 1950s.
The theory has critical implications for the study and practice of military innovation. If innovation is not always beneficial for combat performance, then identifying the conditions under which innovation occurs is insufficient. Current theories cannot fully explain why the identified causes of innovation should improve combat performance and how innovation relates to military power. Therefore, military innovation research needs to refocus on the quality of the innovation process.
For those concerned with the future character of war, the findings suggest that innovation does not necessarily improve combat performance. A bias in favor of military innovation may be helpful because of countervailing bureaucratic and cultural pressures against disruptive changes, but professional instincts to preserve existing ways of war can also be prudent, especially if warfare evolves toward essential continuity rather than discontinuous revolutions.164 Making big bets on technologies such as unmanned systems comes with significant risks because the novel capabilities are unfamiliar and what is lost in an innovation process can be as important as what is created—capital substitution involves tradeoffs.165
Fervor for military innovation is especially high in the United States because there is a particularly foreboding sense that the U.S. military is overstretched, and that its resources and commitments are misaligned. Ever since World War II, the United States has accumulated expansive interests abroad, but it has been reluctant to invest the necessary resources to sustain these commitments.166 Today, while the United States remains concerned about Russia in Europe and Iran in the Middle East, Chinese military modernization and foreign policy have eroded confidence in the U.S. military's ability to operate effectively in the western Pacific and credibly deter Chinese aggression. Meanwhile, resources are relatively stagnant as the rising cost of military equipment exceeds inflation and growth in the defense budget.167
The shift to great power competition in U.S. foreign and military policy has animated a range of promising and innovative proposals, but their perils should be explicitly recognized. With China as the pacing threat, the United States has an incentive to reorganize its force structures, concentrate on weapon technologies for high-end conventional warfare, and develop new operational concepts to counter Chinese anti-access/area-denial military forces in the Indo-Pacific region.168 This is evidenced most starkly in the recent decision by the U.S. Marine Corps to divest all of its tanks and cut back on aircraft and cannon artillery to invest in new technologies and novel “littoral combat regiments” designed to conduct expeditionary advanced base operations.169 The U.S. Army has also debated whether the infantry brigade combat team, the building block of its operations in Iraq and Afghanistan, will become obsolete in future multi-domain operations.170
Military innovation can be healthy insofar as it realigns military means with political ends. But ensuring the proper balance and integration of new and traditional capabilities involves calibrating the appropriate level of radicalness in an innovation process for an uncertain strategic landscape, which only becomes more challenging as a commitment-resource gap widens and options narrow. If mission burdens continue to grow, and the resourcing or efficiency of the armed services declines, U.S. policymakers can expect increasingly radical proposals for innovation that promise dramatic returns in combat power. But it is in this very context of a yawning commitment-resource gap that harmful innovation is more likely to occur. In 1942, the strategist Bernard Brodie warned that the United States was “under the sway of a dogma of innovation, just as blind and as dangerous as that there is nothing essentially new in war.”171 His warning remains relevant today.
For their comments on various iterations of this project, the author thanks Stephen Biddle, Jasen Castillo, Audrey Kurth Cronin, Fiona Cunningham, Alexander Downes, Martha Finnemore, Benjamin Friedman, Andres Gannon, Eugene Gholz, Mariya Grinberg, Jason Lyall, Sara Plana, John Schuessler, Caitlin Talmadge, Rachel Tecott, Sanne Verschuren, seminar participants at American University, the Catholic University of America, the U.S. Naval War College, the University of Notre Dame, and Texas A&M University, and the anonymous reviewers. The ideas in this article were first presented in “Military Magic: The Promise and Peril of Military Innovation,” Ph.D. dissertation, George Washington University, 2021. The views expressed here are those of the author and do not necessarily represent those of the U.S. Department of Defense or its components.
For a similar definition of military effectiveness, see Dan Reiter, “Confronting Trade-Offs in the Pursuit of Military Effectiveness,” in Dan Reiter, ed., The Sword's Other Edge: Trade-Offs in the Pursuit of Military Effectiveness (New York: Cambridge University Press, 2017), p. 4. I use “military effectiveness,” “combat effectiveness,” “wartime effectiveness,” “military power,” “combat power,” and “military performance” as interchangeable terms. The terms “military service,” “armed service,” and “service” are also used as synonyms.
Robert Gilpin, War and Change in World Politics (New York: Cambridge University Press, 1981), p. 60.
John J. Mearsheimer, The Tragedy of Great Power Politics (New York: W. W. Norton, 2001), p. 166.
Kenneth N. Waltz, Theory of International Politics (New York: McGraw-Hill, 1979), p. 127.
Adam Grissom, “The Future of Military Innovation Studies,” Journal of Strategic Studies, Vol. 29, No. 5 (2006), p. 907, https://doi.org/10.1080/01402390600901067.
For example, see Nina Kollars, “Military Innovation's Dialectic: Gun Trucks and Rapid Acquisition,” Security Studies, Vol. 23, No. 4 (2014), p. 790, https://doi.org/10.1080/09636412.2014.965000; and Adam M. Jungdahl and Julia M. Macdonald, “Innovation Inhibitors in War: Overcoming Obstacles in the Pursuit of Military Effectiveness,” Journal of Strategic Studies, Vol. 38, No. 4 (2015), p. 469, https://doi.org/10.1080/01402390.2014.917628.
Barry R. Posen, The Sources of Military Doctrine: France, Britain, and Germany between the World Wars (Ithaca, N.Y.: Cornell University Press, 1984), p. 29.
Williamson Murray and Allan R. Millett, “Military Effectiveness Twenty Years Later,” in Allan R. Millett and Williamson Murray, eds., Military Effectiveness, Vol. 2: The Interwar Period (New York: Cambridge University Press, 1988), p. xiii.
Theo Farrell, Sten Rynning, and Terry Terriff, Transforming Military Power since the Cold War: Britain, France, and the United States, 1991–2012 (New York: Cambridge University Press, 2013), p. 8; and Harvey M. Sapolsky, “On the Theory of Military Innovation,” Breakthroughs, Vol. 9, No. 1 (2000), p. 35. See also Harvey M. Sapolsky, Brendan Rittenhouse Green, and Benjamin H. Friedman, “The Missing Transformation,” in Harvey M. Sapolsky, Benjamin H. Friedman, and Brendan Rittenhouse Green, eds., U.S. Military Innovation since the Cold War: Creation without Destruction (New York: Routledge, 2009), p. 6.
Posen, The Sources of Military Doctrine, pp. 102–104.
Stephen Rosen, Winning the Next War: Innovation and the Modern Military (Ithaca, N.Y.: Cornell University Press, 1991), p. 53. But see Andrew Bacevich, The Pentomic Era: The U.S. Army between Korea and Vietnam (Washington, D.C.: National Defense University Press, 1986).
For similar definitions, see Kimberly Marten Zisk, Engaging the Enemy: Organization Theory and Soviet Military Innovation, 1955–1991 (Princeton, N.J.: Princeton University Press, 1993), p. 4; and Michael C. Horowitz, The Diffusion of Military Power: Causes and Consequences for International Politics (Princeton, N.J.: Princeton University Press, 2010), pp. 22–23.
For example, see Rosen, Winning the Next War, pp. 22–23.
Horowitz, The Diffusion of Military Power, pp. 22–23. See also Theo Farrell and Terry Terriff, “The Sources of Military Change,” in Theo Farrell and Terry Terriff, eds., The Sources of Military Change: Culture, Politics, Technology (Boulder, Colo.: Lynne Rienner, 2002), p. 5.
Joseph Schumpeter, Business Cycles: A Theoretical, Historical, and Statistical Analysis of the Capitalist Process (New York: McGraw-Hill, 1939), pp. 87–88.
Joseph Schumpeter, Capitalism, Socialism, and Democracy (New York: Harper Perennial, 1942; repr., 2008), p. 132; and James G. March, “Footnotes to Organizational Change,” Administrative Science Quarterly, Vol. 26, No. 4 (1981), p. 572, https://doi.org/10.2307/2392340.
Schumpeter, Capitalism, Socialism, and Democracy, p. 83.
Rosen, Winning the Next War, pp. 7–8.
Owen R. Cote Jr., “The Politics of Innovative Military Doctrine: The U.S. Navy and Fleet Ballistic Missiles,” Ph.D. dissertation, Massachusetts Institute of Technology, 1996, p. 9.
Andrew L. Russell and Lee Vinsel, “After Innovation, Turn to Maintenance,” Technology and Culture, Vol. 59, No. 1 (2018), p. 17, https://doi.org/10.1353/tech.2018.0004.
Walter Lippmann, U.S. Foreign Policy: Shield of the Republic (Boston: Little, Brown, 1943), p. 9.
C. West Churchman, “Guest Editorial: Wicked Problems,” Management Science, Vol. 14, No. 4 (1967), p. B141, https://www.jstor.org/stable/2628678.
Harold Sprout and Margaret Sprout, “‘Retreat from World Power’: Processes and Consequences of Readjustment,” World Politics, Vol. 15, No. 4 (1963), p. 658, https://doi.org/10.2307/2009462.
There are other military-relevant resources not considered here. For example, see Klaus Knorr, The Power of Nations: The Political Economy of International Relations (New York: Basic Books, 1975), pp. 45–78.
Scholars recognize the role of mission burdens in the innovation process. For example, on scope, see Rebecca D. Patterson, The Challenge of Nation-Building: Implementing Effective Innovation in the U.S. Army from World War II to the Iraq War (Lanham, Md.: Rowman and Littlefield, 2014), p. 3. On intensity, see Rosen, Winning the Next War, p. 76; and Zisk, Engaging the Enemy, pp. 3–4. On time, see Posen, The Sources of Military Doctrine, pp. 59, 74–75.
Cote, “The Politics of Innovative Military Doctrine,” pp. 339–342.
Elizabeth Kier, Imagining War: French and British Military Doctrine between the Wars (Princeton, N.J.: Princeton University Press, 1997), pp. 56–88.
Klaus Knorr, The War Potential of Nations (Princeton, N.J.: Princeton University Press, 1956), pp. 167–169; and Eliot A. Cohen, Citizens and Soldiers: The Dilemmas of Military Service (Ithaca, N.Y.: Cornell University Press, 1985), pp. 117–151.
Paul K. MacDonald and Joseph M. Parent, “Graceful Decline? The Surprising Success of Great Power Retrenchment,” International Security, Vol. 35, No. 4 (Spring 2011), pp. 19–21, https://doi.org/10.1162/ISEC_a_00034.
Joseph M. Parent and Sebastian Rosato, “Balancing in Neorealism,” International Security, Vol. 40, No. 2 (Fall 2015), pp. 61–64, https://doi.org/10.1162/ISEC_a_00216.
Gilpin, War and Change in World Politics, pp. 188–189.
The logic here is akin to “gambling for resurrection,” in which high-risk policies are adopted in hopes of staving off an otherwise certain and undesirable outcome. See George W. Downs and David M. Rocke, “Conflict, Agency, and Gambling for Resurrection: The Principal-Agent Problem Goes to War,” American Journal of Political Science, Vol. 38, No. 2 (1994), pp. 374–376, https://doi.org/10.2307/2111408.
David Barno and Nora Bensahel, Adaptation under Fire: How Militaries Change in Wartime (New York: Oxford University Press, 2020), pp. 10–17.
Pursuing high-risk innovation amid a wicked mismatch aligns with prospect theory: risk-seeking is more common in the “domain of losses” as opposed to the “domain of gains.” The originating work is Daniel Kahneman and Amos Tversky, “Prospect Theory: An Analysis of Decision under Risk,” Econometrica, Vol. 47 (1979), pp. 263–291.
Richard K. Betts, Military Readiness: Concepts, Choices, and Consequences (Washington, D.C.: Brookings Institution Press, 1995), pp. 87–143; and Michael E. O'Hanlon, The Science of War: Defense Budgeting, Military Technology, Logistics, and Combat Outcomes (Princeton, N.J.: Princeton University Press, 2009), pp. 31–43.
Benjamin Jensen, Forging the Sword: Doctrinal Change in the U.S. Army (Stanford, Calif.: Stanford University Press, 2016), pp. 16–17.
Kendrick Kuo, “Military Magic: The Promise and Peril of Military Innovation,” Ph.D. dissertation, George Washington University, 2021, pp. 525–565.
Philip Selznick, Leadership in Administration (New York: Harper and Row, 1957), pp. 65–89.
Anthony Downs calls this the “superman syndrome.” See Downs, Inside Bureaucracy (Boston: Little, Brown, 1967), pp. 216–219.
Kuo, “Military Magic,” pp. 326–349.
Lawrence Freedman, The Future of War: A History (New York: PublicAffairs, 2019), pp. 264–287; and Cathal J. Nolan, The Allure of Battle: A History of How Wars Have Been Won and Lost (New York: Oxford University Press, 2017).
Sapolsky, Green, and Friedman, “The Missing Transformation,” p. 7.
Barbara Levitt and James G. March, “Organizational Learning,” Annual Review of Sociology, Vol. 14 (1988), pp. 322–323, https://doi.org/10.1146/annurev.so.14.080188.001535.
On rogue outcomes, see Chris C. Demchak, Military Organizations, Complex Machines (Ithaca, N.Y.: Cornell University Press, 1991), pp. 15–27. See also Jon R. Lindsay, Information Technology and Military Power (Ithaca, N.Y.: Cornell University Press, 2020), pp. 32–70.
Stephen Biddle, Military Power: Explaining Victory and Defeat in Modern Battle (Princeton, N.J.: Princeton University Press, 2004), pp. 5–6.
On these alternative pathways, see Murray and Millett, “Military Effectiveness Twenty Years Later,” p. xiii; Emily O. Goldman and Richard B. Andres, “Systemic Effects of Military Innovation and Diffusion,” Security Studies, Vol. 8, No. 4 (1999), pp. 102–122, https://doi.org/10.1080/09636419908429387; Horowitz, The Diffusion of Military Power, pp. 42–51; and Lena Andrews and Julia Macdonald, “Five Costs of Military Innovation,” War on the Rocks, February 18, 2016, https://warontherocks.com/2016/02/five-costs-of-military-innovation/.
Posen, The Sources of Military Doctrine, pp. 143–144, 156, 179–182, 205–208; and Kier, Imagining War, pp. 120–121. For similar interpretations, see Williamson Murray, “Armored Warfare: The British, French, and German Experience,” in Williamson Murray and Allan R. Millett, eds., Military Innovation in the Interwar Period (New York: Cambridge University Press, 1996), pp. 21–29; and John Stone, “The British Army and the Tank,” in Farrell and Terriff, The Sources of Military Change, pp. 193–194.
Karl-Heinz Frieser, The Blitzkrieg Legend: The 1940 Campaign in the West, trans. John T. Greenwood (Annapolis, Md.: Naval Institute Press, 2005).
Instead of being a “mechanized juggernaut,” the Wehrmacht was a semi-modern, semi-motorized army that relied primarily on feet, horses, and railroads for movement. The critical difference between 1917–1918 and 1940 was that the radio and the internal combustion engine accelerated the tempo of combat operations for some assault divisions. Otherwise, the German Army applied its traditional principles of operations. Richard L. DiNardo, Mechanized Juggernaut or Military Anachronism? Horses and the German Army of World War II (Westport, Conn.: Greenwood, 1991); Frieser, The Blitzkrieg Legend, pp. 329–339; and Stephen Biddle, “The Past as Prologue: Assessing Theories of Future Warfare,” Security Studies, Vol. 8, No. 1 (1998), pp. 44–49, https://doi.org/10.1080/09636419808429365.
John J. Mearsheimer, Conventional Deterrence (Ithaca, N.Y.: Cornell University Press, 1983), pp. 35–52. According to one strategic interpretation of blitzkrieg, Germany planned a series of rapid and decisive campaigns. But the German Army was arguably preparing for a prolonged total war like World War I. See Wilhelm Deist, “‘Blitzkrieg’ or Total War? War Preparations in Nazi Germany,” in Roger Chickering and Stig Förster, eds., The Shadows of Total War: Europe, East Asia, and the United States (Cambridge: Cambridge University Press, 2003), pp. 278, 282.
For revisionist accounts, see Robert H. Larson, The British Army and the Theory of Armored Warfare, 1918–1940 (Newark: University of Delaware Press, 1984); Harold R. Winton, To Change an Army: General Sir John Burnett-Stuart and British Armored Doctrine, 1927–1938 (Lawrence: University Press of Kansas, 1988); and J. P. Harris, Men, Ideas, and Tanks: British Military Thought and Armoured Forces, 1903–1939 (New York: Manchester University Press, 1995).
Timothy Harrison Place, Military Training in the British Army, 1940–1944: From Dunkirk to D-Day (London: Frank Cass, 2000), p. 96.
Aaron Rapport, “Hard Thinking about Hard and Easy Cases in Security Studies,” Security Studies, Vol. 24, No. 3 (2015), pp. 454–456, https://doi.org/10.1080/09636412.2015.1070615.
On asymmetric causal mechanisms, see Gary Goertz, Multimethod Research, Causal Mechanisms, and Case Studies: An Integrated Approach (Princeton, N.J.: Princeton University Press, 2017), pp. 70–71, 98–100.
For example, see Geoffrey P. Megargee, “The German Army after the Great War: A Case Study in Selective Self-Deception,” in Peter Dennis and Jeffrey Grey, eds., Victory or Defeat: Armies in the Aftermath of Conflict (Canberra: Big Sky, 2010), pp. 105–108; and Deist, “‘Blitzkrieg’ or Total War?” pp. 274–275.
The engagements before and after this timeframe offer more ambiguous and less useful evidence for the purposes of examining the impact of British armor innovation on combat performance. During Operation Compass (December 1940–February 1941), Britain's small Western Desert Force routed the large Italian Tenth Army in North Africa, but the British Seventh Armored Division only took part in a few minor skirmishes, while low morale and inferior tanks could as easily explain Italian defeat. The analysis does not extend beyond November 1942 because U.S. forces began landing in North Africa on November 8, which introduces additional factors related to coalition warfare that shaped British planning and performance.
On the numerical and qualitative balance between British and German matériel in North Africa, see John Agar-Hamilton and Leonard C. F. Turner, Crisis in the Desert, May–July, 1942 (Cape Town: Oxford University Press, 1952), pp. 10–13; and John Agar-Hamilton and Leonard C. F. Turner, The Sidi Rezeg Battles, 1941 (Cape Town: Oxford University Press, 1957), pp. 36–50, 53–56.
Agar-Hamilton and Turner, Crisis in the Desert, p. 11; and Agar-Hamilton and Turner, The Sidi Rezeg Battles, pp. 45–46.
Dan Reiter and Allan C. Stam, Democracies at War (Princeton, N.J.: Princeton University Press, 2002).
On German force cohesion, see Jasen Castillo, Endurance and War: The National Sources of Military Cohesion (Stanford, Calif.: Stanford University Press, 2014), pp. 44–93.
Percy Hobart to Director of Staff Duties, “A.F.V. Requirements in the Revised Field Force,” November 25, 1937, LH 15/11/7, Liddell Hart Center for Military Archives (LHCMA), London, United Kingdom; Michael Carver, Tobruk (London: Pan, 1964), pp. 266–267.
Ronald Lewin, The Life and Death of the Afrika Korps (New York: Quadrangle, 1977), pp. 11–13.
On mechanistic evidence, see Derek Beach and Rasmus Brun Pedersen, Process-Tracing Methods: Foundations and Guidelines (Ann Arbor: University of Michigan Press, 2019), pp. 165–172.
Brian Bond, British Military Policy between the Two World Wars (Oxford: Clarendon, 1980), pp. 94–97; and John Ferris, “Treasury Control, the Ten Year Rule, and British Service Policies, 1919–1924,” Historical Journal, Vol. 30, No. 4 (1987), pp. 874–875, https://doi.org/10.1017/S0018246X00022354.
Michael Howard, The Continental Commitment (London: Maurice Temple Smith, 1972; repr., 1989), pp. 74–79.
Keith Jeffrey, “Sir Henry Wilson and the Defence of the British Empire, 1918–22,” Journal of Imperial and Commonwealth History, Vol. 5, No. 3 (1977), p. 271, https://doi.org/10.1080/03086537708582487.
Jeffrey, “Sir Henry Wilson,” pp. 276–278.
Statistical Abstract for the United Kingdom, Cmd. 2207, 4489, 6232, ProQuest UK Parliamentary Papers, https://parlipapers.proquest.com/parlipapers/search/basic/hcppbasicsearch, accessed September 9, 2022.
Anthony Clayton, The British Empire as a Superpower, 1919–39 (Athens: University of Georgia Press, 1986), pp. 5–9; Douglas E. Delaney, The Imperial Army Project: Britain and the Land Forces of the Dominions and India, 1920–1945 (Oxford: Oxford University Press, 2018), pp. 166–167, 170–180; and George C. Peden, Arms, Economics, and British Strategy (New York: Cambridge University Press, 2007), pp. 148–150.
Quoted in Bond, British Military Policy, p. 24.
Ferris, “Treasury Control,” p. 880.
Howard, The Continental Commitment, p. 116; and George C. Peden, “The Burden of Imperial Defence and the Continental Commitment Reconsidered,” Historical Journal, Vol. 27, No. 2 (1984), pp. 410–415, https://doi.org/10.1017/S0018246X00017854.
Howard, The Continental Commitment, pp. 74, 107.
Daniel Todman, Britain's War, Vol. 1: Into Battle, 1937–1941 (New York: Oxford University Press, 2016), pp. 69–82; Alan Allport, Britain at Bay: The Epic Story of the Second World War, 1938–1941 (New York: Alfred A. Knopf, 2020), pp. 44–47; and Peden, Arms, Economics, and British Strategy, p. 132.
Henry Wilson to Secretary of State, June 9, 1920, WO 33/1004, British National Archives (BNA), Kew, United Kingdom. See also General Staff, “Military Liabilities of the Empire,” July 27, 1920, CAB 4/7, BNA.
Quoted in Jeffrey, “Sir Henry Wilson,” p. 289.
George Milne to Laming Worthington-Evans, November 2, 1927, WO 32/2823, BNA.
Quoted in N. H. Gibbs, Grand Strategy, Vol. 1: Rearmament Policy (London: Her Majesty's Stationery Office [HMSO], 1976), p. 64.
Winton, To Change an Army, pp. 17–23.
J. F. C. Fuller, “The Development of Sea Warfare on Land and Its Influence on Future Naval Operations,” RUSI Journal, Vol. 65, No. 458 (1920), pp. 289–290, https://doi.org/10.1080/03071842009421887.
J. F. C. Fuller, “Problems of Mechanical Warfare,” Army Quarterly, Vol. 3, No. 2 (1922), pp. 284–301; and Percy Hobart to Director of Staff Duties, March 22, 1935, LH 15/11/2, LHCMA.
Fuller, “Problems of Mechanical Warfare,” p. 287. See also J. F. C. Fuller, “Progress in the Mechanicalisation of Modern Armies,” RUSI Journal, Vol. 70, No. 477 (1925), p. 79, https://doi.org/10.1080/03071842509433766; and B. H. Liddell Hart, “Army Manœuvres, 1925,” RUSI Journal, Vol. 70, No. 480 (1925), p. 653, https://doi.org/10.1080/03071842509426075.
B. H. Liddell Hart, “The Development of the ‘New Model’ Army: Suggestions on a Progressive but Gradual Mechanicalisation,” Army Quarterly, Vol. 9, No. 1 (1924), p. 45.
Fuller, “The Development of Sea Warfare on Land,” pp. 283, 288; and Fuller, “Problems of Mechanical Warfare,” pp. 292–294.
Fuller, “Problems of Mechanical Warfare,” pp. 295–296.
David French, Raising Churchill's Army: The British Army and the War against Germany, 1919–1945 (Oxford: Oxford University Press, 2000), pp. 17–19.
Robin Prior and Trevor Wilson, Command on the Western Front: The Military Career of Sir Henry Rawlinson 1914–18 (Oxford: Blackwell, 1992), pp. 292–295, 311–315. This is not to overstate the uniformity of British tactics; see Aimée Fox, Learning to Fight: Military Innovation and Change in the British Army, 1914–1918 (Cambridge: Cambridge University Press, 2018), pp. 51–77.
French, Raising Churchill's Army, pp. 27–33.
On these difficulties, see Paddy Griffith, Battle Tactics of the Western Front: The British Army's Art of Attack, 1916–18 (New Haven, Conn.: Yale University Press, 1994), pp. 20–44; and Michael Hunzeker, Dying to Learn: Wartime Lessons from the Western Front (Ithaca, N.Y.: Cornell University Press, 2021), pp. 47–55.
Liddell Hart, “The Development of the ‘New Model’ Army,” p. 44.
Ibid., pp. 45–46.
J. F. C. Fuller, Lectures on F.S.R. III (London: Sifton Praed, 1932), p. 12.
J. F. C. Fuller, “Gold Medal (Military) Prize Essay,” RUSI Journal, Vol. 65, No. 458 (1920), p. 255, https://doi.org/10.1080/03071842009421885.
B. H. Liddell Hart to Lloyd George, “The Economic Efficiency of the Army,” April 18, 1929, LH 11/1929/6, LHCMA.
Fuller, “Gold Medal (Military) Prize Essay,” p. 263.
George Milne, “Comments on BM 796,” May 15, 1926, LH 15/12/4, LHCMA. On the debate over force composition, see Harris, Men, Ideas, and Tanks, pp. 211–214. On Fuller's refusal of command in the so-called Tidworth Affair, see J. P. Harris, “British Armour 1918–40: Doctrine and Development,” in J. P. Harris and F. H. Toase, eds., Armoured Warfare (London: B.T. Batsford, 1990), pp. 36–37.
Address by Chief of the Imperial General Staff (CIGS) to Experimental Mechanized Force, September 1927, LH 11/1927/5–16, LHCMA.
J. P. Harris, “British Armour and Rearmament in the 1930s,” Journal of Strategic Studies, Vol. 11, No. 2 (1988), p. 223, https://doi.org/10.1080/01402398808437339.
J. T. Burnett-Stuart, “Armoured Forces Training Report—1928,” Minute 1C, WO 32/2828, BNA.
Both criticisms were publicized by the military writer Victor Germains. See Victor Germains, The “Mechanization” of War (London: Sifton Praed, 1927); and Victor Germains, “‘Armoured Warfare’: A Plea for Common Sense,” Army Quarterly, Vol. 16, No. 2 (1928), pp. 361–374. The Army Quarterly dismissed his critiques as “theoretical rather than practical.” See “Editorial,” Army Quarterly, Vol. 16 (April 1928), pp. 178–179.
R. J. Collins, “Experimental Mechanized Force,” Journal of the Royal Artillery, Vol. 55 (1928), p. 33.
Army Training Memorandum, “Collective Training Period,” 1927, LH 15/3/115, LHCMA.
Ibid.
Burnett-Stuart, “Armoured Forces Training Report—1928.”
David French, “The Mechanization of the British Cavalry between the World Wars,” War in History, Vol. 10, No. 3 (2003), p. 306, https://doi.org/10.1191/0968344503wh279oa.
Shelford Bidwell and Dominick Graham, Fire-Power: British Army Weapons and Theories of War 1904–1945 (Boston: George Allen and Unwin, 1985), p. 179; and French, “The Mechanization of the British Cavalry,” p. 307.
David J. Childs, A Peripheral Weapon? The Production and Employment of British Tanks in the First World War (Westport, Conn.: Greenwood, 1999), pp. 141–170.
French, “The Mechanization of the British Cavalry,” pp. 306–309.
B. H. Liddell Hart, “Contrasts of 1931: Mobility or Stagnation,” Army Quarterly, Vol. 23, No. 2 (1932), p. 248.
B. H. Liddell Hart, “Armoured Forces in 1928,” RUSI Journal, Vol. 73, No. 492 (1928), p. 725, https://doi.org/10.1080/03071842809422496. The idea that future European armies would be relatively small was a common assertion made by armor innovators. See Harris, “British Armour 1918–40,” p. 39; and Percy Hobart to George Lindsay, November 10, 1933, Liddell Hart 1/376/5, LHCMA.
Liddell Hart, “Armoured Forces in 1928,” pp. 723, 727–728. But he was willing to consider a company of “land-marines” for “stalking and silent penetration.”
Liddell Hart, “Contrasts of 1931,” p. 244.
Liddell Hart, “Armoured Forces in 1928,” p. 729.
B. H. Liddell Hart, The Tanks: The History of the Royal Tank Regiment, Vol. 1: 1914–1939 (New York: Praeger, 1959), pp. 249–250, 253–254.
Liddell Hart, “Armoured Forces in 1928,” p. 722.
Ibid., p. 723.
French, Raising Churchill's Army, p. 29.
Address by CIGS to Experimental Mechanized Force, September 1927. To avoid provocation, the address was not publicly circulated.
George Milne to Laming Worthington-Evans, November 12, 1928, WO 32/2825, BNA.
These pamphlets were Mechanized and Armored Formations (1929) and an updated version, Modern Formations (1931). The latter considered possible field artillery support against prepared enemy positions but assumed that these operations would be rare. See Harris, “British Armour 1918–40,” p. 39.
Ibid., pp. 40, 42; and Larson, The British Army, pp. 156, 163.
E. K. Squires, “Note on the Composition of the Mobile Division,” October 11, 1937, Minute 4A, LH 15/11/7, LHCMA.
George Lindsay to Percy Hobart, November 17, 1933, LH 15/12/8, LHCMA. The difference between their concepts can easily be exaggerated. Both organizations lacked substantial organic support from arms other than the Royal Tank Corps. See Winton, To Change an Army, p. 178.
Harris, Men, Ideas, and Tanks, pp. 250–252.
By the mid-1930s, the British Army adopted a distinction between “cruiser” and “infantry” tanks. The armored division's armored brigade centered on mobile cruiser tanks. Heavy infantry tanks were organized in “army tank battalions” attached to infantry divisions but did not represent combined-arms integration. Trained and operated by the Royal Tank Corps, army tank battalions also tended to function independently against enemy armor as anti-tank weapons. On cruiser versus infantry tanks, see Harris, “British Armour and Rearmament in the 1930s,” pp. 221–228.
Richard M. Ogorkiewicz, Armoured Forces: A History of Armoured Forces and Their Vehicles (New York: Arco, 1960), pp. 59–60, 73–74.
French, Raising Churchill's Army, p. 42.
Martin Kitchen, Rommel's Desert War (Cambridge: Cambridge University Press), pp. 177–179.
Daniel Todman, Britain's War, Vol. 2: A New World, 1942–1947 (New York: Oxford University Press, 2020), pp. 262–263.
Agar-Hamilton and Turner, The Sidi Rezeg Battles, p. 35.
Barrie Pitt, The Crucible of War: Western Desert 1941 (London: Jonathan Cape, 1980), p. 302.
British weakness in combined-arms fighting is widely recognized. For example, see Williamson Murray, “British Military Effectiveness in the Second World War,” in Allan R. Millett and Williamson Murray, eds., Military Effectiveness, Vol. 3: The Second World War (New York: Cambridge University Press, 2010), pp. 110–113. For an overview of various unorthodox organizations and desert tactics, see Shelford Bidwell, Gunners at War (London: Arrow, 1972), pp. 170–184.
Bond, British Military Policy, p. 187.
Agar-Hamilton and Turner, The Sidi Rezeg Battles, p. 53.
Quoted in Niall Barr, Pendulum of War: The Three Battles of El Alamein (London: Pimlico, 2005), p. 57.
Agar-Hamilton and Turner, The Sidi Rezeg Battles, pp. 35, 47.
Quoted in Agar-Hamilton and Turner, Crisis in the Desert, p. 13.
Henry Maitland Wilson, Eight Years Overseas, 1939–1947 (London: Hutchinson, 1950), p. 28.
Claude J. E. Auchinleck, “Operations in the Middle East from 1st November 1941 to 15th August 1942,” London Gazette, January 15, 1948, p. 368.
Quoted in Correlli Barnett, The Desert Generals, 2nd ed. (Bloomington: Indiana University Press, 1982), p. 108.
Quoted in ibid., p. 109.
Agar-Hamilton and Turner, Crisis in the Desert, p. 11.
Agar-Hamilton and Turner, The Sidi Rezeg Battles, pp. 61–70; and Barnett, The Desert Generals, pp. 88–89.
Agar-Hamilton and Turner, The Sidi Rezeg Battles, pp. 65–66.
Barr, Pendulum of War, pp. 141–142.
I. S. O. Playfair, The Mediterranean and Middle East, Vol. 3: British Fortunes Reach Their Lowest Ebb (London: HMSO, 1960), pp. 351–352; and Barr, Pendulum of War, pp. 122–139.
Francis Tuker, Approach to Battle (London: Cassell, 1963), pp. 195, 199; and Barr, Pendulum of War, p. 230.
French, Raising Churchill's Army, p. 282. Maj. Gen. Sir Bernard Freyberg observed that the operational plan for the Second Battle of El Alamein “approximates to the battles fought in 1918.” Quoted in Barr, Pendulum of War, p. 261; see also pp. 409–410.
For details, see Barr, Pendulum of War, pp. 262–265, 289–293; Tuker, Approach to Battle, pp. 249–250; and Bidwell, Gunners at War, pp. 189–190.
Mark Johnston and Peter Stanley, Alamein: The Australian Story (South Melbourne, Australia: Oxford University Press, 2002), p. 204.
Barr, Pendulum of War, p. 404.
French, Raising Churchill's Army, pp. 274–285; and Barr, Pendulum of War, pp. 409–410.
On disruptive technology in military innovation, see Gautam Mukunda, “We Cannot Go On: Disruptive Innovation and the First World War Royal Navy,” Security Studies, Vol. 19, No. 1 (2010), pp. 124–159, https://doi.org/10.1080/09636410903546731.
Kier, Imagining War, p. 28.
Ibid., pp. 120–137.
French, Raising Churchill's Army, pp. 12–16, 35–36, 43; and Liddell Hart, “The Development of the ‘New Model’ Army,” p. 37.
Murray, “Armored Warfare,” pp. 22–24; Liddell Hart, The Tanks, pp. 199–201; and Bidwell and Graham, Fire-Power, pp. 190–191.
French, “The Mechanization of the British Cavalry,” p. 299.
Kier, Imagining War, p. 31.
Barnett, The Desert Generals, pp. 103–104; and Brian Bond and Williamson Murray, “The British Armed Forces, 1918–39,” in Millett and Murray, Military Effectiveness, Vol. 2, pp. 121–122.
David French, Military Identities: The Regimental System, the British Army, and the British People, c. 1870–2000 (New York: Oxford University Press, 2005), pp. 153–160.
Bond, British Military Policy, pp. 124–125, 181, 188.
J. P. Harris, “The British General Staff and the Coming of War, 1933–39,” in David French and Brian Holden Reid, eds., The British General Staff: Reform and Innovation, c. 1890–1939 (London: Frank Cass, 2002), pp. 177–181; T. R. Moreman, “‘Small Wars’ and ‘Imperial Policing’: The British Army and the Theory and Practice of Colonial Warfare in the British Empire, 1919–1939,” Journal of Strategic Studies, Vol. 19, No. 4 (1996), pp. 125, 127, https://doi.org/10.1080/01402399608437654.
Biddle, “The Past as Prologue.”
Daniel R. Lake, The Pursuit of Technological Superiority and the Shrinking American Military (New York: Palgrave Macmillan, 2019), pp. 1–7, 17–62.
Colin Dueck, Reluctant Crusaders: Power, Culture, and Change in American Grand Strategy (Princeton, N.J.: Princeton University Press, 2006), pp. 30–34.
John A. Alic, Trillions for Military Technology: How the Pentagon Innovates and Why It Costs So Much (New York: Palgrave Macmillan, 2007), pp. 49–106.
Ronald O'Rourke, Renewed Great Power Competition: Implications for Defense—Issues for Congress, CRS Report R43838 (Washington, D.C.: Congressional Research Service, 2022), pp. 13–22.
For a recent critique of the U.S. Marine Corps's Force Design 2030, see Charles Krulak, Jack Sheehan, and Anthony Zinni, “War Is a Dirty Business. Will the Marine Corps Be Ready for the Next One?” Washington Post, April 22, 2022, https://www.washingtonpost.com/opinions/2022/04/22/marines-restructuring-plan-scrutiny-generals/. For a defense, see Robert Work, “USMC Force Design 2030: Threat or Opportunity?” 1945, May 15, 2022, https://www.19fortyfive.com/2022/05/usmc-force-design-2030-threat-or-opportunity/.
Liam Collins and Harrison Morgan, “Affordable, Abundant, and Autonomous: The Future of Ground Warfare,” War on the Rocks, April 21, 2020, https://warontherocks.com/2020/04/affordable-abundant-and-autonomous-the-future-of-ground-warfare/.
Bernard Brodie, A Layman's Guide to Naval Strategy (New York: Oxford University Press, 1942), p. 177.