The Führerprinzip has not been confined to Nazi Germany. The cult of the strong leader thrives in many authoritarian regimes and has its echoes even in contemporary democracies. The belief that the more power a president or prime minister wields the more we should be impressed by that politician is a dangerous fallacy. In authoritarian regimes, a more collective leadership is a lesser evil than personal dictatorship. In countries moving from authoritarian rule to democracy, collegial, inclusive, and collective leadership is more conducive to successful transition than great concentration of power in the hands of one individual at the top of the hierarchy. Democracies also benefit from a government led by a team in which there is no obsequiousness or hesitation in contradicting the top leader. Wise decisions are less likely to be forthcoming when one person can predetermine the outcome of a meeting or foreclose the discussion by pulling rank.
The cult of the strong individual leader remains alive and well, even in democracies. Less surprisingly, but with more dire consequences, it flourishes in authoritarian regimes. Within dictatorships, vast resources are devoted to portraying the top leader as the embodiment of strength and wisdom, setting him (political dictatorship being overwhelmingly a masculine preserve) far above any colleagues or potential rivals. For the autocrat, as distinct from the people, both the accumulation of personal power and the creation of a personality cult make sense, at least in the short term. It is altogether more puzzling when citizens who have some choice in the matter–those who live in a democracy–look to, and even yearn for, a strong leader to take the big decisions on their behalf. Yet effective leadership is seldom one-person leadership, and strength–as defined by the maximization of power vis-à-vis colleagues, political party, and governmental institutions–is not synonymous with effectiveness.
There is reason also to be wary of “charismatic” leaders, especially if we follow Max Weber–who first elaborated the concept and extended its application from religious to political leaders–in holding that “genuine charismatic domination … knows of no abstract legal codes and statutes and of no ‘formal’ way of adjudication.”1 Charisma is very much in the eye of the beholder and, as Weber noted, however god-given a charismatic leader's claims, this sheen rubs off if the leader fails to deliver. Charisma is not a lifetime endowment but rather an attribution, resting on the qualities that followers project upon the leader at a given time. Our approval of a charismatic leader depends very much on whether we approve of the goals toward which that person's leadership is directed. Such a leader may be a Hitler or Mussolini or, on the contrary, a Gandhi or Martin Luther King, Jr. Following a charismatic leader involves suspending, to a large extent, one's critical faculties and independent judgment. This has adverse consequences in the long term even for the leader, and is debilitating for the follower. It is seldom, even when the values of the charismatic leader are benign, conducive to wise and accountable government. No one person is likely to embody all of the qualities desirable in a paragon of a leader. Since, indeed, leadership is highly contextual, the attributes most valuable in one situation are liable to be of very limited use in another. We would do well to replace our obsession with the leader by an appreciation of the advantages of power shared within a leadership team.
Much the greater part of the literature on political leadership focuses on the holders of political power, and this essay will be only a partial exception to that general rule. Nevertheless, it is worth distinguishing at the outset political leadership from political power. Power-holders can quite quickly come to believe they are gifted leaders because of the readiness with which people respond to their suggestions and commands. Yet the responsiveness of “followers” owes a great deal to the influence over their career prospects that the head of government or party leader possesses. How many leaders of major political parties had more than a handful of followers before it looked as if they might become the top leader, at which point calculations of benefit from future patronage come into play? The answer is: not many. Once a party leader is ensconced as head of government, colleagues' receptiveness to his or her wishes tends to depend heavily on the inequality of the power relationship.
Leadership in its purest form is most clearly evident when all members of the group are “on an equal footing” but there is, as Adam Smith observed, “generally some person whose counsel is more followed than that of others.”2 We need to draw a clear distinction between a leader other people wish to be guided by, and who attracts a spontaneous following, and a power-wielding leader who has the prerogative of promoting or demoting and who has an armory of other favors to bestow or withhold. Examples of outstanding political leadership divorced from positions of political power are not hard to find.
In the Soviet Union of the post-Stalin but pre-perestroika era, the moral leadership of Aleksandr Solzhenitsyn and of Andrei Sakharov, who were united by their civic courage and in their rejection of Marxism-Leninism but divided by political orientation, had a significant impact on, and following among, different parts of the Russian intelligentsia. The Soviet authorities were sufficiently worried by this writer and this physicist to deport Solzhenitsyn from the country and to send Sakharov into internal exile in Nizhny Novgorod (or Gorky, as it was called at that time). For celebrated examples of more overtly political leadership disconnected from governmental power, we need look no further than Mahatma Gandhi and his promotion of nonviolent struggle for Indian independence from British imperial rule; Martin Luther King, Jr., whose inspirational leadership of the civil rights movement in the United States helped pave the way for the Civil Rights Act of 1964 and the Voting Rights Act of 1965 (although that legislation owed a huge debt also to the presidential leadership of Lyndon B. Johnson); and Aung San Suu Kyi, the 1991 winner of the Nobel Peace Prize, whose long campaign for democracy in Burma (Myanmar) condemned her to many years of house arrest that ended only in 2010. It brought her electoral success in 2015 and, at long last, something resembling political power in 2016.
An outstanding contemporary example of political leadership is that of the youngest-ever winner of the Nobel Peace Prize, Malala Yousafzai, from the Swat Valley of Pakistan. She was seventeen years old when she became a Nobel laureate. Her campaign for girls' education, in the face of the obscurantist hostility of the Taliban, led to the assassination attempt that almost killed her in 2012, when she was fifteen. After numerous medical operations in both Pakistan and Great Britain, Malala resumed her campaigning, although now doing so as a schoolgirl in Birmingham, England. She has said that “I don't want to be thought of as ‘the girl who was shot by the Taliban’ but ‘the girl who fought for education.’” In her speech to the United Nations on her sixteenth birthday, she described “our books and our pens” as “our most powerful weapons” and proclaimed: “One child, one teacher, one book and one pen can change the world.”3 Herself a Muslim, Malala Yousafzai carried her activism to Nigeria in the attempt to galvanize the government of that country to do more to rescue the girls who had been kidnapped from their predominantly Christian secondary school by the radical Muslim terrorist group Boko Haram in April 2014. More recently, she has campaigned against female genital mutilation as practiced by some of her co-religionists.
Writers on leadership who focus as much on followers as on leaders, and who study the interaction between the two, provide a more realistic account of the political process than those who are almost exclusively obsessed with the top person. To pay due attention to followership is not, however, enough. When we observe the top team within a government or political party, we shall almost invariably find people who cannot, by any stretch of the imagination, be regarded as “followers” of the top leader. To take the example of the George W. Bush administration, does it make sense to describe Colin Powell, Dick Cheney, and Donald Rumsfeld as followers of Bush? Hardly. The president, by virtue of his office, had a higher authority, but that is far from the same as these Cabinet members seeing him as the possessor of superior wisdom or judgment. Similarly, successive secretaries of state in the Barack Obama administration, Hillary Clinton and John Kerry, who have been important players in their own right, cannot meaningfully be described as followers of Obama. In a democracy there usually are within the top leadership team people of high political standing who are relatively independent of the top leader–and so there should be. They may or may not constitute “a team of rivals,” but it is essential that they should feel free to question the judgment of the top leader in any particular instance and be ready to advance contrary arguments.
Although this essay focuses mainly on political leadership during processes of democratization and in democracies, it is worth paying attention to the Führerprinzip in the country where the term was first employed, and to authoritarian or totalitarian regimes more generally. When in 1930 Otto Strasser, a would-be ideologist of German National Socialism, suggested to Adolf Hitler that “A Leader must serve the Idea”–since the idea was eternal and the leader (for obvious biological reasons) was not–Hitler told him that this was “outrageous nonsense” and an example of “revolting democracy,” for “the Leader is the Idea, and each party member has to obey only the Leader.”4 The “leader principle” was fundamental to Nazi doctrine, and while it “worked” for a time inasmuch as Hitler went on to consolidate his power, gain a vast following, and achieve military successes, it was the inability of informed subordinates to question his judgment that fostered the miscalculation that, more than any other, led to his downfall and that of the Nazi regime.
Although it is not a particularly salient component of popular perceptions of World War II either in Great Britain or (still less) in the United States, there is no doubt that the most substantial contribution to the defeat of Nazi Germany in the ground war in Europe was made by the Soviet army. The Soviet war dead, including civilians, were vastly greater in number than those of any other allied country; indeed, some five times more Soviet than German citizens perished.5 Of the German soldiers who lost their lives in the war, more than three-quarters of them did so at the hands of their Soviet adversary. Thus, when Hitler launched the German invasion of the USSR in June 1941, unilaterally abrogating the Nazi-Soviet Pact of nonaggression, he made a fateful error. Although the invasion was delayed for logistical reasons until the following year, it was Hitler alone who in 1940 took the decision to break the pact. His generals shared his detestation of Soviet communism and likewise underestimated the potential of the Red Army, yet they had misgivings about the desirability of war on another front. Such qualms, however, they suppressed not only to protect their careers, but also because they did not think that they were wiser than the Führer.6 And after the speedy fall of France following the German invasion, Hitler informed his principal military advisers that “a campaign against Russia would be child's play.”7
Iosif Stalin, especially during the last twenty years of his life, had acquired a personal power and cult of personality that were scarcely less exalted than Hitler's. This extended even to a life-or-death power over senior figures in the ruling party: namely, members of the Central Committee of the Communist Party of the Soviet Union and of its inner circle, the Politburo. Nevertheless, Stalin was not quite so free of ideological constraints as was Hitler. He could not explicitly reject Leninist concepts. As Alan Bullock aptly observed, for Nazi Germany, “ideology was what the Führer said it was,” whereas “in the case of Stalin it was what the General Secretary said Marx and Lenin said it was.”8 Within Soviet society and even inside Stalin's inner circle there was a reluctance similar to that which prevailed in Nazi Germany to contradict the vozhd' (leader).9 Again, this was not only because to do so would be life-threateningly dangerous, but because, to a greater or lesser extent, members of the political elite, as well as ordinary Russians, subscribed to the sedulously promoted notion of Stalin's genius.
Adam Smith, whose insights on society and government (as distinct from his economic analysis and moral philosophy) were until recent times largely overlooked, noted that “gross abuse” of power and “perverseness, absurdity, and unreasonableness” were more liable to be found under the rule of “single persons” than of larger assemblies.10 Both Hitler and Stalin exemplified such perversity and unreasonableness not only in the murderous policies they pursued but also through their profound failures of judgment. Thus, in June 1941 the Soviet leader made a catastrophic error that was almost on a par with that of the German dictator. Whereas Hitler had made the huge mistake of invading the Soviet Union, Stalin's error was to convince himself, in the face of much evidence to the contrary from a variety of sources, that Germany would not attack Russia at any time in that year. And once Stalin reached that conclusion, there could be no dissension in Moscow. On June 21, 1941, the day before German troops launched their blitzkrieg on the Soviet Union, the head of the security police, Lavrenti Beria, issued an instruction that four NKVD officers “be ground into labour camp dust” for having persistently sent reports of an impending Nazi invasion. “I and my people,” wrote Beria to Stalin on the same day, “have firmly embedded in our memory your wise conclusion: Hitler is not going to attack us in 1941.”11
While all authoritarian regimes, by definition, suffer from lack of accountability and from censorship and self-censorship, oligarchy is generally a lesser evil than autocracy. A more collective leadership is less likely than personal dictatorship to commit state-sponsored murder on an industrial scale. A brief glance at the history of the two largest, and most important, communist states, the Soviet Union and the People's Republic of China, helps to illustrate the point. The Soviet Union in the 1920s and in the post-Stalin era was never less than a highly authoritarian state (until, that is, the system-transformative change of the late 1980s), as was China in the first half of the 1950s and in the years of more enlightened absolutism following Mao Zedong's death in 1976. Yet these periods in the two countries' histories were far less politically oppressive, lethal, and arbitrary than the years of Stalin's and Mao's overwhelming personal ascendancy (in the Soviet case, roughly the twenty years preceding Stalin's death in 1953; in China, from the late 1950s until Mao's death in 1976).
The worst of the Soviet purges took place during the time of Stalin's dictatorship over the Communist Party as well as over the rest of Soviet society. The purges reached their peak in 1937–1938, when almost 1.6 million people were arrested, of whom approximately 682,000 were shot.12 Millions more died, directly or indirectly, as a result of the policies pursued by Stalin. In China, during the years of Mao's supreme power, barbaric means were used in the attempt to reach wildly impractical utopian goals. The Great Leap Forward of the late 1950s and early 1960s sidelined the institutions of China's central government, created vast “people's communes” in the countryside, and substituted mass mobilization for the technical expertise of engineers and technologists. Along with the purposeful killing of tens of thousands, who dragged their feet rather than make the Great Leap, at least thirty million people–forty-five million according to a high-end estimate, but one based on archival research–died, mainly of starvation as a direct or indirect consequence of this attempt to fast-track China into communism.13 Mao's other infamous brainchild, the Great Proletarian Cultural Revolution, killed fewer people (between 750,000 and one-and-a-half million died as a direct result of it), but it lasted much longer, from the mid-1960s until Mao's death in 1976, raging most harshly in the second half of the 1960s. The Cultural Revolution affected the political elite and the most educated segment of Chinese society, and the urban population more generally, to a greater extent than did the earlier revolution from above. Both the Great Leap and the Cultural Revolution were unmitigated disasters, and it was revulsion against this turmoil that enabled pragmatists and reformers to gain ascendancy in the post-Mao era, with Deng Xiaoping playing a decisive role.14
If the most that can be said of collective leadership as compared with the dictatorship of one person in authoritarian regimes is that the former is a lesser evil, the general point can be made much more positively when we consider transitions from authoritarian rule to democracy. While there are a number of factors conducive to the success or failure of attempts at democratic transition, among them political-cultural inheritance and geopolitical environment, the characteristics and values of the principal leaders of the attempt to accomplish systemic change can make a decisive difference. There is a body of evidence, drawn especially from the comparative study of Latin American countries, which indicates that in the transition and early posttransition period, the normative commitment of leading politicians to democracy is of particular relevance for its attainment. Politicians who place great value on democracy as such are “less likely to understand policy failures” of the new postauthoritarian pluralist politics–following the dismantlement of the old order–“as regime failures,” and they have longer time horizons than those who do not share their commitment to democratic values.15
There are also good reasons to conclude that collegial, inclusive, and collective leadership is more conducive to successful transition to democracy than great concentration of power in the hands of one individual at the top of the political hierarchy, regardless of whether that person is a prime minister or president. A focus exclusively on institutional arrangements, involving linkage of successful democratic transition to the choice of a parliamentary rather than a presidential system, or to a particular type of semipresidentialism, is attractive because it provides the possibility of measurement and gives at least the illusion of precision. The results of such studies, however, have been contradictory and inconclusive, not least because they leave out of the analysis factors less readily measurable but still more important–the values of the top leader and of the leadership group and also the style of leadership of the head of government in a democratizing regime.
With good reason, scholars view Spain as an outstanding example of transition to democracy, following the long years of Franco's authoritarian rule. Adolfo Suárez, the Spanish prime minister who was appointed by King Juan Carlos in 1976 and who held that post for just five years, had a consensus-building style that succeeded in bridging what had appeared to be irreconcilable differences in Spanish society and among competing political groups. In a television address justifying the legalization of the Communist Party, Suárez proclaimed his belief that the Spanish people were mature enough “to assimilate their own pluralism.”16 Of equal significance, the most important opposition personality, Felipe González, the leader of the Socialist Party and future prime minister, was firmly committed to democratic values. If Suárez was the key political actor in Spain's transition to democracy, González was no less surely the most crucial figure in its consolidation.
It was an integral part of Suárez's approach to leadership to get Spain's new constitution accepted as a result of national accord, rather than by using all the instruments of power at his disposal to drive it through by a simple majority. In this strategy of inclusiveness, he was remarkably successful. The constitution was approved almost unanimously in parliament and by nearly 90 percent of the population. Suárez was by no stretch of the imagination a charismatic leader, nor was he a “strong” leader in the sense of maximizing his power and dominating all those around him. His style was collegial and he made significant concessions and compromises in order to get agreement on important issues, not least to persuade long-standing Republicans to accept that a constitutional monarchy had a place in the new political order. The Socialist Party eventually acquiesced in exchange for Suárez agreeing to their demands for abolition of the death penalty and reduction of the voting age to eighteen.17 This turned out to their advantage, and that of other Spanish democrats, when the king played a pivotal role in ending the 1981 attempted military coup against the new democratic regime.
Inclusive leadership and a commitment to dialogue were important also in the successful transitions to democracy of Chile and Brazil. The international environment changed beyond all recognition in the second half of the 1980s as a result of the transformation of the Soviet Union, which undermined the claims of right-wing authoritarian regimes to international legitimacy as bulwarks against the spread of communism. In Chile, this change in the external environment, including a shift of attitudes in Washington, made Augusto Pinochet's oppressive regime more vulnerable. The Chilean autocrat's loss of a plebiscite in 1988 was followed by victory for the Christian Democratic political leader Patricio Aylwin in 1989 and a return to democratic civilian rule in 1990. Aylwin sought dialogue with union leaders to get their agreement to moderate their economic demands and they, in turn, compromised in pursuit of the more fundamental goal of reestablishing and consolidating democratic rule. Noting that “throughout my political life I have always worked well in teams,”18 Aylwin proved to be a successful coalition builder, and he played an important part in reducing the dangerous level of polarization in Chilean politics.
In Brazil's transition from military authoritarian rule, both the international context and enlightened leadership were likewise crucial. That leadership was provided, most impressively, by Fernando Henrique Cardoso, president from 1995 until 2003, who was both a distinguished social scientist and an astute politician. Summarizing the successes and shortcomings of Brazil's transition to and consolidation of democracy, Cardoso observed:
We were able to converge around the main objectives despite the plurality of visions and interests of the different opposition parties that rose up. In this way, a culture of mutual negotiation and dialogue was reinforced as an aspect of Brazilian democracy. But this can deteriorate into co-optation and the accommodation of interests, weakening democratic politics, discouraging the citizenry, and compromising the state's ability to engage in republican action. The style of the transition conditions democratic governance, for better or worse.19
The Chilean academic and politician Sergio Bitar and the American specialist on Latin American politics Abraham Lowenthal undertook a series of revealing interviews with leaders of transitions from authoritarian rule in Europe, Latin America, Asia, and Africa and reached significant conclusions. They stress that a common factor among the leaders they interviewed was a commitment to inclusionary and accountable governance and a fundamental preference for peaceful and incremental, rather than violent or convulsive, transformation. They shared power, rather than hoarding it, and relied heavily on capable associates, some of whom had specific expertise that they themselves might not possess. Although they sometimes made key choices personally, most of these leaders “concentrated on building consensus, forging coalitions, constructing political bridges, and communicating consistently with key constituencies and the broad public.”20
The most momentous systemic change of all in the past half-century was that of the Soviet Union. The second half of the 1980s witnessed the historic role that could be played by a leader who both acquired the most powerful political office and who had different values from those dominant in the regime hitherto. The Gorbachev era was one of movement from government by fiat and fear to governance by persuasion and societal empowerment. Fundamental change of the Soviet political system was accompanied by a transformation of Soviet foreign policy, including enunciation in 1988 of the principle that the people of every country were entitled to decide for themselves what kind of political and economic system they wished to live in. One year later these words became deeds, facilitating the democratization of half a continent. The countries of Eastern and Central Europe, whose sovereignty had previously been strictly limited by their Soviet overlords, became non-communist and independent while Soviet troops obeyed orders from Moscow to remain in their barracks.
In many respects Mikhail Gorbachev led from the front, especially during the first four years of perestroika; yet at the same time, government became more collegial and collective, partly from necessity. The general secretary of the Central Committee of the Communist Party of the Soviet Union had significant levers of power at his disposal, but he enjoyed a high security of tenure only so long as he did not challenge any of the basic norms of the system. Gorbachev, however, embarked on a process of change in 1985 that had become increasingly fundamental by 1988–1989, with glasnost by then virtually indistinguishable from freedom of speech and (increasingly) publication, and with contested elections introduced for a legislature with real power. Thus, the last leader of the Soviet Union was running grave risks. Until the creation in March 1990 of an executive presidency, to which Gorbachev was indirectly elected by the new legislature, the Soviet leader could have been removed from office at a moment's notice by a vote in the Politburo, speedily endorsed by the Central Committee. Only when in 1990 Communist Party organs ceased to be the highest institutions of state power did Gorbachev have some protection from removal from power by his Politburo colleagues. The threats to his leadership were by then, however, coming thick and fast from other quarters.21
For the first five years of his leadership, it made sense for Gorbachev patiently to persuade his Politburo colleagues to go along with policy innovation that was far in excess of anything they had contemplated, and which was to become threatening to their interests. Accepting collective responsibility, following lengthy discussions, for new policies and concepts weakened their resistance, which would have been stronger had Gorbachev simply bypassed them. Moreover, the change in the political system brought countervailing forces into play, including public opinion. Even so, in a highly ideologized system, Gorbachevian formulations, such as, from 1987, socialist pluralism, which by 1990 had become political pluralism, met with resistance in the party leadership.
Nevertheless, as even one of the more conservative members of the Politburo, Vitaliy Vorotnikov, noted, Gorbachev gave everyone around the table a chance to speak, and he listened to their arguments. His style of chairing the meetings, as transcripts of the proceedings attest and as Vorotnikov, among others, has confirmed, was “democratic and collegial.” If there was significant disagreement, Gorbachev would propose a change of wording, adopt a middle position, or postpone a decision until a later meeting, although in the final analysis, Gorbachev more often than not would get his way.22 Even those to whom in the early years of his general secretaryship Gorbachev could simply have issued instructions, he sought, rather, to persuade. The head of Soviet space research, Roald Sagdeev, had opportunities at that time to observe Gorbachev in small group discussions. The general secretary, he recalled, overestimated his admittedly formidable powers of persuasion, apparently believing that “he could persuade anyone in the Soviet Union of anything.” Yet what was especially significant, Sagdeev aptly observes, was precisely that Gorbachev attempted to persuade his interlocutors, since this approach represented a sharp break with Soviet tradition. Hitherto, senior party officials “never tried to change people's genuine opinions or beliefs, but simply issued an instruction and demanded that it be followed.”23 Sagdeev's personal journey was just one illustration of the dramatic scale of change during the period of less than seven years of perestroika. In what would earlier have been unthinkable for a Soviet scientist with close ties to the military-industrial complex, he became the husband of Susan Eisenhower, granddaughter of President Dwight D. Eisenhower, and was able freely to move to the United States in early 1990.
Persuasion is no less central to political life in established democracies than in regimes in transition from authoritarian rule. Democracy itself has been described as “above all the name for political authority exercised solely through the persuasion of the greater number.”24 More concretely, as Richard Neustadt famously put it: “Presidential power is the power to persuade.” Although a simplification, the statement encapsulated an important truth, and drew on President Harry Truman's remark that he spent his days “trying to persuade people to do things they ought to have sense enough to do without my persuading them. … That's all the powers of the President amount to.”25 Whereas in a number of consolidated democracies, and not only in countries in transition from authoritarian rule, there is a danger of heads of government concentrating excessive power in their hands, this is scarcely a serious problem in the United States, with the partial exception of some foreign policy areas. It is exceedingly difficult for an American president to become over-powerful, given the constitutional constraints, institutional obstacles, and powerful interests that confront him (or her). U.S. presidents have little option but to try to work collegially, given the strength of the other components of the American political system. They may wield greater power within the executive than a prime minister in a parliamentary system typically does, but there is a strong convention that the president does not readily dismiss members of the Cabinet. Moreover, American presidents are usually weaker vis-à-vis the legislature than their prime ministerial counterparts.
Yet there is a hankering in the United States for more assertive leadership, as well as ambivalence when it is provided. The chief U.S. commentator of the Financial Times, Edward Luce, recently wrote, “One of the loudest complaints of Mr Trump's followers is they believe America lacks a strong leader.” He immediately added, “If Mr Trump is the answer, there is something wrong with the question.”26 The search for a strong leader–in the sense of one who will dominate all and sundry–is indeed the pursuit of a false god. But Luce is correct when he goes on to note that there is still a case for a president taking the initiative in a political system that has seen as much gridlock as the United States has experienced in recent years.
On the vexed issue of gun control, President Obama has, in fact, increasingly led from the front, in the face of a gun lobby that attributes the prevalence of death by shooting merely to “bad people” in the United States without explaining why, then, there should be such a spectacularly higher incidence of evil among the American population than, for example, in the United Kingdom, Western Europe, or Australia. Obama also led from the front on health care, but was more sparing in the use of his “power to persuade” Congress than was a Lyndon Johnson. Of course, the gulf between Obama's and Johnson's ties to and knowledge of every member of the House and Senate was immense, but with his constant telephone calls, plus invitations to the White House, Johnson used to the full his considerable powers of persuasion and cajolery. If Obama has appeared less constantly engaged, his wariness of entanglement in foreign conflicts, and reluctance to accept that American leadership should consist “of us bombing somebody,” is one vital area where his style contrasts favorably with the way Johnson was sucked into a disastrous war in Vietnam and did not know how to get out.27
The demand for a strong leader is heard in many countries, including Britain, where over the past half-century there has been an increasing focus in the mass media on the person of the prime minister (and on the leader of the main opposition party), rather than on the government as a whole or on ministers responsible for particular areas of policy. Newspaper articles have come to discuss prime ministers in much more personal terms, and with reference to their perceived leadership qualities.28 Television has accentuated the focus on the top leader, who now has to be viewed going to the scene of a disaster, such as a flooded town, looking determined as he promises that everything will be done to avoid such devastation in the future. Similarly, on one currently controversial issue, whether or not London's Heathrow airport should open a third runway, the Financial Times quotes an “official close to the process” as saying: “Only David Cameron knows what he will finally decide to do.”29 But why should the prime minister “finally” decide this question? There is a secretary of state for transport and also a Cabinet subcommittee on aviation, for the issue, with its environmental as well as economic dimensions, is politically sensitive. That suggests that the matter should, “finally,” be debated and decided by the whole Cabinet. Perhaps collective responsibility will remain a political reality and the decision will emerge from Cabinet discussion rather than by prime ministerial ruling. At best, then, the political discourse is misleading. At worst, prime ministers are getting too big for their boots and treating colleagues in whom executive powers have been vested as if they were but advisers.
Some authors, who argue that heads of government have gained in power as well as visibility over the past half-century, see this as a positive development: “By focusing attention on the prime minister as an individual who is accountable for the government's collective performance, the public finds it easier to deliver reward or punishment, particularly when compared with an abstract collectivity.”30 This is very doubtful. There has been a long-term decline in voter turnout in general elections in the United Kingdom over the postwar period. Voters in 1945 or in the 1950s (when in 1950 and 1951 the turnout was as high as 84 percent and 82.5 percent, respectively) did not have any trouble in voting for or against a Labour government. We do not have survey data on the relative popularity of Winston Churchill and Clement Attlee in 1945, but given that acclaim for Churchill's wartime leadership crossed party boundaries and that victory of the Allies in World War II, in which Churchill had counted as one of the “Big Three,” was the high point of his career, it is a reasonable assumption that he would have had more personal support than did Attlee that summer and would have prevailed if votes had been cast primarily for leaders rather than for parties and policies. In fact, the election resulted in a landslide victory for the Labour Party.
The greater prominence accorded prime ministers and party leaders in postwar Britain did not translate into votes primarily for the leader, rather than for the party. Harold Wilson, the Labour leader and outgoing prime minister, was more popular in 1970 than the Conservative leader, Edward Heath, but the Conservatives won the election comfortably enough. Although commentators write of Margaret Thatcher's triumph over James Callaghan in the 1979 general election, Callaghan enjoyed a popularity lead of more than twenty points over Thatcher on the eve of the poll.31 The vote was against the Labour government, which had become unpopular during a “winter of discontent” marked by industrial unrest, and a victory for the Conservative Party, rather than a personal accomplishment of its leader. In contrast, in 1983, a year after the successful Falklands war, Thatcher polled well ahead of the policies of her party.32
More commonly, of course, support for a party and for the party's leader go together. Although it has been hypothesized that the personality of the party leader would be most important for people with a weak sense of party identification, the evidence points the other way. Attachment to the party label determines to a large extent the perception of particular leaders, with party loyalists the most attached to the team captain.33 Having a popular leader is, of course, a plus for a political party and, in a closely contested election, may have real electoral significance. It is, nevertheless, rare for the personality and popularity of the top leader to make the difference between victory and defeat in a general election.
Exaggeration of the electoral impact of party leaders in parliamentary democracies is less serious than the notion, regularly encountered in the mass media, that a strong leader–who maximizes his or her personal power and attempts to take all the big decisions in different areas of policy–exemplifies the most successful and admirable type of leadership. There are only twenty-four hours in the day of even the strongest leader, and the more that person tries to do individually, the less time he or she has to focus on and to understand the complexity and nuances of each issue. A prime minister's personal aides are usually among the most enthusiastic supporters of placing ever greater power in the hands of the head of government. That is hardly surprising, for they are the main beneficiaries of a leader cult and of concentration of power in the leader's office. The more one top leader is set apart from other elected politicians, the greater the independent influence–and de facto power–acquired by his or her nonelected advisers.
A case in point is Jonathan Powell, who was chief of staff to Tony Blair throughout Blair's premiership. Before he entered 10 Downing Street as Blair's right-hand man, Powell expressed the wish to curb the independence of individual ministers and government departments, and to move to what he called a “Napoleonic system” of government.34 Reflecting on his years at the heart of government, after intraparty pressure had forced Tony Blair to cede the premiership to Gordon Brown, Powell made a sustained effort to portray Blair as a strong leader and Brown as weak. His underlying assumption was that Machiavelli's maxims for a prince operating within an authoritarian system are no less applicable, with suitable updating, for a democracy. While Machiavelli and Napoleon may be useful mentors for an autocratic leader within an authoritarian regime, they are highly dubious models for political leaders in a democracy. It may be assumed also that Powell would not wish the Labour Party leader elected in 2015 to follow his and Machiavelli's precepts on the maximization of his power, since Jeremy Corbyn abhors much of what Blair stood for.
Were we to draw a graph of the extent to which personal power has been hoarded and wielded by the various British prime ministers over the last hundred years, it would not, however, show an upward curve of increasing power, but zigzags. David Lloyd George, almost one century ago, and Neville Chamberlain, in the late 1930s, wielded more individual power vis-à-vis their colleagues than did the great majority of their post–World War II successors. A comparison over time would also not show a positive correlation between prime ministerial domination of Cabinet colleagues and of the policy process, on the one hand, and governmental achievement, on the other. The two postwar British governments that made the biggest difference to the country they ruled–they can be described as redefining governments in the sense that they redefined the limits of the possible in UK politics, and introduced radical change–were the Labour government of 1945–1951, headed by Clement Attlee, and the Conservative government of 1979–1990, under the leadership of Margaret Thatcher. The immediate postwar Labour government set the political agenda for a generation until it was challenged fundamentally by the Conservative government of Margaret Thatcher.
The leadership styles of Attlee and Thatcher could scarcely have been more different. Attlee neither dominated the policy process nor aspired to do so. His main achievement was to keep a strong team together–a group of people of independent political standing, of great and varied experience, divergent views, and personal animosities and rivalries. Attlee played a coordinating rather than domineering role. Individual ministers had autonomy, subject to their clearing important issues of principle with the appropriate Cabinet committee or with the Cabinet as a whole. With the passage of time, and partly because Attlee was such an unflamboyant politician, the nature and effectiveness of the collegial and collective style of leadership of the radical government he headed have receded not only from public consciousness, but even from the heads of many British political commentators.
The creeping-in of the idea in Britain that the prime minister should be the dominant policy-maker owes a lot to the premiership of Margaret Thatcher. In her own terms–what she set out to achieve and the extent to which she met those objectives–she was a successful prime minister, and undoubtedly a strong one. The disadvantages, however, of an overly mighty head of government became increasingly apparent the longer she was in office. Sir Geoffrey Howe, whose House of Commons speech in 1990 explaining his resignation from the government triggered Thatcher's removal from the premiership by her own Conservative colleagues, later noted how the prime minister had come to dominate the reactions of ministers and officials to such an extent that meetings in Whitehall and Westminster were “subconsciously attended, unseen and unspoken” by her. He added: “The discussion would always come round somehow to: how will this play with the prime minister?”35
That illustrates a major flaw of the “strong leader” who so intimidates his (or in this case, her) colleagues that they engage in self-censorship and themselves rule out policy options that might displease the leader. As no leader in a democracy was ever selected because he or she was believed to have a monopoly of wisdom, it defies common sense and is at odds with democratic values for senior politicians to subordinate their own judgments to the perceived predilections of the top leader. Eventually, of course, Thatcher's senior colleagues rebelled, and so her style of leadership–notwithstanding her considerable, but highly controversial, achievements while she occupied 10 Downing Street–led to her political demise.
In any government, of course (including that headed by Margaret Thatcher), policy is made by a great many people, not least by the departmental heads (secretaries of state, ministers) in whom executive power is vested. A president or prime minister can do much to set the tone, but political commentary, especially in the mass media, focuses excessively on the head of government. Thus, it is common in the United Kingdom to find everything that was done between 1997 and 2007 attributed to the prime minister, Tony Blair. Yet the most far-reaching innovation of that Labour government lay in its constitutional reform: the creation of a Scottish parliament and government; the formation of a Welsh assembly and executive; devolved government and a power-sharing agreement in Northern Ireland; the passing of the Human Rights Act; the introduction of a Freedom of Information Act; and House of Lords reform (which, while incomplete, rid the legislature of 90 percent of the hereditary peers).
Of those reforms, Blair played a major role only in the Northern Ireland settlement. Indeed, he was unenthusiastic about several of the others. More important in their formulation, and as chairman of the relevant Cabinet committees, was an unsung member of the Cabinet, Derry (Lord) Irvine, the Lord Chancellor. Similarly, the economic policies of that government are regularly attributed to Blair, though they were jealously guarded by an even-more-than-usually powerful Chancellor of the Exchequer, Gordon Brown. Among other things, he prevented Blair from realizing his wish to take Britain into the common European currency. Only in foreign policy–where heads of government have generally played a more dominant role since World War II–did Blair's power and control (the euro apart) match popular perceptions. But since it is his zealous advocacy of British participation in the 2003 war of choice in Iraq that is most clearly remembered in contemporary Britain, its resonance does the former prime minister no favors.
Wise decisions are less likely to be forthcoming when one person can predetermine the outcome of a meeting or foreclose the discussion by pulling rank. In any cabinet, council, committee, or group, some members are better informed than others. There will be a few whose judgment generally carries particular weight. That will often include the chair of the meeting, but the collective wisdom of the group will almost invariably be greater than that of the individual presiding over the proceedings, even if he or she heads the government. The advantages of collective leadership can manifest themselves, however, only when discussion is unconstrained–not governed by obsequiousness or fear of the consequences of contradicting the top leader.
Barbara Kellerman is prominent among those who argue that “Leader-centrism no longer explains, if it ever did, the way the world works.”36 Yet her observation that “the traditional view of the leader, the suggestion that ‘the leader’ is all-important, is simply passé”37 may be less true than it deserves to be, so far as popular perceptions are concerned. Social psychologists Alexander Haslam, Stephen Reicher, and Michael Platow are right to regard an “individualistic and leader-centric view of leadership to be deeply flawed,” being both “a poor explanation of leadership phenomena” and “bad in the sense of sustaining toxic social realities.”38 Yet, they observe, the idea of heroic leadership remains popular, in spite of its evident deficiencies. The attraction for many a top leader of the idea that victories and successes are due to him and failures the fault of insufficiently loyal “followers” is clear enough. Why the rest of us should go along with such illusions, put up with one-person dominance, and in its absence pine for it, rather than embrace a more collective and dispersed leadership, is altogether less obvious.
H. H. Gerth and C. Wright Mills, eds. and trans., From Max Weber: Essays in Sociology (London: Routledge & Kegan Paul, 1948), 250.
Adam Smith, Lectures in Jurisprudence, ed. R. L. Meek, D. D. Raphael, and P. G. Stein (Oxford: Clarendon Press, 1978), 201–202.
Malala Yousafzai, I Am Malala: The Girl Who Stood Up for Education and Was Shot by the Taliban (London: Weidenfeld & Nicolson, 2013), 261–262.
Ian Kershaw, Hitler (London: Penguin, 2009), 200–201.
Antony Beevor, Stalingrad (London: Penguin, 2007), 428.
Ian Kershaw, Fateful Choices: Ten Decisions that Changed the World, 1940–1941 (London: Penguin, 2007), 69–70.
Alan Bullock, Hitler and Stalin: Parallel Lives (London: Fontana, 1993), 451.
The Russian word, vozhd', acquired a connotation close to that of Führer.
Smith, Lectures in Jurisprudence, 322–323.
Christopher Andrew and Vasili Mitrokhin, The Mitrokhin Archive: The KGB in Europe and the West (London: Allen Lane, 1999), 123–124.
N. Vert and S. V. Mironenko, Massovye repressii v SSSR, T. 1. Istoriya stalinskogo gulaga (Moscow: Rosspen, 2004), 728; and Michael Haynes and Rumy Hasan, A Century of State Murder? Death and Policy in Twentieth-Century Russia (London: Pluto Press, 2003), 70.
Rana Mitter, A Bitter Revolution: China's Struggle with the Modern World (Oxford: Oxford University Press, 2004), 194–198; and Frank Dikötter, Mao's Great Famine: The History of China's Most Devastating Catastrophe, 1958–1962 (London: Bloomsbury, 2011).
Roderick MacFarquhar, ed., The Politics of China: The Eras of Mao and Deng, 2nd ed. (Cambridge: Cambridge University Press, 1997); and Archie Brown, The Rise and Fall of Communism (New York: Ecco, 2009).
Scott Mainwaring and Aníbal Pérez-Liñán, Democracies and Dictatorships in Latin America: Emergence, Survival, and Fall (New York: Cambridge University Press, 2013), 273–274.
Juan Linz and Alfred Stepan, Problems of Democratic Transition and Consolidation: Southern Europe, South America, and Post-Communist Europe (Baltimore: Johns Hopkins University Press, 1996), 96.
Simon Parlier, “Adolfo Suárez: Democratic Dark Horse,” in Leaders of Transition, ed. Martin Westlake (London: Macmillan, 2000), 149.
Sergio Bitar and Abraham F. Lowenthal, eds., Democratic Transitions: Conversations with World Leaders (Baltimore: Johns Hopkins University Press, 2015), 71.
These changes are analyzed in much greater detail in Archie Brown, The Gorbachev Factor (Oxford: Oxford University Press, 1996); and Archie Brown, Seven Years that Changed the World (Oxford: Oxford University Press, 2007).
V.I. Vorotnikov, A bylo eto tak … Iz dnevnika chlena Politbyuro TsK KPSS (Moscow: Sovet veteranov knigoizdaniya, 1995), 260.
Roald Sagdeev, The Making of a Soviet Scientist: My Adventures in Nuclear Fusion and Space from Stalin to Star Wars (New York: Wiley, 1994), 272.
John Dunn, Setting the People Free: The Story of Democracy (London: Atlantic Books, 2005), 132.
Richard E. Neustadt, Presidential Power: The Politics of Leadership (New York: Wiley, 1960), 9–10.
Edward Luce, “Obama's High Stakes Final Year,” Financial Times, January 4, 2016.
Ibid.; Robert A. Caro, The Years of Lyndon Johnson, Volume 4: The Passage of Power (London: Bodley Head, 2012); Randall B. Woods, LBJ: Architect of American Ambition (Cambridge, Mass.: Harvard University Press, 2007), esp. 434–436 and 440–441; and Alfred Stepan and Juan J. Linz, “Comparative Perspectives on Inequality and the Quality of Democracy in the United States,” Perspectives on Politics 9 (4) 2011: 841–856.
Lauri Karvonen, The Personalisation of Politics: A Study of Parliamentary Democracies (Colchester, United Kingdom: European Consortium for Political Research, 2009), 87–93.
Jim Pickard and Tanya Powley, “Heathrow Decision Faces Emissions Delay,” Financial Times, December 7, 2015.
Ian McAllister, “Political Leaders in Westminster Systems,” in Political Leaders and Democratic Elections, ed. Kees Aarts, André Blais, and Hermann Schmitt (Oxford: Oxford University Press, 2011), 52, 64.
Kenneth O. Morgan, Callaghan: A Life (Oxford: Oxford University Press, 1997), 692–693.
Charles Moore, Margaret Thatcher: The Authorized Biography. Volume Two: Everything She Wants (London: Allen Lane, 2015), 58.
Karvonen, The Personalisation of Politics, 102; Amanda Bittner, Platform or Personality? The Role of Party Leaders in Elections (Oxford: Oxford University Press, 2011), 73; and Anthony King, ed., Leaders' Personalities and the Outcomes of Democratic Elections (Oxford: Oxford University Press, 2002).
Jonathan Powell, The New Machiavelli: How to Wield Power in the Modern World (London: Bodley Head, 2010), 78.
Archie Brown, The Myth of the Strong Leader: Political Leadership in the Modern Age (London: Bodley Head; and New York: Basic Books, 2014), 352.
Barbara Kellerman, The End of Leadership (New York: HarperCollins, 2012), 183.
S. Alexander Haslam, Stephen D. Reicher, and Michael J. Platow, The New Psychology of Leadership: Identity, Influence and Power (Hove, United Kingdom; and New York: Psychology Press, 2011), 200.