Abstract

The aim of the present paper is to analyze Feynman diagrams within the context of recent historical and philosophical debates about models in science and against the backdrop of other diagrammatic methods in mathematical physics when dealing with infinite or asymptotic series. Today’s philosophical model debate largely defines itself by rejecting the traditional understanding of models as mathematical objects that fulfill the axioms of a theory and are isomorphic to an empirical phenomenon. Instead, it emphasizes the autonomy of models within, or even outside, an overarching theory. The example of Feynman diagrams shows that models thus conceived do not necessarily cease to be mathematical objects, if only in a heuristic or “theoretical” sense. Integrating Feynman diagrams into the mathematical tradition of infinite or asymptotic series allows one to avoid the dichotomy of whether they represent mathematical or physical objects, or are a mere tool mediating between them. Along those lines, however, one does not obtain a universal answer to the question as to what Feynman diagrams represent. In actual scientific practice, a single Feynman diagram can stand for a single mathematical expression or for a physical phenomenon, depending on whether it denotes a single term in an infinite series or a subseries that is given a physical interpretation. This reading of the representation problem also derives support from the historical fact that Feynman was initially motivated by the Breit-Schrödinger model of a quivering electron. Following Schrödinger, such fluctuations may be considered as a physical phenomenon in its own right that is mathematically construed from the macro-level without having a physically fully specified micro-theory.

1. Introduction: A Tale of Two Readings

Since its inception in the late 1920s and 30s, the main problem of quantum electrodynamics (QED) had been that any interaction or scattering event involved processes of a higher order that arose from vacuum polarization, the creation and subsequent annihilation of particle-antiparticle pairs, and the mutual interactions of all those short-lived entities.1 These processes posed two kinds of conceptual problems. First, they were not detectable individually, but had a measurable effect on the energy of the overall process. Even in simple quantum systems, such as the hydrogen atom, they showed up, for instance, as an additional splitting of spectral lines, most prominently the Lamb shift, and in precision measurements of the magnetic moment of the electron. The Lamb shift was experimentally discovered in 1947 and became a major topic at the Shelter Island conference on quantum physics the same year.

Second, the numerical analysis of the whole scattering process produced infinities that had to be kept under control by error-prone and shakily justified expedients. The first systematic approach to taming the infinities was a complicated renormalization scheme developed independently by Julian Schwinger and Tomonaga Sin-itiro in 1947 and presented at the 1948 Pocono conference. At the same meeting the young Richard P. Feynman outlined a new pictorial way to analyze scattering processes in QED. It had emerged from his particle-centered approach to quantum mechanics and his attempts to systematize the thorny calculation of perturbation expansions.

Feynman’s initial roll-out was unsuccessful (cf. Schweber 1994; Kaiser 2005). The fate of the diagrams only changed after Freeman Dyson successfully integrated them into the Schwinger-Tomonaga quantum field theory. He did so by setting up rules that associated each single diagram with a specific term in the S-matrix—the matrix transforming the in-states into the out-states—while making sure that no term was forgotten. In Dyson’s reinterpretation, the diagrams were simply pictorial placeholders for the individual terms in an infinite series; these terms were built up from the Green’s functions $K_{+}(\cdot,\cdot)$ associated with each leg in Figure 2. Feynman diagrams, on Dyson’s account, were paper tools that served as mnemonic devices, whereas Feynman originally assigned them a more autonomous role. Yet Feynman’s own understanding of the diagrams changed over the years; at first he regarded them as a depiction—not necessarily in space-time—of possible physical processes, but later he moved on to a more abstract conception in which the Green’s functions modeled the propagation of a particle in a non-classical sense.
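
To indicate the kind of object Dyson’s rules organize, the S-matrix expansion can be sketched schematically as follows (modern textbook notation, not Dyson’s own):

$$ S \;=\; \sum_{n=0}^{\infty} \frac{(-i)^{n}}{n!} \int d^{4}x_{1}\cdots\int d^{4}x_{n}\; T\{\mathcal{H}_{I}(x_{1})\cdots\mathcal{H}_{I}(x_{n})\}, $$

where $T$ denotes time ordering and $\mathcal{H}_{I}$ the interaction Hamiltonian density. Evaluating each term (for instance via Wick’s theorem) yields sums of products of propagators, and each such product corresponds to exactly one Feynman diagram; the Feynman rules run this dictionary in reverse, from diagram to term.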

The plurality of interpretations of Feynman diagrams persists to this day. In a standard textbook we read “Feynman diagrams are purely symbolic; they do not represent particle trajectories (as you might see them in, say, a bubble chamber photograph)…. Each Feynman diagram actually stands for a particular number, which can be calculated using the so-called Feynman rules” (Griffiths 1987, p. 59). Yet elementary particle physicists also use the diagrams to denote specific sub-processes of a complex scattering event. One of the historic presentations of the Higgs discovery in 2012 was full of Feynman diagrams, such as Figure 1, that denoted the channels of Higgs production and the channels into which the short-lived Higgs-particle decayed. Each of these channels warrants a specific data analysis, and these analyses are eventually combined into an experimental result.2

Figure 1. A Feynman diagram depicting the production of a Higgs particle H from gluons g through a top-antitop quark pair t, t̄ and its decay into a pair of W bosons W+, W−.

2. Tools, Images, and Abstract Representations

In recent years, Feynman diagrams have attracted historians and philosophers of physics. The books by Kaiser (2005) and Wüthrich (2010) not only lay out the diagrams’ prehistory, their gradual development in Feynman’s notes and lectures, and their dispersion across the scientific community, but also discuss the manifold difficulties that have arisen when interpreting them. Kaiser understands the diagrams primarily as calculation techniques in the sense of Warwick’s (2003) “theoretical technology” and Klein’s (2003) “paper tools.” While this does not in principle exclude a mimetic function that goes above and beyond the mnemonic function of the Feynman rules, such a function largely operates independently of theoretical frameworks, among them QED, meson theory, quantum field theory, and the Standard Model of elementary particle physics.

Early Feynman diagrams, Kaiser (2005) argues, were embedded in a large variety of pictorial traditions, among them Minkowski diagrams (cf. Kaiser 2005, p. 185) and bubble chamber photographs. This can be seen in Feynman’s own notes and early lectures (cf. Gross 2012), but also in the diagrams’ use by others—their dispersion, as Kaiser puts it, from a few centers of expertise into different local contexts. Some of these pictorial traditions were physically intuitive, others were not. Rather than being variants of an original style, these diagrams primarily bore family resemblances that reflected styles adapted to the local demands of the associated accelerator experiments. As a contemporary quip had it, “there are field theorists, and there are house theorists” (Kaiser 2005, p. 249). But why then did these different paper tools eventually develop into the more or less unified method particle physicists use today? According to Kaiser, it was the existence of shared visual traditions that explains why Feynman’s diagrams stuck within the community.

Yet Kaiser also provides ample material to show that there were initially significant disagreements between the various local groups as to which terms to include in an expansion of the S-matrix. Some of those examples can be seen as errors that are unavoidable for a new technique requiring knowledge and training that, in those days, could only be obtained through personal contact with the leading research centers. But there were also disagreements about the physical analysis of the scattering processes, especially when applying the diagrams outside QED. This shows that Feynman diagrams were implicitly understood as some kind of representation, though not simply of space-time processes registered in bubble chambers or depicted in Minkowski diagrams, but in a more abstract sense.

Feynman’s first published diagram (Figure 2) already makes clear that only time was a physical parameter and that the spatial separation of the vertices played no role, despite the suggestive analogy with bubble chamber photographs. But being a physical parameter did not imply that the time depicted in Feynman diagrams described a temporal direction for causal processes. This marks a significant difference from Minkowski diagrams, for the world lines in Minkowski diagrams connect definite events, and special relativity even defines the causal past and future of an event in terms of such diagrams.

Figure 2. Feynman’s first published diagram that shows an electron-electron scattering mediated by a (virtual) photon (Wüthrich 2010, p. 184).

Gross (2012) shows that, in Feynman’s early lectures, the diagrams reflecting different styles also had different functions, even though Feynman appeared to shift back and forth between them with ease. Diagrams anticipating the style of Figure 3 were not drawn primarily for the purposes of calculation, but as guiding ideas, as illustrations that “help explicate or explain particular physical interpretations, concepts, or mathematical features” (Gross 2012, p. 190). Feynman, who had introduced his new method with what looked temptingly close to space-time diagrams, also developed momentum-space diagrams that did not cater to any physical intuition and whose primary function was to facilitate calculations.

Figure 3. Interpreting the Dirac equation in terms of scattering probability waves and the introduction of local processes (Wüthrich 2010, p. 120).

Wüthrich (2010) discusses two kinds of roots for Feynman diagrams. First, he traces them back to the ordered tabulation of processes and the energy term schemata that had been imported from the quantum theory of atomic spectra into nuclear physics during the 1930s. This pictorial tradition was more abstract than bubble chamber pictures or Minkowski diagrams, but physically more flexible and adaptable to the quantum world. The term schemata were not tied to the depiction of atomic orbitals as probability clouds. Second, and beyond the pictorial tradition, Feynman was also looking for a physical interpretation of the diagrams. His starting point was to understand the Dirac equation in terms of the Breit-Schrödinger model of a quivering electron. He specifically wanted to avoid Dirac’s interpretation of negative energy states in relativistic quantum theory as holes, preferring instead to understand a positron as an electron running backwards in time. This second root is more important for the present paper.

3. Quivering Electrons as Brownian Processes

Feynman’s PhD thesis under Wheeler started from a divergence-free classical electrodynamics that could not, however, be formulated in terms of a Hamiltonian. Since canonical quantization was thus unavailable, he developed his illustrious path-integral method, for an action functional could still exist where the Hamiltonian did not. Even though the intended quantization of the Wheeler-Feynman electrodynamics was unsuccessful, Feynman applied the same strategy to QED. “The paths involved are, therefore, continuous but possess no derivative. They are of a type familiar from study of Brownian motion” (Feynman 1948, p. 376). Having thus attained a mathematical object, an action functional, Feynman “was after something like a model or a mechanism of the processes which the formal apparatus was supposed to describe” (Wüthrich 2018, this volume). Enter the quivering (or zig-zagging) electron of Breit and Schrödinger, which Feynman thus understood as a quantum version of a Brownian motion. In the manuscripts that Wüthrich analyzes, Feynman used such microstructural considerations “to derive the essential part of the solution, which was the Green’s function or kernel associated with the equations” (Wüthrich 2018, this volume). Feynman’s subsequent shift from a consistently microstructural to a modular approach, which left open many details about the electron’s behavior, has been seen as a consequence of the pragmatism required by war-time work (cf. Galison 1998; Kaiser 2005). But this shift in style, Wüthrich argues, also “was brought about by a specific theoretical difficulty” (Wüthrich 2018, this volume).
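
Feynman’s remark about Brownian motion can be illustrated by a minimal numerical sketch (mine, not Feynman’s construction): a random walk whose increments scale as the square root of the time step yields, in the limit, a path that is continuous but has no velocity, since the naive difference quotients blow up as the step is refined.

```python
import numpy as np

# Minimal sketch (not Feynman's construction): a discretized Brownian path.
# Increments scale as sqrt(dt), so naive velocities |dx/dt| ~ 1/sqrt(dt)
# diverge as the grid is refined: the limiting path is continuous but
# possesses no derivative, as Feynman (1948) notes.
rng = np.random.default_rng(0)

for dt in (1e-2, 1e-4, 1e-6):
    n = int(1.0 / dt)                        # steps covering the unit time interval
    steps = rng.normal(0.0, np.sqrt(dt), n)  # Gaussian increments of variance dt
    path = np.cumsum(steps)                  # the sampled Brownian path
    velocity = np.abs(steps) / dt            # naive difference quotients
    print(f"dt = {dt:.0e}: typical |dx/dt| ~ {velocity.mean():.0f}")
```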

While Wüthrich interprets Feynman’s modular approach as an instance of mechanistic modeling at different levels, the present paper analyzes Feynman’s criteria for solving the problem of obtaining a satisfactory model for the action-functional formulation of a stochastic process without a local Hamiltonian, against the backdrop of similar attempts in the long and checkered history of action principles and stochastic processes. A radical solution, though not without precedent, would have been to argue that the action functional was simply the more fundamental quantity, and to list similar cases in classical physics ranging from classical mechanics to general relativity. Max Planck and David Hilbert, the main advocates of the Principle of Least Action, could have agreed wholeheartedly (Stöltzner 2003). But this would have required understanding the action functional as a survivor of the quantum revolution or assigning a precise (and measurable) physical meaning to the quantity of action in quantum physics. Feynman, instead, took a Maxwellian turn and went looking for a microscopic “mechanical” model of a Brownian process. But this was certainly a tall order, given that he had no idea about the scale and physical nature of such a model. The atomists of Maxwell’s and Boltzmann’s generation knew at least the scale on which atomic phenomena would dwell if they existed, and chose a statistical approach to relate this theoretical microworld to the observable macroworld.

Was there another way to understand stochastic processes than Feynman’s? Let us take a closer look at the original motivations of Breit and Schrödinger for introducing the quivering electron. Breit intends to “associate definitive physical quantities with Dirac’s” matrix operators, which renders them “operational matrix-representations” of velocity vectors in perfect analogy to Pauli’s spin matrices (Breit 1928, p. 554). He also discusses how this velocity could be measured at all. Schrödinger (1930) interprets Breit’s result as a fluctuation, in the sense that he distinguishes between a macroscopic velocity of the electron’s center of mass and a microscopic velocity of the electron, its “Zitterbewegung” (quivering motion). He also gives an upper bound for the postulated new phenomenon: “For an electron of a sufficiently definite macroscopic velocity, the deviations of the center of mass from the straight orbit are much smaller than the extension of the charge cloud” (Schrödinger 1930, p. 423; my translation). Like Breit, Schrödinger viewed this motion in analogy to spin, but he considered the state of affairs as still inconclusive. Yet he emphasizes that, if such a link could be established, “one would be inclined to consider the … position statistic [Lagenstatistik] as the actual [eigentliche] model of the electron ‘after separating off the translation’” (Schrödinger 1930, p. 424). This means that Schrödinger was willing to consider a suitably modeled fluctuation as a genuine physical phenomenon even without possessing a “mechanical” model of the microlevel from which such fluctuations would emerge. This had been his view all along, as expressed in several philosophical writings. But it had also been his scientific practice when he saw no difference in principle between Brownian motion—where a microlevel was known—and Schweidler fluctuations of radioactive decay, where such a microlevel was then unknown and would be excluded in principle on the basis of quantum mechanics (Stöltzner 2012).3
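
The scales usually associated with the Zitterbewegung may be sketched as follows (a standard textbook estimate in my notation, not Schrödinger’s): the free Dirac equation yields an oscillation of the position operator with angular frequency and amplitude of order

$$ \omega_{Z} \sim \frac{2 m_{e} c^{2}}{\hbar} \approx 1.6 \times 10^{21}\,\mathrm{s}^{-1}, \qquad \delta x \sim \frac{\hbar}{2 m_{e} c} \approx 1.9 \times 10^{-13}\,\mathrm{m}, $$

that is, an amplitude of about half the reduced Compton wavelength, far below what any macroscopic position measurement resolves. This is the quantitative sense in which the quivering could only show up as a fluctuation of macroscopically observable quantities.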

Fluctuations do not have to involve a causal model or a micro-level that is specified in more detail than by its inducing fluctuations of macroscopically observable quantities. Nor are such fluctuations tied to the separation between classical and atomic physics. Instead, fluctuations arise whenever two levels are sufficiently separated that they can be assessed, in the mathematical analysis of a theory, as a measurable statistical phenomenon. The scale on which micro-phenomena occur, if they are not completely random, may be inferred by mathematical models from the macro-theory and from observations. It seems to me that this places rather modest ontological requirements on statistical models such as the quivering electron, perhaps too modest for the Feynman of the late 1940s, to whom such a model would have appeared as little more than a technique. Moreover, there is no experimental evidence for a Zitterbewegung.

Still, it seems to me possible to apply some of these insights from fluctuation physics to a present-day interpretation of what and how Feynman diagrams represent. We will, however, be driven less by Feynman’s original concerns than by Schrödinger’s understanding of the Zitterbewegung as a mathematically grounded fluctuation phenomenon.

4. Do Feynman Diagrams Represent?

Thus far, philosophers have mainly addressed the question as to whether Feynman diagrams are merely a bookkeeping device or whether they represent real or virtual physical processes. Most have concluded that they do not represent at all, for a variety of reasons. Robert Weingard (1988), who gave one of the earliest philosophical analyses, argued that only the in- and out-states far away from the domain of interaction can be given a particle interpretation. Moreover, the virtual processes drawn in Feynman diagrams cannot be measured individually and violate a principle as fundamental as energy conservation. Adding Faddeev-Popov ghost fields, which arise in certain quantization schemes, even violates the condition that physical reality must not depend on the choice of gauge. All that can be measured—and eventually can serve as the object of a representation relation—is what corresponds to the quantum-theoretical superposition of all the infinitely many processes of higher and higher order that appear in the expansion of the S-matrix. Some philosophers have tried to circumvent such negative conclusions. Meynell (2008), for instance, effectively weakens the requirements for representation by separating representation from actual denotation.

Another line of reasoning starts from the insight that, in virtue of the Feynman rules, each diagram is assigned a definite mathematical expression that is a term in an infinite series expansion of the S-matrix. James Brown, for one, holds that Feynman diagrams “do not picture any physical processes at all. Instead, they represent probabilities (actually, probability amplitudes)” (Brown 1996, p. 265), which are purely mathematical objects. They are accordingly a formal tool mediating between physical reality and mathematics, a prescriptive flow chart (Brown 2018). In the same vein, Dorato and Rossanese consider Feynman diagrams as “interpreted, non-representational devices constructed in a given context by the particle physics community” (Dorato and Rossanese 2018, this volume). They contemplate viewing them as models, as a quasi-physical tertium quid between mathematics and physics, but emphasize that models or representations in general are essentially perspectival. Feynman diagrams, accordingly, are best understood through Hughes’s DDI account of models, which distinguishes: (i) the denotation of the physical scattering event in terms of a mathematical model, (ii) the mathematical deduction following the Feynman rules, and (iii) the physical interpretation of the product of this calculation.

Michael Redhead argues that the infinite series “is a mathematical expansion, rather like Fourier analyzing the motion of a violin string. It can only be cashed out physically in terms of probability amplitudes for observing” (Redhead 1988, p. 20) a given term in this expansion at time t. This, however, would require switching off the interaction at t. Thus, the infinite series “is just a mathematical expansion with no direct physical significance for the component states. To invest them with physical significance is like asking whether the harmonics really exist on the violin string” (Redhead 1988, p. 20). Given that there is a one-to-one correspondence between diagrams and series terms, Brown contemplates whether Feynman diagrams can be interpreted as a homomorphism between a physical and a mathematical structure. But since the infinite series—even though typically asymptotic—diverges, it cannot coherently and correctly represent any physical process. “Therefore, diagrams cannot represent any physical process (… in any reasonable sense of the term)” (Brown 2018, this volume). On the basis of a pragmatic account of representation, Valente effectively tries to block Brown’s argument. He rejects the idea that only the infinite series expansion of the S-matrix is physically meaningful. When describing scattering processes in QED, due to the limited domain of applicability of the theory, physicists “do not [simply] ‘stop the perturbation progress due to practical reasons’” (Valente 2011, p. 45). In some experiments, one can actually “single out the lowest-order Feynman diagram and give operational meaning to the virtual quantum exchange” (Valente 2011, p. 50). In such cases, a physicist can safely black-box the local processes because their contributions are negligible. While quite a few examples of such an approach can be found in physics textbooks, there are also cases where one has to consider the full theory.
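
Valente’s point can be made concrete with a schematic formula (my notation): any prediction is computed as a formal power series in the coupling, and practical calculations truncate it,

$$ \mathcal{A} \;=\; \sum_{n=1}^{\infty} c_{n}\,\alpha^{n} \;\;\longrightarrow\;\; \mathcal{A}^{(N)} \;=\; \sum_{n=1}^{N} c_{n}\,\alpha^{n}, $$

where in QED $\alpha \approx 1/137$, so that each further order is nominally suppressed by roughly two orders of magnitude. Whether one may “single out” the lowest-order diagram is then a question about the size of the neglected remainder in a given experimental situation, not about the convergence of the full series.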

Thus, to my mind, answering the question as to what Feynman diagrams represent involves combining a pragmatic attitude towards infinite series of Feynman diagrams—that is, taking account of what physicists actually do—with the insight, stressed by the same physicists, that “the full set of all Feynman graphs is the theory,” or more precisely, that the Feynman rules define the theory (Bjorken and Drell 1964, p. vii). This is the main project of the present paper. I will, however, not provide a universal account of what Feynman diagrams represent but develop a more nuanced conception of their representational commitments, one that allows the diagrams to operate between mathematics and physics in a more flexible fashion than the DDI account does. This conception also includes Valente’s pragmatic approach as one of four aspects.

5. Models Mediating between Mathematics and Physics

The representative commitments characteristic of the contemporary understanding of models are less stringent than commitments to real or virtual particles. The Feynman diagram may thus depict a model that is itself not an elementary physical process, but an abstract model of the Dirac equation, the quivering electron understood as a Brownian process. Modern elementary particle physics contains quite a few such abstract models that figure prominently in the explanation of particle signatures. Among them is the standard textbook version of the Higgs mechanism that does away with the unphysical Goldstone bosons. As a matter of fact, this process can also be depicted as a Feynman diagram.

Referring to the quivering electron, Wüthrich (2010, p. 178) mentions the concept of model developed by Cartwright and Giere, who abandoned the traditional picture that takes models—in a logical or syntactic perspective—as mathematical objects fulfilling the axioms of a theory. Their so-called semantic approach understands the theory as the set of all models, quite in the spirit of Bjorken and Drell, quoted above. I consider the later “Models as Mediators” approach (MaM), set forth by Morgan and Morrison (1999), as most instructive for the case of Feynman diagrams. Following this approach, models are autonomous because they function—and often are construed—as partially independent of any high-level theory; they thus develop representative features in their own right without referring to, or denoting, some given entities. What is required, though, is that the models’ adequacy can be tested either theoretically—e.g., by a thought experiment—or experimentally. In Stöltzner (2014), I have used the MaM approach for an analysis of the variegated model landscape of elementary particle physics; typically, these models are formulated and analyzed by means of Feynman diagrams. Talbert (2011) has shown how Feynman diagrams themselves can be subsumed under the criteria of the MaM approach.

In what follows I focus on an important feature of MaM, to wit, that models can partake in a complex representational relationship that is not a simple isomorphism between a model and what it represents, but rather plays out on different interconnected levels. For instance, Morrison’s standard example, Prandtl’s water tunnel, was both an experimental device to study non-turbulent flow and a test for his own theoretical model, the boundary layer theory (or model). The latter separated the flow in a pipe into an ideal (frictionless) liquid in the center and a thin boundary layer in which friction dominates. Feynman diagrams, in the same vein, can be understood as a model that refers both to an abstract model of the physics, similar to the quivering electron, and to a diagrammatic model of mathematical expressions, which helped to maintain some physical intuition throughout the long calculations.

Moreover, the applications of Feynman diagrams beyond the narrower confines of QED in the 1950s show that physicists indeed began to experiment with them in the sense of MaM. New physical models of meson physics were tested as to whether they could be consistently expressed in terms of Feynman diagrams. Discounting the simple errors made by those who had not yet mastered the new method, early meson theory—as reconstructed by Kaiser—reveals bona fide disagreements about how to extend the mathematical side of the model to new physical processes. Many of these experiments with Feynman diagrams turned out to be successful, but not all of them. For example, the S-matrix program of Geoffrey Chew and his school took Feynman diagrams as basic objects without requiring that a new model be connected to a quantum field theory by some new Feynman rules. This severing of ties with quantum field theory helped to fuel a boom of new research, but the S-matrix program eventually fell short of the great initial expectations. This does not contradict the fact, emphasized by Kaiser, that many techniques from those days have survived in the practice of particle physics.

The S-matrix program and the rampant use of Feynman diagrams in ever new domains of elementary particle physics also attracted criticism from mathematical physicists and motivated the search for a mathematically rigorous framework. As Streater and Wightman put it, “the Main Problem of quantum field theory turned out to be to kill it or cure it: either to show that the idealizations involved in the fundamental notions of the theory (relativistic invariance, quantum mechanics, local fields, etc.) are incompatible in some physical sense, or to recast the theory in such a form that it provides a practical language for the description of elementary particle[s]” ([1964] 1989, p. 1). This was not Feynman’s cup of tea: “The mathematical rigor of great precision is not very useful in physics. But one should not criticize the mathematicians on this score… They are doing their own job. If you want something else, then you work it out for yourself” (Feynman 1965, p. 56f.). It seems to me that this division of labor, together with the neat separation of mathematical and physical ontologies, also stands behind the kind of neither-fish-nor-fowl assessment that renders Feynman diagrams mere tools.

I shall argue instead that the contemporary debates about models may help us to avoid a rigid dichotomy between axiomatically well-defined mathematical concepts—some of which, as mathematical models, can be related via isomorphism to a physical model—and physical models whose formulation merely avails itself of mathematical means. My objective here is not philosophical generality, such as advancing an indispensability argument. Instead I want to make a case for taking seriously theoretical physicists’ dealings with mathematical objects whose status is not yet well established but which constitute a possible instance of what Arthur Jaffe and Frank Quinn (1993) have aptly baptized “Theoretical Mathematics.”4 There are at least partial results where such theoretical mathematics could be turned into proven mathematics (cf. Glimm and Jaffe 1987).

6. Local Modification and Global Invariance

Dyson, in Wüthrich’s reconstruction, used Feynman diagrams to obtain an adequate model of the fundamental equations of QED in which the divergences cancelled each other. “To some extent,” he writes, “the model of QED phenomena that he provided using his specific representation, explains why there were uninterpretable divergences in the unmodified [unrenormalized] theory, while also explicating the physical content inherent in the modification of the theory” (Wüthrich 2010, p. 188). What had previously resembled scientific black magic could now be explained on the basis of the structures of the S-matrix, as depicted in the Feynman diagrams and laid down in the Feynman rules. With this came the recognition that, because the microscopic Hamiltonian was meaningless, one could locally introduce certain processes of higher order that leave the global features invariant—apart from the total energy. This insight emerged from Feynman’s interpretation of scattering processes as a scattering of probability waves, which, in the local region of interaction, can also propagate backwards in time (Figure 3). The circles Feynman drew demarcate what Gross aptly calls “zones of quantum ignorance, the limits of external observers” (Gross 2012, p. 190). Since external observers are macroscopic devices, the argument bears similarities to Schrödinger’s upper bound for the Zitterbewegung, which would show up only as a fluctuation of a macroscopic quantity. The main point of this analogy is that one does not need any ontological commitments about a microlevel: it could be physically real or merely virtual, a mere assumption to derive the observed fluctuations.

The fact that Feynman diagrams are introduced to define local processes that merely influence the overall energy provides, to Wüthrich’s mind, an instance where “problems are not solved in the usual sense of the word but are rather made to disappear by using a symbol system that appropriately represents an adequate model” (Wüthrich 2010, p. 189). Precisely this elasticity, with regard both to the underlying abstract physical model and to its mathematical expression, was what enabled Feynman diagrams to extend their sway beyond QED. This indicates, I believe, that to understand the role the diagrams play, an even further departure from traditional representational commitments is required.

A helpful parallel is the use of minimal models in quantum field theory that Robert Batterman has analyzed by way of several examples. Such minimal models—for instance, integrable models in quantum statistical mechanics—are considered by theoretical physicists not because they share some common features with a target system, but rather “because of a story about why a class of systems will all display the same large-scale behavior because the details that distinguish them are irrelevant” (Batterman and Rice 2014, p. 349). Model builders proceed by first “showing that various factors are irrelevant. The remaining features will then be the relevant factors” (Batterman and Rice 2014, p. 363). Batterman’s minimal models derive their role from specific, well understood mathematical properties, the likes of which are unavailable in large parts of elementary particle physics. This is the reason why my proposal below must be considered as an instance of “theoretical” mathematics.

This comparison with minimal models suggests that we should not try to understand Feynman diagrams as an approximation to a given overall scattering process. Instead I propose to undertake a philosophical analysis that begins with certain specific Feynman diagrams. These have a characteristic structure, together with a set of local modifications—loops, ghosts, particle-antiparticle pairs—that leave this structure invariant and show up only in the total energy. These modifications are, in Batterman’s terminology, irrelevant for the large-scale behavior of the scattering process. I am not talking here about the in- and out-states, but rather about specific channels, say of Higgs creation and Higgs decay, where, as in Figure 1, the Feynman diagram actually serves as the representative for, or rather as the leading term of, a partial series. This strategy is supported by two key features. First, it reflects what experimental physicists are setting out to measure, not only at the Large Hadron Collider (LHC), but also in the analysis of hydrogen atoms in experiments far more precise than those involved in the discovery of the Lamb shift. Second, on the mathematical level of the Feynman diagrams, it draws on the fact that the S-matrix expansion is not the only example in mathematical physics where the reordering of an infinite series takes place under the purview of a physical interpretation.

7. Representation Types and the Reordering of Infinite Sequences

Let me now return to the representation problem and suggest that there is no universal answer that covers all aspects of physicists’ employment of Feynman diagrams. Instead, their representative features depend on their respective employment as models. It is important to distinguish: (i) a single Feynman diagram viewed in isolation; (ii) a set of diagrams corresponding to certain types of physical processes—sometimes called an effect, such as the Lamb shift—that physicists denote by a single Feynman diagram of leading order (such as Figure 1), which in actual fact stands for an infinite sub-series; (iii) the expansion of the S-matrix including all processes up to a certain order of loop corrections; (iv) the whole infinite series of Feynman diagrams which, if one follows Bjorken and Drell, simply is the theory.5 In many cases, (iv) cannot be turned into a well-defined mathematical object because doing so would require a rigorous proof of the convergence of the S-matrix expansion. For this reason, one cannot understand the infinite series of Feynman diagrams as an approximate model—in the traditional syntactic sense—that in the limit becomes isomorphic to the physical scattering process. Hence, a certain autonomy of the Feynman diagram as a model is warranted.
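
The four cases can be summarized schematically (my notation, writing $S_{D}$ for the term that the Feynman rules assign to a diagram $D$):

$$
\begin{aligned}
&\text{(i)} && S_{D}, \\
&\text{(ii)} && S_{\mathcal{C}} \;=\; \sum_{D \in \mathcal{C}} S_{D} \quad (\mathcal{C}\ \text{a physically selected class of diagrams}),\\
&\text{(iii)} && S^{(N)} \;=\; \sum_{\mathrm{ord}(D)\,\le\,N} S_{D} \quad (\text{all diagrams up to a given loop order}),\\
&\text{(iv)} && S \;=\; \sum_{D} S_{D} \quad (\text{the full formal series}).
\end{aligned}
$$

Only (i) is unambiguously a single mathematical expression; (ii) and (iii) involve choices that are themselves part of the modeling; and (iv) is, in general, only a formal object.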

To be sure, a single diagram by itself does not enjoy any autonomy. Case (i) basically amounts to a diagrammatic representation of a mathematical object, a single term in the infinite series of the S-matrix. The Feynman rules describe an isomorphism between the diagram and a mathematical expression, but they do not relate the single diagram thus understood to any real physical process or any measurable effect, precisely because such processes always involve higher order processes to be described in terms of further Feynman diagrams. Yet writing down a single Feynman diagram can be understood in two different ways. While Figure 2 is intended in the sense of case (i)—for this reason, the Green’s functions are pictured at each line—LHC physicists understand Figure 1 to stand for a partial series, in the sense of (ii), corresponding to a channel of Higgs production and decay. The process depicted in this Feynman diagram is the process of leading order in the respective partial series.6
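
For illustration, in modern momentum-space conventions (not Feynman’s original position-space notation with the kernels $K_{+}$, and up to sign conventions), the expression that one tree-level diagram of the kind shown in Figure 2 stands for reads, schematically,

$$ i\mathcal{M} \;\sim\; \big[\bar{u}(p_{3})\,(ie\gamma^{\mu})\,u(p_{1})\big]\, \frac{-i g_{\mu\nu}}{(p_{1}-p_{3})^{2}}\, \big[\bar{u}(p_{4})\,(ie\gamma^{\nu})\,u(p_{2})\big], $$

with one spinor factor per external leg, one vertex factor per vertex, and one photon propagator for the internal line (plus a second, exchange diagram with the outgoing momenta interchanged). Nothing in this dictionary, taken by itself, refers to a measurable process.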

Loops typically occur as a quantum correction term of higher order in other processes, for instance, the production of an electron-positron pair. But in certain cases, the loop-induced process corresponds to the leading perturbative order. This is the case in Figure 1 because there is no direct coupling of the Higgs boson to gluons. The large number of gluons (the gluon luminosity) at the LHC over-compensates the nominal loop suppression and actually makes the process depicted in Figure 1 the dominant Higgs production mode at the LHC. There are also other ways to generate a Higgs that are described by different Feynman diagrams. This shows that one may select partial series that have a certain physical characteristic that one wants to emphasize and that such properties are more important in treating the S-matrix expansion than the overall order of the process.7

Reordering Feynman diagrams and endowing a partial series with a physical interpretation are also practiced in fields other than elementary particle physics. Figure 4 shows a table from a paper in experimental spectroscopy (Biraben 2009) listing various parts of the series expansion as effects contributing to the observed Lamb shift. Not all infinite series can be rearranged in a physically interesting way, not even those whose convergence is fast and can be rigorously proven. Take, for instance, the Taylor series expansion of the sine or cosine function. Of course, any series expansion can simply be stopped at a certain order, in the sense of case (iii), if the precision is high enough for calculation purposes. Doing so involves no specific modeling and corresponds to the pragmatic account of representation advocated by Valente (2011). On the other hand, reordering a series under the purview of a leading term in the sense of (ii) amounts to an additional level of modeling that contributes to the explanation of a process and can be tested experimentally, either directly or indirectly as a perturbation. Hence, there is a certain autonomy in the subseries as a model of a physical sub-process within the overall scattering process. This seems to me reconcilable with the mathematical fact that the reordering of a series is only unproblematic if the series converges absolutely. The partial series should not be seen as simple parts of the infinite series; they involve additional modeling.
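
The caveat about reordering can be illustrated by a minimal numerical sketch (mine, not drawn from the paper’s sources): the alternating harmonic series converges only conditionally, and already a simple rearrangement of its terms converges to a different value.

```python
import math

# Riemann's rearrangement theorem in miniature: the alternating harmonic
# series 1 - 1/2 + 1/3 - ... sums to ln 2, but it converges only
# conditionally, so reordering its terms can change the sum.

def original_order(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ..."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def two_positive_one_negative(n_blocks):
    """Same terms, reordered: two positive terms, then one negative, repeated."""
    total, odd, even = 0.0, 1, 2
    for _ in range(n_blocks):
        total += 1.0 / odd + 1.0 / (odd + 2)  # e.g. 1 + 1/3, then 1/5 + 1/7, ...
        total -= 1.0 / even                   # e.g. -1/2, then -1/4, ...
        odd += 4
        even += 2
    return total

print("original order:", original_order(10**6))             # ~ ln 2     = 0.6931...
print("reordered     :", two_positive_one_negative(10**6))  # ~ 1.5 ln 2 = 1.0397...
print("ln 2          :", math.log(2))
```

An absolutely convergent series, by contrast, has the same sum under every reordering, which is one way of seeing why, for the merely asymptotic S-matrix expansion, the reordering itself is part of the modeling.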

Figure 4. A summary of the calculations of the 1s Lamb shift in hydrogen separated into (in principle) measurable effects and loop orders (Biraben 2009, p. 113).

Similar cases can be found elsewhere in mathematical physics. Take, for instance, the Lissajous figures that represent expansion terms from the superposition of sine waves. Or recall the term schemata that Wüthrich considered an important motivation for the Feynman diagrams. They denote the energy levels of an atom whose wave functions, in the case of the simple hydrogen atom, can be described as a product of two orthogonal functions, a radial part built from Laguerre polynomials and an angular part built from associated Legendre polynomials. For quantum numbers n, l, m, the wave function factorizes as $\Psi_{nlm} = R_{nl}(r)\,Y_{lm}(\vartheta,\varphi)$. On the other hand, it is possible to expand any function that has a finite Hilbert space norm, viz. a wave function, in terms of an infinite series of Laguerre and Legendre polynomials, quite in the same fashion as a periodic function can be expressed as a Fourier series in terms of harmonics. This is of course not a way to analyze an atom with two or more interacting electrons. And one may also wonder in what sense interferences between two Feynman diagrams, e.g., two channels of Higgs production, transcend the classification proposed here.
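
The parallel can be written out schematically (my notation):

$$ f(x) \;=\; \sum_{n=1}^{\infty} a_{n}\,\sin\!\Big(\frac{n\pi x}{L}\Big) \qquad\text{and}\qquad \Psi(r,\vartheta,\varphi) \;=\; \sum_{n,l,m} c_{nlm}\, R_{nl}(r)\, Y_{lm}(\vartheta,\varphi), $$

where the harmonics of a string of length $L$ on the one side, and suitably normalized products of Laguerre-type radial functions and spherical harmonics on the other, each provide an orthogonal system in which a square-integrable function can be expanded. In both cases the individual terms acquire whatever representative status they have only within such an expansion.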

This allows us to return to the question that Redhead rightly raised with the example of an infinite Fourier expansion. Neither Fourier harmonics nor partial series of the S-matrix labeled by their leading Feynman diagram are thus taken to exist in an isolated fashion, as independent entities. Instead, their representative features emerge within a model of the vibrating string or within the Feynman-diagram analysis of an S-matrix. Yet both of them are physically reasonable enough for experimental physicists to attempt to measure individual subseries, be they higher harmonics, sub-processes in the S-matrix, or effects that form part of the Lamb shift. Such modeling might even run against our natural ontological categories. For instance, each realistic pulse of light of a single color contains higher harmonics that do not fall into what we would typically consider the same color; this is best seen by means of a Fourier decomposition (cf. Danne 2018).

8. Conclusion

Let me conclude that a plurality as described by (i)–(iv) is not a problem as long as there is a coherent tradition of the use of Feynman diagrams that includes awareness of their complex representational features. While in the early days there were quite a few mishaps and bona fide disagreements, the practice of Feynman diagrams is now well established, including the identification of their mathematical virtues and vices. Feynman diagrams are not only a paper tool, but also a complex model that partakes in both the physical and the mathematical realm, not only as the symbolic representation of a model system, such as the quivering electron, but also by continuing pictorial and analytical traditions in mathematical physics.

Notes

1. For a history of QED, see the seminal book by Schweber (1994).

2. There are differences of style, it seems. While the announcement of the Higgs discovery by the CMS group in 2012 featured a very large number of Feynman diagrams, ATLAS used only a few (cf. https://indico.cern.ch/event/197461/).

3. In the following year, Schrödinger (1931) modified the Dirac equation and recast these fluctuations as disturbances of the fine structure of the hydrogen atom.

4. For my own take on this debate, cf. Stöltzner (2005).

5. Note that the authors of this early textbook believed that the use of Feynman diagrams was “flexible enough to deal with phenomena of non-perturbative character” and “may well outlive the elaborate mathematical structure of local canonical quantum field theory” (Bjorken and Drell 1964, p. viii).

6. As a matter of fact, there were quite a few other channels investigated by the detectors ATLAS and CMS.

7. Thanks to Robert Harlander for indicating this point to me.

References

Batterman, Robert, and Colin Rice. 2014. “Minimal Model Explanations.” Philosophy of Science 81: 349–376.

Biraben, François. 2009. “Spectroscopy of Atomic Hydrogen, How Is the Rydberg Constant Determined?” European Physics Journal (Special Topics) 172: 109–119.

Bjorken, James D., and Sidney Drell. 1964. Relativistic Quantum Mechanics. New York: McGraw-Hill.

Breit, Gregory. 1928. “An Interpretation of Dirac’s Theory of the Electron.” Proceedings of the National Academy of Sciences 14: 553–559.

Brown, James R. 1996. “Illustration and Inference.” Pp. 250–268 in Picturing Knowledge. Historical and Philosophical Problems Concerning the Use of Art in Science. Edited by Brian Baigrie. Toronto: University of Toronto Press.

Brown, James R. 2018. “How Do Feynman Diagrams Work?” Perspectives on Science 26 (4): 423–442.

Danne, Nicholas. 2018. “A Problem with Objectifying Surface Spectral Reflectance.” Under review.

Dorato, Mauro, and Emanuele Rossanese. 2018. “Feynman’s Diagrams, Pictorial Representations and Styles of Scientific Thinking.” Perspectives on Science 26 (4): XXXX [in this volume].

Feynman, Richard P. 1948. “Space-Time Approach to Non-Relativistic Quantum Mechanics.” Reviews of Modern Physics 20 (2): 367–387.

Feynman, Richard P. 1965. The Character of Physical Law. Cambridge, MA: MIT Press.

Galison, Peter. 1998. “Feynman’s War: Modelling Weapons, Modelling Nature.” Studies in History and Philosophy of Modern Physics 29: 391–434.

Glimm, James, and Arthur Jaffe. 1987. Quantum Physics. A Functional Integral Point of View. Berlin/New York: Springer.

Griffiths, D. 1987. Introduction to Elementary Particles. Hoboken: Wiley.

Gross, Ari. 2012. “Picture and Pedagogy: The Role of Diagrams in Feynman’s Early Lectures.” Studies in History and Philosophy of Modern Physics 43: 184–194.

Jaffe, Arthur, and Frank Quinn. 1993. “‘Theoretical Mathematics’. Toward a Cultural Synthesis of Mathematics and Theoretical Physics.” Bulletin of the American Mathematical Society 39: 1–13.

Kaiser, David. 2005. Drawing Theories Apart. The Dispersion of Feynman Diagrams in Postwar Physics. Chicago: The University of Chicago Press.

Klein, Ursula. 2003. Experiments, Models, Paper Tools: Culture of Organic Chemistry in the Nineteenth Century. Stanford: Stanford University Press.

Meynell, Laetitia. 2008. “Why Feynman Diagrams Represent.” International Studies in the Philosophy of Science 22: 39–59.

Morgan, Mary, and Margaret Morrison (Eds.). 1999. Models as Mediators: Perspectives on Natural and Social Science. Cambridge: Cambridge University Press.

Redhead, Michael. 1988. “A Philosopher Looks at Quantum Field Theory.” Pp. 9–23 in Philosophical Foundations of Quantum Field Theory. Edited by Harvey R. Brown and Rom Harré. Oxford: Clarendon Press.

Schrödinger, Erwin. 1930. “Über die kräftefreie Bewegung in der relativistischen Quantenmechanik.” Sitzungsberichte der Preussischen Akademie der Wissenschaften: 418–429.

Schrödinger, Erwin. 1931. “Zur Quantendynamik des Elektrons.” Sitzungsberichte der Preussischen Akademie der Wissenschaften: 63–72.

Schweber, Silvan S. 1994. QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga. Princeton: Princeton University Press.

Stöltzner, Michael. 2003. “The Least Action Principle as the Logical Empiricist’s Shibboleth.” Studies in History and Philosophy of Modern Physics 34: 285–318.

Stöltzner, Michael. 2005. “Theoretical Mathematics – On the Philosophical Significance of the Jaffe-Quinn Debate.” Pp. 197–222 in The Role of Mathematics in Physical Sciences. Interdisciplinary and Philosophical Aspects. Edited by Giovanni Boniolo, Paolo Budinich, and Majda Trobok. Dordrecht: Springer.

Stöltzner, Michael. 2012. “Erwin Schrödinger – Vienna Indeterminist.” Pp. 481–495 in Probabilities, Laws, and Structures. Edited by Marcel Weber et al. Dordrecht: Springer.

Stöltzner, Michael. 2014. “Higgs Models and other Stories about Mass Generation.” Journal for the General Philosophy of Science 45: 369–384.

Streater, Raymond F., and Arthur S. Wightman. [1964] 1989. PCT, Spin and Statistics, and All That. Princeton: Princeton University Press.

Talbert, Jim. 2011. Lines, Squiggly Lines, and Dots --- ∼∼∼ … The Feynman Diagram as a Model? Senior Thesis, South Carolina Honors College.

Valente, Mario Bacelar. 2011. “Are Virtual Quanta Nothing but Formal Tools?” International Studies in the Philosophy of Science 25: 39–53.

Warwick, Andrew. 2003. Masters of Theory. Cambridge and the Rise of Mathematical Physics. Chicago: The University of Chicago Press.

Weingard, Robert. 1988. “Virtual Particles and the Interpretation of Quantum Field Theory.” Pp. 43–58 in Philosophical Foundations of Quantum Field Theory. Edited by Harvey R. Brown and Rom Harré. Oxford: Clarendon Press.

Wüthrich, Adrian. 2010. The Genesis of Feynman Diagrams. Dordrecht: Springer.

Wüthrich, Adrian. 2018. “The Exigencies of War and the Stink of a Theoretical Problem. Understanding the Genesis of Feynman’s Quantum Electrodynamics as Mechanistic Modelling at Different Levels.” Perspectives on Science 26 (4): 501–520.

Author notes

My work on Feynman diagrams started with Jim Talbert’s Honors Thesis and a visit at the University of Bielefeld during the summer of 2012, where I had long discussions with Jim Brown and Mauro Dorato. Subsequent work took place within the context of the DFG-research unit “Epistemology of the LHC.” I am also indebted to Dawid Rowe, Robert Harlander and Adrian Wüthrich for their comments on an earlier version of this paper.