What is noise? Common sense tells us it is a disturbance, an invasion of our perceptual space, a nuisance. But this is only part of a more complex story that the sciences and modern technologies might help us unravel. ‘Noise’ has a contextual meaning, but it also points at something ‘in nature’ (or in society)—and something that might also have a function and/or beneficial effects. In this article I show that what is categorized as ‘noise’ is there not necessarily to be removed or to be dispensed with, but to be used and taken advantage of.1
What is noise? A tumultuous crowd is noisy or, more cheerfully, a group of students on holiday, or a flock of migrating birds. A loud conversation or loud laughter can be noisy if we are reading a philosophy article, or we are performing a physics experiment, or we are concentrating on a yoga exercise. In all such cases, noise is something that others do and that we unwillingly suffer, something that we perceive as an invasion of our perceptual space, or an interference with it.2
We instinctively associate noise with an idea of impediment, or obstacle—with a source of distraction that prevents us from accomplishing the task or occupation we are focusing on. These associations are confirmed by the dictionary, where a range of synonyms spans from the more specific (outcry, hubbub, clamor, protest) to the more general (disturbance, interference).
There are two features that immediately come to mind when we think of noise:
noise has a contextual/relative meaning
noise is a value-judgment.
In what follows I will ask:
Does noise have, besides a contextual/relative meaning, also an inherent, essential one? Does it make sense to distinguish between the two meanings?
Could noise (either contextual or essential) be good rather than (or besides being) bad? And if so, good in what sense?
However, given that, being contextual, noise is not recognized as such (that is, as ‘noise’) by everybody indiscriminately, the negative connotation that is normally associated with it is itself contextual and relative. This is true not only in ordinary discourse, but also in more specific fields of knowledge and communication. As has been noted, “Uncovering mysteries of natural phenomena that were formerly someone else’s ‘noise’ is a recurring theme in science” (Bedard and Georges 2000, p. 33). Here ‘noise’ is used, somewhat metaphorically, to argue that what one scientist might take to be irrelevant information, or a meaningless blur, might well become a source of knowledge for somebody else, or upon entering a different research project.
How does this shift (from ‘noise’ to knowledge) happen? First, seeing (perceiving) something instead of ‘noise’ entails being able to extract it from a background, select features relevant to it, and separate out aspects which we interpret as being functional to the structure we aim to recognize or bring forward. Doing this largely depends on our capacity, partly psychological, partly epistemological (and partly also technological), for discovering, assembling, or attributing some cohesive form to what occupies our perceptual fields (pattern recognition). There is an element of construction (or reconstruction) in any factual discovery.5 Secondly, being able to recognize a pattern (an object, a phenomenon, some data) depends on possessing or acquiring knowledge that is appropriate to the task of recognition. Having such knowledge itself depends on being in a particular context (of discourse, of research) that allows us to value what has no meaning, or at least no interest or salience, for someone else in a different context.
However, arguing for a positive value of ‘noise’ in its contextual meaning is to some extent an easy target. What might prove harder is to show that noise as noise also has a positive role: in other words, that within the same context—say a biological organism, a market, a type of motion—we can identify something as being essentially noise, and yet recognize a beneficial function in its presence as noise. If we look at digital technologies, essential noise is indeed deemed a real feature of a system. However, in this case it is taken to be a bad feature. In the field of music reproduction, for instance, the promise made by these technologies is to offer absolutely intact sound: no scratches, no static, in a word no noise. Digital technologies are all about purity, stability and perfection, and the way to achieve this is precisely by keeping noise at bay. There are, however, other fields, such as biology, physics, economics, climatology and electrical engineering, where a good function for inherent noise can be discerned.
In what follows a series of examples will show how it is in fact either thanks to the presence of some noise as essentially noise, or else despite it, that certain beneficial goals or results are achieved. More generally, it will become apparent that essential noise is not necessarily, and across the board, a detrimental feature to be kept at bay or removed. Once this is granted, it becomes possible to rehabilitate the term ‘noise’ to connote those situations in which various forms of material interference or potential disturbance can be seen as instrumental to pattern recognition and knowledge acquisition. I will then draw some concluding remarks on the need to redress the balance of judgement regarding ‘noise’, and to reconsider some of its possible roles and functions in the light of a more positive appreciation of its contributions to knowledge, communication and discovery.
2. From Semantics to Ontology: Essentially Noise
When taken in its contextual meaning there is nothing intrinsically bad in ‘noise’. Contextual noise is a matter of temporary semantic connotation, not grounded in the ontology of a phenomenon or the intrinsic state of a system. Bad noise can become good (no longer ‘noise’, or less ‘noise’) simply by moving context. Is there, on the other hand, something intrinsically good in non-contextual, essential noise? There are two parts to this question: 1) is there such a thing as essential noise (that is, something that escapes the/a context of interpretation)? 2) could such a thing be intrinsically good? There is an area in the field of sound where both parts of the question are addressed, and only the former is answered affirmatively.
Digital technologies purport to sever the signal precisely from what is essentially noise, assuming that good sound is by definition noise-free sound.6 The underlying recipe for the execution of these technologies of purity is relatively simple: complex and convoluted strings of sound or image are decomposed into discrete units; the units are repackaged in such a way that the transport of information is made easy; the units are finally recomposed in cleaned-up versions, much better and purer than the original ones.
Digital technologies then go beyond expectations: they do not simply generate a perfect reproduction of an original (as in the old concept of high fidelity). They actually produce a better version of it (i.e., a digitized version). It is in this way that pure digitality achieves perfection: by separating sound (the proper signal) from the natural process that brings it into existence, it proves possible to eliminate all the interference and potential for corruption that go with that process. This does not mean that digital (re)production is absolutely free from discrepancy or irrelevant signals (thoroughly pure digitality is never really achieved in practice). Yet one of the extraordinary aspects of digital circuits is their capacity for containment: they are able to control discrepancies to such an extent that these do not proliferate and propagate. The memory in our laptops is refreshed many times per second, to prevent the information stored in our machines from becoming corrupted. When a signal becomes a possible source of disturbance, it is reconfigured, cleaned up and reintroduced into the system—while maintaining the digital illusion that perfection is easily given, not striven for.
By so doing these technologies promise no decay; good sound is constantly reproduced and maintained, and it becomes timeless. “Perfect sound forever,” as Sony’s publicity puts it. Essential noise is continuously, and successfully, kept at bay.
However, is essential noise only a negative, interfering residue to be eliminated whenever it appears? Should we only defend ourselves from noise? Is there any possible good in noise? To answer these questions we ought to move from the field of sound to other areas where analogues of acoustic noise have been recognized throughout twentieth-century science and technology, and which can be used to enhance and widen our understanding of this somewhat elusive phenomenon. Noise in this broader perspective appears indeed widespread in nature, as well as in society—in fact, more widespread than we might expect, and in forms that challenge our common sense, and often our critical beliefs. Different forms of noise can be detected in different types of systems: biological cells, quantum measurements and information, non-equilibrium systems, the stock market, etc. Interestingly, by looking at the effects that noise phenomena have on these different systems, we do not necessarily infer that noise is an undesirable interference with the intended operations.
2.1. Thanks to Noise (Taking Advantage of It)
A first example comes from biology, where cellular noise has recently become a field of interest in its own right. Cells routinely perform their several tasks in noisy environments (within and without their walls)7, affected by random variations and fluctuations that themselves affect fundamental vital processes in cellular biology. By making use of statistical and experimental methods biologists can study both the nature and the consequences of cellular noise.8 By measuring levels of noise within and between cells it was realized how fundamental this phenomenon is to the functioning of the cellular machinery. If cells have developed regulatory mechanisms to reduce or control noise, they have also partly evolved mechanisms to take advantage of it. This is, for example, the case of “bet-hedging” in bacterial populations, where
(…) genetic “switches” within bacteria respond to random cues, so that some members of a population are switched into an active, infectious phase and others into a robust, quiescent phase. Antibiotic treatments may kill many of the active bacteria, but the robust quiescent subset of the bacteria survives for longer, allowing the infection to weather the storm and propagate in the future. (Johnston 2012, p. 19)
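The survival advantage described in the quotation can be sketched in a toy model. The numbers below (doubling of the active phase, an antibiotic pulse every tenth generation, a 10% switching probability) are invented for illustration and are not taken from the article or the literature:

```python
def simulate(switch_prob, n_generations=60):
    """Deterministic expected-value sketch of bacterial bet-hedging.

    Active cells double each generation; quiescent cells merely persist.
    Every 10th generation an 'antibiotic pulse' kills 99% of the active
    cells but spares the quiescent ones. Random phenotype switching is
    modelled by its expected flows between the two subpopulations."""
    active, quiescent = 100.0, 0.0
    for gen in range(1, n_generations + 1):
        a_to_q = active * switch_prob      # expected switches to quiescence
        q_to_a = quiescent * switch_prob   # expected switches to activity
        active += q_to_a - a_to_q
        quiescent += a_to_q - q_to_a
        active *= 2.0                      # growth of the active, infectious phase
        if gen % 10 == 0:                  # periodic antibiotic stress
            active *= 0.01
    return active + quiescent

no_hedging = simulate(0.0)   # homogeneous, all-active population
hedging = simulate(0.1)      # noisy switching into quiescence
print(hedging > no_hedging)  # the hedging population ends up larger
```

Despite sacrificing growth in good times, the population that randomly parks some of its members in the quiescent state "weathers the storm" and comes out far ahead over repeated antibiotic pulses.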
In the field of synthetic biology,10 too, the functional role of noise is increasingly acknowledged. In an interview dated 1 February 2012, the MIT bioengineer James Collins (one of the founders of synthetic biology) notes:
In molecular biology in particular, the systems that we’re dealing with are intrinsically very noisy. And many of us have explored and characterized the noise (….) thinking about ways how you could filter it, but I think what we’ve seen is now a shift towards recognizing that it’s a feature and not a bug of the system. And that it may be best to accommodate it by acknowledging it’s there, (…) using it as a feature or property of the system. That could produce additional functionalities such as the ease of switching and exploring different stable states. (Knuuttila and Loettgers 2014, p. 84)
A second example comes from a completely different field: financial economics. In this context, noise is on the one hand defined as the random fluctuation that makes a market’s behavior difficult to predict; on the other hand, it is arguably the very reason why there is trading in a market at all. Price fluctuations, for example, are due not only to well-grounded information but also to random changes, sentiment and the ungrounded choices of investors. There is a distinction, originally made by F. Black (1986), between rational/informed traders and so-called noise traders. The dynamic relation between the two has prompted the formulation of investment strategies for separating noise from signal, and the acknowledgement that asset prices do not always reflect the true value of securities (the noisy market hypothesis).
It has been shown experimentally that traders with stochastic, and sometimes erroneous, beliefs affect asset prices and may earn higher returns than more informed or rational traders, as a consequence of bearing a large share of the risk that their own presence introduces into the market. Though their presence affects the ability of the market to adjust to new information, noise traders provide some benefits, for example by increasing market volume and depth, and by reducing “bid ask spread and the temporary price effects of trades allowing liquidity traders to reduce their losses when noise traders are present” (Bloomfield et al. 2009, p. 2301).
A third example comes from climatology. In the 1980s, in an attempt to account for the occurrence of ice ages, the idea of stochastic resonance was put forward. It developed from the observation that, when random noise is added to a system, the system reacts with a change in its behavior. The change, interestingly, is for the better rather than the worse: the quality of, say, a signal’s transmission or of a system’s performance increases rather than decreases (as we would expect if noise were only a factor of interference).
Here is how the idea emerged according to one description:
Thirty years ago climatologists asked their physicist friends to explain the almost periodic occurrence of the ice ages, or how a small change of one parameter out of many in the earth orbit around the sun can cause a shift of the climate as dramatic as the ice ages. (…) the physicists’ puzzling response was thought-provoking. Climate supports two stable states, one at a lower temperature (an ice age) and one at a higher temperature (…); fluctuations attributable to geodynamical events can cause random transitions between the two states. The external small, periodic modulations of the earth orbit bias the random transitions towards times where such transitions are most likely. If the fluctuations are too small, the transition occur too infrequently and out of tune with a given modulation of the earth orbit; if the fluctuations are too large, the random transitions would be too frequent and, therefore, also out of tune. Hence, at an optimal amplitude of the fluctuation, depending on the modulation frequency, periodic transitions can be driven by random noise, a phenomenon known as stochastic resonance. (Marchesoni 2009, np)
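The core of the phenomenon—too little noise, no transitions; too much noise, random transitions; a moderate amount, transitions in tune with the weak periodic modulation—can be illustrated with a minimal threshold model, a standard toy example of stochastic resonance rather than the climate model itself. All parameters below are invented for illustration:

```python
import math
import random

def detection_correlation(noise_sigma, n=20000, seed=42):
    """Pearson correlation between a subthreshold periodic signal and
    the threshold-crossing events it produces once noise is added.
    A toy threshold sketch of stochastic resonance."""
    rng = random.Random(seed)
    threshold = 1.0
    # periodic signal of amplitude 0.8: never crosses the threshold alone
    sig = [0.8 * math.sin(2 * math.pi * t / 100) for t in range(n)]
    out = [1.0 if s + rng.gauss(0.0, noise_sigma) > threshold else 0.0
           for s in sig]
    ms, mo = sum(sig) / n, sum(out) / n
    cov = sum((s - ms) * (o - mo) for s, o in zip(sig, out))
    vs = sum((s - ms) ** 2 for s in sig)
    vo = sum((o - mo) ** 2 for o in out)
    if vo == 0:  # no crossings at all: no information transmitted
        return 0.0
    return cov / math.sqrt(vs * vo)

weak = detection_correlation(0.05)    # too little noise: almost no crossings
optimal = detection_correlation(0.5)  # moderate noise: crossings track the signal
strong = detection_correlation(5.0)   # too much noise: crossings nearly random
print(weak < optimal and strong < optimal)
```

The correlation between input and output peaks at an intermediate noise amplitude, mirroring the "optimal amplitude of the fluctuation" in Marchesoni's description.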
Two applications of noise are worth mentioning in view of showing the advantageous function of essential noise. The first occurred in the field of electrical engineering: the study of the behavior of vacuum tubes. A typical example of a vacuum tube is the electric light bulb. Current passing through the filament heats it up so that it gives off electrons. These, being negatively charged, are attracted to the positive plate. A grid of wires between the filament (or cathode) and the plate is kept negative, which repels the electrons and hence controls the plate current (Harper 2003). Vacuum tubes, like all electronic and electrical devices, produce random noise, so analyzing the forms that this noise takes was a way to understand how these devices work and how they could be improved (vacuum tubes proved crucial in developing the technology of radio, television, radar and computers, among others). For example, they produce thermal noise, but also what is known as flicker noise (a noise which decreases with frequency) and separation noise (which occurs when some of the electric current follows the path of the screen grid rather than that of the plate, producing a slight random variation in the plate current).
The second application occurred in physics, with a device known as the matched filter. This was invented during the Second World War by J. H. Van Vleck and D. Middleton as a way to detect possible signals in a background of noise (Van Vleck and Middleton 1946). The underlying idea of the device is to correlate a known signal (a template) with an unknown, noisy one, in order to detect whether, and where, the known signal is present in it. Studying the types of noise that can be added or injected into the device in order to disguise a signal is part and parcel of this technique (Middleton 1996; North 1976).
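The correlation idea can be sketched in a few lines: slide the known template over the received trace and look for the peak of the cross-correlation. This is a minimal discrete-time sketch of the principle, not Van Vleck and Middleton's original continuous-time formulation:

```python
import random

def matched_filter_locate(received, template):
    """Slide the known template over the received trace and return the
    offset with the highest cross-correlation (the matched-filter peak)."""
    best_offset, best_score = 0, float("-inf")
    for offset in range(len(received) - len(template) + 1):
        segment = received[offset:offset + len(template)]
        score = sum(r * t for r, t in zip(segment, template))
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

# Bury a known pulse in Gaussian noise, then recover its position.
rng = random.Random(0)
template = [1.0, 2.0, 3.0, 2.0, 1.0]
received = [rng.gauss(0.0, 0.5) for _ in range(200)]
true_offset = 120
for i, t in enumerate(template):
    received[true_offset + i] += t
print(matched_filter_locate(received, template))  # locates the pulse at 120
```

Even when the pulse is visually indistinguishable from the noise, correlating against the known shape concentrates its energy into a sharp peak at the right position.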
It could be objected that the examples of noise described in this section fall overall under the general category of random fluctuation, which would suggest that they are examples of essential phenomena or properties of systems for which, after all, the different sciences have acquired a specific descriptive category (other, and more precise, than noise). So ultimately they are not really instances of noise (in the unspecified critical sense conveyed by common language usage). Two related points can be made in response. On one side, the objection focuses on using the term ‘noise’ to describe phenomena and properties that science can still measure, model mathematically or make statistically significant. However, being able to calculate or model noise does not necessarily eliminate the phenomenon as such. Random fluctuations are indeed ‘noisy’ occurrences within a system: how, or how much so, can be calculated or modelled. In other words, referring to such occurrences as ‘noisy’ does not amount to a temporary lack of a better term (a sort of metaphor in the sense of catachresis). If there is a better (scientific) term, such as ‘random fluctuation’, part of its meaning still arguably includes features and connotations of the common language term ‘noise’. On the other side, the objection contends that describing noise as ‘random fluctuation’ makes noise disappear. This part of the objection, though, entails a particular view of what noise essentially is (ungovernable chaos, meaningless disturbance). Instead, as already pointed out, there is more to noise than ‘noise’ in this sense. For example, processes such as thermal fluctuations or random variations can themselves generate – as investigated by the fast-growing areas of granular physics and phase separation – structured states or patterns similar to those produced by stable, orderly natural systems: for example, crystal growth, honeycomb manufacture and floret evolution (Shinbrot and Muzzio 2001, p. 251).
Nonetheless, if the examples of tamed noise reviewed so far in this section fail to convince, a question concerning the existence of essential, potentially good noise still stands: are there instances of genuine, irreducible noise (regarded as such by science itself) that can play a positive role or function? The discovery of the cosmic background radiation and its impact on the acceptance of the Big Bang theory might offer some of the evidence we need to strengthen the argument. Further evidence can be drawn from the story of the so-called Geiger-Müller counter. I will deal with both stories in turn.
As to the first,11 in 1962 a very powerful antenna built by Bell Labs for commercial purposes as part of an early satellite transmission system was decommissioned on the grounds that it had become obsolete. Having then been made available as a research apparatus, the antenna was used as a radio telescope by two employees at Bell Labs, the radio astronomers Arno Penzias and Robert Wilson. When they started using the antenna, Penzias and Wilson noticed that a persistent noise level was detected by the apparatus (a uniform electromagnetic signal in the microwave range), regardless of the direction in which the antenna was pointed. They first attributed the source of the noise to some internal malfunctioning of the telescope and began to review all the possible explanations that came to mind, including circumstantial intervening factors. For example, they pointed the antenna towards the city of New York, to check whether the disturbance was caused by urban noise. They removed a “white dielectric material” left by a family of pigeons that lived within the giant antenna. They considered possible interference from recently run nuclear tests and other radio sources within the solar system. All the checks came back negative. The antenna still detected some noise. They therefore considered the hypothesis that the source of this noise, apparently not internal to the antenna or due to random perturbations, was instead coming from outer space.
At the same time, at Princeton University a team of cosmologists led by the astrophysicist Robert Dicke had calculated that, if a Big Bang had occurred at the beginning of our universe (a hypothesis that in those days was far from being accepted), its trace would reach us in the form of a residual background radiation in the microwave range coming from all directions in space. To explain this prediction in simple terms: light has a finite speed, and everything that we observe in space appears not as it is now, but as it was when the light we observe was emitted, possibly billions of years ago. Thus, the light (or radiation) that comes to us from very distant objects shows how the universe was a number of years ago equal to the object’s distance from us in light-years. In the specific case studied by Dicke and his team, radiation that reaches us from a distance of more than 13 billion light-years depicts the universe as it was 13 billion years ago, shortly after the explosion (the Big Bang) that created the universe.
According to the majority view at the time of the events recounted here, the universe had no beginning, and it had always been like the universe we observe today in our astronomical proximity: largely empty and sparsely dotted with large structures like stars and galaxies. If this theory of the universe were correct (the so-called steady state theory), when observing light coming from very distant sources we would expect to observe a universe substantially indistinguishable from ours. On the contrary, if Big Bang theorists were right, around 13.8 billion years ago the whole mass and energy of today’s universe was concentrated in a minuscule portion of space that then began to expand in an explosion. As the universe expanded and cooled, matter gradually clumped into sparse stars and galaxies, and the universe came to look like the one we see today. Before this, on this hypothesis, the universe did not look anything like today’s universe: the sources of radiation back then were densely and uniformly distributed across space, rather than being sparse and scattered like stars in a night sky.
Dicke and his team predicted that, if the Big Bang hypothesis were correct, we should expect to observe a background radiation, coming from the most distant space, of exactly the kind and intensity of the one that Penzias and Wilson, totally independently of what was going on at Princeton and unaware of Dicke’s predictions, had been observing with their antenna at Bell Labs. It was only a matter of time before the two teams became aware of each other’s results. Penzias, frustrated by the ineliminable noise from the antenna, found out about Dicke’s work via an MIT fellow and friend (Bernard Burke), and finally got in touch with Dicke. Interestingly, when Dicke was invited to Bell Labs to discuss Penzias and Wilson’s findings, he was planning experiments to test his predictions by building an antenna to detect signals of the early radiation. He is reported to have sadly commented to his colleagues: “we’ve been scooped.”
Penzias and Wilson’s discovery (together with Hubble’s discovery that the universe is expanding) provided enough evidence for the Big Bang theory to become the standard model within a decade.12 It is ironic that many theoretical and experimental physicists had observed the same noise time and time again before Penzias and Wilson but never tried to find out what was causing it. Indeed, Penzias and Wilson themselves first tried to get rid of this noise. They did not immediately realize that behind it lay what many scientists consider the greatest discovery of the twentieth century.
A similarly instructive story can be told by looking at the role played by an instrument known as the Geiger-Müller counter in the discovery of cosmic rays.13 The instrument consists of a gas-filled tube connected to a high-voltage supply. When radiation enters the tube, ionization occurs: gas molecules split into smaller charged particles (electrons, ions, etc.). These are detected in the form of pulses that allow the passage of the particles through the tube to be measured, giving an indication of how much radiation is passing through it.
The story becomes interesting for us when, due to its high sensitivity, the instrument started picking up a much higher counting rate than observed with previous experimental apparatus. This anomaly revealed an unexpected number of wild pulses, or stray discharges, able to interfere with the readings of the needle of the instrument and creating an effect of disturbance. This disturbance was first attributed to the apparatus itself, to the way it was designed, and a great deal of research went into trying to modify it in such a way that the effect could as far as possible be driven out, or eliminated, or made negligible. However, given the persistence and irreducibility of this disturbance, the idea that some radiation was coming from outside started taking form—first as some radioactive substance in the higher atmosphere, a suggestion soon discarded in favor of the notion of a primary cosmic radiation coming from outer space.14 While working on the operation of the point counter of the instrument (a task assigned to him while he was completing his dissertation with Geiger, throughout 1927), Müller persistently observed a substantial increase in spontaneous discharges, which he brought to the attention of Geiger, who then started calculating “the expected number of discharges on the assumption that they might be due to the radioactive source” (Trenn 1986, p. 127). Calculation indicated that the order of magnitude obtained (500 per minute) was correct, so the idea that these discharges were of external origin, rather than internally generated, started gaining credence. By further testing, and by repeating tests in a number of similar counters, Geiger and Müller found that their counts matched recent results on cosmic rays penetrating at sea level (Trenn 1986, p. 133).15 With confidence they presented a paper at the 90th meeting of German Scientists and Physicians in Hamburg, where they claimed that what they referred to as “residual radiation” was largely due to cosmic rays. A full report of their findings appeared in (Geiger and Müller 1928).
Here is how Trenn comments on this story:
The spontaneous discharge which had to some extent inhibited the development of such electronic counters over the years proved to be at least partly due to the hard cosmic rays. (…) Only a device with [the Geiger-Müller instrument’s] sensitivity could have detected them. But the premature detection of these hard cosmic rays before such electronic sensitivity had become recognized had inhibited the development of the very device required for their proper identification. (Trenn 1986, p.135)
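Geiger's style of check, comparing the observed counting rate with the rate expected from the known internal source, can be sketched as a simple significance test on Poisson-distributed counts. The specific numbers below are hypothetical; the article reports only the order of magnitude of 500 discharges per minute:

```python
import math

def excess_significance(observed, expected):
    """How many standard deviations the observed count lies above the
    count expected from the known (internal) source alone, treating
    counts as Poisson-distributed (std. dev. = sqrt(mean))."""
    return (observed - expected) / math.sqrt(expected)

# Hypothetical figures: if the internal radioactive source accounts for
# 500 counts per minute but 560 are persistently observed, the excess
# lies well outside typical Poisson fluctuation, pointing (once
# instrumental malfunction is excluded) to an external origin.
z = excess_significance(observed=560, expected=500)
print(round(z, 2))
```

A persistent excess of this size, reproduced across several similar counters as Geiger and Müller did, is exactly the kind of evidence that turned the "stray discharges" from a nuisance into a signature of cosmic radiation.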
Stories of this kind raise the wider issue of how, and by means of what tools (theoretical and material), it can be decided whether what is detected during an experiment is a real phenomenon, or whether what is produced by an experiment is a valid result. To borrow Allan Franklin’s distinction, how can we separate a fact, obtained by a proper “use of an apparatus to measure or observe a quantity,” from an artefact “created by the experimental apparatus”? (Franklin 1990, p. 3). Such a distinction becomes relevant when we want to assess whether noise is an irreducible, real phenomenon detected by an experimental apparatus, rather than something created artificially by it as a by-product or effect of some (mal)functioning (plainly an interference to be identified as such and discarded). It is also relevant when asking whether, in cases where noise is artificially created by the apparatus (as with vacuum tubes and the matched filter), its presence might still be profitably exploited. Some types of noise might be both artificially created and intentionally added to a system in order to single out some other aspect of that system. They are welcome disturbances, so to speak. In the case of stochastic resonance, adding white noise (a noise with a wide spectrum of frequencies) to a system will amplify the signal rather than blur it. By mutually resonating, frequencies will make themselves detectable in such a way that the signal-to-noise ratio will increase in favour of the signal, while the noise will eventually be filtered out.16
It is finally interesting to note that noise also proves instrumental to studying the limits of precision of measurement processes in physics instruments (Niss 2016). The limit to the sensitivity of these instruments, which in the 1920s and 1930s was attributed to the unavoidable presence of Brownian motion within them, later came to be interpreted as due to the presence of intrinsic noise (for instance, the physicist N. F. Asbury referred to the “inherent ‘noise level’ of a galvanometer,” as quoted in Niss 2016, p. 13). This is what made van der Ziel (1954) point out that noise (or random, “spontaneous” fluctuations, as he prefers to call it) has a specifically practical function, namely that it plays an essential part in assessing the precision of measuring instruments. The fact that semiconductors produce inherent noise, that the bandwidth of light in lasers is due to fluctuations, that a vacuum is never empty but always fluctuating (and therefore a random process): all this further stresses the practical importance of the study of noise in the context of scientific instrumentation and experiment.
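The practical point can be illustrated numerically: the intrinsic noise level of an instrument sets the precision of a single reading, and averaging N repeated readings improves precision only by a factor of sqrt(N). A minimal sketch with invented numbers (a unit noise level, 100 readings per average):

```python
import math
import random

def std_of_mean(sigma, n_readings, trials=2000, seed=7):
    """Empirical standard deviation of the average of n_readings noisy
    readings: a sketch of how an instrument's inherent noise level sets
    a precision floor that averaging beats down only as 1/sqrt(N)."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(0.0, sigma) for _ in range(n_readings)) / n_readings
             for _ in range(trials)]
    m = sum(means) / trials
    return math.sqrt(sum((x - m) ** 2 for x in means) / trials)

single = std_of_mean(1.0, 1)      # raw noise level of one reading
averaged = std_of_mean(1.0, 100)  # noise after averaging 100 readings
print(single / averaged)          # close to sqrt(100) = 10
```

Knowing the noise level is thus not merely a nuisance figure: it is precisely what allows an experimenter to state how precise a given measurement, or average of measurements, can be.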
So, against the backdrop of the illustrations, discoveries and applications just discussed, we have reason to believe that noise—whether inherent to natural, social or technological systems or artificially/intentionally injected or created—is (or can become) an endemic property of systems and instruments, and part of the way they function. In biology, as well as in physics, economics or electrical engineering, it proves to have a constructive and/or heuristic role in bringing forward essential underlying features or aspects of the systems under study, or in provoking reactions in those systems that might be functional to their existence and/or survival. Sometimes, as the example of the cosmic background radiation shows, it can even be the form in which phenomena reveal themselves that lie well beyond our capacity to observe, and sometimes even to imagine.
2.2. Despite Noise
Inherent, essential noise can also appear to be an objective impediment, a residue to be discarded. Yet, despite this connotation, it might be instrumental to discovery. The story of Charles Wilson’s cloud chamber and its applications can be used to bring out some interesting points in this direction.
This story is recounted by, among others, Peter Galison.17 Asked “what is a cloud chamber?,” any twentieth-century physicist would describe it as the first particle detector, the instrument that for the first time made it possible to see the interactions of elementary particles. The chamber creates a phenomenon of condensation in the form of traces of fog, and these traces form precisely along the trails of ions left by the particles responsible for the condensation. In this way the chamber revealed positrons and mesons to the physicist Carl Anderson, and was used by the Nobel laureates John Cockcroft and Ernest Walton to demonstrate the existence of nuclear transmutation. The machine also served as a prototype for other particle detectors, such as nuclear emulsion stacks and the bubble chamber.
Curiously enough, though, its inventor C. T. R. Wilson was not a particle physicist. As his early studies of 1895—to which he returned towards the end of his career—show, Wilson was attracted by meteorological phenomena. He invented the cloud chamber not as an instrument of discovery in the field of transcendental physics (as analytic research into the basic structure of matter was sometimes called in those days), but to study more mundane, yet no less complex, phenomena such as clouds, fog and rain. Wilson was interested in understanding the principle of condensation that lies behind these phenomena, and the best way to do so was to try to recreate artificially, in a lab-type situation, the effects of condensation. It was this interest that eventually led him to construct his instrument, and that determined its initial use.18
Soon enough it became clear to Wilson that condensation alone was not sufficient to produce real rain, so his interest shifted to understanding the process supposedly responsible for the formation of raindrops. Wilson's research efforts were then re-directed to trying to see how drops took shape. It was during these attempts that Wilson came across some extraordinary high-velocity pictures of drops falling on liquid surfaces, taken by Worthington and Cole (Worthington 1908). It was this photographic technique that provided him with an essential clue as to how to unravel the basic processes involved in condensation. By taking pictures of the artificial clouds produced by his machine, Wilson discovered that they hid a whole range of traces: these, he soon realized, were due to the passage of ionized particles. By March 1911 Wilson was able to single out individual rays, chiefly radiating but also running in all sorts of directions. His first paper on the topic appeared shortly after, exhibiting his photographs of alpha rays and the clouds where they appeared (Wilson 1911, p. 287). As Galison sums up: “… within his special science of condensation physics Wilson oscillated between thunderstorms and atoms. (…) his technical success with the photography of rain formation led him from meteorology into ion physics” (Galison 1997, p. 109). It was then thanks to Wilson's physics friends working at the famous Cavendish Laboratory in Cambridge, excited by the newly revealed applications, that the machine came to be used as a full-time particle detector. The cloud chamber had moved from condensation physics to ionic physics, to the study of subatomic matter, and then eventually to nuclear physics. Asking how drops of water materialize gave way to questions concerning the energy of the gamma rays involved in the production of electrons, the distribution of alpha particles, or the mass and interaction of the particles detected by the machine.19
Let us focus on the photographs of the particle tracks that so excited Wilson, and colleagues after him, and that were to become crucial in the discovery of particles. These photographs, as Schaffer and Lowe (2000) argue, were full of physical noise—using the term somewhat metaphorically (but no less justifiably for that) to point to all the stuff that interfered with a clear vision of the tracks. Patrick Blackett—a famous British cloud-chamber physicist, who won the Nobel Prize for his discoveries in the fields of nuclear physics and cosmic radiation—drew attention to the skills required to develop visual techniques:
An important step in any investigation using [the visual techniques] is the interpretation of a photograph, often of a complex photograph, and this involves the ability to recognize quickly many different types of sub-atomic events. To acquire skill in interpretation, a preliminary study must be made of many examples of photographs of the different kinds of known events. Only when all known types of events can be recognized will the hitherto unknown be detected. (Blackett 1952, p. vii)
Since the 1930s it had been known that certain types of emulsions used in photography could also be used to track down various types of micro-particles. Marietta Blau, a marginalized and yet influential figure in the rising field of emulsion physics, had pioneered a method for tracking cosmic rays using nuclear emulsion. Cecil Powell, a student of Wilson's, was pushing the cloud chamber technique further and further in the direction of tracking particles' trajectories while, at the same time, developing newer emulsion techniques.
It was soon clear that the production of suitable emulsions was beyond the capacity of individual scientists or of the resources available in university labs. So, after World War II Powell, as a member of the 'Photographic Emulsion Panel',20 started putting pressure on Ilford to get them to develop new emulsions. The turning point occurred in 1948, when Kodak and Ilford together announced that they had manufactured an emulsion so sensitive that it could register the tracks of any charged particle. What ultimately, and effectively, became available to the scientific community was a very powerful nuclear physics detecting instrument, able to compete with the particle accelerators already in use in America. Needless to say, the European physicists—though well aware of all the constraints and legal difficulties of collaborating with industrial production or commercial chemistry (clauses of secrecy attached to the sale of the emulsions, patenting, etc.)—jumped at this opportunity and signed a contract with Ilford and Kodak (Galison 1997, pp. 189–92). All the same, a period of exciting discoveries followed this controversial and yet decisive collaboration. Pions, kaons, the anti-lambda-zero, the sigma plus, a myriad of new decay patterns, all came to the fore of science and drew the borderlines of what was to become the new field of elementary particle physics. Cecil Powell himself—who eventually won the Nobel Prize in 1950 for his photographic identification of the 'meson pi'—poignantly claims in his autobiography that all of a sudden it was like breaking into a walled orchard full of all types of exotic fruits.
However, excitement also bore a great deal of anxiety. Indeed, one of the interesting aspects of this path to discovery, for the story we are trying to extract here, is that the detection of this extraordinary micro-world of particles did not happen against a neutral, or stable, background. The emulsion changed from one photographic pellicle to the next, producing an effect of instability and interference on the photographic images. During development and drying, emulsion and paper backings would bend and distort tracks, or would make one track difficult to separate from another. So the photographic apparatus, which was a necessary technical means and condition for the detection of the particles, was at the same time a source of disturbance, of distortion: a background of tangible noise, in the sense of Schaffer and Lowe (2000). The photographic method was, as a consequence, perceived as itself fragile. As Galison perspicuously points out, this anxiety, induced partly by the unreliability of the method and partly by the instability of the apparatus, was nonetheless productive:
For with each move to stabilize the method, nuclear emulsions became more capable of sustaining claims for the existence and properties of new particles. At each moment, the film appeared to be unstable: at one moment the photographic plate appeared to be selective in what particles it would reveal; at another it was obscured with fog, distorted by development, or uneven in drying. Reliability was threatened by the chemical and physical inhomogeneity of a plate or a batch of plates, and by other difficulties in scanning or interpreting the photomicrographs. Without cease, the struggle to stabilize the emulsion method was a response to the anxiety of instability. Anxiety and the material, theoretical and social responses to it were eventually constitutive of the method itself. (Galison 1997, pp. 237–38)
3. From Ontology back to Semantics
To put it in Bart Kosko's words, noise has “a head and a heart” (Kosko 2006, p. 3). The head is the scientific part: noise is a phenomenon (in nature, in society). The heart is the value-laden part: noise is (often) a phenomenon we don't like.22 This points to two concluding thoughts, in tune with what I have been arguing for in this essay.
First, by being a phenomenon in its own right, noise is not just the effect of a contextual displacement (what is noise for me might not be for you), nor always the equivalent of a fuzzy state, devoid of function or, arguably, of any kind of significance. Second, that noise is a phenomenon we do not like is not a matter of fact but a matter of judgement. What brings us to form such a judgement, and most importantly the circumstances in which this judgement might not apply, are aspects that deserve attention if we want to understand the complex nature and role of noise.
To pursue such an understanding I suggested in this essay addressing two preliminary questions. First, does noise have, besides a contextual/relative meaning, also an inherent/essential one? Second, could noise (either contextual or essential) be good rather than (or besides being) bad? The evidence presented in this essay suggests that a positive answer to both questions is at least reasonable to pursue.
In addressing the first question, I suggested looking for seemingly clear-cut cases of noise phenomena. This led me to search among instances ranging from the field of sound (e.g., noise from cosmic background radiation, or in a matched filter) to other fields where, throughout the twentieth century, more and more cases of noise were detected as analogues of acoustic phenomena. In science we have indeed witnessed a widening of the range of phenomena whose scientific descriptions and perceptions endorse some of the connotations we would normally attribute to noise (e.g., physical noise as random fluctuation; stochastic resonance; stray discharges or pulses in a particle counter). Because of its historically acquired malleability, noise also proves useful for describing instances of visual blur and material instability as detected, for example, on early photographic images of particle tracks (Wilson's high-velocity pictures of alpha rays and the clouds where they appeared; emulsion techniques of particle detection). In all these examples the underlying challenge (both practical and epistemological) was to establish noise in its own right, and certainly not as an expendable side effect of the experimental or technological set-ups where it makes its appearance, in a range of forms and for different reasons (fact vs. artefact).
In addressing the second question, I moved beyond common-sense usage, asking whether connotations of noise other than, or in addition to, the negative ones could be singled out. To pursue this question, a sense in which noise can be good had to be identified, and in such a way that its role and function would become explicit. To this end I distinguished two categories of description: the 'thanks to' category and the 'despite' category. According to the former, it can be argued that there are systems, cells and organs that function the way they do because of noise. They behave, and modify their behavior, as a consequence of it—and the change so instigated is often for the better. Noise is a pervasive characteristic of cortical activity, and a feature of cellular development and reproduction. It is a tool for increasing the volume and depth of assets in financial markets. Sometimes it is altogether the form taken by the appearance of certain phenomena, such as background cosmic radiation. According to the latter category, it can be claimed that even noise in the sense of an objective impediment, or an accidental residue, can sometimes acquire crucial heuristic value. Patterns of recognition (e.g., the image of a particle track) never happen against a neutral background, and it is indeed this background, with all its instability, fragility and chaotic prompts, that paves the way to discovery.
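The 'thanks to' category can be made vivid by the textbook case of stochastic resonance mentioned above: a periodic signal too weak to trigger a threshold detector on its own becomes detectable once a moderate amount of random noise is added, because the noise occasionally lifts the signal's peaks over the threshold. The following is a minimal toy sketch in Python; all parameter values (amplitudes, threshold, noise level) are illustrative assumptions, not drawn from the sources discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# A subthreshold periodic signal: its amplitude (0.8) never reaches
# the detector's firing threshold (1.0).
t = np.linspace(0.0, 10.0, 10_000)
signal = 0.8 * np.sin(2 * np.pi * t)
threshold = 1.0

def upward_crossings(x, thresh):
    """Count the upward crossings of the threshold by the waveform x."""
    above = x > thresh
    return int(np.sum(~above[:-1] & above[1:]))

# Without noise the detector never fires: the signal is invisible to it.
print(upward_crossings(signal, threshold))          # 0

# With moderate Gaussian noise, crossings occur, and they cluster
# around the signal's peaks: the noise renders the weak periodic
# signal detectable (the 'thanks to noise' effect).
noisy = signal + rng.normal(0.0, 0.3, size=t.size)
print(upward_crossings(noisy, threshold) > 0)       # True
```

Too little noise leaves the signal undetected, while too much drowns it; the effect peaks at an intermediate noise level, which is why it is described as a resonance.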
By no means do I conclude here that noise is always good. Noise can indeed be a form of pollution (more or less metaphorically, or contextually, speaking). But this is only part of a far more complex story—a story that, with the help of scientific and technological imagination, calls for its balance to be redressed.
The bulk of this paper was presented to different audiences and benefitted from interdisciplinary insight. Special thanks go to Simon Schaffer, Roman Frigg, Nancy Cartwright, and Emiliano Boccardi. I also thank the two anonymous referees for drawing my attention to some controversial aspects of my exposition of the idea of noise. The discussion presented in a recent special issue on noise in this journal (C. P. Yeang and J. L. Bromberg, eds. 2016, Perspectives on Science 24, 1) also proved useful in finalizing this version of the paper.
Noise is a legally recognized form of pollution or of public nuisance, and as such is addressed by various parts of common law.
Similarly in science: a strong magnetic field is, for example, 'noise' relative to an experiment with two bar magnets suspended by fine threads close to each other at the same level, since it prevents them from arranging themselves in a straight line.
In the English language of five centuries ago 'noise' meant 'news', as in 'I heard a noise', which is stronger than the contemporary expression 'I heard a rumor'. 'Rumor' stands for a not-yet-verified type of information, which in the end might turn out to be untrue, and therefore not a piece of information at all. In the late Middle Ages, 'noise' was instead associated with a specifically informative content: this is how the word was used in common discourse.
This does not imply that any scientific discovery is just a construction (in the sense of being fictional).
Ideas and examples of noise related to digital technologies are taken from Schaffer and Lowe 2000.
Intrinsic noise refers to random differences within a cell; extrinsic noise refers to cell-to-cell differences. Both types are inherent to the functioning of cells (Johnston 2012, p. 19).
There are conceptual and practical difficulties related to bet-hedging strategies, and controversial evidence about their evolutionary underpinnings (Simons 2011).
Synthetic biology is that branch of biology that purposefully designs biological systems. It is highly interdisciplinary, combining biotechnology, molecular biology, biophysics and evolutionary biology. In its more applied versions it is robustly driven by genetic engineering.
I here follow Balbi 2008, pp. 44–46.
Penzias and Wilson received the Nobel Prize for their discovery in 1978.
Hans Geiger was a German nuclear physicist (1882–1945), deeply influenced by Ernest Rutherford, with whom he worked on radioactive emissions at the University of Manchester. In 1908 Geiger designed an instrument able to detect and count alpha particles (the Geiger counter). Walther Müller was Geiger's PhD student (1925) at the University of Kiel, North Germany. He improved on Geiger's instrument, turning it into a more powerful and yet still simple apparatus for detecting radioactive radiation (what came to be known as the Geiger-Müller tube).
For example Millikan and Cameron in 1926 obtained convincing results constituting “new and quite unambiguous evidence for the existence of very hard ethereal rays of cosmic origin.” (Millikan 1926, p. 851, quoted in Trenn 1986, pp. 123–24). Primary cosmic rays are mostly protons that by bombarding the upper atmosphere produce electrons and gamma rays, neutrons, mesons, etc. (secondary cosmic rays).
Trenn here refers in particular to E. Steinke’s experiments as reported in Steinke 1928.
The area of philosophy of science that focuses on experiment is replete with examples of theoretical and material struggles to separate out the real from the illusionary (or in our terminology, signal from noise), and with suggestions of strategies for identifying what constitute valid experimental results (Franklin 1990, 1999). It is a field that also crucially cuts across the divide between realist views of science and more constructivist approaches (Hacking 1983; Gooding et al. 1989). Dealing with these issues would take us far beyond the borderlines and focus of this article.
What follows is a simplified version of the much more complex story of Wilson’s cloud chamber recounted in chapter 2 of Galison 1997. Its significance and relevance in terms of a history of noise is not Galison’s conjecture (and certainly not Wilson’s own, who never used the term ‘noise’ in describing, as we will see, the struggle to identify particle tracks on high-velocity photographs of water drops), but it takes insightful cues once more from Schaffer and Lowe 2000.
There are indeed differences between Wilson’s chamber and an analogous instrument built by a contemporary of his, John Aitken (Galison 1997, pp. 91ff).
This does not mean that Wilson himself turned into a particle physicist. For him, the interest in ions was always subservient to his main scientific concern with the natural phenomenon of condensation. Yet Wilson's interest, not only in imitating nature (which was at the heart of the tradition he belonged to) but also in dissecting nature in view of explaining its functioning, is what makes his machine such a controversial and fascinating tool of discovery in the history of science, and places it at a crossroads between different traditions and styles of practicing science (Galison 1997, pp. 136–7).
A panel established by the Cabinet Advisory Committee on Atomic Energy to encourage the production of more sensitive films for the detection of particles.
Famously Kant argued that the limits of knowledge, the limits of what we can know, are what allow us to establish what we can indeed know for certain. They are the positive assumptions on which we build our intuitions, theories, etc. Noise could then be viewed as a limit in this enabling sense.
I here use the more general (for the purpose of the present description) term ‘phenomenon’ rather than ‘signal’, as in Kosko 2006.