Abstract

Sonification of time series data in natural science has gained increasing attention as an observational and educational tool. Sound is a direct representation for oscillatory data, but for most phenomena, less direct representational methods are necessary. When sonification is coupled with animated visual representations of the same data, the visual and auditory systems can work together to identify complex patterns quickly.

We developed a multivariate data sonification and visualization approach to explore and convey patterns in a complex dynamic system, Lone Star Geyser in Yellowstone National Park. This geyser has erupted regularly for at least 100 years, with remarkable consistency in the interval between eruptions (three hours) but with significant variations in smaller scale patterns between each eruptive cycle. From a scientific standpoint, the ability to hear structures evolving over time in multiparameter data permits the rapid identification of relationships that might otherwise be overlooked or require significant processing to find. The human auditory system is adept at physical interpretation of call-and-response or causality in polyphonic sounds. Methods developed here for oscillatory and nonstationary data have great potential as scientific observational and educational tools, for data-driven composition with scientific and artistic intent, and towards the development of machine learning tools for pattern identification in complex data.

Constructing scientific knowledge involves a process of first gathering measurements, dematerializing these measurements into numbers, then rematerializing the numbers in a representational form that can be interpreted. For the physical sciences, typical representations are visual images (e.g., graphs) and mathematical objects (predictive models), but can also include temporal representations, in the form of sounds and animations. The rematerialization process necessarily introduces a subjective human element into scientific knowledge; for example, to plot data on a graph, one must choose symbol color and size. Aesthetic choices are typically deemed acceptable as long as key elements of the data are clear and reproducible. For visual representations of data, such choices have been carefully considered and studied (e.g., Tufte 1983). On the other hand, aural representations of data, as a newer domain, have not been as thoroughly explored as visual ones, but it is increasingly acknowledged that they have great potential (Hermann, Hunt, and Neuhoff 2011). Recent work on representation of global seismic wave fields (Holtzman et al. 2014) demonstrated that the combination of sonic and visual representation of data brings far more understanding of patterns than either alone.

In this article, we develop a sonification and visualization approach that aims to illuminate patterns in multivariate time series data. We incorporate multiple streams of data that reflect tightly coupled aspects of a complex dynamic process—eruptions of a hydrothermal geyser. The sonification and animation methods presented here are broadly adaptable to other scientific, educational, and artistic aims. By expressing the data as sound combined with animations, new avenues for identifying common patterns and anomalies within multiple temporal data streams become possible. Such representation techniques also offer unique opportunities for teaching tools, as the quantitative aspects of complex data are easy to understand without technical training. The innate search for causality among sights and sounds in the human auditory and visual systems becomes activated in ways that can extend the scientific search for understanding.

The aims and methods outlined in this work provide a natural link to concepts of ubiquitous music (“ubimus”), as proposed by Keller, Lazzarini, and Pimenta (2014) and addressed in this special issue of Computer Music Journal. Ubiquitous music conceptually seeks to open up any kind of natural data (measurements of spatial or temporal patterns from human or nonhuman phenomena) as a source of composition. Sonification of environmental data is related, having the intent to use generative sound to convey, illuminate, and reveal patterns encoded in data. A goal of ubimus is to develop opportunities for musical creativity for musicians and untrained participants alike, outside a traditional studio setting (Pimenta et al. 2014). There are parallel aims in the sonification and visualization of scientific data, as the tools of scientific data acquisition and public access become cheaper, more widely available, and easier to use (Beyreuther et al. 2010; Given et al. 2014). As long as the methods are transparent, relatively simple data transformations and display methods make complex patterns in data perceptible by different audiences (e.g., trained scientists and the general public).

Earth scientists studying natural phenomena are frequently challenged by subjects and processes that occur on spatial and time scales outside the human experience. Adapting data into representational forms within our range of perception is our rematerialization task. Volcanic eruptions are examples of this situation: They occur relatively rarely and highly episodically in time, preceded by magma motions that are hidden beneath the surface and difficult to measure, hence poorly understood (Poland and Anderson 2020). Indirect measurements are generally made during periods of volcanic unrest. Deformation of the solid earth occurs in response to magma movement, temperature at the surface constrains heat transported from magma reservoirs at depth, and gas emissions reflect volatile elements released during magma ascent. When eruptions do occur, other data are available such as physical samples of erupted material, videos, and atmospheric measurements. Simultaneous collection of multiple data streams is a primary goal of volcano monitoring, but this is challenging because volcanoes are often remote, difficult to instrument, and their activity is strongly unsteady in time. Furthermore, there is no unifying theory with which to connect observations to processes, so volcano science—and hazard assessment—relies heavily on empirical correlations.

Scale models (made of clay, tinted liquids, etc.), known as “analog systems” in the realm of earth science, are thus a valuable resource for understanding volcanic processes. Geysers represent such a natural analog system. They erupt more frequently, more regularly, and with less hazard to humans than volcanoes, making them test beds for sensor-network design and model development that can potentially inform volcanic hazard assessment (Hurwitz and Manga 2017). Here, we use data collected at Lone Star Geyser in Yellowstone National Park to develop a technique combining sonification and visualization to represent the multiparameter data typical of volcanoes and many other environmental settings.

In what follows, we first provide contextual information on geysers and an overview of time series data collected during two separate experiments four years apart at Lone Star Geyser. We then describe the sonification and visualization procedures applied to each data set and between experiments. We end with a discussion of what we can learn from this technique for data representation and assess the effectiveness of our approach.

Figure 1

Regional map, showing Yellowstone National Park, with Lone Star Geyser marked by a star (a). Shaded relief map of topography around the Lone Star Geyser cone, showing locations of instruments from the 2010 and 2014 experiments (b).

Experiments at Lone Star Geyser

Geysers are relatively rare features that occur in geologically active areas exhibiting a combination of thermal and hydrologic conditions that drive localized, episodic eruption of hot water and steam into the atmosphere (Hurwitz and Manga 2017). Although some geysers are human-made (e.g., Rudolph et al. 2012), natural geyser occurrence is restricted to a few areas worldwide, with over half occurring in the Upper Geyser Basin of Yellowstone National Park.

Lone Star Geyser in Yellowstone National Park exhibits a three-hour cycle of eruptions that has maintained a quasi-regular period since at least the year 1920. As illustrated in Figure 1, two similar experiments in 2010 and 2014 characterized geyser activity using a variety of instruments that are also common to magmatic volcano monitoring: Visible video cameras and infrared sensors captured the water and steam venting from the geyser cone (a “sinter” mound built by precipitation of silica dissolved in erupting hydrothermal waters, Figures 1b, c), and stream gauges characterized the total liquid output, while platform tiltmeters (five instruments deployed in 2010) and broadband seismometers (one with sampling rate of 100 Hz in 2010, five with sampling rates of 250 Hz in 2014) measured ground deformations associated with fluid and steam flow between the subsurface storage zone and the surface. The absence of platform tiltmeters in 2014 led to some important differences in the data that inform our work, as will be described in the following section. A ground-based Light Detection and Ranging instrument was deployed in 2010 to precisely survey the 3-D geometry of the geyser vent area (Figures 2 and 3).

Based on 32 consecutive eruptions over a four-day period in 2010, Karlstrom et al. (2013) proposed a four-stage cycle that defines the repeating nature of geyser activity over a three-hour period. These stages are: (1) eruption, characterized by vigorous jetting of water and steam that decays in strength gradually over a period of approximately 20 minutes; (2) posteruption relaxation, characterized by steam venting and high-frequency seismic signals interpreted as boiling water at the base of an emptied conduit; (3) quiescent conduit refilling, characterized by no surface discharge, slow ground deformation, and no significant seismic signals; and (4) preplay, which consists of sporadic and unpredictable spurts of water from the geyser vent that precede a new eruption. Vandemeulebrouck et al. (2014) explored the subsurface signatures of this eruption cycle, suggesting that polarization of seismic displacements (particle motions) tracked water movement between the geyser cone and a subsurface water reservoir that is offset from the vent.

Figure 2

Schematic of the Lone Star Geyser experiment, depicting the physical processes being recorded by the infrared sensor, tiltmeters, and seismometers. Note that there is no vertical or horizontal scale.

We are interested in comparing multiple eruption cycles captured in the two different experiments with a combined audiovisual approach to data analysis. We therefore focus on a subset of the data collected in each experiment (see Figure 2), describing each data set and any preprocessing steps that were taken prior to sonification and animation.

Deformation of the ground surface in response to water and steam transport in the subsurface at long (1–100 sec) periods was captured as tilt (angular surface displacements), recorded by different types of instruments in 2010 and 2014. Higher-frequency (1–100 Hz) ground deformation arising from fluid motions coupled to the solid earth was captured by broadband seismometers in both years. Eruption of water and steam was captured by a spot infrared sensor pointed above the geyser vent. The voltage output from this infrared sensor is a good proxy for water and steam discharge (Karlstrom et al. 2013).

Data Representation Methods

A spectrum of “direct” to “indirect” sonification methods is applicable to the data sets described here. Direct sonification (i.e., audification, cf. Dombois and Eckel 2011) is the simplest: Oscillatory data, like seismic signals, can be sped up or slowed down to the desired audible range. One must choose how to filter the data, the speed factor (or new sampling rate), whether to apply compression, and so forth, but the sonification procedure is unambiguous. In contrast, indirect sonification offers more freedom in mapping data to parameters of sound (e.g., pitch, timbre, or volume); the sonification procedure in this case depends on the application. In direct or indirect sonification, aesthetic choices are inevitable, but they are made carefully such that the structure of the data is respected.
Figure 3

Three orthogonal components of ground deformation, recorded by a broadband seismometer, for one eruption cycle (a). The inset shows how the direction towards the seismic source θ(t), which varies over time, is inferred from the relative amplitudes of the two horizontal components of the seismic data, n and e. Data preprocessing and sonification procedure (b). The function θ(t) determines the weighting of the seismic sound between left and right channels. The lower panel shows the correlation between seismic signals in left and right channels and the stages of the eruption cycle, with the preplay stage characterized by short bursts of quickly moving noise. Spectrogram of sonified data (c).

In the following, we proceed through the three data sets, ordered from direct to progressively indirect sonification requirements, namely, from seismic, to tilt, to infrared data. Within each section, we first describe the sonification method and then the animation designs. We developed a simplified representational scheme for the animations, which requires an underlying interpretation of the data that is not needed for the sonification. In particular, the indirect measurements of ground deformation (seismic and tilt measurements) require assumptions about physical processes in order to derive visual representations.

Seismic Data

Broadband seismic data were collected by a seismometer approximately 20 m from Lone Star Geyser. Following Vandemeulebrouck et al. (2014), we filter the seismic data to the range 5–22 Hz, a frequency band commonly attributed to bubble cavitation and boiling water, known as hydrothermal “tremor.” The root mean square (RMS) amplitude of the tremor varies systematically during the different eruption stages. It reaches its highest values during the preplay events before an eruption, maintaining a moderate level in the posteruption relaxation phase, and reaching its lowest values in the quiescent phase, as shown in Movie 1 from the supplementary materials to this article. (All movies accompanying this article are available at https://dx.doi.org/10.1162/comj_a_00551.)
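
This preprocessing step is straightforward with ObsPy (Beyreuther et al. 2010). The corner frequencies below follow the text, whereas the file name and the RMS window length are assumed, illustrative choices:

```python
import numpy as np
from obspy import read

st = read("lonestar_2010.mseed")                   # hypothetical seismic record
st.filter("bandpass", freqmin=5.0, freqmax=22.0)   # hydrothermal tremor band
tr = st[0]
win = int(10.0 * tr.stats.sampling_rate)           # 10-sec RMS window (assumed)
rms = np.sqrt(np.convolve(tr.data.astype(float)**2,
                          np.ones(win) / win, mode="same"))
```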

Vandemeulebrouck and colleagues found that the location of the tremor source also varied systematically over the course of an eruption cycle. We assume that the location of the tremor source is a proxy for subsurface water motions. To calculate this location we carry out a polarization analysis of particle motion, which essentially uses the ratio of amplitudes of the two horizontal channels to determine the direction of the tremor source, following the method of Vidale (1986) with a 12-sec window for smoothing, as shown in Figure 3. For most of the eruption, the tremor source is located about 20 m from the geyser vent. During preplay and eruption, however, the source migrates towards the vent in a series of pulses, correlated with spikes in the RMS seismic signal amplitude (Movie 1).
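
For concreteness, the amplitude-ratio step can be sketched in Python as follows. This is a simplified stand-in for the full complex polarization analysis of Vidale (1986); the trace array names, the sampling rate argument, and the arctangent convention are illustrative assumptions:

```python
import numpy as np

def tremor_direction(n, e, fs, window_s=12.0):
    """Estimate source direction theta(t) from north (n) and east (e) traces,
    smoothing squared amplitudes over a 12-sec window as described above."""
    win = max(1, int(window_s * fs))
    kernel = np.ones(win) / win
    n_amp = np.sqrt(np.convolve(n**2, kernel, mode="same"))
    e_amp = np.sqrt(np.convolve(e**2, kernel, mode="same"))
    # Direction from the ratio of horizontal amplitudes (note the inherent
    # 180-degree ambiguity of amplitude-only estimates).
    return np.arctan2(e_amp, n_amp)
```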

Sonification

We use direct sonification for the seismic data. Because it is inherently oscillatory, we simply speed up the filtered seismic data by factors of 199 (for Movie 2) and 597 (for Movies 3 and 4), which places it in our auditory range (Kilb et al. 2012; Peng et al. 2012; Holtzman et al. 2014), occupying the upper middle of the hearing band (about 3 kHz) to separate it from other sonified data streams. To hear the tremor migrate spatially, we apply a linear weighting of the left and right stereo channel amplitudes according to the polarization analysis of particle motion (Figure 3). In Movies 2, 3, and 4, the sweeping motion from right to left is directly derived from the data, as illustrated in Figure 3.
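
In outline, this audification step amounts to reinterpreting the filtered trace at a higher sampling rate and weighting the stereo channels by the direction function. A minimal sketch follows, assuming theta(t) has the same length as the trace; the normalization and output file name are placeholder choices, not the production pipeline used for the movies:

```python
import numpy as np
from scipy.io import wavfile

def audify_stereo(data, fs_in, speedup, theta, path="tremor.wav"):
    """Write an audified stereo file: playback at fs_in*speedup speeds the
    signal into the audible range; pan weights derive from theta(t)."""
    pan = (theta - theta.min()) / (theta.max() - theta.min())  # map to [0, 1]
    x = data / np.max(np.abs(data))                            # normalize
    stereo = np.stack([(1.0 - pan) * x, pan * x], axis=1)      # linear L/R weighting
    wavfile.write(path, int(fs_in * speedup), stereo.astype(np.float32))

# e.g., 100-Hz data sped up by a factor of 597 plays back at 59,700 Hz
```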

Animation

The exact mechanism behind the hydrothermal tremor signal is unknown. This type of signal has been seen at other geysers (e.g., Wu et al. 2017), however, and has often been attributed to the collapse of vapor bubbles, giving rise to impulsive pressure waves. We use this interpretation as a starting point for the visualization of the tremor data, and create a migrating cloud of clustered dots (Movie 1). This cloud grows in size and opacity with increased RMS amplitude and migrates according to the particle polarization analysis. The cloud is located away from the vent during the entire eruption sequence, but becomes more laterally extensive during the preplay events and eruptions, and in some cases reaches the vent. Aesthetic choices were made in the extent of the scatter and random motion in the dot clouds, tuned to reflect an impression of the dynamics and the uncertainty in the location and the cause.

Tilt

In 2010, ground deformation at periods of around 1 to 10 sec was recorded by five platform tiltmeters, instruments that directly measure angular displacements of the ground. There does not appear to be a clear temporal or spatial pattern common to all the tiltmeters. We focus on data from one instrument (T03), which shows the clearest correlation with other data discussed here (Figure 1b).

In 2014 no tiltmeters were deployed, so we process broadband seismic data to retrieve the long-period ground tilt signal (Figure 1b). Conversion of seismic displacements to tilt requires deconvolution of translation and is generally an underdetermined problem (Fournier, Jolly, and Miller 2011). Horizontal seismic accelerations at low frequencies are dominated by tilt, however (Wiens et al. 2005). Therefore we approximate tilt with the horizontal seismic data (corrected for instrument response to acceleration, which amplifies low frequencies), decimating from the original 250-Hz sampling to 10 Hz. We then multiply by -1/g, the negative inverse of the gravitational acceleration, to express the result in radians. This filtering removes sharp tilt steps such as those in the 2010 observations, but variations on periods relevant to eruption phases can still be compared between the two experiments. We rotate the horizontal (x-y) data to radial displacement for analysis and plotting here.
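
The processing chain just described can be sketched with ObsPy (Beyreuther et al. 2010). The file names, metadata source, and two-stage decimation here are placeholders for whatever a given deployment provides:

```python
from obspy import read, read_inventory

G = 9.81  # gravitational acceleration (m/s^2)

st = read("2014_horizontals.mseed")              # hypothetical horizontal channels
inv = read_inventory("stations.xml")             # hypothetical response metadata
st.remove_response(inventory=inv, output="ACC")  # counts -> acceleration (m/s^2)
st.decimate(factor=5)                            # 250 Hz -> 50 Hz (antialias filtered)
st.decimate(factor=5)                            # 50 Hz -> 10 Hz
for tr in st:
    tr.data *= -1.0 / G                          # low-frequency acceleration -> tilt (radians)
```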

Sonification: Chord Sweep

The method introduced here is, to our knowledge, original. The idea of the “chord sweep” is to map time series data smoothly to a discrete series of pitches, as determined by patterns in the data. The process is different from synthesis with frequency modulation, as it is not a smooth sweep through a range of frequencies. Instead, smooth transitions between pitches of our choosing (selected to be distinct from the pitch sets of other sonified variables) are made by cross-fading, building one note's loudness at the expense of the other.

To do this, we build a set of weighting functions $w_i$ that reflect the occurrence of particular values in the data, called $d_j$ here (see Figure 4; the chord sweep for the 2014 data is shown in Figure 5). We define a set of values $w_i^c$ that form the centers of these weighting functions and are discrete values within the range of the data (or the projected or potential range in a real-time situation). These functions will be used to create envelope functions $e_j$ from the data. The subscript $j$ indicates that these functions will have the same dimensionality as the data. The subscript $i$ indexes our weighting functions (the number of degrees of freedom in our mapping). The envelope matrix $E_{ij}$ is composed of each $e_j$ created for each value of $w_i^c$. These weighting functions and the resulting envelope functions can have any form, and the envelope functions can be used to modulate any aspect of the sonification.

For this application, the raw data is the tilt $\tau$, shifted into positive values and then normalized,
$$d_j = \frac{\tau_j - \min(\tau_j)}{\max\bigl(\tau_j - \min(\tau_j)\bigr)},$$
such that $d_j \in [0, 1]$. Here, the weight centers $w_i^c \in [0, 1]$ as well, as $N$ evenly spaced values, such that $dw = 1/N$, or more specifically $(\max(d) - \min(d))/N$. The weighting functions $w_i$ are used to create a set of envelopes $E_{ij} = d_j \, w_i(d_j)$, using the outer product, with $w_i$ being a function of $d_j$.

In the present application, the $w_i$ are triangle functions (see Figure 4b). The envelopes are constructed according to the algorithm
$$E_{ij} = d_j \cdot \begin{cases} 0 & \text{if } d_j < w_i^c - dw \text{ or } d_j > w_i^c + dw, \\ 0 \rightarrow 1 & \text{if } w_i^c - dw < d_j < w_i^c, \\ 1 \rightarrow 0 & \text{if } w_i^c < d_j < w_i^c + dw, \end{cases}$$
where $0 \rightarrow 1$ and $1 \rightarrow 0$ denote linear ramps up to and down from the center value. This structure is actually achieved in the code with linear interpolation of the nonzero values of $w_i$. This process is conservative in the sense that $d_j = \sum_i E_{ij}$. In other words, the overlapping triangles sum to a rectangle of height 1.0. An extra scaling factor $a_i$ can, however, be introduced for each envelope, to balance perceptual variations in loudness curves, for example, making the sum nonconservative.

To generate the sound (Figure 4e), we build an oscillator bank with a set of frequencies $f_i$ of the same dimensionality as $w_i^c$. Each row of the envelope matrix $E_{ij}$ is assigned to an oscillator frequency and modulates the amplitude of that oscillator (using the RTcmix sound synthesis language, cf. Garton and Topper 1997). If $F_{ij}$ is the oscillator bank, the sound matrix $S_{ij}$ is the element-wise multiplication of $F_{ij}$ and $E_{ij}$ (symbolized by $\circ$). The resulting monophonic track $s_j$ is the sum of all the oscillators:
$$s_j = \sum_i S_{ij} = \sum_i \bigl(F_{ij} \circ E_{ij}\bigr).$$
This part of the process is performed in RTcmix.
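
The envelope construction can also be expressed compactly outside RTcmix. The following NumPy sketch follows the equations above, with the triangle half-width set equal to the center spacing so that the overlapping triangles sum to one; the audio rate, duration, and sine oscillators are illustrative stand-ins for the RTcmix oscillator bank:

```python
import numpy as np

def chord_sweep(tilt, freqs, fs=44100, duration=60.0):
    """Map a tilt time series onto cross-faded oscillators (chord sweep)."""
    d = (tilt - tilt.min()) / (tilt.max() - tilt.min())   # d_j in [0, 1]
    centers = np.linspace(0.0, 1.0, len(freqs))           # weight centers w_i^c
    dw = centers[1] - centers[0]                          # triangle half-width
    # Triangle weights w_i(d_j): 1 at the center, falling linearly to 0 at +/- dw.
    W = np.clip(1.0 - np.abs(d[None, :] - centers[:, None]) / dw, 0.0, 1.0)
    E = d[None, :] * W                                    # envelope matrix E_ij
    # Interpolate each envelope to audio time and drive one oscillator per row.
    t = np.arange(int(fs * duration)) / fs
    t_data = np.linspace(0.0, duration, len(d))
    s = np.zeros_like(t)
    for env_row, f in zip(E, freqs):
        s += np.interp(t, t_data, env_row) * np.sin(2 * np.pi * f * t)
    return s / np.max(np.abs(s))                          # monophonic track s_j
```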

The sequence of tones in the oscillator bank can be any desired set. In the case of the tilt data from Lone Star Geyser, we have chosen an octatonic scale, with equal-tempered semitone intervals [2, 1, 2, 1, 2, 1, 2], starting at $f_{i=0} = 77.78$ Hz (D#1), as illustrated in Figure 4d. This octatonic scale was chosen for its symmetry, which does not bias the listener towards a harmonic center. Note that the choice of the number of intervals in the data range is important—it determines how long the sound rests on a particular tone. In Movie 2 (slow, single cycle), the shift between tones is clear; in Movies 3 and 4, with the same number of $w_i^c$ but with time compressed, the perception of discrete tones is reduced. We do not change the number of intervals between movies, to give the listener a sense of the relative rates of the two movies.
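
For reference, the pitch set follows from accumulating these semitone intervals above $f_{i=0}$ (a short illustration of the values stated above):

```python
import numpy as np

intervals = [2, 1, 2, 1, 2, 1, 2]                    # equal-tempered semitones
steps = np.concatenate([[0], np.cumsum(intervals)])  # 0, 2, 3, 5, 6, 8, 9, 11
freqs = 77.78 * 2.0 ** (steps / 12.0)                # eight pitches above 77.78 Hz
```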

Figure 4

Method for tilt sonification. Normalized tilt data from the 2010 cycles (a). For each value of the weight function $w_i$, an envelope is created by finding the values of the data that fall within the triangular function, ranging from 0 to 1, indicated by the horizontal lines connected to the vertical lines (b). As an example, the thicker triangle centered at $w^c = 0.38$ (in b) is connected by horizontal lines to the data (in a), with vertical lines showing the time span of its first intersection with the data. Envelopes generated from the weight functions (c). The envelope with a thicker line corresponds to that generated by $w^c = 0.38$. Each $w_i^c$ value is applied to an oscillator frequency (d). For example, $w^c = 0.38$ is mapped to 130.81 Hz. The corresponding envelope (from c) is applied to the oscillator's amplitude, and the enveloped oscillators are summed to make the sound. The resulting spectrogram (e).

Animation

In eruptive systems, tilt data are often interpreted to reflect ground deformation in response to migrating fluids at depth. For example, an inflation of the ground may indicate increased transport of fluids to an area and, consequently, elevated pressure. We have chosen to visualize this as a point source of pressure in an elastic solid with a flat surface. This gives rise to a pattern of ground deformation that is greatest immediately above the pressure source and decays with radial distance from this source (Movie 1). We assume that the pressure source is colocated with the persistent location of tremor offset from the vent, since both data types are sensitive to the presence of fluids. We note that multiple physical processes, such as progressive loading of the surface by water erupted from the geyser, could also be encoded in the data. We do not attempt to deconvolve these. Because vertical changes in ground elevation would be too small to perceive at the actual scale, we use an arbitrary vertical scaling, as the animation elements are not rigorously scaled.
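
A standard realization of this idea is a Mogi-style point-pressure source, in which surface uplift is greatest directly above a source at depth d and decays with radial distance r. We note this only as one simple way to generate the animated surface; the source strength and geometry below are arbitrary animation-scale choices:

```python
import numpy as np

def surface_uplift(r, depth, strength=1.0):
    """Vertical surface displacement above a point pressure source
    (Mogi-style): greatest at r = 0, decaying with radial distance."""
    return strength * depth / (r**2 + depth**2) ** 1.5

r = np.linspace(-50.0, 50.0, 201)       # radial coordinate (m), illustrative
uplift = surface_uplift(r, depth=10.0)  # arbitrary vertical scaling for animation
```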

Figure 5

Results of the Chord Sweep method for the 2014 cycles. Normalized tilt data, with lines showing the values of the weights $w_i$ (a). Mapping of $w_i^c$ to oscillator bank frequencies (b). Spectrogram showing the resulting synthesized sound (c).

Infrared Sensor Data

The infrared sensor data tracks variation in the eruptive flux of steam and water from the geyser vent. As shown by Vandemeulebrouck et al. (2014), it effectively defines four phases of the eruption cycle that can also be identified in ground displacements and acoustic emissions. We infer eruption stages from the infrared data, then use those stages to drive parameter mapping in sonification and visualization. Thus, of the three data streams considered, the most interpretive preprocessing of the data occurs here, but we strive to develop quantitative and objective processing that respects published scientific interpretations (cf. Karlstrom et al. 2013).

Sonification: Granular Synthesis

Granular synthesis is a procedure for generating sounds from localized wave packets with compact support (wavelets), often called “acoustic quanta” or “grains” in an audio context, with duration of 1–100 msec (Gabor 1946; Roads 1988, 1995). Combinations of many grains per second can both mimic a range of natural sounds (Keller and Truax 1998) and produce complex synthetic sounds based on specified data tables (even in real time, cf. Truax 1988).

We use the Gransynth time-varying granular synthesis instrument in RTcmix. Many of Gransynth's parameters can be controlled with a data table (they are “pfield enabled”), making granular synthesis a powerful tool for conveying structures in data. RTcmix's so-called linear octave notation (also known as octmidi) is used to specify pitches: a logarithmic mapping of frequencies in which 8.00 corresponds to middle C, 9.00 is the C an octave above, and so on. Fractions represent a mapping onto notes of the scale between octaves. For example, the tritone F-sharp is represented as 8.5. We note that RTcmix's “linear octave” (1 octave is from 8.00 to 9.00) differs from its “octave.pitch-class” notation (1 octave is from 8.00 to 8.12, whereby the latter value is equivalent to 9.00, with each 0.01 increment representing a semitone). Note also that there are different approaches for identifying pitches with number spaces (e.g., Tymoczko 2006), and this particular mapping is not as general as some.
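
Converting linear octave values to frequency follows directly from this definition, since the fractional part is a linear fraction of an octave (a small illustrative helper, not part of RTcmix):

```python
def linoct_to_hz(p, middle_c=261.626):
    """RTcmix linear octave to Hz: 8.00 = middle C, +1.00 = +1 octave."""
    return middle_c * 2.0 ** (p - 8.0)

linoct_to_hz(8.0)   # 261.63 Hz, middle C
linoct_to_hz(8.5)   # 369.99 Hz, F-sharp (the tritone above middle C)
linoct_to_hz(9.0)   # 523.25 Hz, the C an octave above
```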

Figure 6

Granular synthesis parameters based on infrared sensor data. Infrared sensor data, tracking water and steam ejected from the geyser (a). Parameter mapping for the granular-synthesis instrument Gransynth (b), showing pitch center (upper panel), sound amplitude, and pitch jitter (bottom panel). The resulting spectrogram (c).

The infrared sensor records higher voltages when hot water (as liquid or steam) is coming out of the geyser cone, and the magnitude and relative steadiness of the time series effectively differentiate four stages of the Lone Star eruption cycle. To emphasize the characteristics of water discharge present in different eruption phases (Karlstrom et al. 2013), we specify distinct pitch centers for the phases of the cycle when either water or steam is exiting the vent, with grains randomly added to or subtracted from this pitch center in time to mimic the variability of discharge (“pitch jitter”). A number of granular synthesis parameters dictated by thresholds contribute to this sonification of geyser eruption cycle phases, shown in Figure 6 and sketched in code after the list below.

1. Pitch center: Based on the average amplitude of a 75-sample (23.2-sec time step) backwards moving average of the infrared time series, we assign a pitch center of 8.4 to the eruption phases in which liquid water generally dominates output (the “preplay” phase and the main eruption phase), a pitch center of 7.5 to the posteruption relaxation phase dominated by steam output, and a pitch center of 7.0 to everything else.

2. Amplitude: We assign amplitude values to each phase based on the amplitude of the infrared signal alone, muting times when activity is at a minimum (determined by a seven-sample running standard deviation). The eruption phase is the loudest, the preplay phase is generally less loud, and posteruption relaxation is less loud still.

3. Pitch jitter: We base the pitch jitter parameter (how far from the center frequency a grain is randomly assigned) on the relative steadiness of the water eruption phases. For preplay, in which jets of water are intermittent, we choose a larger value for pitch jitter than during the main eruption, when water output is more consistent in time.

4. Grain duration: The grain duration (a random number distributed uniformly on an interval) is based on the running standard deviation of the infrared sensor signal, which further emphasizes the preplay stage through a larger spread in grain duration.
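
The following sketch illustrates how such threshold-driven mappings might be computed from the infrared time series. The threshold values, the assumed normalization of the data, and the use of centered rather than strictly backwards windows are placeholder assumptions, not the exact settings used for the movies:

```python
import numpy as np

def gran_params(ir, avg_win=75, std_win=7, water_thresh=0.5, steam_thresh=0.1):
    """Derive Gransynth pitch-center, amplitude, and jitter tables from
    infrared data ir (values assumed scaled to [0, 1])."""
    avg = np.convolve(ir, np.ones(avg_win) / avg_win, mode="same")  # moving average
    mu = np.convolve(ir, np.ones(std_win) / std_win, mode="same")
    var = np.convolve(ir**2, np.ones(std_win) / std_win, mode="same") - mu**2
    std = np.sqrt(np.maximum(var, 0.0))                   # running standard deviation
    # Pitch center: 8.4 for water-dominated phases, 7.5 for steam, 7.0 otherwise.
    pitch = np.where(avg > water_thresh, 8.4,
                     np.where(avg > steam_thresh, 7.5, 7.0))
    amp = np.where(std < 1e-2, 0.0, ir)                   # mute quiescent intervals
    jitter = std / np.maximum(std.max(), 1e-12)           # larger spread in preplay
    return pitch, amp, jitter
```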

Animation

As with the sonification, we use the magnitude and relative steadiness of the infrared data to inform the visualization of the geyser discharge—water and steam (Figure 1c shows a photo of a real eruption plume). Water jets are represented as a cluster of vertical lines with a distribution of heights, the average of which is determined by the magnitude of the infrared signal. Steam is represented as a cloud of low-opacity dots, extending from the geyser vent to the height of the water jet. When no jet is present (during the posteruption relaxation phase), the steam is concentrated around the vent (Movie 1).

In this way we are able to highlight the different phases of the eruption cycle. During preplay, only water jetting occurs and steam is absent. During an eruption, the water jet becomes higher and is accompanied by steam. The following posteruption relaxation phase consists of a low level of steam, which then disappears during the final phase of quiescence.

Synthesis of Methods

We created a series of five movies to highlight different aspects of the experiments. Movie 1 is an explanatory key, isolating each data type in sequence to show the audience how to interpret the different sounds and visuals. In Movie 2 we focus on a single eruption cycle from 2010, compressing three hours of data into one minute. This movie highlights the different phases of the eruption cycle and allows the different signals between data types to be compared. Movies 3 and 4 each present three entire cycles (about nine hours of data compressed into one minute) for 2010 and 2014, respectively. In these two movies the repetitious nature of the geyser eruptions becomes clear. Finally, Movie 5 is a side-by-side comparison of the 2010 and 2014 sequences of eruptions. We mix the sound for each year down to a monophonic track, sending the 2010 sonification to the left channel and the 2014 sonification to the right. This movie highlights the repeatability of the main features in the eruption cycles, even four years apart, while also illuminating differences in the finer-scale features.

In designing the multivariate sonification, we represent each data stream in a separate frequency band, and with distinctive tonal structures and timbres. The tilt sounds are low-frequency, smooth, simple variations underlying the more rapidly moving and dynamic infrared and seismic data. The seismic data sonification is a high-pitched hissing with intermittent bursts in amplitude and spatial location. The infrared data sonification is tonal and intermittent as well, but outside of its bursts of activity it is completely silent. In the movies, the animation serves to explain the sources of the sound via fairly simple visual representations, with tight correlation between the sounds and their visual events. In our experience, the temporal relationship between different data types, and the possibility of causality, comes through in the sound; if the movies were silent, perceiving these patterns would be much more difficult.

Discussion

We first evaluate the effectiveness of the movies. In a quantitative perception experiment, the first-order hypothesis to test would be that the patterns are easier to perceive when the animation has an accompanying sonification. Although such an experiment is beyond the scope of this study, we describe some observations and questions that the combination of sound and animation brings to light. Then we compare the 2010 and 2014 experiments in terms of both data acquisition and the natural processes on display in the movies. Finally, we comment on data representation more generally and discuss connections to the field of ubiquitous music.

Interpretation of Combined Sounds and Visualization

The movies representing our sonification and visualization results are an alternative to standard graphical techniques for interpreting data. Do these movies provide insights to researchers who hope to learn more about the mechanics of geyser eruptions? Or is their primary utility as a device for rendering technical data more intuitive for nonscientists, an aesthetic tool for understanding the natural world? We believe that both outcomes are on display: The ability to hear time-evolving structures in multiparameter data permits the rapid identification of relationships that might otherwise be overlooked or require significant processing to find. As an example, a close correspondence between preplay and eruption venting of water from the geyser, seismic amplitude and polarization, and ground deformation (tilt) was recognized by Vandemeulebrouck et al. (2014). These relationships are presented, through graphical comparison of various subsets of the data, over Figures 3–6, 9, and 10 of that work. In effect, a major component of the text is dedicated to arguing that recorded signals covary and may be related to one another. Our polyphonic sonification and animation of these same data accomplish the same goal much more efficiently: During preplay episodes, tilt variations (pitch and volume of the chord sweep) correspond precisely with the onset of water jetting (pitch, volume, and grain randomness of the granular synthesis) and seismic amplitude (spatialization and loudness of the audified waveforms). Tremor lags slightly behind the infrared signal. We can simultaneously interrogate the sign of temporal change through volume and pitch variations that increase when the data increase.

The phases of the eruption cycle defined by Karlstrom et al. (2013) are also clearly discernible in the movies, through judicious choice of sonifying and animating particular attributes of the time series. Thresholds for granular-synthesis amplitude and for the water-eruption animation were determined by a running standard deviation of the filtered data, so that thresholds in volume and pitch jitter correspond to variation in the data, as shown in Figure 6. Such processing is an integral component of graphical analysis of multicomponent data, and it represents an equally powerful tool for sonification.

The process of building these movies may well be considered part of the research process itself, integral to the working of data into observations and interpretations. The combined sonification and visualization techniques on display in our movies are in effect a dimensionality-reduction technique, which has the potential to sharpen patterns buried in noisy, multicomponent data. This could be said of any representation technique in which we operate by reducing, and reducing again, the object of study, from its “wild” complexity to a “domesticated” simplification that can be visualized and therefore analyzed (Latour 1999). An appealing aspect of this approach, compared with standard techniques such as principal component analysis, is the intuitive nature of the presentation. Nonspecialist and expert researcher alike can see and hear how multiparameter data inform the way in which geysers work by watching these movies, and are well posed to ask research questions afterwards: Why is the eruption cycle so repeatable (regular three-hour primary eruption events, cf. Karlstrom et al. 2013), and yet so highly variable in the smaller-scale events of the intereruption dynamics (e.g., preplay events differ from cycle to cycle)? What triggers eruptions?

Comparison of 2010 and 2014 Lone Star Experiments

Another powerful illustration of the sonification and visualization method is to compare the experiments at Lone Star Geyser from 2010 and 2014, in particular to separate differences in physical processes from differences in data acquisition. As detailed earlier, there were some key differences in scientific instrumentation between these experiments. For example, in both cases an infrared sensor constrains water discharge from the vent, but in 2014 there were no platform tiltmeters, so long-period ground deformation must be derived from seismometers. Can we hear these differences? Does this processing change our perception of the Lone Star eruption cycle?

To seek differences aurally, we combine the two tracks generated for the 2010 and 2014 experiments. We change the generating function in the wavetable for 2010 versus 2014 in the infrared sensor and tilt sonification to separate sounds that occupy the same frequency range. A triangle wave versus a square wave produces distinct timbral qualities that permit aural differentiation of these two data sets between years while retaining the mapping characteristics that connect to the data (Movies 3–5). These sonic signatures are preserved through all movies, even when the experiments are presented by themselves (Movies 3 and 4). This might be considered the analogue of maintaining a consistent symbol color or size in multiple graphical depictions of the same data. In this realm of scientific data sonification, the choice of musical instrumentation is used to reflect differences in scientific instrumentation that record the same underlying process.
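
As an illustration of this timbral separation, the two generating functions can be built by additive approximation: square waves from odd harmonics with 1/n amplitudes, and triangle waves from odd harmonics with 1/n² amplitudes of alternating sign. The table size and number of harmonics below are arbitrary choices, not values taken from our score files:

```python
import numpy as np

def wavetable(kind, size=4096, n_harmonics=15):
    """Single-cycle wavetable by additive synthesis of odd harmonics."""
    phase = 2 * np.pi * np.arange(size) / size
    table = np.zeros(size)
    for k, n in enumerate(range(1, 2 * n_harmonics, 2)):  # odd harmonics 1, 3, 5, ...
        if kind == "square":
            table += np.sin(n * phase) / n                 # 1/n rolloff
        else:  # "triangle"
            table += (-1) ** k * np.sin(n * phase) / n**2  # 1/n^2, alternating sign
    return table / np.max(np.abs(table))
```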

A remarkable similarity of eruptive dynamics (four years apart) is clearly on display: Aligning the onset of the first eruption between experiments, the second eruption occurs at almost precisely the same time in both 2010 and 2014. The third eruption onset in the sequence does not align temporally, however, and preplay jetting events are highly variable between eruptions and between years. Comparison of animations and sonifications for these two experiments (Movie 5) thus demonstrates how the geyser eruption cycle is repeatable in its large-scale structure despite irregularities that seem stochastic.

Comparison of the tilt data sets, recorded by different instruments located at different places around the geyser vent, nonetheless exhibits similarities at the largest (approximately one-hour) scale: an inflation limb that begins with the onset of preplay, and a deflation limb that begins with the onset of the primary eruption. This large-scale pattern can be heard clearly in the combined score files—amplitude and frequency increase and decrease in a coherent pattern that is audible despite more rapid variations superimposed on the 2010 data, in which small abrupt tilt steps coincide with individual jetting events (cf. Figures 4 and 5). This suggests that, despite instrumental recording differences, the method of sonification and visualization effectively reveals a common underlying process driving tilt. Whether this process arises physically from inflation of the water reservoir connected to the geyser cone (see, e.g., Rudolph et al. 2012), or whether the ground deformation might result partially from transient loading of the surface by falling water that erupted from the geyser, we cannot say at this stage. But the question is illuminated for future researchers.

Data Representation in Science, Connections to Ubiquitous Music

Visualization—as lists, tables, sketches, plots, graphics, diagrams, maps, etc.—has a long history, and scientists are trained to use it, for instance, as a proof in rhetorical arguments, according to the “belief that a written inscription must be believed more than any contrary indications from the senses” (Latour 1986). Bruno Latour goes even further in saying that “no scientific discipline exists without first inventing a visual and written language which allows it to break with its confusing past” (p. 13).

In a later essay, Latour (1999) describes how scientists (in his case, botanists, pedologists, and geographers) build successive abstractions to progressively reduce their object of study (the soil and its material and spatial characteristics) to color chips and numbers that can be brought back to the lab for analysis. In other words, the object of study is transferred from the “world of things” to the “world of signs.”

This “rematerialization” and simplification of the phenomena allows scientists to “present absent things,” in fact many things, at once (much more than available and manageable in the field), comprehensible through one sensory modality only (Latour 1986). Only then can scientists think and discuss, establishing relations between the symbols in the symbolic system, but also between the things they refer to (Ivins 1938), and constructing, negotiating, and establishing scientific objectivity and knowledge. Citing Latour (1986, p. 18) once again:

Contradiction … is neither a property of the mind, nor of the scientific method, but is a property of reading letters and signs inside new settings that focus attention on inscriptions alone.

Starting from the process of “dematerializing” geyser data into digitized data (which is “amodal” in the sense that it is not related to any sensory modality), sonification offers another way of rematerializing the data that is neither less true nor less legitimate: making it apprehensible to the ears. In the light of the discussion above, the temporal extension and transitory characteristics of sounds make them appear as weaker candidates for being a medium upon which debates are held and consensuses can be found. A striking example is that scientific articles about sound include waveforms and spectrograms, but no sound recordings (in fairness, this has been due in part to technical limitations). But our auditory system has specific abilities (e.g., temporal resolution, noise separation, inference of causality, source separation and localization, or the possibility of hearing while not intentionally listening) that have proven successful in the scientific process of exploring data and formulating hypotheses (Landi et al. 2012; Paté et al. 2017). The simultaneous visual information in the animation as applied here provides a constant reminder of the meaning of different sonic elements, to “de-abstract” them, but the real work of pattern recognition is performed by the auditory processing. Geyser eruptions are an ideal application of these methods, in part because they represent a rare process in geology that really can be perceived in real time by humans. But only one of the three data streams that we display (the infrared sensor) corresponds to direct visual observations of the eruptions. And even that requires nontrivial interpretation to correlate with actual water and steam discharged from the vent (Karlstrom et al. 2013).

Sonification studies must now tackle the issue of identifying the perceptual and cognitive processes that are triggered by a different way of presenting, and representing, the “same” data—and, more importantly, whether and how sound can be used as a medium to illuminate patterns, correlations, and causality in data, as well as to elicit constructive debates and reach scientific consensus. We anticipate that this will require many repetitions, trials, and adjustments, as well as rigorous standardization processes. The geyser data set presented here represents one step along this path towards a quantitative and aesthetic basis for generating scientific understanding through sound.

In the context of “creativity support tools” as embodied in the ubimus concept (Keller, Lazzarini, and Pimenta 2014), scientific sonification provides both raw material and a methodological starting point for pattern design. Just as music is ubiquitous in society and there is an opportunity for creative participation that is open to all (Keller, Schiavoni, and Lazzarini 2019), environmental signals are ubiquitous, and the ability to record and access this data is increasingly possible. This access provides an opportunity for music creation that can enter into artistic, educational, and scientific fields.

Keller and coworkers posed the question of whether the contrasting expectations of different audiences or participants would require tailoring of design initiatives for specific user profiles. Although there may be some cases in which modification of data processing or aesthetic choices would be desirable, we believe that for the purposes of revealing patterns and relationships in complex, multivariate data, the same sonification methods may be used regardless of audience. Our general approach is to use the same data representation methods, but change the nature and technicality of language for different audiences. For example, in an outreach setting, such as an exhibit on geysers at Yellowstone National Park, the short videos presented here can be linked with explanatory slides and question prompts. For the purposes of musical composition, musicians can piece these raw videos together in whatever way suits their purpose. In research and educational settings, deeper technical information relating to geysers can be included to generate scientific discussion. In the setting of an interactive science museum, an exhibit could incorporate all of these elements, from carefully crafted data visualizations and sonifications, to improvisational interaction of sonification and visualization parameters for musical play or for exploration of different patterns embedded in the data.

The bounds and general structure of the data in our study were known a priori, so the sonification methods could be developed accordingly. An interesting future direction involving real-time sonification of monitoring data would need to dynamically adapt the algorithms in response to unpredictable changes in the data. Such a feature could benefit both the hazard-monitoring community (those studying, e.g., volcanic, seismic, or weather hazards) and the ubimus community. For the latter group, benefits might be in, e.g., managing musical improvisation among audience and performers (van Troyer 2014) or in human–computer interaction (Nika, Chemillier, and Assayag 2017).

Acknowledgments

The Lone Star experiments were conducted under Yellowstone National Park research permit YELL-2015-SCI-5826. The work developed here began as the first author's project in Ben Holtzman's “Sonic and Visual Representation of Data” class at the Computer Music Center, Columbia University, along with Henry Towbin and Bar Oryan. Subsequent work was supported by a Collaboratory grant from the Data Science Institute at Columbia, and Leif Karlstrom's NSF CAREER grant 1848554.

Arthur Paté is with Institut d'Électronique, de Microélectronique et de Nanotechnologies (IEMN), joint laboratory between Centre National de la Recherche Scientifique's Unité Mixte de Recherche (UMR) 8520, Université de Lille, Centrale Lille, Université Polytechnique des Hauts-de-France, and Junia. Avinash Nayak was supported by NSF 1724986 during the 2014 experiment. We thank Shaul Hurwitz, Michael Manga, and Adam Roszkiewicz for discussions.

REFERENCES

Beyreuther, M., et al. 2010. “ObsPy: A Python Toolbox for Seismology.” Seismological Research Letters 81(3):530–533.

Dombois, F., and G. Eckel. 2011. “Audification.” In T. Hermann, A. Hunt, and J. G. Neuhoff, eds. The Sonification Handbook. Berlin: Logos, pp. 301–324.

Fournier, N., A. D. Jolly, and C. Miller. 2011. “Ghost Tilt Signal during Transient Ground Surface Deformation Events: Insights from the September 3, 2010 Mw7.1 Darfield Earthquake, New Zealand.” Geophysical Research Letters 38(16).

Gabor, D. 1946. “Theory of Communication.” Journal of the Institution of Electrical Engineers 93(3):429–457.

Garton, B., and D. Topper. 1997. “RTCmix: Using Cmix in Real Time.” In Proceedings of the International Computer Music Conference, pp. 309–402.

Given, D. D., et al. 2014. “Technical Implementation Plan for the ShakeAlert Production System: An Earthquake Early Warning System for the West Coast of the United States.” Open-File Report 2014-1097. Reston, Virginia: US Department of the Interior, US Geological Survey.

Hermann, T., A. Hunt, and J. G. Neuhoff. 2011. The Sonification Handbook. Berlin: Logos.

Holtzman, B., et al. 2014. “Seismic Sound Lab: Sights, Sounds, and Perception of the Earth as an Acoustic Space.” In M. Aramaki et al., eds. Sound, Music, and Motion. Berlin: Springer, pp. 161–174.

Hurwitz, S., and M. Manga. 2017. “The Fascinating and Complex Dynamics of Geyser Eruptions.” Annual Review of Earth and Planetary Sciences 45(1):31–59.

Ivins, W. M. 1938. On the Rationalization of Sight: With an Examination of Three Renaissance Texts on Perspective. New York: Metropolitan Museum of Art.

Karlstrom, L., et al. 2013. “Eruptions at Lone Star Geyser, Yellowstone National Park, USA, Part 1: Energetics and Eruption Dynamics.” Journal of Geophysical Research 118(8):1–15.

Keller, D., V. Lazzarini, and M. S. Pimenta. 2014. “Ubimus through the Lens of Creativity.” In Ubiquitous Music. Berlin: Springer, pp. 25–48.

Keller, D., F. Schiavoni, and V. Lazzarini. 2019. “Ubiquitous Music: Perspectives and Challenges.” Journal of New Music Research 48(4):309–315.

Keller, D., and B. Truax. 1998. “Ecologically Based Granular Synthesis.” In Proceedings of the International Computer Music Conference, pp. 117–120.

Kilb, D. L., et al. 2012. “Listen, Watch, Learn: SeisSound Video Products.” Seismological Research Letters 83(2):281–286.

Landi, E., et al. 2012. “Carbon Ionization Stages as a Diagnostic of the Solar Wind.” Astrophysical Journal 744(2):100–110.

Latour, B. 1986. “Visualisation and Cognition: Drawing Things Together.” In H. Kuklick and E. Long, eds. Knowledge and Society: Studies in the Sociology of Culture Past and Present. Greenwich, Connecticut: JAI, pp. 1–40.

Latour, B. 1999. “Circulating Reference: Sampling the Soil in the Amazon Forest.” In Pandora's Hope: Essays on the Reality of Science Studies. Cambridge, Massachusetts: Harvard University Press, pp. 24–79.

Nika, J., M. Chemillier, and G. Assayag. 2017. “ImproteK: Introducing Scenarios into Human–Computer Music Improvisation.” Computers in Entertainment 14(2).

Paté, A., et al. 2017. “Auditory Display of Seismic Data: On the Use of Experts' Categorizations and Verbal Descriptions as Heuristics for Geoscience.” Journal of the Acoustical Society of America 141(3):2143–2162.

Peng, Z., et al. 2012. “Listening to the 2011 Magnitude 9.0 Tohoku-Oki, Japan, Earthquake.” Seismological Research Letters 83(2):287–293.

Pimenta, M. S., et al. 2014. “Methods in Creativity-Centred Design for Ubiquitous Musical Activities.” In D. Keller, V. Lazzarini, and M. S. Pimenta, eds. Ubiquitous Music. Berlin: Springer, pp. 25–48.

Poland, M. P., and K. R. Anderson. 2020. “Partly Cloudy with a Chance of Lava Flows: Forecasting Volcanic Eruptions in the Twenty-First Century.” Journal of Geophysical Research 125(1).

Roads, C. 1988. “Introduction to Granular Synthesis.” Computer Music Journal 12(2):11–13.

Roads, C. 1995. The Computer Music Tutorial. Cambridge, Massachusetts: MIT Press.

Rudolph, M. L., et al. 2012. “Mechanics of Old Faithful Geyser, Calistoga, California.” Geophysical Research Letters 39(24).

Truax, B. 1988. “Real-Time Granular Synthesis with a Digital Signal Processor.” Computer Music Journal 12(2):14–26.

Tufte, E. 1983. The Visual Display of Quantitative Information. Cheshire, Connecticut: Graphics Press.

Tymoczko, D. 2006. “The Geometry of Musical Chords.” Science 313(5783):72–74.

van Troyer, A. 2014. “Repertoire Remix in the Context of Festival City.” In D. Keller, V. Lazzarini, and M. S. Pimenta, eds. Ubiquitous Music. Berlin: Springer, pp. 51–63.

Vandemeulebrouck, J., et al. 2014. “Eruptions at Lone Star Geyser, Yellowstone National Park, USA, Part 2: Constraints on Subsurface Dynamics.” Journal of Geophysical Research 119(12):8688–8707.

Vidale, J. E. 1986. “Complex Polarization Analysis of Particle Motion.” Bulletin of the Seismological Society of America 75(5):1393–1405.

Wiens, D. A., et al. 2005. “Tilt Recorded by a Portable Broadband Seismograph: The 2003 Eruption of Anatahan Volcano, Mariana Islands.” Geophysical Research Letters 32(18).

Wu, S.-M., et al. 2017. “Anatomy of Old Faithful from Subsurface Seismic Imaging of the Yellowstone Upper Geyser Basin.” Geophysical Research Letters 44(20):10240–10247.
