Abstract

This article presents a musical interface that enables the sonification of data from the human microbiota, the trillions of microorganisms that inhabit the human body, into sound and music. The project is concerned with public engagement in science, particularly the life sciences, and with developing cultivation technologies that take advantage of the ubiquitous and accessible nature of the human microbiota. In this article we examine the collaboration between team members proficient in musical composition and those with expertise in biology, sonification, and data visualization, producing an individualized piece of music designed to capture basic biological data and user attention. Although this system, called Biota Beats, sonifies ubiquitous data for educational science projects, it also establishes a connection between individuals and their bodies, and between a community and its context, through interactive music experiences, while attempting to make the science of the human microbiome more accessible. The science of standardizing sonified data for rigorous human analysis is still in development (in comparison to charts, graphs, spectrograms, and other types of data visualization), so a more artistic approach, using the framework of musical genres and their associated themes and motifs, is a convenient and previously established way to capitalize on how people naturally perceive sound. Further, to forge a creative connection between the human microbiota and the music genre, a philosophical shift is necessary: that of viewing the human body and the digital audio workstation as ubiquitous computers.

Introduction

Beats are the compositions of hip-hop, a loop-based form that extracts and manipulates preexisting media, processing it with modern production machines and programming it with one's own unique sense of rhythm, arrangement, and aesthetic taste. It is anything but classical. It is radically modern not only because of its cultural impact but also because of its creative use of specific technologies. Today, its tools are the digital audio workstation (DAW), samples triggered through MIDI, and the decentralized recording space known as the bedroom studio, networked via file-sharing practices. This marriage of contemporary composition strategies with contemporary technologies helps create a platform for sociopolitical narratives fitting for the times. In short, such sounds give cultural movements, comprising individuals and communities, an aesthetic and epistemological understanding of themselves and their relationship to their context. These connections are often ubiquitous in how people come to accept and identify such sounds as expressive of their own ideals. The challenge, and what music can do shockingly well, is to take such implicit connections and turn them into an opportunity to fortify a common culture (Keller, Schiavoni, and Lazzarini 2019). In a way, it is the sonification of cultural data.

What if there were a musical instrument that could “sample” your biological state, sonifying such data into a composition that articulates emotional insight into your physical body? Imagine playing this instrument not with your hands, but rather allowing it to analyze you and generate something for you. Such a connection situates itself in the lineage of biomusic, yet takes a distinct departure from the sample-based usage of natural sounds—like field recordings of bird or whale songs—to tracking biome growth and converting it into MIDI data for compositional use. In this way, “sampling” is to be understood analogically, by applying the language of modern music production (in which the term refers to the extraction of recorded sounds) to the excavation of biological data. Mobile technologies have made music-production technologies a seamless part of the everyday, extending a participatory music landscape to lay musicians. For biological sampling to be effective in this contemporary music landscape, a musical interface must be both technologically accessible and culturally relevant, particularly to further the advancement of public science education.

To explore these ideas, we developed the Biota Beats project, which is a new type of musical interface that enables the sonification of data from the human microbiota—the collection of trillions of microorganisms found throughout the human body (Grice and Segre 2011)—into sound and music. This project was a collaboration between two groups: EMW Street Bio and the Community Biotechnology Initiative. EMW Street Bio is a program of EMW Community Space (www.emwbookstore.org), an arts, technology, and community center based in Cambridge, Massachusetts. The Community Biotechnology Initiative is part of the MIT Media Lab. Biota Beats is a system that establishes a connection between individuals and their bodies through music and attempts to make this interactive experience with biotechnology more accessible via the aesthetic of hip-hop's compositional form. Biota Beats thus samples an individual's microbiota, images the growth of microorganisms using computer vision techniques, processes microbial growth metadata, and converts it to MIDI and an accompanying visualization. Through this process, Biota Beats creates a workflow connecting developments in hardware, wet-laboratory biology, image processing, sonification, and data visualization.

The core of Biota Beats' mission is twofold: (1) to communicate and democratize an understanding of the human microbiome, specifically through the compositional aesthetics of hip-hop; and (2) to widen the understanding of ubiquitous computing from its current focus on machines to the human body itself, utilizing sonification techniques to analyze microbiome data and produce socioemotional expression via MIDI triggering. Our focus on the microbiome, the collection of genetic material of the roughly 2 kg of microorganisms living on and in the human body, is specifically due to its sensitivity to environmental factors. Such sensitivity opens possibilities towards a therapeutic route with significant potential to alter complex phenotypes, such as mental states. The gut microbiome in particular has received special attention, with mounting evidence for the gut–brain connection (Wekerle 2016); early trials have demonstrated that transferring gut bacteria between healthy and ill patients via fecal matter transplant could aid in treating metabolic diseases such as obesity, as well as digestive problems such as those caused by Clostridium difficile (Temple 2017). But for those outside of biology in academia or industry, obtaining a first-hand understanding of such recent developments in the study of the microbiome can be extremely difficult. Understanding, then, necessitates a creative solution.

Studies have shown that parsing audio signals is comparable to understanding numerical data (Turnage et al. 1996). Other studies have gone a step further and argued that audio can provide a more engaging learning experience, thus serving as an effective medium to better process and communicate data (Kramer 1994). Sonification effectively bridges these two concepts, complementing other ways to process data such as visual cues. In sonification we convert data to sound, without words or language, to communicate information. In the case of musical compositions based on Biota Beat data, what one “hears,” particularly as it relates to rhythmic and melodic complexity, is both the growth of biomes and the pedagogical moment of connecting biological growth to music. This process utilizes human sensitivity to variations in the dimensions of sound, such as pitch, amplitude, and time, highlighting patterns in data in a more intuitive way (Hayashi and Munakata 1984). In other words, the large amount of complex biological data being generated in the study of the microbiome makes it an appropriate field in which to apply sonification.

Metaphorically, we view the microbiome as parallel to MIDI, an underlying language utilized to trigger sounds, arrange compositions, and process temporal structures. Perhaps, like MIDI, the microbiome may simply carry biological data that the body uses to communicate, coordinate, and interpret, like an input/output (I/O) trigger. Yet our creative interests rest in how to shift mental states using such biological data, similarly to how one selects the melodic and harmonic content that MIDI triggers, which may in turn affect the activity of the microbiome. In this way, Biota Beats roots itself in data music, not intentionally at odds with data sonification; rather, our focus is on creating tools that empower creative expression while generating conversations about biotechnology. Utilizing hip-hop's aesthetic understanding of temporal structuring, we posited that the way in which microbiomes grow—and the timing with which they move—can be sonified in a creative and expressive manner that opens therapeutic possibilities towards altering our mental states, or, as can be experienced with our “Uni-Verse” biocomposition and visualization, a thousand-person community's collective state of being (we discuss the Uni-Verse project in greater detail in the section “Project History”).

But why choose hip-hop as the aesthetic medium to explore the microbiome? Central to the cultural expression of hip-hop is the beat, which refers in this context to a rhythmic backing track or loop in the African American music aesthetic that is made up of a “high density of events in a relatively short time” (Keller and Capasso 2006). This density is of particular fascination, in that within a repetitive musical structure a dense amount of information can be efficiently and emotionally processed. But this density is not just for analyzing and communicating data; it is simultaneously a space for carving aesthetic identities, of manipulating underlying temporal structures with one's particular sense of rhythm, aesthetic, and sample choice—a slice of preexisting media constrained to a loopable pattern. In short, it is one's intuitive and performative sense of time that differentiates one identity from another. This is precisely why hip-hop, as a music genre, reuses popular samples to create new compositions, taking what the public ubiquitously understands and attempting to carve a unique aesthetic into how one cuts, splices, and mixes a preexisting work of media. In this way, the beat of hip-hop is in harmony with the ecocomposition approaches begun in the 1970s, within whose lineage Biota Beats situates itself (Keller and Capasso 2006).

History and Motivation

Since the work by Hayashi and Munakata (1984) on sonifying DNA sequences base by base, ever more researchers have explored the intersection between biology and sonification. Scientists have also sonified protein sequences, writing software to expedite the process and experiment with different mappings. Later in the same decade, in 1988, Mark Weiser coined the term “ubiquitous computing,” describing an emerging theory and practice of computing that extends processing possibilities to the unconscious—this passive collecting of data within the mind provides an opportunity for computing ubiquitously (cf. Weiser 1991). These two movements pave a particular path for ubiquitous music: namely, that unconscious processing lends itself effectively towards creating a malleable connection between the body, the machine, and the mental state governing such interactions—and hence, where Biota Beats attempts to make a dent. There are other ways to highlight such connections between sound and life. German scientists, for instance, built a “nanoear” capable of detecting the vibrations of microorganisms, capturing the “music of life” at a microscopic level (Ohlinger et al. 2012). Such practices in sonification and ubiquitous computing are still far from standardized, however, and there is much debate over the best practices to maximize comprehension and minimize overload. Data visualization, on the other hand, is a much more developed field. Research has shown that providing information through multiple senses can improve perception, and pairing sonification with visuals can also assist in providing new users with a framework for interpreting the sonification (Effenberg 2005).

Since the completion of the Human Genome Project in 2003, development in biotechnology has continued at a rapid pace. From the genome, research has now entered the proteome, metabolome, epigenome, and more. Among these is the microbiome, the collection of genetic material of all the microorganisms living on and in the human body. There is an incredible diversity of life within our bodies—even a single organ like the skin can be divided into regions by moisture and oil levels, each hosting different populations of microorganisms (Grice and Segre 2011).

Exploring the microbiome, specifically through the perspective of ubiquitous computing, may help to bring back some of the pleasure of decoding the mysteries of life from computers. Sonifying such mysteries through music may provide not only insight into the body but also an interactive experience. To facilitate this first-hand experience, Biota Beats establishes a cultural connection to people via music, namely, hip-hop. Given that hip-hop is a globally popular music form, there is now an intuitive sense of how it should sound, which often points to particular conventions of beats per minute, time signatures, and harmonic content as it relates to the history of the genre. Sonification techniques make use of such expectations, communicating complex information in a way that is easier to understand, given the conventions of particular genres. Biota Beats exercises the hip-hop aesthetic of sampling—utilizing and manipulating preexisting media information to create an original musical composition. Yet whereas hip-hop may use vinyl records, Biota Beats chooses to use sonified biological samples.

Methods

In this section, we present the entire pipeline of music generation and visualization, starting from swabbing bacteria from parts of the human body.

System Overview

In forming the body–music connection of Biota Beats, data about the human microbiota was sonified by sampling bacteria from body parts of interest and characterizing those samples as data streams that parameterize the output music. The output music itself was then related back to the bacteria by visual means. The multisensory experience of sonification and visualization created a connection between a particular sound and the bacteria of its origin.

Whether sampling an individual to compose a personalized listening experience or a mass gathering to create an interactive audiovisual installation, four fundamental processes were utilized: (1) a dedicated technical setup, (2) bacterial sampling and growth, (3) conversion of bacterial data into music, and (4) data visualization. In the following sections, we describe each of these processes, beginning with the evolution of our hardware.

Technical Setup

The visual component to the musical metaphor is a record and a record player. The record is in the form of a Petri dish. Petri dishes allow microorganisms to grow into colonies visible to the naked eye and a standard camera lens. A custom Petri dish with a 12-inch diameter, the size of a traditional vinyl LP record, was laser cut from acrylic plastic. In addition to the plastic pieces forming the border of the dish, more segments were cut to divide the area of the dish into five subsections, or sectors, that were equal in surface area. After sterilizing the plates with ethanol, a medium to support bacterial growth—solid 1.5% lysogeny broth (LB) medium—was poured into the plates, dubbed “biota records.”

Figure 1

Assembly of the acrylic “biota record” using a solvent acrylic glue, applied via syringe (a), and a fully assembled biota record (b).

Figure 2

An example Biota Beats prototype with a biota record featuring cultured microorganisms integrated with a DJ setup.

Originally, two designs were proposed for distributing the sites of different body parts across the biota record: concentrically, from the center outwards (as shown in Figure 1a), or consecutively, arranged linearly as the record rotates (as shown in Figures 1b and 2). The concentric design has the benefit of a stronger resemblance to the iconic look of a traditional vinyl LP record. The sonification in this design allows the algorithm to collect data from one area of the human body over a longer period of time. The consecutive model has the advantage of rapidly juxtaposing the different body sites over time, so one might discern the differences more distinctly. In both designs, each track in the final composition corresponds to one body site, extracted from one section of the biota record. Both were produced for testing, but the second model proved more suitable for producing distinctly circular bacterial growths, which was critical to the final algorithm. Additionally, the concentric circle grooves seen in Figure 1a were removed in final designs, as the groove patterns were disrupting the circle-detection algorithms.
Figure 3

The original prototype for the Biota Beats incubator and imaging station, retrofitted from a conventional record player.

To create music, the bacteria would need to be cultured until they were visible to the camera, and then photographically captured. Thus, the plates would need to be stored in an incubator with the ability to photograph its contents over time. The first prototype extended the metaphor to a physical record player, and acted as both an incubator and camera support (see Figure 3), whereas later prototypes were adapted for mobility in order to process microbial data at conferences, performances, and other nontraditional laboratory venues (see Figure 4). The heating component was composed of nichrome wires covered by ceramic beads, attached to the underside of the platter with Kapton tape. These were connected to a bang-bang controller used to keep the temperature between 35°C and 40°C. To trap heat, acrylic was cut and assembled into a box placed on top of the “record player,” and a camera could be placed on top of the box to image the interior. For imaging, the prototype used a Raspberry Pi and a camera add-on to capture the top view of the plate as the colonies grew. We used a simple intervalometer program to time this imaging.
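The bang-bang control described above can be sketched in a few lines of Python. This is an illustrative sketch of the hysteresis logic only, not our actual firmware; the sensor and heater calls in the commented loop are hypothetical placeholders:

```python
def bang_bang_update(temp_c, heater_on, low=35.0, high=40.0):
    """Hysteresis (bang-bang) control: switch the heater fully on below
    the low threshold, fully off above the high threshold, and otherwise
    leave it in its current state."""
    if temp_c < low:
        return True    # too cold: turn the nichrome heater on
    if temp_c > high:
        return False   # too hot: turn the heater off
    return heater_on   # within the band: no change

# Illustrative control loop (hardware calls are hypothetical placeholders):
# heater_on = False
# while True:
#     heater_on = bang_bang_update(read_temp_sensor(), heater_on)
#     set_nichrome_heater(heater_on)
#     time.sleep(5)
```

The dead band between the two thresholds prevents the heater from rapidly toggling around a single set point.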
Figure 4

Biota Beats members Gautam Salhotra and Alexandria Guo use a camera support to image plates for the Uni-Verse project, described in detail in the section “Project History.”

The model proved impractical for larger-scale projects, however, and we ultimately chose to handle the two processes separately, to ensure that incubation occurred under standard conditions and that the quality of the photos taken was consistent.

Bacterial Sampling and Imaging

The sampling process involved using a kit of sterile cotton swabs and deionized water. People being sampled would take the cotton swab, dip it in deionized water, and swab the body part of interest (e.g., armpit, tongue) as many times as necessary to thoroughly cover the surface of the medium. In practice, popular sampling sites were places on the skin that are often exposed, from the feet and hands to various parts of the face. We encouraged people to sample the nose and mouth, as these are still noninvasive and quite different environments from the rest of the skin. In demos with toy data, however, we chose mouth, armpits, belly button, genitalia, and feet, to span the length of the human body and test varying conditions for microorganismal life. While transferring the sample from the cotton to the dish, we asked that participants try to cover the surface area of the plate with their sample as evenly as possible, to minimize swabbing-pattern effects on colony growth patterns. The plate was then left in a 37°C incubator for up to 36 hours, and the final photo was collected after growth had completed.

The plates of bacteria were left at room temperature while being photographed, regardless of whether or not they had been stored in incubators before. Several variations of the setup were deployed, depending on the specifics of the application. Generally, however, the plates were placed on black backgrounds, and a camera mounted on a tripod was placed over the plate, with additional lighting sources (600 W; color temperature of 5500 K) placed to remove glare. The camera used was a Canon EOS 6D DSLR with a 24–100 mm f/4.0 lens and a Canon intervalometer attached.

Data Processing and Sonification

The original record-player metaphor shaped how each plate photograph would be interpreted as a staff in music notation. Just as a needle would sweep around a vinyl LP record, our algorithm swept around each sector of the plate. When a circular colony was encountered, the algorithm added a note to the output MIDI file, as detailed below.

Figure 5

Schematic of extracted parameters and an example mapping of body part to section of a plate.

The first step was to perform colony detection on plate images. Importing the open-source OpenCV library in Python (Bradski 2000), our image-detection script used the SimpleBlobDetector() function, Canny edge detection (Canny 1986), and Hough transformations (Hough 1962) to identify circular shapes within a photo, because the bacterial colonies we sampled were primarily circular. The images first had a Gaussian blur applied and were then binarized, using thresholds in the range of 0 to 255, chosen depending on the lighting conditions in which the series of photos being processed was taken. Each circle identifying a colony was saved, along with the coordinates of its center and its radius, as shown in Figure 5. Given colony center coordinates, the size of the overall dish, and a standardized initial position for the plates, we were able to calculate the angular position and the plate sector, assigned to a body part, to which each colony belonged.
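Once circles have been detected (e.g., via cv2.HoughCircles), the angular position and sector assignment reduce to simple geometry. The following Python sketch illustrates that step under our own illustrative naming; it is not the project's actual script:

```python
import math

N_SECTORS = 5  # the biota record is divided into five equal body-site sectors

def colony_position(x, y, cx, cy):
    """Return (angle, sector, distance) for a colony centered at (x, y) on
    a plate whose center is (cx, cy). The angle is measured in [0, 2*pi)
    from the plate's standardized initial position, and the sector index
    selects the body part assigned to that wedge of the plate."""
    angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
    sector = int(angle // (2 * math.pi / N_SECTORS))
    distance = math.hypot(x - cx, y - cy)
    return angle, sector, distance
```

For example, a colony directly along the zero-angle axis from the center falls in sector 0, and one a quarter turn away falls in sector 1.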

This data could then be converted to a MIDI file. The mapping of colony features to musical parameters was performed with the goals of maximizing auditory distinctiveness and preserving information. We arranged each colony, or note, temporally (with a resolution of 4.3 μsec, as computed by the MIDI library), based on distance from the center of the dish. Thus, the music could be visualized as a wave radiating out from the plate core, hitting notes successively from closest to farthest from the center.

The sonification function we wrote took as input the number of octaves spanned and total duration of the output clip, both parameters that we could tune. The main output parameters for each MIDI note were pitch (varying by octave) and timing. Then we assigned each detected colony to one of the pitch classes of a C-major pentatonic scale, based on a mapping connecting C, D, E, G, and A to the five corresponding sectors of the plate. To add more variation to the pitches heard, we mapped the diameter of the colony to octave register—the larger the colony, the lower the octave. The equations for determining pitch from colony data are
$$\mathrm{sector}_{\mathit{col}} = \left\lfloor \frac{\mathrm{anglepos}_{\mathit{col}}}{2\pi/5} \right\rfloor$$

$$\mathrm{octave}_{\mathit{col}} = \mathrm{octavespan} \cdot \frac{\mathrm{diameter}_{\mathit{col}} - \mathrm{diameter}_{\mathit{min}}}{\mathrm{diameter}_{\mathit{max}} - \mathrm{diameter}_{\mathit{min}}} - \frac{\mathrm{octavespan}}{2}$$

$$\mathrm{pitch}_{\mathit{col}} = (12 \cdot \mathrm{octave}_{\mathit{col}}) + \begin{cases} 60, & \mathrm{sector}_{\mathit{col}} = 0 \\ 62, & \mathrm{sector}_{\mathit{col}} = 1 \\ 64, & \mathrm{sector}_{\mathit{col}} = 2 \\ 67, & \mathrm{sector}_{\mathit{col}} = 3 \\ 69, & \mathrm{sector}_{\mathit{col}} = 4 \end{cases}$$

where sector, angle position, octave, and diameter are calculated per colony; $\mathrm{diameter}_{\mathit{max}}$ and $\mathrm{diameter}_{\mathit{min}}$ are summary figures calculated from the data; and octave span is a parameter set during sonification. The MIDI note numbers 60, 62, 64, 67, and 69 are associated with C4, D4, E4, G4, and A4, respectively.
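As an illustration, the pitch equations and the radial-distance timing can be implemented directly in Python. Rounding the fractional octave to an integer offset is our assumption (the text does not specify the rounding), and the resulting pitch and onset values would still need a MIDI library to be serialized to an actual file:

```python
SECTOR_BASE_NOTE = [60, 62, 64, 67, 69]  # C4, D4, E4, G4, A4, one per sector

def colony_to_pitch(sector, diameter, d_min, d_max, octave_span=2):
    """Map a colony to a MIDI note number per the sonification equations:
    the octave offset scales linearly with diameter and is centered on zero."""
    if d_max > d_min:
        octave = octave_span * (diameter - d_min) / (d_max - d_min) - octave_span / 2
    else:
        octave = 0.0  # degenerate case: all colonies are the same size
    # Rounding to an integer octave is our assumption, not stated in the text.
    return SECTOR_BASE_NOTE[sector] + 12 * round(octave)

def colony_onset(dist_from_center, dist_max, total_duration):
    """Onset time proportional to radial distance: a wave radiating outward."""
    return total_duration * dist_from_center / dist_max
```

For example, with an octave span of 2, a colony of median diameter in sector 0 maps to MIDI note 60 (C4).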

The generated MIDI file was agreeable enough to listen to, owing to the harmony of the pentatonic scale notes. The rhythm of the notes does not adhere to compositional conventions such as time signatures, beats, or bars, however, because the rhythms are determined by the growth and density of microbiota. Additionally, the output file had a limited set of notes, because the pitches were based solely on colony parameters and the sector of the plate to which the colony belonged. Thus our MIDI file, on its own, was not an effective musical output in the popular-music or hip-hop aesthetic, and did not satisfactorily fulfill the artistic aspect of the project's mission. Hence, we worked with Biota Beats' music producer, Charles Kim, to postprocess and improve the musical output of our project.

MIDI to Music

Hip-hop's compositional strategies rest in how one selects, creates, and mixes loops—repeatable and catchy musical concepts. The process utilizes music technologies such as DAWs, which enable sequencing MIDI for harmonic and rhythmic content, arranging underlying temporal structures according to one's intuitive sense of time, and selecting and sampling preexisting audio media sources. Rather than relying on live acoustic sounds captured in a single environment, producers can draw on sounds captured and processed all over the world, distributed and shared via the Internet. This music production process has helped facilitate a “dissolution of the division of labor between composers, performers, and audience” (Keller, Schiavoni, and Lazzarini 2019). This ubiquitous music process is an exciting one, in which the performer can compose in real time via samples, the audience can become part of the performance via field recordings and other connective technologies, and the composer becomes the audience by experiencing the created music simultaneously and publicly. Biota Beats intentionally situates itself within this lineage, exploring the possibilities of ubiquitous music in practice, perspective, and philosophy.

The creative edge used by Biota Beats is found in the MIDI data itself, composed “organically” from the tracking and analysis of the microbiomes. Given that there is no preset tempo or pitch grid to which the MIDI is quantized, Biota Beats leans on the growth and movement of the microbiomes themselves to assert the temporal structure of the note inputs, as seen in Figures 5 and 6. It is within this peculiar density—signifying microbial growth as sonified from the biota record—that a version of one's sonic identity emerges, supplying unique compositional data to inspire new musical constraints and strategies.

The microbiome-extracted MIDI data serves as a “musical source code” and thus acts as a sample that can be loaded into a DAW to be spliced, sequenced, and assigned to synthetic instruments, curated audio clips, or other MIDI-enabled instruments. Figure 6a, a screenshot from Ableton Live, shows various groupings of MIDI slices. Additionally, within each of these groupings, not all the notes are perfectly aligned on the rhythmic grid.

Yet, this is the beauty of biological data: In the strict musical confines of I/O, the microbiomes present their own, quirky sense of time. The creative task is then to frame such sequences into a composition that enlivens the source media that the biology provides, communicating emotional information and potentially altering mental states.

In composing a beat, deciding which sample to use, at what time to cut, and how to mix with other sounds is how one expresses artistry. There is a fair amount of interpretation at play in choosing how to adapt the microbiome MIDI data. For example, a few microbiome-specific compositional strategies arose from Biota Beats experimentations: (1) because the microbiome MIDI has its own temporal structure, adjusting the tempo can provide varying rhythmic concepts, as seen in Figure 6b; (2) melodic phrases can be cut and looped, offering a human-like sense of movement and musical phrasing, processed through various hardware and software synthesizers; and (3) field-recorded audio from various environments can be loaded, sliced according to transient markers, and replayed via microbiome MIDI.

Simply put, creative interpretation of data is what binds sonification with compositional strategy. In other words, how one chooses to interpret the data affects the sonic outcome of the music, potentially shifting how an audience may react to the microbiome data itself.

Figure 6

A screenshot of cutting, splicing, and mixing microbiome MIDI data in Ableton Live (a), in which microbiome MIDI do not obey temporal constraints, providing nonstandard rhythmic data (b).

Data Visualization

A dedicated Biota Beats visualization tool was designed to connect all the different components of the work. The addition of a visualization engages the viewer's visual cortex, thereby creating a stronger cognitive connection with the user than could be achieved with audio input alone. The visualization had to meet the following four goals: (1) to show all body parts from which bacteria were collected, (2) to show the Petri dish and bacterial colonies, (3) to visualize and play the music, and (4) to visually connect the previous goals.

Additionally, the visualization tool needed to be interactive and embedded on a website. The tools used were JavaScript, specifically the d3.js library, on top of an overall structure built using HTML/CSS. The visualization was split into three main components: body, Petri dish, and music. For the body visual we used the “I, Virus” artwork by artist Charis Tsevis (see Saey 2014). The body sections of interest were highlighted by graying out the body and maintaining the color in the area of interest. The Petri dish and its sections were drawn using scalable vector graphics (SVG) paths. The coordinates and radii of the bacterial colonies were uploaded as a comma-separated values (CSV) file and translated to the correct locations within this Petri dish. To play the music, the Web Audio API was used (a high-level JavaScript API to process and synthesize audio for web-based applications). The music was visualized using a bar graph that functioned as an equalizer by feeding it the frequency output of the audio in real time and adjusting the bar heights accordingly, as shown in Figure 7.
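The translation of colony rows from the CSV into positioned SVG circles can be sketched as follows, here in Python for illustration, whereas the actual tool performs this step in d3.js; the column names, scaling, and offset parameters are our assumptions:

```python
import csv
import io

def colonies_to_svg_circles(csv_text, scale=1.0, offset_x=0.0, offset_y=0.0):
    """Read colony rows (x, y, radius) from CSV text and emit SVG <circle>
    elements positioned within the on-screen Petri dish, applying a uniform
    scale and an offset to map plate coordinates to screen coordinates."""
    circles = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        cx = float(row["x"]) * scale + offset_x
        cy = float(row["y"]) * scale + offset_y
        r = float(row["radius"]) * scale
        circles.append(f'<circle cx="{cx}" cy="{cy}" r="{r}" />')
    return circles

# Hypothetical input: two colonies with plate coordinates and radii.
sample = "x,y,radius\n10,20,3\n40,5,1.5\n"
```

In the real tool, d3.js performs the equivalent data join, binding each CSV row to a circle element inside the SVG plate drawing.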
Figure 7

Screenshot of the online interactive Biota Beats visualization, displaying the body part of interest (armpit), the corresponding plate sector and abstract representation of the bacterial colonies, and the music “equalizer.”

For interactivity, buttons were added per body part. Clicking a button would show the body part of interest, highlight the corresponding section on the Petri dish and its bacterial colonies, and trigger the audio composed from the bacteria of the given region. A “symphony” button was added that cycles the visualization through all body parts, playing the musical sections first in a linear order, then layering all sections together to present a “body symphony.” For an example, see http://biotabeats.org/visualization.html.

Discussion: Culturing the Microbiome

In studies of the human microbiome, researchers take samples of microorganisms from the human body and typically culture these organisms on various types of growth media for subsequent analysis, which can include morphological studies, colony counts, and genetic sequencing. Despite significant advances in sequencing techniques, we focused our efforts on optical analysis via imaging technologies that were more easily accessible to our research team. Utilizing sequencing and DNA analysis of microbial colonies will be the subject of future research, as described later in this article.

Although our methods suited our purposes, the proportion of bacteria that grew well on the plate, or “biota record,” was not necessarily representative of the individual's full microbial population. In the mouth especially, microorganisms that typically grow in biofilms would not be able to compete effectively for survival on a solid medium. Additionally, because our sonification algorithm was looking for circular shapes, colonies growing in films or any other shape would not have been easily detected or characterized with a “diameter.” Our algorithm also took in spatial data of colonies on the plate to determine the temporal arrangement of the notes. Because resources on a plate are limited and the diversity of the full population cannot be sustained, identical trials would likely not yield the same microorganisms dominating a plate and surviving to form colonies; the microbiome of an individual is, moreover, constantly changing. Thus, our MIDI output is reproducible from the same image, although the stochastic nature of bacterial colony growth means that an additional sample from the same person will result in different growth patterns on the plate.
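
As a concrete illustration of this spatial-to-temporal mapping, the sketch below converts a list of detected colonies into time-ordered, MIDI-like events. The function name, the angle-to-onset rule (a needle sweeping the record), and the radius-to-velocity scaling are our assumptions for illustration, not the exact Biota Beats algorithm.

```python
import math

def colonies_to_midi_events(colonies, center, loop_beats=16):
    """Map detected colonies to MIDI-like note events.

    Each colony is an (x, y, radius) tuple. The angle around the plate
    center sets the onset time, like a needle sweeping a record, and
    the radius sets the note velocity. Illustrative mapping only.
    """
    cx, cy = center
    events = []
    for x, y, r in colonies:
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        onset = loop_beats * angle / (2 * math.pi)   # beats into the loop
        velocity = max(1, min(127, int(r * 4)))      # larger colony -> louder note
        events.append({"onset": round(onset, 2), "velocity": velocity})
    return sorted(events, key=lambda e: e["onset"])
```

Identical input images yield identical event lists, which is why the MIDI output is reproducible per image even though regrowing a plate is not.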

The choice of solid LB medium, without any antibiotic, was intended to provide an environment able to sustain as many different microorganisms as possible (and was convenient in terms of access and procurement). LB was originally optimized for culturing Escherichia coli, however, and although a 16S sequencing experiment to identify the species growing on our plates did not detect any E. coli, it is possible we selected for bacteria with similar nutritional requirements. The populations of microorganisms present depend on the body part sampled, in addition to other factors defining the particular sections swabbed. For example, the skin is a large and diverse organ that can be divided into regions by moisture and oil levels.

Project History

Since 2016, the Biota Beats team has worked with national and local science institutions, acclaimed musicians, and pioneering scientists, and has presented in a wide range of venues, from museum exhibitions to youth science conventions. In the following, we present a brief history of our projects. For more information, see http://biotabeats.org.

The iGEM Project

Biota Beats was conceived by the EMW Street Bio team in 2016 with the vision to make biology more accessible to the public through popular music. To share the Biota Beats project with the global community, the project was submitted to the Hardware track of the 2016 International Genetically Engineered Machine (iGEM) competition. The iGEM competition is a worldwide contest in synthetic biology in which teams representing high schools, universities, and community labs like EMW Street Bio exhibit innovative, biologically based systems they have created to have an impact on humanity. The Biota Beats prototype was accompanied by a poster and iGEM wiki documenting its development and outreach (see http://2016.igem.org/Team:EMW_Street_Bio). It was presented to participants and judges and was awarded a bronze medal, motivating the team to continue expanding upon the initial design.

Youth Science Initiative

As a part of bringing Biota Beats to the community and with the rise of integrated STEAM (science, technology, engineering, art, and math) curricula, we engaged with local middle school–aged youth by pioneering the EMW Youth Science Initiative. This initiative seeks to educate, inspire, and expose underrepresented local youth not only to the content but also to the creative and critical thinking aspects of the sciences, as well as examining the sciences through a social lens.

The first session was hosted by Ginkgo Bioworks and took place on 15 October 2016, with a focus on microorganisms. The youth were shown how microbiomes cultured on an agar plate resembled classic vinyl records and were given a demonstration as to how a new type of record player could play music from our cultured microbiota. Soon after the demonstration, participants swabbed different parts of their body and inoculated agar plates, essentially composing their personal microbiome records.

EMW Gallery Exhibit: Culturalizing Science

To further engage with the local community in Cambridge, Massachusetts, Biota Beats was on display at the EMW Gallery exhibit “Culturalizing Science” on 4 April 2017, which featured artwork at the intersection of scientific exploration and artistic expression in a context accessible to a broad audience (see Figure 8).
Figure 8

Biota Beats featured at the EMW Gallery exhibition Culturalizing Science in April 2017.

The Culturalizing Science exhibition also provided the backdrop to teach students from English High School in Boston, Massachusetts, about the microbiome in the gallery space. The Biota Beats team designed a program for the students, including an introductory video on the human microbiome, a lesson describing specific bacteria (Listeria, Salmonella, E. coli, and Staphylococcus) using giant plush bacteria toys, and a crafting session where the students created their own bacteria from paper and wool. The story of Biota Beats was also told, followed by swabbing of the participants to create their personal biota records.

Uni-Verse at iGEM

In 2017, the Biota Beats team members were invited back to iGEM as collaborators on a special project: creating a biota beat reflecting the collective, international microbiome of that year's iGEM participants. With over 100 teams competing, representing regions around the world, our goal was to obtain microbiota samples from each team and group them by location, mapping each team to its continent of origin and to an assigned instrument, all of which would work together to create one cohesive sound (see Figure 9). Inspired by David Byrne's observation that “you might say that the universe plays the blues” (Byrne 2012), we adopted the blues convention of the pentatonic scale as a theme uniting our global participants, and titled the song “Uni-Verse,” in the sense of “one song.”
Figure 9

David Kong, Biota Beats team leader, presenting the song “Uni-Verse” at the 2017 International Genetically Engineered Machine conference (a), with the venue turned into an aural-visual installation (b).

The first challenge was collecting and incubating 131 microbiome Petri dishes, one per iGEM team, and processing the corresponding images. Several volunteers helped collect the samples from teams (see Figure 10), and iGEM incubators were used to accelerate growth for about 24 hours before imaging with the photographic setup shown in Figure 4. Once all the plate images were taken, we had to reconfigure our metaphor and algorithms to handle the increased input and generate a single piece of music.
Figure 10

During the sampling process, a team from Australia samples their oral microbiome (a); the gathered biota samples, ready to be placed in incubators 24 hours prior to the installation (b).

In the original prototype of Biota Beats, each plate was a “song,” each body part was linked to an instrumental sound and pitch, and each colony was a note. In “Uni-Verse,” the global sample of the microbiome from over a hundred teams created a single song: each continent was linked to an instrumental sound and pitch, and each plate was a single note, as seen in Table 1. This change in model required a modification of the sonification algorithm; each plate was now summarized by a single density parameter, the percentage of the plate covered with bacterial growth. This value was calculated by binarizing each plate image with varying thresholds and computing the percentage of darkened pixels corresponding to growth (circular or not). The value was then used to calculate the octave of each plate note, similar to our procedure with the individual Biota Beats tracks. Because each plate came from a specific iGEM team with ties to specific institutions, each colony and note had an associated location of origin; thus, with six tracks, one for each continent represented, the temporal arrangement of notes was decided by longitude, from west to east.
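
The density and ordering steps described above can be sketched in a few lines of Python. The function names, the single fixed threshold, and the linear density-to-octave mapping are hypothetical stand-ins for illustration; the actual implementation binarized images at varying thresholds.

```python
def plate_density(gray, threshold=128):
    """Fraction of pixels darker than `threshold`, treated as bacterial
    growth (circular or not). `gray` is a 2-D list of grayscale values."""
    pixels = [p for row in gray for p in row]
    return sum(p < threshold for p in pixels) / len(pixels)

def density_to_octave(density, low=2, high=6):
    """Hypothetical linear mapping: denser plates sound in higher octaves."""
    return low + round(density * (high - low))

def order_notes_west_to_east(plates):
    """Temporal arrangement of plate notes by longitude, west to east."""
    return sorted(plates, key=lambda p: p["longitude"])
```

A plate one-quarter covered in growth would thus land in a middle octave, and a Boston-based team's note would sound before one from a team farther east.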

Table 1:

Table of Uni-Verse Mappings

Continent       Sample Source   Sound
Africa          Hands           808 kick with airy pulsing reverberation effect
Asia            Ears            Synthesizer bass
Europe          Mouth           Synthesizer lead melody
North America   Nose            Atmospheric 1980s synthesizer
Oceania         Elbow           Drum loop
South America   Scalp           Percussion

The visualization component was designed to communicate the new metaphor behind the Biota Beats song. Because geographic information was encoded into the music, the geographic locations of each of the participating teams were displayed on a spinning globe. The corresponding body parts were displayed by color-highlighting an adapted version of the “I, Virus” image. Finally, the equalizer displaying the audio was designed to look like an atmospheric layer wrapped around the globe.

For further details, see the STATNews and Uni-Verse videos (at https://youtu.be/tYzrhptDX6o and https://youtu.be/6s-x3NI8CZg, respectively).

Cambridge Science Festival

EMW Street Bio hosted an event at the 2018 Cambridge Science Festival, a multiday, multicultural celebration making science accessible to the public and highlighting the impact of STEAM in our lives. Titled “In/Out/Under/Transform,” the program featured interactive discussions and hands-on demonstrations from the Biota Beats team and from local startups discovering novel ways in which the microbiome can make an impact on society (see Figure 11).
Figure 11

Poster for 2018 Cambridge Science Festival event In/Out/Under/Transform.

Stedelijk Museum Breda

From March to August 2018, Biota Beats was part of the True Beauty exhibition at the Stedelijk Museum Breda, in the Netherlands. This exhibition explored the intersection of art and science. A Biota Beats record was mounted on a wall next to an interactive screen where visitors could choose which body parts' music they wanted to hear (see Figure 12). The selected tracks were then played in the exhibition space.
Figure 12

Museum attendees interact with Biota Beats at the Stedelijk Museum Breda, the Netherlands.

Liberty Science Center, New Jersey

From December 2018 to May 2019, Biota Beats was part of the Microbes Rule! exhibition at Liberty Science Center in New Jersey. The exhibition explored how the intersection of art and science in the fields of microbiology and synthetic biology can help people visualize and connect with the invisible world of microbes (see Figure 13). Biota Beats was chosen as the soundtrack of this exhibition.

Custom Biota Beats

We also developed custom Biota Beats for several collaborators, including hip-hop icon DJ Jazzy Jeff and Linda Henry, cofounder of the Boston-based festival Hubweek (see Figures 14 and 15). For each custom beat, we sampled body parts of each individual, including the hands, nose, and mouth. These samples were inoculated on a biota record, and we used the collected data to generate a beat.

Future Work

The microbiome is intensely personal, constantly changing, and sensitive to changes in lifestyle and environment. Although the microbiome largely stabilizes after one grows out of childhood, it is in a state of dynamic equilibrium, changing from day to day (Lozupone et al. 2012). Thus, capturing snapshots of individuals' microbiomes at particular points in their lives could conceivably track their mental and physical journey in a new dimension.

Sequencing Data

Our current sampling process and sonification algorithm are based on visual and spatial data regarding the growth of microorganisms on the biota record. In the interest of improving replicability, however, we are working on using an input data stream other than plate images. Current research uses cotton swabs to collect bacteria but, instead of plating the samples, applies DNA purification and preparation procedures to ready them for DNA sequencing. Sequencing of 16S rRNA, a specific subsection of a microorganism's ribosome, is the standard for classifying microbiota. Current limitations to implementation, however, are the cost of sequencing machines (or possible fees of sequencing companies) and reagents, as well as the time associated with each process (Osman et al. 2018).

Figure 13

Biota Beats at the Liberty Science Center. Exhibition design by Liberty Science Center Exhibition Development and Design, and Focus Lighting.

Figure 14

DJ Jazzy Jeff with his biota record.

We conducted preliminary testing in this direction by sequencing particular colonies from test plates, and the populations of microorganisms we found matched those associated with skin in the literature. Initial attempts at sonifying classification data could begin, prior to obtaining physical samples and sequencing results, with online microbiome databases (e.g., the Human Gut Project), either following previously established sonification models for genetic sequences or using phylogenetic trees (Boutin and de Vienne 2017).
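
To suggest what sonifying classification data might look like, the sketch below assigns each identified taxon a pitch from the minor pentatonic scale used in “Uni-Verse” and scales loudness by relative abundance. The scale choice, the alphabetical taxon ordering, and the function names are assumptions for illustration, not an established model.

```python
PENTATONIC = [0, 3, 5, 7, 10]  # minor pentatonic intervals in semitones

def taxa_to_notes(abundances, root=48):
    """Assign each taxon a pentatonic pitch and a velocity proportional
    to its relative abundance. `abundances` maps taxon name -> fraction."""
    notes = []
    for i, (taxon, frac) in enumerate(sorted(abundances.items())):
        octave_shift = 12 * (i // len(PENTATONIC))
        pitch = root + octave_shift + PENTATONIC[i % len(PENTATONIC)]
        velocity = max(1, min(127, int(frac * 127)))
        notes.append((taxon, pitch, velocity))
    return notes
```

Because the mapping depends only on the classification table, the same sequencing result would always produce the same notes, addressing the replicability concern raised above.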

Kits and Online Repository

Making Biota Beats commercially available would reach a much wider audience, and it would allow users to interact and create music on their own. Creating a self-contained kit allowing users to grow, image, and sonify their own bacteria without the need for a certified biological laboratory space would place the agency of learning about the microbiome directly in users' hands. An online repository for everyone's creations would help foster community and dialogue about how to interpret and develop this technology.

Figure 15

Linda Henry holding her biota record (a), covered in bacterial growth (b).

The Biota Beats group also held a workshop through the MIT Independent Activities Period (IAP) to brainstorm other applications of the technology. One proposed project was Biota Bonds, which involved sampling a mother's microbiome, which undergoes significant changes during pregnancy. Given the simple structure of a lullaby, Biota Beats could use an algorithm that adheres to the constraints of the genre to produce a personalized lullaby to which both mother and infant could listen. Other projects included ChoreoBiome and Biota Explore. ChoreoBiome highlighted the way a dancer would interact with the music, utilizing accelerometers to allow performers to manipulate the sounds as they translate and embody the movements of the microbiota with their own bodies. Biota Explore takes samples from the microbiome of a city and gives users a chance to learn about their environment through a new lens that encourages curiosity about their surroundings. Overall, the technology of Biota Beats is fluid and can mold itself to the interests and desires of various users and different data sets.

Figure 16

Members of the Biota Beats team at EMW Community Space.

Acknowledgments

The EMW Community Space in Cambridge, Massachusetts, supported the work of all authors on this project. Additionally, Charles Kim was supported by the Teachers College at Columbia University, New York; Alexandria Guo by Wellesley College and the Massachusetts Institute of Technology Media Laboratory; Sara Sprinkhuizen and David Sun Kong by the MIT Media Laboratory; Gautam Salhotra by the University of Southern California at Los Angeles; and Keerthi Shetty by the Dana-Farber Cancer Institute in Boston, Massachusetts. Correspondence regarding this article should be addressed to David Sun Kong.

We gratefully acknowledge past members of the Biota Beats team, including Yixiao Xiang, Shannon Johnson, Thrasyvoulos Karydis, Viirj Kan, Rachel Smith, Mary Tsang, and Ani Liu, who were part of the EMW Street Bio team that participated in the 2016 iGEM competition and are pictured in Figure 16. We would also like to thank and acknowledge Mani Sai Suryateja Jammalamadaka, Sabrina Marecos, Lizza Román, Bart Scholten, Nicole Bakker, and Udayan Umapathi, who assisted with the Uni-Verse project at iGEM 2017, as well as the hundreds of students from the iGEM competition who provided microbiota samples, and members of iGEM headquarters, including Meagan Lizarazo, for their partnership. We would like to thank the participants in our MIT IAP Biota Beats workshop, including Russell Pasetes, Agnes Cameron, Rajeet Sampat, Angela Vujic, Shaheen Lakhani, Laya Anasu, Gary Zhang, Marius Hoelter, Gary Stillwell, Manuel Pelayo, Carole Urbano, Jabari King, Misha Sra, and Ilya Vidrin. Finally, we are grateful to the individuals who provided microbiota samples for our custom Biota Beats, including George Church, DJ Jazzy Jeff, and Linda Henry, and Jeffrey Cott for his help with the logo design and branding.

References

Boutin, H., and D. M. de Vienne. 2017. “Sonification of Phylogenetic Trees: Listening to Evolution.” In Journées d'Informatique Musicale. Available online at jim2017.sciencesconf.org/data/Henri_Boutin2017aa.pdf. Accessed October 2020.

Bradski, G. 2000. “The OpenCV Library.” Dr. Dobb's Journal of Software Tools 25(11):120–125.

Byrne, D. 2012. How Music Works. San Francisco, California: McSweeney's.

Canny, J. 1986. “A Computational Approach to Edge Detection.” IEEE Transactions on Pattern Analysis and Machine Intelligence 8(6):679–698.

Effenberg, A. O. 2005. “Movement Sonification: Effects on Perception and Action.” IEEE MultiMedia 12(2):53–59.

Grice, E. A., and J. A. Segre. 2011. “The Skin Microbiome.” Nature Reviews Microbiology 9(4):244–253.

Hayashi, K., and N. Munakata. 1984. “Letter to the Editor: Basically Musical.” Nature 310(5973):96.

Hough, P. V. 1962. Method and Means for Recognizing Complex Patterns. US Patent 3,069,654, filed 25 March 1960, issued 18 December 1962.

Keller, D., and A. Capasso. 2006. “New Concepts and Techniques in Eco-Composition.” Organised Sound 11(1):55–62.

Keller, D., F. Schiavoni, and V. Lazzarini. 2019. “Ubiquitous Music: Perspectives and Challenges.” Journal of New Music Research 48(4):309–315.

Kramer, G. 1994. “Some Organizing Principles for Representing Data with Sound.” In Auditory Display: Sonification, Audification, and Auditory Interfaces. Westview, pp. 185–221.

Lozupone, C. A., et al. 2012. “Diversity, Stability and Resilience of the Human Gut Microbiota.” Nature 489(7415):220–230.

Ohlinger, A., et al. 2012. “Optically Trapped Gold Nanoparticle Enables Listening at the Microscale.” Physical Review Letters 108(1).

Osman, M.-A., et al. 2018. “16S rRNA Gene Sequencing for Deciphering the Colorectal Cancer Gut Microbiome: Current Protocols and Workflows.” Frontiers in Microbiology 9.

Saey, T. H. 2014. “Beyond the Microbiome: The Vast Virome; Scientists Are Just Beginning to Get a Handle on the Many Roles of Viruses in the Human Ecosystem.” Science News 185(1):18–21.

Temple, M. D. 2017. “An Auditory Display Tool for DNA Sequence Analysis.” BioMed Central Bioinformatics 18.

Turnage, K. D., et al. 1996. “The Effects of Task Demands on the Equivalence of Visual and Auditory Representations of Periodic Numerical Data.” Behavior Research Methods, Instruments, and Computers 28(2):270–274.

Weiser, M. 1991. “The Computer for the 21st Century.” Scientific American 265(3):94–105.

Wekerle, H. 2016. “The Gut–Brain Connection: Triggering of Brain Autoimmune Disease by Commensal Gut Bacteria.” Rheumatology 55(suppl. 2):ii68–ii75.