This article presents a musical interface that enables the sonification of data from the human microbiota, the trillions of microorganisms that inhabit the human body, into sound and music. The project is concerned with public engagement in science, particularly the life sciences, and developing cultivation technologies that take advantage of the ubiquitous and accessible nature of the human microbiota. In this article we examine the collaboration between team members proficient in musical composition and those with expertise in biology, sonification, and data visualization, producing an individualized piece of music designed to capture basic biological data and user attention. Although this system, called Biota Beats, sonifies ubiquitous data for educational science projects, it also establishes a connection between individuals and their bodies and between a community and its context through interactive music experiences, while attempting to make the science of the human microbiome more accessible. The science behind standardizing sonified data for scientific, human analysis is still in development (in comparison to charts, graphs, spectrograms, or other types of data visualization). So a more artistic approach, using the framework of musical genres and their associated themes and motifs, is a convenient and previously established way to capitalize on how people naturally perceive sound. Further, to forge a creative connection between the human microbiota and the music genre, a philosophical shift is necessary, that of viewing the human body and the digital audio workstation as ubiquitous computers.
Beats are the compositions of hip-hop, a loop-based form that extracts and manipulates preexisting media, processes it with modern production machines, and programs it according to one's own unique sense of rhythm, arrangement, and aesthetic taste. It is anything but classical. It is radically modern not only because of its cultural impact but also because of its creative use of specific technologies. Today, its tools are the digital audio workstation (DAW), samples triggered through MIDI, and the decentralized recording space known as the bedroom studio, networked via file-sharing practices. This marriage of contemporary composition strategies with contemporary technologies helps create a platform for sociopolitical narratives fitting for the times. In short, such sounds give cultural movements, comprising individuals and communities, an aesthetic and epistemological understanding of themselves and their relationship to their context. These connections are often ubiquitous in how people come to accept and identify such sounds as expressive of their own ideals. The challenge, and what music can do shockingly well, is to take such implicit connections and turn them into an opportunity to fortify a common culture (Keller, Schiavoni, and Lazzarini 2019). In a way, it is the sonification of cultural data.
What if there were a musical instrument that could “sample” your biological state, sonifying such data into a composition that articulates emotional insight into your physical body? Imagine playing this instrument not with your hands, but rather allowing it to analyze you and generate something for you. Such a connection situates itself in the lineage of biomusic, yet takes a distinct departure from the sample-based usage of natural sounds—like field recordings of bird or whale songs—to tracking biome growth and converting it into MIDI data for compositional use. In this way, “sampling” is to be understood analogically, by applying the language of modern music production (in which the term refers to the extraction of recorded sounds) to the excavation of biological data. Mobile technologies have enabled music-production technologies to be a seamless part of the everyday, extending a participatory music landscape to lay musicians. For biological sampling to be effective in this contemporary music landscape, a musical interface must be both technologically accessible and culturally relevant, particularly to further the advancement of public science education.
To explore these ideas, we developed the Biota Beats project, which is a new type of musical interface that enables the sonification of data from the human microbiota—the collection of trillions of microorganisms found throughout the human body (Grice and Segre 2011)—into sound and music. This project was a collaboration between two groups: EMW Street Bio and the Community Biotechnology Initiative. EMW Street Bio is a program of EMW Community Space (www.emwbookstore.org), an arts, technology, and community center based in Cambridge, Massachusetts. The Community Biotechnology Initiative is part of the MIT Media Lab. Biota Beats is a system that establishes a connection between individuals and their bodies through music and attempts to make this interactive experience with biotechnology more accessible via the aesthetic of hip-hop's compositional form. Biota Beats thus samples an individual's microbiota, images the growth of microorganisms using computer vision techniques, processes microbial growth metadata, and converts it to MIDI and an accompanying visualization. Through this process, Biota Beats creates a workflow connecting developments in hardware, wet-laboratory biology, image processing, sonification, and data visualization.
The core of Biota Beats' mission is twofold: (1) to communicate and democratize an understanding of the human microbiome, specifically through the compositional aesthetics of hip-hop; and (2) to widen the understanding of ubiquitous computing from its current focus on machines to the human body itself, utilizing sonification techniques to analyze microbiome data and produce socioemotional expression via MIDI triggering. Our focus on the microbiome, the collective genetic material of the roughly 2 kg of microorganisms living on and in the human body, is due specifically to its sensitivity to environmental factors. Such sensitivity opens a therapeutic route with significant potential to alter complex phenotypes, such as mental states. The gut microbiome in particular has received special attention, with mounting evidence for the gut–brain connection (Wekerle 2016); early trials have demonstrated that transferring gut bacteria between healthy and ill patients via fecal matter transplant could aid in treating metabolic diseases such as obesity, as well as digestive problems such as those caused by Clostridium difficile (Temple 2017). But for those outside of biology in academia or industry, obtaining a first-hand understanding of such recent developments in the study of the microbiome can be extremely difficult. Understanding, then, necessitates a creative solution.
Studies have shown that parsing audio signals is comparable to understanding numerical data (Turnage et al. 1996). Other studies have gone a step further and argued that audio can provide a more engaging learning experience, thus serving as an effective medium to better process and communicate data (Kramer 1994). Sonification effectively bridges these two concepts, complementing other ways to process data such as visual cues. In sonification we convert data to sound, without words or language, to communicate information. In the case of musical compositions based on Biota Beats data, what one “hears,” particularly as it relates to rhythmic and melodic complexity, is both the growth of biomes and the pedagogical moment of connecting biological growth to music. This process utilizes human sensitivity to variations in the dimensions of sound, such as pitch, amplitude, and time, highlighting patterns in data in a more intuitive way (Hayashi and Munakata 1984). In other words, the large amount of complex biological data being generated in the study of the microbiome makes it an appropriate field in which to apply sonification.
Metaphorically, we view the microbiome as parallel to MIDI, an underlying language utilized to trigger sounds, arrange compositions, and process temporal structures. Perhaps, like MIDI, the microbiome itself may simply carry data that the body uses to communicate, coordinate, and interpret, like an input/output (I/O) trigger. Yet our creative interests rest in how to shift mental states using such biological data, similarly to how one selects the melodic and harmonic content that MIDI triggers, which may in turn affect the activity of the microbiome. In this way, Biota Beats roots itself in data music, not intentionally at odds with data sonification, but with a focus on creating tools that empower creative expression while generating conversations about biotechnology. Utilizing hip-hop's aesthetic understanding of temporal structuring, we posited that the way in which microbiomes grow—and the timing with which they move—can be sonified in a creative and expressive manner that opens therapeutic possibilities into altering our mental states, or, as can be experienced with our “Uni-Verse” biocomposition and visualization, a thousand-person community's collective state of being (we discuss the Uni-Verse project in greater detail in the section “Project History”).
But why choose hip-hop as the aesthetic medium to explore the microbiome? Central to the cultural expression of hip-hop is the beat, which refers in this context to a rhythmic backing track or loop in the African American music aesthetic that is made up of a “high density of events in a relatively short time” (Keller and Capasso 2006). This density is of particular fascination, in that within a repetitive musical structure a dense amount of information can be efficiently and emotionally processed. But this density is not just for analyzing and communicating data, it is simultaneously a space for carving aesthetic identities, of manipulating underlying temporal structures with one's particular sense of rhythm, aesthetic, and sample choice—a slice of preexisting media constrained to a loopable pattern. In short, it is one's intuitive and performative sense of time that differentiates one identity from another. This is precisely why hip-hop, as a music genre, reuses popular samples to create new compositions, taking what the public ubiquitously understands and attempting to carve a unique aesthetic into how one cuts, splices, and mixes a preexisting work of media. In this way, the beat of hip-hop is in harmony with the ecocomposition approaches started in the 1970s, and between which Biota Beats situates itself (Keller and Capasso 2006).
History and Motivation
Since the work by Hayashi and Munakata (1984) on sonifying DNA sequences base by base, ever more researchers are exploring the intersection between biology and sonification. Scientists have also sonified protein sequences, writing software to expedite the process and experiment with different mappings. In the same decade, Mark Weiser coined the term “ubiquitous computing” in 1988, describing an emerging theory and practice of computing that extends processing possibilities to the unconscious—this passive collecting of data within the mind provides an opportunity for computing ubiquitously (cf. Weiser 1991). These two movements pave a particular path for ubiquitous music, namely, that the unconscious processing lends itself effectively towards creating a malleable connection between the body, the machine, and the mental state governing such interactions—and hence, where Biota Beats attempts to make a dent. There are other ways to highlight such connections between sound and life. German scientists, for instance, built a “nanoear” capable of detecting the vibrations of microorganisms, capturing the “music of life” at a microscopic level (Ohlinger et al. 2012). Such practices in sonification and ubiquitous computing are still far from standardized, however, and there is much debate over the best practices to maximize comprehension and minimize overload. Data visualization, on the other hand, is a much more developed field. Research has shown that providing information through multiple senses can improve perception, and pairing sonification with visuals can also assist in providing new users with a framework for interpreting the sonification (Effenberg 2005).
Since the completion of the Human Genome Project in 2003, development in biotechnology has continued at a rapid pace. From the genome, research has now entered the proteome, metabolome, epigenome, and more. Among these is the microbiome, the collection of genetic material of all the microorganisms living on and in the human body. There is an incredible diversity of life living within our bodies—even a single organ like the skin can be divided into regions by moisture and oil levels, each hosting different populations of microorganisms (Grice and Segre 2011).
Exploring the microbiome, specifically through the perspective of ubiquitous computing, may help to bring back some of the pleasure of decoding the mysteries of life from computers. Sonifying such mysteries through music may provide not only insight into the body but also an interactive experience. To facilitate this first-hand experience, Biota Beats establishes a cultural connection to people via music, namely, hip-hop. Given that hip-hop is a globally popular music form, there is now an intuitive sense of how it should sound, which often points to particular conventions of beats per minute, time signatures, and harmonic content as it relates to the history of the genre. Sonification techniques make use of such expectations, communicating complex information in a way that is easier to understand, given the conventions of particular genres. Biota Beats exercises the hip-hop aesthetic of sampling—utilizing and manipulating preexisting media information to create an original musical composition. Yet whereas hip-hop may use vinyl records, Biota Beats chooses to use sonified biological samples.
In this section, we present the entire pipeline of music generation and visualization, starting from swabbing bacteria from parts of the human body.
In forming the body–music connection of Biota Beats, data about the human microbiota was sonified by sampling bacteria on body parts of interest and characterizing those samples in data streams that parameterize the output music. The output music itself was then related back to the bacteria by visual means. The multisensory experience of sonification and visualization created a connection between a particular sound and the bacteria of its origin.
Whether sampling an individual to compose a personalized listening experience or a mass gathering to create an interactive audiovisual installation, four fundamental processes were utilized: (1) a dedicated technical setup, (2) bacterial sampling and growth, (3) conversion of bacterial data into music, and (4) data visualization. In the following sections, we explore each of these processes, beginning with the evolution of our hardware.
The visual component of the musical metaphor is a record and a record player. The record is in the form of a Petri dish. Petri dishes allow microorganisms to grow into colonies visible to the naked eye and a standard camera lens. A custom Petri dish with a 12-inch diameter, the size of a traditional vinyl LP record, was laser cut from acrylic plastic. In addition to the plastic pieces forming the border of the dish, more segments were cut to divide the area of the dish into five subsections, or sectors, that were equal in surface area. After sterilizing the plates with ethanol, a medium to support bacterial growth—solid 1.5% lysogeny broth (LB) medium—was poured into the plates, dubbed “biota records.”
The model proved impractical for larger-scale projects, however, and we ultimately chose to handle the two processes separately, to ensure that incubation occurred under standard conditions and that the quality of the photos taken was consistent.
Bacterial Sampling and Imaging
The sampling process involved using a kit of sterile cotton swabs and deionized water. People being sampled would take the cotton swab, dip it in deionized water, and swab the body part of interest (e.g., armpit, tongue) as many times as necessary to collect enough material to thoroughly cover the surface of the medium. In practice, popular sampling sites were places on the skin that are often exposed, from the feet and hands to various parts of the face. We encouraged people to sample the nose and mouth, as these are still noninvasive and quite different environments from the rest of the skin. In demos with toy data, however, we chose mouth, armpits, belly button, genitalia, and feet, to span the length of the human body and test varying conditions for microorganismal life. While transferring the sample from the cotton to the dish, we asked that participants try to cover the surface area of the plate with their sample as evenly as possible, to minimize the effects of swabbing patterns on colony growth patterns. The plate was then left for up to 36 hours in a 37°C incubator, and the final photo was collected after growth had completed.
The plates of bacteria were left at room temperature while being photographed, regardless of whether they had been stored in incubators beforehand. Several variations of the setup were deployed, depending on the specifics of the application. Generally, however, the plates were placed on black backgrounds, and a camera mounted on a tripod was placed over the plate, with additional lighting sources (600 W; color temperature of 5500 K) positioned to remove glare. The camera used was a Canon EOS 6D DSLR with a 24–100 mm f/4.0 lens and a Canon intervalometer attached.
Data Processing and Sonification
The original metaphor of a record player influenced how each plate photograph would be interpreted as a staff in music notation. Just as a needle sweeps around a vinyl LP record, our algorithm swept around each sector of the plate. When a circular colony was encountered, the algorithm added a note to the output MIDI file, as detailed below.
The first step was to perform colony detection on plate images. Using the open-source OpenCV library in Python (Bradski 2000), our image-detection script used the SimpleBlobDetector() function, Canny edge detection (Canny 1986), and Hough transformations to identify circular shapes within a photo (Hough 1962), because the bacterial colonies we sampled were primarily circular. The images first had a Gaussian blur applied and were then binarized, using thresholds in the range of 0 to 255, chosen depending on the lighting conditions in which the series of photos being processed was taken. Each circle identifying a colony was saved, along with the coordinates of its center and its radius, as shown in Figure 5. Given colony center coordinates, the size of the overall dish, and a standardized initial position for the plates, we were able to calculate the angular position and the plate sector, assigned to a body part, to which each colony belonged.
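The angular-position step above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the project's published code: the equal-angle sector boundaries, the 12-o'clock starting position, and the function name `colony_position` are our own simplifying assumptions.

```python
import math

SECTORS = 5  # number of plate sectors on the biota record

def colony_position(cx, cy, dish_cx, dish_cy):
    """Return (angle_deg, sector, radius) for a colony center in image
    coordinates (y increasing downward). The angle is measured clockwise
    from a standardized 12-o'clock starting position; equal angular
    sectors are a simplifying assumption for illustration."""
    dx, dy = cx - dish_cx, cy - dish_cy
    # atan2(-dy, dx) gives the counterclockwise angle from the +x axis
    # in conventional orientation; convert to clockwise-from-top to
    # match the record-player sweep metaphor.
    angle = (90.0 - math.degrees(math.atan2(-dy, dx))) % 360.0
    sector = int(angle // (360.0 / SECTORS))
    radius = math.hypot(dx, dy)
    return angle, sector, radius
```

With this convention, a colony directly to the right of the dish center falls 90 degrees into the sweep, in the second sector.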
This data could then be converted to a MIDI file. The mapping of colony features to musical parameters was performed with the goals of maximizing auditory distinctiveness and preserving information. We arranged each colony or note temporally (with a resolution of 4.3 sec computed by the MIDI library), based on distance from the center of the dish. Thus, the music could be visualized as a wave radiating out from the plate core, hitting notes successively from closest to farthest from the center.
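The radial time mapping can be sketched as follows. Only the 4.3-sec resolution comes from the text; the step count, the `colonies_to_events` name, and the event format are illustrative assumptions.

```python
TIME_RESOLUTION = 4.3  # seconds per radial step (value from the text)

def colonies_to_events(colonies, max_radius, steps=8):
    """Map colonies, given as (radius, sector) pairs, to note-event
    dictionaries whose onset times grow with distance from the plate
    center, so playback radiates outward like a wave. The step count
    and event format are illustrative choices."""
    events = []
    for radius, sector in colonies:
        # Quantize the radial distance into discrete time steps.
        step = min(int(radius / max_radius * steps), steps - 1)
        events.append({"time": step * TIME_RESOLUTION, "sector": sector})
    return sorted(events, key=lambda e: e["time"])
```

Sorting by onset time reproduces the inner-to-outer sweep: colonies near the core sound first, those near the rim last.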
The generated MIDI file was agreeable enough to listen to, owing to the harmony of the pentatonic scale notes. The rhythm of the notes does not adhere to compositional conventions such as time signatures, beats, or bars, however, because the rhythms are determined by the growth and density of microbiota. Additionally, the output file had a limited set of notes we could produce, because the pitches were based solely on colony parameters and the sector of the plate to which the colony belonged. Thus our MIDI file, on its own, would not have been an effective musical output that would have been listenable in the popular music or hip-hop aesthetic, and would not have satisfactorily fulfilled the artistic aspect of the project's mission. Hence, we worked with Biota Beats' music producer, Charles Kim, to postprocess and improve the musical output of our project.
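A minimal pitch-mapping sketch along these lines, assuming a C-major pentatonic scale with the sector choosing the scale degree and the colony diameter choosing the octave; this particular mapping is our own illustration, not the published one.

```python
# C-major pentatonic pitch classes as semitone offsets from C
PENTATONIC = [0, 2, 4, 7, 9]

def colony_pitch(sector, diameter, max_diameter, base_note=60):
    """Pick a MIDI pitch for a colony: the plate sector selects the
    pentatonic scale degree and the colony diameter selects the octave
    (0, 1, or 2 octaves above middle C). The exact mapping here is an
    assumption, not the project's published one."""
    degree = PENTATONIC[sector % len(PENTATONIC)]
    octave = min(int(diameter / max_diameter * 2), 2)
    return base_note + degree + 12 * octave
```

Restricting pitches to a pentatonic set is what keeps an unquantized note stream consonant: any two pentatonic pitches sounding together avoid semitone clashes.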
MIDI to Music
Hip-hop's compositional strategies rest in how one selects, creates, and mixes loops—repeatable and catchy musical concepts. The process utilizes music technologies such as DAWs, which enable sequencing MIDI for harmonic and rhythmic content, arranging underlying temporal structures according to one's intuitive sense of time, and the selection and sampling of preexisting audio media sources. Rather than capturing live acoustic sounds in a singular environment, the Internet has enabled the distribution and sharing of sounds captured and processed all over the world. This music production process has helped facilitate a “dissolution of the division of labor between composers, performers, and audience” (Keller, Schiavoni, and Lazzarini 2019). This ubiquitous music process is an exciting one, in which the performer can compose in real time via samples, the audience can become part of the performance via field recordings and other connective technologies, and the composer becomes the audience via experiencing the created music simultaneously and publicly. Biota Beats intentionally situates itself within this lineage, exploring the possibilities of ubiquitous music in practice, perspective, and philosophy.
The creative edge used by Biota Beats is found in the MIDI data itself, composed “organically” from the tracking and analysis of the microbiomes. Given that there is no preset tempo or pitch grid to which the MIDI is quantized, Biota Beats leans on the growth and movement of the microbiomes themselves to assert the temporal structure of the note inputs, as seen in Figures 5 and 6. It is within this peculiar density—signifying microbial growth as sonified from the biota record—that a version of one's sonic identity emerges, supplying unique compositional data to inspire new musical constraints and strategies.
The microbiome-extracted MIDI data serves as a “musical source code” and thus acts as a sample that can be loaded into a DAW to be spliced, sequenced, and assigned to synthetic instruments, curated audio clips, or other MIDI-enabled instruments. In Figure 6a, the Ableton Live program shows that there are various groupings of MIDI slices. Additionally, within each of these groupings, not all the notes are perfectly aligned on the rhythmic grid.
Yet, this is the beauty of biological data: In the strict musical confines of I/O, the microbiomes present their own, quirky sense of time. The creative task is then to frame such sequences into a composition that enlivens the source media that the biology provides, communicating emotional information and potentially altering mental states.
In composing a beat, deciding which sample to use, at what time to cut, and how to mix with other sounds is how one expresses artistry. There is a fair amount of interpretation at play in choosing how to adapt the microbiome MIDI data. For example, a few microbiome-specific compositional strategies arose from Biota Beats experimentations: (1) because the microbiome MIDI has its own temporal structure, adjusting the tempo can provide varying rhythmic concepts, as seen in Figure 6b; (2) melodic phrases can be cut and looped, offering a human-like sense of movement and musical phrasing, processed through various hardware and software synthesizers; and (3) field-recorded audio from various environments can be loaded, sliced according to transient markers, and replayed via microbiome MIDI.
Simply put, creative interpretation of data is what binds sonification with compositional strategy. In other words, how one chooses to interpret the data affects the sonic outcome of the music, potentially shifting how an audience may react to the microbiome data itself.
A dedicated Biota Beats visualization tool was designed to connect all the different components of the work. The addition of a visualization engages the viewer's visual cortex, thereby creating a stronger cognitive connection with the user than could be achieved with audio input alone. The visualization had to meet the following four goals: (1) to show all body parts from which bacteria were collected, (2) to show the Petri dish and bacterial colonies, (3) to visualize and play the music, and (4) to visually connect the previous goals.
For interactivity, buttons were added per body part. Clicking a button would show the body part of interest, highlight the corresponding section on the Petri dish and its bacterial colonies, and trigger the audio composed from the bacteria of the given region. A “symphony” button was added that cycles the visualization through all body parts, playing the musical sections first in a linear order, then layering all sections together to present a “body symphony.” For an example, see http://biotabeats.org/visualization.html.
Discussion: Culturing the Microbiome
In studies of the human microbiome, researchers take samples of microorganisms from the human body and typically culture these organisms on various types of growth media for subsequent analysis, which can include morphological studies, colony counts, and genetic sequencing. Despite significant advances in sequencing techniques, we focused our efforts on optical analysis techniques via imaging technologies that were more easily accessible to our research team. Utilizing sequencing and DNA analysis of microbial colonies will be the subject of future research, and is described later in this article.
Although our methods suited our purposes, the proportion of bacteria that grew well on the plate, or “biota record,” was not necessarily representative of the individual's full microbial population. In the mouth especially, microorganisms that typically grow in biofilms would not be able to effectively compete for survival on a solid medium. Additionally, given that our sonification algorithm was looking for circular shapes, colonies growing in films or any other shape would not have been easily detected or characterized with a “diameter.” Our algorithm also took in spatial data of colonies on the plate to determine the temporal arrangement of the notes. The microbiome of an individual is, moreover, constantly changing, and our methods, over identical trials, would likely not yield the same microorganisms dominating a plate and surviving to form a colony, as resources are limited and the diversity of the full population cannot be sustained. Thus, our MIDI output is reproducible from the same image, although the stochastic nature of bacterial colony growth means that an additional sample from the same person will result in different growth patterns on the plate.
The choice of solid LB medium, without any antibiotic, was intended to provide an environment able to sustain as many different microorganisms as possible (and was convenient in terms of access and procurement). LB was originally optimized for culturing Escherichia coli, however, and although we did not detect any E. coli in a 16S sequencing experiment we performed to identify which species of bacteria were growing, it is possible that we selected for bacteria with nutritional and other needs similar to those of E. coli. The populations of microorganisms present depend on the body part sampled, in addition to other factors that define the particular region swabbed. For example, the skin is a large and diverse organ that can be divided into regions by moisture and oil levels.
Since 2016, the Biota Beats team has worked with both national and local science institutions, acclaimed musicians, and pioneering scientists, as well as presenting in a wide range of venues, from museum exhibitions to youth science conventions. In the following, we present a brief history of our projects. For more information, see http://biotabeats.org.
The iGEM Project
Biota Beats was conceived by the EMW Street Bio team in 2016 with the vision to make biology more accessible to the public through popular music. To share the Biota Beats project with the global community, the project was submitted to the Hardware track of the 2016 International Genetically Engineered Machine (iGEM) competition. The iGEM competition is a worldwide contest in synthetic biology in which teams representing high schools, universities, and community labs like EMW Street Bio exhibit innovative, biologically based systems they have created to have an impact on humanity. The Biota Beats prototype was accompanied by a poster and iGEM wiki documenting its development and outreach (see http://2016.igem.org/Team:EMW_Street_Bio). It was presented to participants and judges and was awarded a bronze medal, motivating the team to continue expanding upon the initial design.
Youth Science Initiative
As a part of bringing Biota Beats to the community and with the rise of integrated STEAM (science, technology, engineering, art, and math) curricula, we engaged with local middle school–aged youth by pioneering the EMW Youth Science Initiative. This initiative seeks to educate, inspire, and expose underrepresented local youth not only to the content but also to the creative and critical thinking aspects of the sciences, as well as examining the sciences through a social lens.
The first session was hosted by Ginkgo Bioworks and took place on 15 October 2016, with a focus on microorganisms. The youth were shown how microbiomes cultured on an agar plate resembled classic vinyl records and were given a demonstration as to how a new type of record player could play music from our cultured microbiota. Soon after the demonstration, participants swabbed different parts of their body and inoculated agar plates, essentially composing their personal microbiome records.
EMW Gallery Exhibit: Culturalizing Science
The Culturalizing Science exhibition also provided the backdrop to teach students from English High School in Boston, Massachusetts, about the microbiome in the gallery space. The Biota Beats team designed a program for the students, including an introductory video on the human microbiome, a lesson describing specific bacteria (Listeria, Salmonella, E. coli, and Staphylococcus) using giant plush bacteria toys, and a crafting session where the students created their own bacteria from paper and wool. The story of Biota Beats was also told, followed by swabbing of the participants to create their personal biota records.
Uni-Verse at iGEM
In the original prototype of Biota Beats, each plate was a “song,” each body part was linked to an instrumental sound and pitch, and each colony was a note. In “Uni-Verse,” the global sample of the microbiome from over a hundred teams created a single song. Each continent from which the teams came was linked to an instrumental sound and pitch, and each plate was a single note, as seen in Table 1. The change in model resulted in a modification of the sonification algorithm: each plate was now summarized by a single parameter, density, the percentage of the plate covered with bacterial growth. This value was calculated by binarizing each plate image with varying thresholds and calculating the percentage of darkened pixels corresponding to growth (circular or not). This value was then used to calculate the varying octaves of each plate note, similar to our procedure with the individual Biota Beats tracks. With each plate coming from a specific iGEM team with ties to specific institutions, each colony and note had an associated location of origin; thus, with six tracks, one for each continent represented, the temporal arrangement of notes was decided based on longitude, from west to east.
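The density computation and its mapping to octaves can be sketched as follows; the threshold default, the octave range, and the function names are illustrative assumptions rather than the project's published parameters.

```python
def plate_density(image, threshold=128):
    """Fraction of pixels darker than `threshold` in a grayscale image,
    given as a nested list of 0-255 values; dark pixels are taken to
    correspond to bacterial growth after binarization."""
    dark = total = 0
    for row in image:
        for pixel in row:
            total += 1
            if pixel < threshold:
                dark += 1
    return dark / total

def density_to_octave(density, base_octave=3, octave_span=3):
    """Map a density in [0, 1] to an octave number for the plate's
    single note; the base octave and span are illustrative choices."""
    return base_octave + min(int(density * octave_span), octave_span - 1)
```

Because each plate contributes only one note, this per-plate scalar replaces the per-colony geometry of the individual Biota Beats tracks.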
|Continent|Body Part|Sound|
|Africa|Hands|808 kick with airy|
|North America|Nose|Atmospheric 1980s|
The visualization component was designed to communicate the new metaphor behind the Biota Beats song. Because geographic information was encoded into the music, the geographic locations of each of the participating teams were displayed on a spinning globe. The corresponding body parts were displayed by color-highlighting an adapted version of the “I, Virus” image. Finally, the equalizer displaying the audio was designed to look like an atmospheric layer wrapped around the globe.
Cambridge Science Festival
Stedelijk Museum Breda
Liberty Science Center, New Jersey
From December 2018 to May 2019, Biota Beats was part of the Microbes Rule! exhibition at Liberty Science Center in New Jersey. The exhibition explored how the intersection of art and science in the fields of microbiology and synthetic biology can help people visualize and connect with the invisible world of microbes (see Figure 13). Biota Beats was chosen as the soundtrack of this exhibition.
Custom Biota Beats
We also developed custom Biota Beats for several collaborators, including hip-hop icon DJ Jazzy Jeff and Linda Henry, cofounder of Hubweek, a Boston-based festival (see Figures 14 and 15). For each custom beat, we sampled body parts of each individual, including the hands, nose, and mouth. These samples were inoculated on a biota record, and we used the collected data to generate a beat.
The microbiome is intensely personal, constantly changing, and sensitive to changes in lifestyle and environment. Although the microbiome largely stabilizes after one grows out of childhood, it is in a state of dynamic equilibrium, changing from day to day (Lozupone et al. 2012). Thus, capturing snapshots of individuals' microbiomes at particular points in their lives could conceivably track their mental and physical journey in a new dimension.
Our current sampling process and sonification algorithm are based on visual and spatial data regarding the growth of microorganisms on the biota record. In the interest of improving replicability, however, we are working on using an input data stream other than plate images. Current research can use cotton swabs to collect bacteria, but instead of plating the samples, DNA purification and preparation procedures are used to ready the sample for DNA sequencing. Sequencing the 16S rRNA gene, which encodes a component of the microbial ribosome, is the standard method for classifying microbiota. The main barriers to implementation, however, are the cost of sequencing machines (or the fees of sequencing services) and reagents, as well as the time each process requires (Osman et al. 2018).
We conducted preliminary testing in this direction by sequencing particular colonies from test plates, and the populations of microorganisms we found matched those reported in the literature for skin. Initial attempts at sonifying classification data could begin even before physical samples and sequencing results are obtained, using online microbiome databases (e.g., the Human Gut Project), either by following previously established sonification models for genetic sequences or by using phylogenetic trees (Boutin and de Vienne 2017).
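As one hypothetical illustration of how classification data might be sonified (the scale, mapping, and taxa below are assumptions for the sketch, not the project's method), relative taxon abundances from a 16S classification could be mapped to pitches and durations:

```python
# C major pentatonic scale, as MIDI note numbers (an arbitrary choice)
PENTATONIC = [60, 62, 64, 67, 69]

def abundances_to_notes(taxa):
    """taxa: (name, relative_abundance) pairs, sorted by descending abundance.
    Returns (name, midi_pitch, duration_in_beats) triples: each taxon takes
    the next scale degree, and more abundant taxa sustain longer."""
    notes = []
    for i, (name, abundance) in enumerate(taxa):
        pitch = PENTATONIC[i % len(PENTATONIC)]
        # Scale abundance to beats, with a floor so rare taxa remain audible
        duration = max(0.25, round(abundance * 4, 2))
        notes.append((name, pitch, duration))
    return notes
```

A mapping of this kind would let a listener compare two microbiome samples by ear: a sample dominated by one taxon yields a single long note, whereas a diverse sample produces many short ones.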
Kits and Online Repository
Making Biota Beats commercially available would reach a much wider audience, and it would allow users to interact and create music on their own. Creating a self-contained kit allowing users to grow, image, and sonify their own bacteria without the need for a certified biological laboratory space would place agency in learning about the microbiome into their hands. An online repository for everyone's creations would help foster community and dialogue about how to interpret and develop this technology.
The Biota Beats group also held a workshop through the MIT Independent Activities Period (IAP) to brainstorm other applications of the technology. One proposed project was Biota Bonds, which involved sampling the mother's microbiome, which undergoes significant changes during pregnancy. Given the simple structure of a lullaby, Biota Beats could use an algorithm that adheres to the constraints of the genre to produce a personalized lullaby for both mother and infant. Other projects included ChoreoBiome and Biota Explore. ChoreoBiome highlighted the way a dancer would interact with the music, using accelerometers to let performers manipulate the sounds as they translate and embody the movements of the microbiota with their own bodies. Biota Explore takes samples from the microbiome of a city, giving users a chance to learn about their environment through a new lens that encourages curiosity about their surroundings. Overall, the technology of Biota Beats is fluid and can mold itself to the many interests and desires of various users and different data sets.
The EMW Community Space in Cambridge, Massachusetts, supported the work of all authors on this project. Additionally, Charles Kim was supported by the Teachers College at Columbia University, New York; Alexandria Guo by Wellesley College and the Massachusetts Institute of Technology Media Laboratory; Sara Sprinkhuizen and David Sun Kong by the MIT Media Laboratory; Gautam Salhotra by the University of Southern California in Los Angeles; and Keerthi Shetty by the Dana-Farber Cancer Institute in Boston, Massachusetts. Correspondence regarding this article should be addressed to David Sun Kong.
We gratefully acknowledge past members of the Biota Beats team, including Yixiao Xiang, Shannon Johnson, Thrasyvoulos Karydis, Viirj Kan, Rachel Smith, Mary Tsang, and Ani Liu, who were part of the EMW Street Bio team that participated in the 2016 iGEM competition and are pictured in Figure 16. We would also like to thank and acknowledge Mani Sai Suryateja Jammalamadaka, Sabrina Marecos, Lizza Román, Bart Scholten, Nicole Bakker, and Udayan Umapathi, who assisted with the Uni-Verse project at iGEM 2017, as well as the hundreds of students from the iGEM competition who provided microbiota samples, and members of iGEM headquarters, including Meagan Lizarazo, for their partnership. We would like to thank the participants in our MIT IAP Biota Beats workshop, including Russell Pasetes, Agnes Cameron, Rajeet Sampat, Angela Vujic, Shaheen Lakhani, Laya Anasu, Gary Zhang, Marius Hoelter, Gary Stillwell, Manuel Pelayo, Carole Urbano, Jabari King, Misha Sra, and Ilya Vidrin. Finally, we are grateful to the individuals who provided microbiota samples for our custom Biota Beats, including George Church, DJ Jazzy Jeff, and Linda Henry, and Jeffrey Cott for his help with the logo design and branding.