Abstract
This paper describes PlantConnect, a real-time interactive system that explores human-plant interaction via the human act of breathing, the bioelectrical and photosynthetic activity of plants, and computational intelligence to bring the two together. Part of larger investigations into alternative models for the creation of shared experiences and understanding with the natural world, the work is presented as a concrete implementation of a possible model based upon reciprocal interplay and information flows between human and nonhuman worlds.
For some time, artists have increasingly utilized computational media technologies to create interfaces with living organisms and the natural environment [1] in what artists Rasa Smite and Raitis Smits have called “emerging technoecological” practices [2]. Many of these artworks feature systems that might be referred to as biocybernetic interfaces: the bridging of the living world with the computational world. Artists marshal technologies such as robotics, bioenergy technologies, machine learning, and computer vision to translate the often-unseen processes of nonhuman organisms to human sensory ratios, in speculative attempts at creating some kind of meaningful and aesthetically potent connection between the two.

Smite and Smits’s notion of techno-ecologies extends French philosopher Félix Guattari’s notable work The Three Ecologies to include our current immersion within technologies [3]. Guattari argues for more subjectivity in science. In extending the definition of ecology to encompass social relations and human subjectivity as well as environmental concerns, Guattari states that the boundaries between nature and technology need to be collapsed if we are to address the ecological crisis properly. Learning to think “transversally,” or across disciplines and systems of ideas, is a crucial step toward the goal of developing the alternative ontologies, epistemologies, and social relations necessary for ecological sustainability. As Italian philosopher Rosi Braidotti notes, this transversality must “include the relational dependence on multiple nonhumans and the planetary dimension as a whole” [4]. Exemplified by Smite and Smits’s own Biotricity—which features the sonification of bacterially generated electricity and microbial fuel cell (MFC) technology [5]—these works can act as cultural interfaces to this transversality, bridging diverse groups and practices (including humans and nonhumans) to establish new dialogues around sustainability.

Overall, the integration of biological systems has had an almost visceral appeal to artists, since such systems may exhibit unexpected or unconceived patterns of behavior that purely digital or mechanical systems may not. In addition, many artists are attracted to the thematic blurring of boundaries between digital and biological worlds as ways of experiencing the enigmatic “otherness” of nonhuman species [6]. Whether referred to as bioart, environmental art, or by a myriad of other names, the focus on human-nonhuman-environmental interactions resonates across these practices. Through various types of process-driven practices that feature combinations of living matter and emerging technologies, artists not only are exploring how these systems can serve as vectors of novelty and unexpected variety but also are forging new aesthetics and systems of ideas focused on showcasing alternative possibilities of human-nonhuman relations in the age of climate change and environmental degradation.
The interactive installation PlantConnect (Fig. 1) explores the possibilities of these technologically mediated encounters between human and nonhuman agents by combining plants, bacteria, and MFC technology in a real-time system that features the human act of breathing, the bioelectrical and photosynthetic activity of plants, and computational intelligence to link the two together. In PlantConnect the photosynthetic activity from an array of plant-microbial fuel cells (P-MFCs) and the bioelectrical activity of bacteria in the plants’ soil are measured and translated into light and sound patterns using machine learning. Part of larger investigations into alternative models for the creation of shared experiences and understanding with the natural world, the project explores complexity and emergent phenomena by harnessing the material agency of nonhuman organisms and the capacity of emerging technologies as mediums for information transmission, communication, and interconnectedness between the human and nonhuman. By staging interactions among human, nonhuman, and machine agencies, PlantConnect contributes to dialogues not only on sustainable futures but also in reimagining the nature of our relationship to the environment and the nonhuman world more broadly. Might we be able to construct experiences using computational intelligence and living organisms that feature what science and technology scholar Andrew Pickering calls a “performative ontology” that does not separate people and (living) things [7]? Can we create an ontological model of the world as one consisting of open-ended interactions and reciprocal interplay between all the life and matter in it, one that accounts for the agency of matter and all the living things on Earth in addition to humans and intelligent machines [8]?
PlantConnect was exhibited in 2019 at the Asia Culture Center in Gwangju, South Korea, as part of the Arts & Creative Technology (ACT) Festival and the International Symposium on Electronic Art (ISEA 2019). (© Carlos Castellanos)
A Primer on Microbial Fuel Cell Technology
MFCs are an emerging bioenergy technology for generating electricity from biomass using microorganisms found in diverse environments such as wastewater, soil, and lakes [9]. Essentially, MFCs are batteries: they convert chemical energy to electrical energy via the action of anaerobic bacteria that metabolize organic matter. Generally, MFCs operate with an aerobic cathode exposed to air or oxygenated water and an anaerobic anode immersed in wastewater or other organic matter (Fig. 2). The organic matter is metabolized by the bacteria, releasing electrons and protons. The electrons are collected at the MFC’s anode and flow through an external circuit to the cathode, while the protons (positive hydrogen ions) migrate through the separator to the cathode side. At the cathode, oxygen combines with the arriving electrons and protons and is reduced to water. In dual-chamber designs, a proton exchange membrane serves as the separator between the cathode and anode, while single-chamber designs rely on the organic material (e.g. soil) as a natural separator, where the bottom is under anaerobic conditions and the top is aerobic (the cathode is exposed to air or oxygenated water). In addition to generating power, MFCs can be used as part of, or in conjunction with, waste processing systems and the remediation of contaminated lakes and rivers [10]. Overall, MFCs offer a very different approach to power generation and wastewater treatment, as the treatment process can become a method of capturing energy in the form of electricity or hydrogen gas rather than a drain on electrical energy.
Single-chamber (left) and dual-chamber MFC designs. (© Carlos Castellanos)
PlantConnect uses an array of 16 P-MFCs as the core element of the system. P-MFCs use naturally occurring and known processes around the roots of plants (typically aquatic plants) to produce electricity [11]. The plant produces organic matter via photosynthesis under the influence of sunlight. Most of this organic matter ends up as root material or exudates in the soil, where it is metabolized by anaerobic bacteria, resulting in the release of electrons as described above.
The PlantConnect System
As shown in Fig. 3, a participant blowing or whistling into a carbon dioxide (CO2) sensor located within the array of plants causes the CO2 levels to surpass a baseline threshold. This in turn triggers an array of 16 grow lights and a set of software sound instruments, so participants receive an immediate visual and sonic response. The lights are directed at the plants (from 2 m above), one light per plant, and thus contribute to their photosynthesis. Photosynthesis levels are obtained from housings that each contain a plant and a nearby CO2 sensor (discussed below): when the light above a plant turns on, the CO2 level near that plant decreases. These levels are translated into interpolation parameters for the software sound instruments and the spatialization module of the system. Meanwhile, the voltage signals from the P-MFCs are read by a standard microcontroller and analyzed to find their minimum and maximum values, from which a set of thresholds is derived. These thresholds determine the on/off patterns of the lights when they are triggered by human breath/CO2. Once the CO2 level at the breath sensor falls below the baseline threshold, the lights turn off. This can take anywhere from 1 to 10 seconds.
Using two digital video cameras and a simple blob detection algorithm, the system then detects the on/off state of each light in the light array relative to the background. These data are sent to a clustering algorithm that performs rudimentary pattern recognition, and the resulting output is passed to the sound instruments and spatialization module to create the generative sound environment. In this way, the machine learning algorithm—and by extension the plants—selects instruments and alters their amplitude, duration, pitch, and other parameters.
To initiate a response from the system, a participant typically blows or whistles into the CO2 sensor located in the center of the space. This triggers each grow light to turn on, but only if the voltage of its associated P-MFC is above the requisite threshold. The result is an unpredictable and varied pattern of lights and sound that is experienced as a reaction by the plants to human breath and light. The entire sound, computer vision, and machine learning portion of the system was built using Cycling ‘74 Max [12]. The project runs on two Apple Macintosh computers: one (the “CV/ML” computer) handles the computer vision and machine learning tasks, while the other (the “sound” computer) handles generative sound and communication with the microcontroller. Data are sent from Max on the CV/ML computer as User Datagram Protocol (UDP) messages over a standard Ethernet connection to the sound computer, which also runs Max with the sound instruments loaded. The sensor readings, P-MFC voltage readings, and light control system were built on the Arduino microcontroller platform. The following subsections detail each element of the system.
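As a simple illustration of this messaging (the installation itself uses Max rather than Python), the following sketch sends a 32-element light-state list as a UDP datagram; the host address and port are placeholder values, not those of the actual system.

```python
# Minimal sketch (not the actual Max patch): sending the 32-element
# light-state list from the CV/ML computer to the sound computer as a
# plain UDP datagram. The host address and port are hypothetical.
import socket

SOUND_COMPUTER = ("192.168.1.20", 7400)  # placeholder address/port

def send_light_states(states):
    """Send a list of 32 ints (0 or 1) as a space-separated UDP message."""
    message = " ".join(str(s) for s in states).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, SOUND_COMPUTER)

send_light_states([0] * 32)
```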
Plants and Plant Housings
The plants used in this project are Oryza sativa, commonly referred to as Asian rice. The P-MFCs were built using carbon fiber as the anode and cathode material. The anodes are attached to insulated copper wire inside a waterproof acrylic housing (Fig. 4) and are then submerged about 5 cm below the surface, so they are not exposed to oxygen. The plants are housed in enclosures made of wood and clear vinyl (Fig. 1). While the housings serve an aesthetic purpose in the piece, they are also necessary for properly measuring changes in CO2 absorption by the plants themselves, irrespective of changes in the surrounding CO2 levels in the space. They also provide sufficient ventilation for adequate airflow, so the CO2 level does not continually rise inside the housing [13].
P-MFCs showing the rice plant, carbon fiber anode and cathode, and waterproof acrylic housings. (© Carlos Castellanos)
Biosignals and Light Control
All signal acquisition and light control are handled by a single Arduino Mega 2560 microcontroller [14]. Acquiring voltages from the P-MFCs is a simple matter of connecting each cathode (which in this case is the positive lead) to an analog input of the Arduino. However, the voltages are not acquired from each individual P-MFC. Instead, groups of four P-MFCs are wired together in series to make a single voltage source that is then connected to an Arduino analog input. As there are 16 P-MFCs, this amounts to four groups of four P-MFCs (hereafter referred to as “P-MFC groups”) and thus a total of four voltage sources.
While the system is running, the voltage signals in each P-MFC group are analyzed and used to generate a set of four dynamic thresholds, one for each light and P-MFC in the group. These thresholds determine which lights activate and thus the on/off patterns of the light array: as each threshold is surpassed, the corresponding light in the group is set to an active state, proceeding successively in a clockwise manner. When a participant blows on the CO2 breath sensor (and the sensor value goes above the predetermined threshold), the active lights are triggered to turn on. The lights themselves are 20-watt LED grow lights that emit a warm white color; they are connected through two 8-channel relays controlled by the Arduino. When plants are actively photosynthesizing, they absorb greater amounts of CO2 than when they are not (e.g. at night). In PlantConnect, the P-MFCs’ levels of photosynthesis are therefore obtained by measuring CO2 near the plants. Each plant sensor returns the CO2 level in parts per million and, like the breath CO2 sensor, sends its data to the Arduino via a serial/RS-232 connection. For the breath sensor, we keep a running median of the nine most recent CO2 readings, which establishes a baseline level with respect to the surrounding environment; the threshold for triggering lights and sound is a predetermined level above this baseline (20 ppm by default). CO2 readings are taken at a rate of two per second.
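The trigger logic can be summarized in a short sketch. The following Python fragment is illustrative only: the sensor-reading and relay code is omitted, and the even spacing of the four voltage thresholds between each group’s observed minimum and maximum is a simplification of the dynamic threshold scheme described above.

```python
# Illustrative sketch of the breath-trigger and light-threshold logic.
# The evenly spaced voltage thresholds are a simplification, not the
# actual Arduino/Max implementation.
from collections import deque
from statistics import median

BASELINE_WINDOW = 9     # nine most recent breath-sensor readings
TRIGGER_OFFSET = 20     # ppm above the running baseline (default)

recent = deque(maxlen=BASELINE_WINDOW)

def breath_detected(current_ppm):
    """True when the current reading exceeds the running baseline by the offset."""
    baseline = median(recent) if recent else current_ppm
    recent.append(current_ppm)
    return current_ppm > baseline + TRIGGER_OFFSET

def active_lights(group_voltage, v_min, v_max, lights_per_group=4):
    """Set lights in a group active successively as the group voltage rises."""
    span = (v_max - v_min) or 1e-9
    thresholds = [v_min + span * (i + 1) / lights_per_group
                  for i in range(lights_per_group)]
    return [group_voltage >= t for t in thresholds]
```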
Computer Vision
A simple blob detection algorithm is used to differentiate the lights from the background. This is a relatively simple task, as the piece is installed in a rather dark space. To achieve blob detection easily and reliably within the Max environment, a third-party library, cv.jit [15], was used. The cv.jit.blobs.centroids object returns a list of blob centroid coordinates. Two USB digital video cameras [16] were mounted between the plants and the grow lights (just over two meters from the floor) to provide the video feeds. They were pointed directly at the lights and connected to the CV/ML computer running Max. The two video feeds were combined into a single streaming image that captured all the lights in the space. This image was then algorithmically divided into four rows and eight columns for a total of 32 cells. In its default configuration, the video image is 640 × 360 pixels; thus, each cell is 80 × 90 pixels. This grid is the reason for using blob detection (as opposed to simply reading the on/off states from the microcontroller). The size of the lights and the fact that each light panel takes up to 500 milliseconds to reach full brightness mean that a single light may register as more than one blob, since it may spill over into another row or column. Both factors add an element of variety and aleatoric behavior to the system (for example, the system may very quickly switch between several different cluster assignments for the same light pattern, resulting in an erratic, “glitchy” sound).
Blobs are analyzed, and the x/y coordinates of each blob’s centroid (center of mass) are returned. A list of 32 binary numbers corresponding to the location of each centroid within the grid of 32 cells is then output, with 0 being “off” and 1 being “on.” This list determines which “voice” of the sound instrument (which essentially corresponds to pitch) gets played. For example, if the first light is turned on and its blob centroid is located at pixel location (40, 55), index 0 (the first item in the list) will be set to 1, triggering the sound instrument to play its lowest pitch. Depending on how the system is configured, the duration of each triggered voice is either set to a fixed amount at runtime or determined by whether the blob detection algorithm still recognizes the light (essentially, if the light is on, its corresponding voice stays on). Pitches (or which voice gets played) are arranged left-to-right and top-to-bottom in the 4 × 8 grid; thus, voice 1 is the top left of the grid and voice 32 the bottom right.
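A minimal Python sketch of this centroid-to-voice mapping, assuming the 640 × 360 frame and 4 × 8 grid described above (the function and variable names are illustrative rather than taken from the Max patch):

```python
# Sketch of the centroid-to-voice mapping, assuming a 640 x 360 frame
# divided into a 4 x 8 grid (80 x 90 pixels per cell).
IMG_W, IMG_H = 640, 360
COLS, ROWS = 8, 4
CELL_W, CELL_H = IMG_W // COLS, IMG_H // ROWS   # 80 x 90 pixels

def centroids_to_voices(centroids):
    """Convert a list of (x, y) blob centroids into the 32-element on/off list."""
    states = [0] * (COLS * ROWS)
    for x, y in centroids:
        col = min(int(x) // CELL_W, COLS - 1)
        row = min(int(y) // CELL_H, ROWS - 1)
        states[row * COLS + col] = 1   # left-to-right, top-to-bottom ordering
    return states

# A centroid at (40, 55) falls in the first cell, triggering voice 1 (index 0).
assert centroids_to_voices([(40, 55)])[0] == 1
```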
Clustering/Pattern Recognition
The same list of 32 binary numbers that is sent to the sound instruments is also sent to the machine learning module. We then apply a fuzzy c-means clustering algorithm to the data. Fuzzy c-means clustering (FCM) [17] is a method of clustering (a type of unsupervised machine learning) that allows a given data point to belong to more than one cluster. A membership grade (in our case, a floating-point number between 0.0 and 1.0) is calculated for each data point that indicates the degree to which that point belongs to each cluster. Frequently used in pattern recognition tasks, FCM assigns membership in a cluster by calculating the distance between the cluster centroid and the data point. The closer the data point is to the cluster centroid, the higher its membership grade for that cluster (i.e. the closer it is to 1.0).
In PlantConnect, we use the ml.fcm object from the ml.* package for Max [18]. We first initialize the object by assigning it a fuzz coefficient of 1.05, selecting the number of clusters to calculate (in our case, four), and setting a termination threshold of 0.01 (the default). The fuzz coefficient affects how “crisp” or “fuzzy” the cluster memberships are (higher numbers return fuzzier membership grades), while the termination threshold affects the speed and accuracy of the cluster calculation (higher values produce quicker, more approximate clusters). We then generate 1,000 random data points as a training set. Each data point has 32 dimensions (corresponding to the possible location of each centroid within the grid of 32 cells) and consists of ones and zeros. Once this is done and the live video feed is turned on, the system is ready to perform real-time clustering of incoming light patterns. When the system is running and new data on the light on/off patterns are received, a query is made to the ml.fcm object, which outputs a list of four membership grades (one for each cluster, between 0.0 and 1.0). These numbers are used to set the volume of each sound instrument (0.0 = minimum volume, 1.0 = maximum volume). In essence, the FCM algorithm is used as a kind of intelligent mixer for the sound instruments, generating a variety of sounds that would be unlikely or even impossible for a human-controlled mixer to achieve.
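To make the idea of the FCM algorithm as an intelligent mixer concrete, the following Python sketch computes membership grades for an incoming light pattern using the standard FCM membership formula and treats them directly as instrument volumes. It is a simplified stand-in for the ml.fcm object: the cluster centers below are random placeholders rather than the result of training on the 1,000-point set.

```python
# Simplified stand-in for querying ml.fcm: membership grades of a new
# light pattern against (placeholder) cluster centers, used as volumes.
import math
import random

FUZZ = 1.05          # fuzz coefficient used in the installation
N_CLUSTERS = 4
DIMS = 32

random.seed(0)
# Placeholder centers; in the installation these come from training on
# 1,000 random 32-dimensional binary data points.
centers = [[random.random() for _ in range(DIMS)] for _ in range(N_CLUSTERS)]

def memberships(pattern):
    """Standard FCM membership: u_i = 1 / sum_j (d_i / d_j) ** (2 / (m - 1))."""
    dists = [math.dist(pattern, c) for c in centers]
    if any(d == 0.0 for d in dists):          # pattern sits exactly on a center
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    exponent = 2.0 / (FUZZ - 1.0)
    return [1.0 / sum((d_i / d_j) ** exponent for d_j in dists) for d_i in dists]

# Membership grades (0.0-1.0) are used directly as instrument volumes.
pattern = [1, 0] * 16
volumes = memberships(pattern)
```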
Generative Sound
The real-time data representing the shifting light patterns, along with the output of the FCM algorithm, are translated into a series of UDP messages that control the sound instruments and a spatialization module within the Max environment. These messages essentially function as note on/off messages to “play” the instruments. Five sound instruments have been constructed, each with its own distinct timbre. Four of these correspond to the four cluster memberships generated by the FCM algorithm (and will henceforth be referred to as the “cluster instruments”); they require human interaction (via breath/CO2) to be activated. The fifth is the default instrument, which plays continuously and requires no human action to be activated.
The default sound instrument simply maps the voltage levels from each P-MFC group to pitch (the higher the voltage, the higher the pitch). In addition, any transient spikes in the CO2 levels from any of the P-MFCs are sonified by the default instrument and heard as corresponding spikes in pitch. The cluster instruments receive CO2 levels as well; in this case, however, we average the CO2 readings of the plants in each P-MFC group and then take the median of the five most recent averages. These values are then used as interpolation parameters for the spatialization module (discussed below).
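The following Python sketch illustrates this smoothing along with a simple voltage-to-pitch map; the linear mapping and its frequency range are assumptions for illustration, not the values used in the Max patch.

```python
# Illustrative sketch: per-group CO2 smoothing (mean of the group's plants,
# then median of the five most recent means) and a simple voltage-to-pitch map.
from collections import deque
from statistics import mean, median

recent_means = deque(maxlen=5)

def smoothed_group_co2(group_readings):
    """group_readings: CO2 ppm values for the four plants in one P-MFC group."""
    recent_means.append(mean(group_readings))
    return median(recent_means)

def voltage_to_pitch(volts, v_min=0.0, v_max=2.0, low_hz=110.0, high_hz=880.0):
    """Higher group voltage -> higher pitch (linear mapping, assumed range)."""
    t = max(0.0, min(1.0, (volts - v_min) / (v_max - v_min)))
    return low_hz + t * (high_hz - low_hz)
```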
Finally, the CO2 readings are also collected and used to construct an envelope function that is used as a modulation source for the cluster instruments. Each instrument uses this modulation data differently. For example, one instrument uses it to crossfade between different wavetables, while another uses it to alter the depth factor (the amount of deviation around a center frequency) of a modulation oscillator and to crossfade between two control signals.
PlantConnect also features 8-speaker sound spatialization using circular panning. By default, sounds related to readings taken from each P-MFC group are sent to the two adjacent speakers closest to that group. Likewise, whenever a light is triggered above a particular P-MFC in a group, the sound instrument is heard on the two adjacent speakers closest to that group, in a manner similar to left-to-right panning—the idea being that the sound instrument is heard near the P-MFC whose light is currently on (and thus triggering sound). The CO2 levels of each P-MFC group influence the amount of spatialization spread across all the speakers: the median of the five most recent averaged CO2 readings of each group determines how far the triggered sound instrument spreads from its “home” location (the two adjacent speakers closest to it) to the other speakers. When triggered, the sound spreads in both clockwise and counterclockwise directions from this home location.
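The spread behavior can be illustrated with a small sketch of circular panning gains across the eight speakers. The cosine-falloff panning law used here is an assumption; the installation’s Max spatialization module may implement the spread differently.

```python
# Sketch of 8-speaker circular panning with a CO2-driven "spread" amount.
# The cosine-falloff gain curve is an assumed panning law for illustration.
import math

N_SPEAKERS = 8

def speaker_gains(home_angle, spread):
    """home_angle: direction of the sound's home location in radians.
    spread: 0.0 keeps the sound on the nearest speakers; 1.0 spreads it
    around the whole ring (clockwise and counterclockwise)."""
    gains = []
    for i in range(N_SPEAKERS):
        speaker_angle = 2 * math.pi * i / N_SPEAKERS
        # shortest angular distance between the sound position and this speaker
        diff = abs((speaker_angle - home_angle + math.pi) % (2 * math.pi) - math.pi)
        width = math.pi * (0.25 + 0.75 * spread)   # widen the lobe with spread
        gains.append(math.cos(min(diff / width, 1.0) * math.pi / 2))
    return gains
```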
Discussion
Observing participant interactions with PlantConnect revealed what I believe to be a consistent pattern of behavior. After the initial surprise of the triggering of lights and sound, participants would observe closely, often walking around and looking up as well as down and close to the plants. Sometimes they would lean in to observe particular elements (e.g. sensor placement, soil). Most viewers stayed with the work considerably longer than is typical for artworks [19] and did so in a manner (looking around, perhaps confused but curious or even delighted) suggesting that they appreciated, if only partially “getting,” what was going on with the work. This of course is not atypical for interactive or “intelligent” artworks. Initial curiosity about the functioning of a work reveals cultural anxieties about technology and a desire to understand the presumably fixed rules of its operation. Yet in this case the mixing of living and computational systems seems to add a kind of heightened yet relaxed curiosity to the piece. Perhaps participants feel more comfortable simply accepting the mystery of living things and their interactions with a technological system and are thus more inclined to think about (and accept) organic and ecological explanations as the drivers of complexity.
The Asian rice plant was chosen because it is readily available, being grown plentifully in the Gwangju area (where the piece was first exhibited), and because it has been used by researchers in several P-MFC designs with some success [20]. As materials for an electronic artwork, the plants, mud, and bioelectricity made for an altogether novel experience, qualitatively different from a more conventional electronic art setup. Like many technological artworks, this piece requires a fair amount of maintenance and setup time. However, unlike most technological artworks, the “messiness” and the process of replenishment (e.g. adding water) and care (especially in the days and weeks before the exhibition) added weight to the sense that we were not just tapping into an information source but were embedding ourselves within already occurring organic processes. Making, exhibiting, and observing tended to collapse into each other. This, I believe, arose in part from material expressions of the agency of the plants, mud, and bacteria: their autonomy, their ability to simply be, their tropic tendencies, and their adaptation to environmental conditions. Almost regardless of what we do, the plants will absorb CO2 and the bacteria will generate at least tiny amounts of electricity for some time. The challenge is in transducing enough of these processes (e.g. capturing enough voltage) to make a compelling experience for the audience while continuing to let the plants and bacteria exercise their agency. PlantConnect is in some ways an experiment in accounting for these kinds of contingencies—and in fact these possibilities are ultimately what the work is about.
Conclusion
In his influential paper “The Aesthetics of Intelligent Systems” [21], art theorist Jack Burnham considers art that utilizes intelligent systems as establishing a dialogue that can expand the horizons of the art experience by enabling us to tap into the information-rich environment. The crucial insight Burnham offers is his assertion that this emerging expansion of the art experience “encourages the recognition of man [sic] as an integral part of his environment” [22]. Burnham stated his belief that “the ‘aesthetics of intelligent systems’ could be considered a dialogue where two systems gather and exchange information so as to change constantly the state of the other” [23]. While PlantConnect involves more than two systems, the framework of reciprocal dialogue (and agency) is still very much at play. More than just using digital technologies to bridge the living world with the computational world, PlantConnect asserts that if we are to reimagine the nature of our relationship to the environment and the nonhuman world, we must work from a posture of designing systems and experiences that bring this environmental embeddedness, reciprocity, and co-performance of agencies into high relief.
PlantConnect can be seen as a contribution to this conversation via notions of interspecies and machine agency. In addition to Pickering’s performative ontology, we can also say that PlantConnect stages multiple intersecting agencies at once—plant, bacteria, human, and machine—all operating at different spatial and temporal scales. This “microperformativity,” or decentering of human sensory ratios in biologically focused works [24], helps establish a kind of equity between plant, bacterial, machinic, and human agencies, as it requires some level of acknowledgment by humans that there are invisible (yet vital) processes happening (e.g. bacterial voltages and metabolic processes, photosynthesis) whose translation to human scale is subtle and sometimes seemingly unchanging (e.g. bacterial voltages may stay the same for hours). PlantConnect can also be seen as enabling what curator Jens Hauser refers to as “co-corporeality” between the actors involved [25], redefining notions of body and blurring lines between system and environment by changing focus from “mesoscopic actions to its microscopic functions, from physical gestures to physiological processes, and from staged diegetic time to real performative time” [26].
Whether we are “speaking” to plants or “listening” to bacteria, computational, biological, and environmental technologies all have cultural and aesthetic dimensions that call for further artistic and critical exploration. Indeed, the convergence or intermingling of computational, nonhuman, and human agencies may be a template for aesthetic experiences that highlight this performative ontology and microperformativity, showing us the unpredictability and dynamic potency that living organisms can exhibit while showcasing possibilities for new ways of understanding these organisms and the environment. In PlantConnect, bioelectricity, light, sound, CO2, photosynthesis, and computational intelligence form a circuit that enhances informational linkages between human, plant, bacteria, and the physical environment, enabling a mode of interaction that is experienced not just as a technologically enabled act of translation, but as an embodied flow of information.
Acknowledgments
PlantConnect was created by Carlos Castellanos and Bello Bello. Thank you to the Asia Culture Center in Gwangju, South Korea, for its support.