A growing number of individuals live with medical conditions and injuries that render them minimally communicative. Assessing their level of consciousness and awareness is a major challenge with profound implications for care decisions and their relationships. Resonance, a novel brain-computer interface assemblage, is designed to detect and augment expressions of consciousness in minimally communicative individuals. Resonance consists of (1) high-density EEG features that vary with states of consciousness; (2) sound; and (3) therapeutic clowns. Seven EEG features of consciousness are calculated in real time and mapped to sonic output. Therapeutic clowns use multisensory improvisational play to interact with these sonified brain features to create interpersonal connections with minimally communicative individuals. Resonance has the potential to reveal real-time variations in an individual’s level of consciousness, which may create an entirely new form of interpersonal interaction with minimally communicative persons.

Over the past 30 years, improvements in life-saving and life-sustaining technologies have created “a new strain of human beings” [1]; ranging from individuals in critical care to elders with advanced dementia, these individuals have minimal to no ability to interact with others. Even under the best circumstances, caregivers report uncertainty and ambiguity when interacting with these persons [2]. This uncertainty hinges on a central question: Are these unresponsive individuals conscious? Put another way, are they aware of their environment and able to perceive the interactions of other people? The answer to this question is foundational to their relationships and decisions about their care, yet it is often based on guesswork by health care professionals and family members alike [3].

This paper presents Resonance, a novel brain-computer interface assemblage that we designed to detect and augment expressions of consciousness in minimally communicative individuals [4]. The assemblage consists of (1) electroencephalography (EEG), (2) translation of EEG into sound, and (3) therapeutic clowns.

Emerging evidence from neuroscientific research on consciousness has correlated EEG features with an unresponsive individual’s level of consciousness [5]. The sonification of EEG features associated with levels of consciousness may enable caregivers to hear the waxing and waning of someone else’s awareness [6]. Therapeutic clowns are ideally placed to create interactions from this sonification, as they are trained to stimulate interaction using musical and rhythmic techniques to capture attention and develop call-and-response [7]. Furthermore, they often work in health care settings to strengthen patient personhood against a backdrop of often dehumanizing medical interventions and procedures [8].

Resonance integrates multiple technological, ecological, and human components:

  1. Electroencephalography: EEG measures electrical signals generated by the brain. High-density 128-channel EEG is recorded in real time through custom software, and time-varying features of consciousness are extracted and sent to a sound generator. The EEG equipment used for Resonance has been used in previous studies to collect data from patients with disorders of consciousness [9].

  2. Sound: The sounds of Resonance are designed to enhance interaction between minimally communicative individuals and their caregivers [10,11]. We have tailored them to the particular health care environments in which they will be heard (e.g., the sounds do not conflict with those of medical devices). Each EEG feature is sonified, and the sonified features are combined into a “soundscape” using recordings, synthesizers, and/or filters.

  3. Therapeutic clowns: Therapeutic clowning is a 30-year-old international practice rooted in multisensory improvisational interplay. The clowns have a repertoire of nimble and adaptable techniques to create interpersonal connections with individuals with varying levels of responsiveness [12]. As therapeutic clowns have a heightened sensitivity to minimal multisensory feedback, they are ideally placed to assess the impact of the sonified EEG on their own responsiveness to the individual and to suggest ways that Resonance can be used by others to enhance the quality of their interactions.

Resonance deploys custom-made software that calculates the real-time values of seven EEG features that have been identified by consciousness research as strong neural markers of level of consciousness.

Ratios of Spectral Power

We computed spectrograms of the EEG using the multitaper method, with window length T = 2 s, step size 0.1 s, time-bandwidth product NW = 2, and K = 3 tapers. Next, we summed the power of the theta (4–8 Hz), alpha (8–13 Hz), and beta (13–25 Hz) bands over all overlapping windows spanning a 5-s interval. Two ratios of spectral power (beta/alpha; alpha/theta) were automatically sent to the sonificator every 5 s. Higher frequencies are typically associated with higher levels of consciousness.
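As a concrete illustration, the windowed band-power pipeline described above can be sketched in Python. This is a minimal reconstruction under stated assumptions (a 250 Hz sampling rate and a single channel are assumed, and the PSD scaling is arbitrary since only ratios are used); it is not the actual Resonance software.

```python
import numpy as np
from scipy.signal.windows import dpss

FS = 250  # sampling rate in Hz; an assumption, not specified in the text

def multitaper_psd(x, fs=FS, nw=2, k=3):
    """Multitaper PSD of a 1-D window, with NW = 2 and K = 3 tapers."""
    n = len(x)
    tapers = dpss(n, nw, Kmax=k)                      # (k, n) DPSS tapers
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs                   # average over tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

def band_power(freqs, psd, lo, hi):
    """Total power in the [lo, hi) frequency band."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

def power_ratios(epoch, fs=FS, win_s=2.0, step_s=0.1):
    """Sum theta/alpha/beta power over all overlapping 2-s windows of a
    5-s epoch, then return the (beta/alpha, alpha/theta) ratios."""
    win, step = int(win_s * fs), int(step_s * fs)
    theta = alpha = beta = 0.0
    for start in range(0, len(epoch) - win + 1, step):
        freqs, psd = multitaper_psd(epoch[start:start + win], fs)
        theta += band_power(freqs, psd, 4, 8)
        alpha += band_power(freqs, psd, 8, 13)
        beta += band_power(freqs, psd, 13, 25)
    return beta / alpha, alpha / theta
```

An alpha-dominant epoch (e.g. a strong 10 Hz rhythm) yields a high alpha/theta ratio and a low beta/alpha ratio, consistent with the mapping of higher frequencies to higher levels of consciousness.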

Topographic Distribution of Alpha Power

We used the global scalp power distribution at 10 Hz of a 5-sec EEG window to calculate the ratio of frontal-versus-posterior-dominant alpha power; the higher this ratio, the more likely the participant is to be unconscious.
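A sketch of this anteriorization index follows; the channel index lists are placeholders standing in for the montage's frontal and posterior electrodes, which the text does not enumerate.

```python
import numpy as np

def frontal_posterior_alpha_ratio(epoch, fs, frontal_idx, posterior_idx):
    """Ratio of frontal to posterior alpha power at 10 Hz for a 5-s window.
    epoch: (n_channels, n_samples) array. The higher the ratio, the more
    anterior-dominant the alpha distribution."""
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
    bin_10hz = int(np.argmin(np.abs(freqs - 10.0)))   # nearest bin to 10 Hz
    p10 = psd[:, bin_10hz]                            # per-channel 10 Hz power
    return p10[frontal_idx].mean() / p10[posterior_idx].mean()
```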

Phase-Amplitude Coupling of Extra-Low-Frequency Phase and Alpha Amplitude

Phase-amplitude coupling of extra-low-frequency (0.1–1 Hz) phase and alpha (8–14 Hz) amplitude has been associated with levels of consciousness [13]. We calculated the phase-amplitude coupling of these two frequency bands for six electrodes in the frontal area (i.e., the area surrounding electrode F2) and five electrodes in the parietal area (i.e., the area surrounding electrode Pz). We then computed a phase-amplitude modulogram by assigning each temporal sample of instantaneous amplitude to one of 18 equally spaced phase bins based on the instantaneous value of the low-frequency phase and then averaging the instantaneous amplitude of alpha within the window. Finally, for each electrode, we calculated the average modulogram power over the trough and peak phases in nonoverlapping 30-second windows.
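The modulogram step can be sketched for a single channel as below (a simplified illustration: the filter design is an assumption, and the frontal/parietal electrode averaging and trough/peak summary described above are omitted).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=2):
    """Zero-phase Butterworth bandpass filter."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_modulogram(x, fs, n_bins=18):
    """Phase-amplitude modulogram for one channel: bin the instantaneous
    alpha (8-14 Hz) amplitude by the instantaneous phase of the
    extra-low-frequency (0.1-1 Hz) component, using 18 phase bins."""
    slow_phase = np.angle(hilbert(bandpass(x, 0.1, 1.0, fs)))
    alpha_amp = np.abs(hilbert(bandpass(x, 8.0, 14.0, fs)))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bins = np.clip(np.digitize(slow_phase, edges) - 1, 0, n_bins - 1)
    return np.array([alpha_amp[bins == i].mean() for i in range(n_bins)])
```

When alpha amplitude is genuinely modulated by the slow phase, the modulogram is strongly nonuniform; a flat modulogram indicates no coupling.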

Frontoparietal Functional Connectivity

The strength of frontoparietal functional connectivity varies with levels of consciousness: Stronger connectivity is associated with higher levels. With Resonance, we estimated the functional connectivity between electrodes using the weighted phase lag index (wPLI) [14]. To account for injuries and asymmetries in the brain, we distinguish four different regions of frontoparietal wPLI: (1) left lateral, (2) left midline, (3) right midline, and (4) right lateral. We calculated wPLI for each electrode pair from the EEG filtered to the alpha bandwidth (8–13 Hz) recorded during a 10-second epoch. The surrogate-corrected wPLI values from all four brain regions were sonified every 10 seconds.
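For a single electrode pair, the wPLI computation can be sketched as follows; this is a minimal reading of the Vinck et al. estimator from alpha-filtered analytic signals, with the surrogate correction described above omitted.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def wpli(x, y, fs, band=(8.0, 13.0)):
    """Weighted phase lag index between two channels over one epoch,
    computed from the imaginary part of the cross-spectrum of
    alpha-filtered analytic signals."""
    sos = butter(2, band, btype="band", fs=fs, output="sos")
    zx = hilbert(sosfiltfilt(sos, x))
    zy = hilbert(sosfiltfilt(sos, y))
    im = np.imag(zx * np.conj(zy))        # signed imaginary cross-spectrum
    denom = np.mean(np.abs(im))
    return 0.0 if denom == 0 else float(np.abs(np.mean(im)) / denom)
```

A consistent nonzero phase lag between the two channels drives wPLI toward 1, while unrelated signals yield values near 0.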

Frontoparietal Feedback versus Feedforward Connectivity

The phase relationship between frontal and parietal brain regions varies according to an individual’s state of consciousness, with a dominant feedback relationship during conscious awareness and a dominant feedforward relationship during unconsciousness [15]. We used the directed phase lag index (dPLI) to calculate the direction of the phase lead/lag relationship between two signals [16]. We used a Hilbert transform to extract the instantaneous phase of the EEG from each channel and calculated the phase difference (Δφt) between channels. We calculated dPLI across the same four brain regions as wPLI and surrogate-corrected the results to account for spurious connectivity. The corrected dPLI values from all four brain regions were sonified every 10 s.
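A single-pair sketch of the dPLI computation is given below (surrogate correction again omitted). dPLI is the fraction of samples in which the first channel phase-leads the second: 0.5 means no consistent lead/lag, values above 0.5 indicate the first channel leads.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def dpli(x, y, fs, band=(8.0, 13.0)):
    """Directed phase lag index (Stam & van Straaten) between two channels:
    the mean Heaviside step of the wrapped phase difference."""
    sos = butter(2, band, btype="band", fs=fs, output="sos")
    px = np.angle(hilbert(sosfiltfilt(sos, x)))
    py = np.angle(hilbert(sosfiltfilt(sos, y)))
    dphi = np.angle(np.exp(1j * (px - py)))   # wrap difference into (-pi, pi]
    return float(np.mean(np.heaviside(dphi, 0.5)))
```

In the frontoparietal context described above, a frontal channel as `x` and a parietal channel as `y` would give dPLI > 0.5 for frontal-to-parietal (feedback-dominant) phase leading.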

Topographic Location of Network Hubs

When using graph theory to characterize EEG brain networks [17], the network hub is posterior during conscious awareness and anterior during unconsciousness [18]. For a 30-second EEG epoch, we constructed a brain network from the top 15% of wPLI connectivity values. We calculated the degree (total number of connected nodes) of each electrode and approximated the network hub using the highest-degree electrode. We calculated the relative anterior-posterior location of this electrode and sonified its location every 30 sec.
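The hub-location step can be sketched as below. The per-electrode anterior-posterior coordinate (`ap_coord`) is an assumption standing in for the montage geometry, with 0 as the most anterior position and 1 as the most posterior.

```python
import numpy as np

def network_hub_location(wpli_matrix, ap_coord, top_frac=0.15):
    """Build a binary brain network from the top 15% of pairwise wPLI
    values, find the highest-degree electrode (the approximate hub),
    and return its relative anterior-posterior coordinate."""
    iu = np.triu_indices_from(wpli_matrix, k=1)
    thresh = np.quantile(wpli_matrix[iu], 1.0 - top_frac)  # edge cutoff
    adj = wpli_matrix >= thresh
    np.fill_diagonal(adj, False)
    degree = adj.sum(axis=1)          # number of connected nodes per electrode
    hub = int(np.argmax(degree))
    return float(ap_coord[hub])
```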

Permutation Entropy

Permutation entropy measures the local dynamical changes of EEG by quantifying the regularity structure of a time series based on a comparison of the order of neighboring signal values [19]. Permutation entropy changes significantly between levels of consciousness [20]; with loss of consciousness, it decreases to a greater extent in the frontal region than in the parietal areas. We calculated the permutation entropy of 10-sec EEG epochs for the six channels surrounding Fz and the five channels surrounding Pz using embedding dimension dE = 5 and time delay τ = 4. The average frontal and parietal values were sonified every 10 sec.
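The Bandt-Pompe measure with the stated parameters can be sketched as follows; the normalization to [0, 1] by the maximum entropy log2(d_e!) is a common convention and an assumption here.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, d_e=5, tau=4):
    """Normalized permutation entropy (Bandt & Pompe) of a 1-D series,
    with embedding dimension d_e = 5 and time delay tau = 4: the Shannon
    entropy of the distribution of ordinal patterns of the delay-embedded
    vectors, divided by its maximum log2(d_e!)."""
    n = len(x) - (d_e - 1) * tau
    # ordinal pattern (rank order) of each delay-embedded vector
    patterns = np.array([np.argsort(x[i:i + d_e * tau:tau]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)) / np.log2(factorial(d_e)))
```

A perfectly regular series (e.g. a monotonic ramp, a single ordinal pattern) gives 0, while white noise approaches 1; a decrease with loss of consciousness reflects more regular frontal EEG dynamics.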

Sound Design Process

We designed the sounds used in Resonance through an iterative process with four therapeutic clowns from the Dr. Clown Foundation [21]. We held three workshops to create sounds that were specific enough to inform the clowns that a change in an EEG feature of consciousness had occurred, but abstract enough to allow the clowns to improvise, play, and be creative in their interactions with the patient. In Workshop 1, we played a large range of noises from a soundscape generator website [22] for the clowns. Sound categories ranged from natural noises, through tonal drones and atmospheres, to patternscapes and soundscapes. We asked the therapeutic clowns to comment on the sounds in each category according to three listening modes: (1) functional listening (i.e., when a change in the sound occurs, how easy is it to perceive?), (2) aesthetic listening (i.e., is it pleasant to listen to? What mood does it evoke?), and (3) artistic listening (i.e., how do these sounds constrain [or not] the art of therapeutic clowning?).

Based on the feedback of the clowns, we prototyped three different sonifications that synthesized the principles of useful features of sounds in Workshop 1. During Workshop 2, we incorporated these sonifications into a Wizard of Oz prototype, where we asked the clowns to interact with an individual wearing a high-density EEG cap and simulating unresponsiveness while an engineer changed parameters of the sonifications based upon the real-time variations in the participant’s EEG features. The clowns provided feedback and reflections based on their experiences, which we used to select and refine the prototypes.

We tested the next prototype in a workshop held in a medical simulation center (Fig. 1). This setting allowed the clowns to interact with an individual simulating unresponsiveness in a replica of a hospital room. We recorded real-time EEG, which the Resonance software converted automatically into sound, enabling the clowns to experiment with different ways of incorporating the sounds into their practice and giving them an opportunity to provide feedback to refine the subtleties and nuances of the sonification. Interactions with this final prototype were also important for giving the clowns practice in attuning their listening to the Resonance sounds before interacting with minimally communicative patients.

Fig. 1

Workshop 3 in the sound design process of Resonance, where a therapeutic clown duo (Dr. Fifi and Dr. Tcheksa) interacted with the real-time sonified EEG of an individual in a simulated medical environment. (© Stefanie Blain-Moraes. Photo: Diane Lynn Weidner.)


Composing the Sounds of Resonance

We intended the sounds composed for Resonance to be informative of a participant’s dynamic level of consciousness, aesthetically pleasing, and adaptable to the idiosyncratic patterns of each brain-injured individual. The final sonification for Resonance emerged along the theme of a “deconstructed lullaby.” The composition was inspired by several constraints that were necessary for Resonance to be a functional brain-computer interface assemblage.

First, the mapping between changes in EEG features and sounds needed to be deterministic, which enabled the brain sonification to be “replayed” and compared across different participants. We followed a sound generation process inspired by Luciano Berio’s Sequenza IXa for solo clarinet (1980) [23]. The sounds for the Sequenza are based upon an elaborate melody, composed by Berio, that explores the range of clarinet sounds. Berio then broke the melody into short fragments and manipulated them by looping, repeating, reversing, and compressing; the combination of these fragments created a final composition that explored the space of the melody by drawing out its features and contrasts. Similarly, we composed an elaborate series of melodies for Resonance and broke them into short, looped segments. Each of the seven EEG features listed in the section titled “Mapping the EEG Features to Sound” controlled a different melody, with their values selecting different sequences of the melody to be played in a short, looped segment. All of the loops were played simultaneously, creating a rich musical texture.
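The deterministic property described above — the same feature value always selects the same fragment — can be illustrated with a toy selector. The fragment counts and value ranges here are hypothetical illustrations, not the actual Resonance mapping.

```python
def select_fragment(value, v_min, v_max, n_fragments):
    """Deterministically map a feature value in [v_min, v_max] to one of
    n_fragments pre-composed melody loops: same value, same loop,
    every time, which makes a sonification replayable and comparable."""
    frac = (value - v_min) / (v_max - v_min)
    frac = min(max(frac, 0.0), 1.0)                  # clamp out-of-range values
    return min(int(frac * n_fragments), n_fragments - 1)
```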

Second, we needed the sounds of Resonance to account for the large intersubject variability of brain signals while remaining aesthetically pleasing and informative. The range of potential values of EEG features is large, especially in participants with neural injuries. Conversely, the variations in EEG features that mark a significant change within a given participant are comparatively small. We addressed this constraint by composing Resonance as a musical fractal, where overall macro-variations were mapped onto a specific form and microvariations could be perceived by sonically “zooming in” to a specific point in the macro form, revealing more and more sonic detail. As an illustrative example, we mapped dPLI onto a melody that gradually climbed from a low to high pitch; variations in dPLI that remained within a narrow range for a given time would gradually reveal more ornamentation and variation within this pitch. In this way, the fractal form revealed information about both local and global structures of the EEG features.

Finally, we needed the sounds composed for Resonance to address the time scales across which the EEG features dynamically evolved (i.e., from 5 to 30 s). Resonance was composed with sound files characterized by rapidly varying internal details; this kept the resultant sounds musically interesting while a listener waited for an update in an EEG feature, and enabled the sounds to shift in a perceptible manner when these values changed.

Mapping the EEG Features to Sound

The sounds mapped to the EEG features were played simultaneously. This was possible because sound is “transparent”; in other words, several sounds can be heard at once without interfering with each other. This allowed for more information to be registered simultaneously and holistically and for the possibility of identifying relationships between the signals that are harder to notice in visual charts and graphs. We designed the specific mapping of Resonance such that the sonic representation of each EEG feature occupied a niche: a unique place in the sound ecosystem such that each sound was easily differentiable, yet harmonious [24]. We gave each of the seven EEG features a specific musical identity and role; hence, a listener could tune in to a specific aspect of the sonification to discern the state of a particular EEG feature while also hearing the state of the brain as a whole by listening to the ensemble. The final mapping of EEG features to sounds was performed as follows:

  1. Ratios of Spectral Power. The spectral power ratios controlled melodic instruments that formed the foreground of the music. Alpha/theta controlled piano and harp sounds, while beta/alpha controlled synthesizer sounds.

  2. Topographic Distribution of Alpha Power. The degree of anteriorization was mapped onto a pointillistic texture composed from granular synthesis. The sounds have a static quality when alpha power is anterior dominant and a dynamic quality when alpha power is posterior dominant. To carve out a unique sonic space for this feature, we composed it with a significantly shorter loop length than all the other features (0.5 seconds).

  3. Phase-Amplitude Coupling. The sounds were mapped onto a Rhodes electric keyboard, with slow sounds looping every five seconds. Because this signal was updated only every 30 seconds, it provided a repetitive, minimalistic background that helped the rest of the music cohere.

  4. Frontoparietal Functional Connectivity. wPLI was mapped to sustained woodwind tones, which grew louder when the signal increased and quieter when it decreased. This method sonically represented the strength of functional connectivity with a richness of musical texture. The four wPLI values were spatialized such that sounds representing each brain hemisphere were played on the speaker on the corresponding side. Instead of looping a shorter segment, the woodwind tones sustained longer melodies that counteracted the choppy effect of the shorter loops.

  5. Frontoparietal Feedback versus Feedforward Connectivity. dPLI was mapped to melodies composed with xylophones and vibraphones. When dPLI was feedback dominant, the instruments became more active; when feedforward dominant, they became less active. The melody created by the xylophones and vibraphones was microtonal. This carved out a unique sonic space for this feature among the other pitched material. Like wPLI, dPLI was also spatialized.

  6. Topographic Location of Network Hubs. This feature was represented by bass activity: When the hubs were anterior, the bass activity was static; when they were posterior, the bass activity was dynamic.

  7. Permutation Entropy. Two different noise textures were mapped onto the frontal and parietal permutation entropy values, where an increase in permutation entropy resulted in a louder noise texture and a decrease in a softer one. The difference between the frontal and parietal permutation entropy values was represented through the speed of drums and the number of different drums heard.

We recruited two individuals with disorders of consciousness to participate in a study to explore Resonance interactions, obtaining written informed consent from their legal representatives in accordance with the Declaration of Helsinki. We provided participants with two to four Resonance sessions over a month, with two therapeutic clowns participating each time. We synchronized EEG and sonic output with video recording, and an ethnographer (N.I.S.) recorded field notes of the session. We retrospectively analyzed the videos for significant moments of interaction. In the following section, we present a vignette of a significant interaction for each participant, with identifying details anonymized. This study was approved by the Institutional Review Board of McGill University (A10-B57-19A).

Aaron

Aaron is an anglophone man who endured a serious car accident at the age of 30: he was ejected from the vehicle and suffered a severe traumatic brain injury. He was in an intensive care unit for several months before regaining consciousness. Two years passed between the accident and his participation in our study. Aaron currently lives in a long-term care home, where he is visited daily by his wife Carrie. He was minimally communicative at the time of the study.

Therapeutic clowns Cookie and Cherie enter the room. Aaron seems uncomfortable: he looks tired, breathes with an open mouth, and curls down in his wheelchair. Carrie comments, “He is having a rough day. We’ll see the doctor tomorrow.”

As Aaron’s EEG sonifications are broadcast inside the room, the clowns interact with him energetically. The sound changes to become melancholy, and Cookie hums along in harmony. Cherie squats next to Aaron’s chair and puts her hand on his arm. He is still hunched but looks at her sideways. A bell sounds. Cherie replies, “Me too . . . but it is lovely to see you.” Aaron moves in his chair, and Carrie approaches to help him sit straight while violins replace the bells. Cherie says, “I think he is falling for me.” Cookie and Carrie laugh quietly. Cherie turns to Cookie and asks innocently, “Should I ask him to marry me?” “That’s a good question,” Cookie replies.

Cherie approaches him and asks, “Am I not to your liking? Please tell me now, once and for all, and I shall never darken your door again.” She waits and hears the xylophone and violin sounds crescendo. “What? He says he already has a true love!” Cookie responds, “That’s what I was fearing, Cherie.” Cherie faints into Cookie’s arms. Aaron looks at them attentively and sits upright for the first time during the session.

Marianne

Marianne is a francophone woman who, at the age of 27, was hit by a car moving at high speed and suffered a severe traumatic brain injury. She emerged from a coma into a disorder of consciousness. Three years later, she now lives at home and is cared for by her parents. At the time of the study, Marianne had no communicative ability.

Marianne is lying on her bed in her spacious living room. Across from her, a floor-to-ceiling window brings natural light and a view of the forest into the room. When Resonance is turned on, the initial sonification is very quiet and predominantly consists of xylophones and bells.

Dr. Fifi and Dr. Wash enter the room. They stand together beside Marianne’s bed and observe her in silence as they breathe in synchrony. The sound gradually crescendos. Slowly, Dr. Fifi starts to rock rhythmically to the sounds, and Dr. Wash follows her. The sound continues to grow louder. Dr. Fifi moves the pompoms in her hands in time with the sounds. The sound decrescendos, and the clowns extend their arms to the ceiling and stop moving. Gradually, they lower their arms and observe Marianne. Dr. Fifi places her hand on Marianne’s knee while Dr. Wash places his hand on her shoulder; a few seconds later, the sound is dominated by louder xylophones. The clowns continue to improvise movement that reflects the EEG soundscape, which continues to dynamically change over the session.

Resonance is an assemblage of EEG, sound, and therapeutic clowns that can enable new forms of interaction with minimally communicative individuals. The EEG sonification “reveals” otherwise tacit embodied expressions and creates different potentialities for the therapeutic clowns, as illustrated in the two vignettes. The clowns generate a bidirectional interaction with Aaron, treat his sounds as a surrogate for responses, and use the sonic changes as cues to continue or redirect their narrative. In turn, Aaron’s EEG sonification appears to change in response to moments created in the clowns’ play. A more unidirectional interaction dominates Marianne’s Resonance session, where her EEG sonification drives, and is reflected and amplified in, the clowns’ embodied movements. Both cases illustrate how Resonance pushes the boundaries circumscribing subjectivity by offering a potential new mode of becoming aware of the participants’ sensitivities. The assemblage blurs distinctions between human and technology, emotion and art, public and private. By creating a co-participatory space for caregivers and families of unresponsive individuals, Resonance may create an entirely new form of interpersonal interaction, opening up a novel space for human connection. Clinically, we encourage explorations of such interactions to promote patient recovery, as preliminary studies have shown that providing appropriate interaction and feedback can increase an unresponsive patient’s level of consciousness [25].
Artistically, we encourage improvisation and play in this entirely unexplored medium, which generates enormous creative potential for new forms of therapeutic clowning and may answer important questions in clowns’ practice, including “Is this a favorable moment to intervene?” and “Is my presence as a clown making a difference?” Across these future pursuits, we believe that Resonance constitutes a privileged space to explore interpersonal interaction with and the personhood of minimally communicative individuals.

This work owes a great debt to the feedback of Guillaume Paquet, Anne Brulotte-Legare, and Jean-François Ouellet from the Dr. Clown Foundation, whose input significantly shaped the final outcome of Resonance. This project would not be possible without the funding of the AUDACE program of the Fonds de Recherche Société et Culture (2019-AUDC-263302).

References and Notes

1. Ian Brown, The Boy in the Moon: A Father’s Search for His Disabled Son (Toronto: Random House Canada, 2009).
2. S. Blain-Moraes et al., “Biomusic: a novel technology for revealing the personhood of people with profound multiple disabilities,” Augmentative and Alternative Communication 29, No. 2, 159–173 (2013).
3. E. Racine et al., “Observations on the ethical and social aspects of disorders of consciousness,” Canadian Journal of Neurological Sciences 37, No. 6, 758–768 (2010).
4. B.E. Gibson, F.A. Carnevale, and G. King, “‘This is my way’: reimagining disability, in/dependence and interconnectedness of persons and assistive technologies,” Disability and Rehabilitation 34, No. 22, 1894–1899 (2022).
5. G.A. Mashour and A.G. Hudetz, “Neural Correlates of Unconsciousness in Large-Scale Brain Networks,” Trends in Neuroscience 41, No. 3, 150–160 (2018).
6. A. Väljamäe et al., “A review of real-time EEG sonification research,” International Conference on Auditory Display 2013 (ICAD 2013), pp. 85–93.
7. P. Kontos et al., “Presence redefined: The reciprocal nature of engagement between elder-clowns and persons with dementia,” Dementia 16, No. 1, 46–66 (2017).
8. S. Auerbach et al., “An investigation of the emotions elicited by hospital clowns in comparison to circus clowns and nursing staff,” European Journal of Humour Research 1, No. 3, 26–53 (2014).
9. C. Duclos et al., “Brain Responses to Propofol in Advance of Recovery from Coma and Disorders of Consciousness: A Preliminary Study,” American Journal of Respiratory and Critical Care Medicine 205, No. 2, 171–182 (2022).
10. See Blain-Moraes [2].
11. F. Grond et al., “Participatory Design of Affective Technology: Interfacing Biomusic and Autism,” IEEE Transactions on Affective Computing 13 (2022) pp. 250–261.
12. L. Linge, “Hospital clowns working in pairs—in synchronized communication with ailing children,” International Journal of Qualitative Studies in Health and Well-Being 3, No. 1, 27–38 (2008).
13. S. Blain-Moraes et al., “Neurophysiological correlates of sevoflurane-induced unconsciousness,” Anesthesiology 122, No. 2, 307–316 (2015).
14. M. Vinck et al., “An improved index of phase-synchronization for electrophysiological data in the presence of volume-conduction, noise and sample-size bias,” NeuroImage 55, No. 4, 1548–1565 (2011).
15. U. Lee et al., “Disruption of frontal-parietal communication by ketamine, propofol, and sevoflurane,” Journal of the American Society of Anesthesiologists 118, No. 6, 1264–1275 (2013).
16. C.J. Stam and E.C.W. van Straaten, “Go with the flow: use of a directed phase lag index (dPLI) to characterize patterns of phase relations in a large-scale model of brain dynamics,” NeuroImage 62, No. 3, 1415–1428 (2012).
17. E. Bullmore and O. Sporns, “Complex brain networks: graph theoretical analysis of structural and functional systems,” Nature Reviews Neuroscience 10, No. 3, 186–198 (2009).
18. M. Kim et al., “Functional and Topological Conditions for Explosive Synchronization Develop in Human Brain Networks with the Onset of Anesthetic-Induced Unconsciousness,” Frontiers in Computational Neuroscience 10 (2016).
19. C. Bandt and B. Pompe, “Permutation Entropy: A Natural Complexity Measure for Time Series,” Physical Review Letters 88 (2002) 174102.
20. A. Ranft et al., “Neural Correlates of Sevoflurane-induced Unconsciousness Identified by Simultaneous Functional Magnetic Resonance Imaging and Electroencephalography,” Anesthesiology 125, No. 5, 861–872 (2016).
22. Mynoises.net.
23. Gleb Kanasevich, “Luciano Berio—Sequenza IXa (1980) for clarinet solo, Gleb Kanasevich—clarinet”: https://www.youtube.com/watch?v=vGogPD1H6YI.
24. B. Hasanain et al., “A formal approach to discovering simultaneous additive masking between auditory medical alarms,” Applied Ergonomics 58 (2017) 500–514.
25. T. Pape et al., “Placebo-controlled trial of familiar auditory sensory training for acute severe traumatic brain injury: A preliminary report,” Neurorehabilitation and Neural Repair 29, No. 6, 537–547 (2015).