Abstract
Ecologically valid research and wearable brain imaging are increasingly important in cognitive neuroscience as they enable researchers to measure neural mechanisms of complex social behaviors in real-world environments. This article presents a proof of principle study that aims to push the limits of what wearable brain imaging can capture and find new ways to explore the neuroscience of acting. Specifically, we focus on how to build an interdisciplinary paradigm to investigate the effects of taking on a role on an actor's sense of self and present methods to quantify interpersonal coordination at different levels (brain, physiology, behavior) as pairs of actors rehearse an extract of a play prepared for live performance. Participants were six actors from Flute Theatre, rehearsing an extract from Shakespeare's A Midsummer Night's Dream. Sense of self was measured in terms of the response of the pFC to hearing one's own name (compared with another person's name). Interpersonal coordination was measured using wavelet coherence analysis of brain signals, heartbeats, breathing, and behavior. Findings show that it is possible to capture an actor's pFC response to their own name and that this response is suppressed when an actor rehearses a segment of the play. In addition, we found that it is possible to measure interpersonal synchrony across three modalities simultaneously. These methods open the way to new studies that can use wearable neuroimaging and hyperscanning to understand the neuroscience of social interaction and the complex social–emotional processes involved in theatrical training and performing theater.
INTRODUCTION
Theater has the power to portray reality or to create alternative realities that tell stories in a powerful and engaging fashion. The creation of imaginary worlds and narratives leads to actors taking on new social roles. We speculate that trained actors may have particular expertise in social interaction, as they are able to present different characters to an audience and recreate the same convincing social interaction night after night on the stage. There is a mutual understanding between the actors and the audience that the event on the stage is a pretense; for example, if two characters have a parent–child relationship on stage, this is not a real relationship; it is subject to the storyline and ceases to exist outside the performance (Goldstein & Bloom, 2011). Although many people enjoy the social and dynamic nature of theater, little is known about the neural and cognitive processes that enable actors to do this. In the growing domain of neuroaesthetics, researchers are beginning to understand how dance, visual arts, and music impact our brain and cognitive processes (Omigie et al., 2015; Kirk, Skov, Hulme, Christensen, & Zeki, 2009; Calvo-Merino, Jola, Glaser, & Haggard, 2008; Cross, Hamilton, & Grafton, 2006). However, much less is known about theater.
The present article introduces new ways in which neuroscientists can study theater. We focus on behavior and brain systems in actors, and specifically the cognitive processes involved when actors take on a role. More precisely, this article aims to (1) push the boundaries of wearable neuroimaging systems to capture pairs of people moving about a theater-rehearsal space, (2) find new methods to study how taking on a role impacts an actor's sense of self, and (3) present methods to explore patterns of naturalistic social interactions through the quantification of their interpersonal coordination at different levels (brain, physiology, behavior). To achieve these goals, we present a novel study using a multimodal platform including functional near-infrared spectroscopy (fNIRS), systemic physiology, and behavior (motion capture and video recordings) to capture the neural, physiological, and behavioral signatures of pairs of actors in rehearsal for a Shakespeare performance, and present two possible approaches to analyzing this complex data set as proof of principle. First, we explain why theater-neuroscience is important and outline some of the factors that must be considered in designing and implementing research in this domain.
The Importance of Theater-neuroscience Research
Understanding the neurocognitive mechanisms of acting and performance is important for neuroscience, the arts, and education. For neuroscience, acting may provide an opportunity to study complex interactions between people in a controlled and reproducible fashion. There is an increasing interest among researchers in “real-world neuroscience” (Bevilacqua et al., 2018; Cruz-Garza et al., 2017), second-person neuroscience (Schilbach et al., 2013), and embodied cognition (Martin, Kessler, Cooke, Huang, & Meinzer, 2020; Wilson, 2002), but it has sometimes been challenging to design paradigms in these new areas. Theater may offer an important way to move beyond the static and often computer-based stimuli used in experiments carried out in isolation and examine the dynamics of social and emotional engagement involved in complex and meaningful contexts. This is valuable in several ways. First, two actors rehearsing a scene are able to perform the same actions repeatedly and reproducibly, allowing multiple recordings of data to be collected from them, but will also retain the socio-emotional meaning of the scene in each performance. Thus, studying the actors can enable the investigation of the fundamental processes that coordinate speech and motor behavior between people in a reproducible setting. For instance, the cognitive processes of the actors playing Titania and Oberon in A Midsummer Night's Dream cannot be assumed to match those of real fairy lovers, but the auditory, visual, and motor performance of the actors does draw on real social interactions. Second, what it takes, cognitively, to become a character through the rehearsal process is an important and rarely examined question. One recent study suggests that playing a role may alter activation in brain regions linked to the sense of self (Brown, Cockett, & Yuan, 2019), and we examine this possibility in a later section. Further examination of the question of how actors take on roles may give important insights into the sense of self and the neural mechanisms of pretense and imagination. Finally, interactions between actors and naive participants in conjunction with brain imaging may provide a powerful way to explore a range of types of social interaction, extending examples of classic social psychology to modern neuroscience (Remland, Jones, & Brinkman, 1995).
For the arts and education, theater neuroscience is important as it sheds light on the neural processes that occur during rehearsal and performance. Understanding the similarities and differences in the activity of neural circuits could elucidate differences between different types of acting methods and techniques and will help create an interdisciplinary approach to theater. It could also help us evaluate the use of theater in education. In particular, researchers are interested in whether the techniques used in acting that engage the social brain may lead to actors having better mentalizing and empathy skills than nonactors (Goldstein & Bloom, 2011). To summarize, there are many reasons to believe that the study of theater and acting has value to researchers working in cognitive and social neuroscience and has wider implications for the arts and education. However, there are also many challenges to research in this untapped domain. In the following sections, we outline how these challenges can be faced.
Capturing Data from Actors during Performance
The vast majority of research into the neural mechanisms of cognition uses fMRI brain imaging, where participants must lie down and remain still in isolation in a noisy and unnaturalistic scanner while brain images are captured. Such studies have provided a wealth of information on basic cognitive processes but are limited when the focus of our research is flexible social interactions with other people (Risko, Richardson, & Kingstone, 2016; Schilbach et al., 2013). New portable wearable technologies that include mobile EEG (Bevilacqua et al., 2018), fNIRS (Pinti et al., 2020), eye-tracking (Bianchi, Kingstone, & Risko, 2020), and motion capture (Vlasic et al., 2007) are now allowing researchers to move out of the laboratory and understand neurocognitive processes in people freely moving in the real world. Here, we employ many of these wearable sensors in a hyperscanning configuration to monitor interpersonal synchrony in pairs of actors.
We used two wearable fNIRS systems to measure the actors' brain activation patterns in the pFC. fNIRS is a noninvasive optical neuroimaging technique that uses near-infrared light to measure the changes in brain oxygenation and hemodynamics. It is based on neurovascular coupling and gathers an indirect measure of brain activity through the quantification of the changes in oxygenated (HbO2) and deoxygenated (HHb) hemoglobin (Pinti et al., 2020). fNIRS has distinct advantages and disadvantages compared with other neuroimaging techniques (see Table 1 in Pinti et al., 2020, for a summary), but what makes it particularly suitable for ecologically valid investigations of brain function within social environments is its superior robustness to movements with respect to other neuroimaging modalities, portability, and the availability of wireless instrumentation (Pinti, Aichelburg, et al., 2015).
Given that “interacting brains exist within interacting bodies” (Hamilton, 2021), we monitored bodily coordination alongside fNIRS to aid the interpretation of hyperscanning brain data by measuring physiological changes in both actors using wearable chest straps and capturing the actors' behavior and movements using wearable motion capture suits and video recordings. This is critical to better understand how embodied social interactions impact brain-to-brain synchrony.
Acting and the Sense of Self
A major challenge for naturalistic neuroscience research is integrating well-controlled experimental manipulations into free and unstructured natural behavior. Here, we use the classic “self-name effect” as a way to probe neural engagement when people are acting or not acting. A person's own name is a highly salient and attention-grabbing signal that receives increased processing priority (Gronau, Cohen, & Ben-Shakhar, 2003; Shapiro, Caldwell, & Sorensen, 1997); the best-known example of this is the cocktail party phenomenon (Cherry, 1953). Studies investigating the underlying neural correlates of hearing one's name consistently identify increased activity of the medial prefrontal cortex (mPFC; Holeckova et al., 2008; Carmody & Lewis, 2006). In particular, Kampe, Frith, and Frith (2003) used fMRI to investigate hearing one's own name compared with a random name. Participants were required to press a button when they heard a surname rather than a first name. Significant activation of the superior frontal gyrus (SFG) was found only when participants heard their own name compared with a random name. These results support the notion that calling someone's name is an important social cue that implies intent to communicate. Similar effects have been reported in infants using fNIRS (Imafuku, Hakuno, Uchida-Ota, Yamamoto, & Minagawa, 2014), demonstrating the robustness of this phenomenon. Thus, it seems likely that brain responses to hearing one's own name could provide a marker of the sense of self. However, it is not clear if fNIRS in adults is sufficiently sensitive to pick up these effects, especially in a complex context with other activities going on.
There is reason to believe that acting might impact neural responses related to the sense of self, as seen in a recent innovative study conducted by Brown et al. (2019). Their paper describes how, phenomenologically, actors take on a fictional first-person perspective on the role they inhabit, assuming the characteristics, thoughts, and actions of another person for the duration of the play. This is the result of a rigorous third-person perspective analysis of the character via various acting techniques leading up to and through the rehearsal period. To test this idea, Brown et al. (2019) conducted an fMRI study of professional actors reading parts from Romeo and Juliet. They found reduced engagement of self-related brain systems (dorsomedial pFC, SFG, and ventral medial pFC) when actors responded to questions “in character.” Hence, they suggested that acting involves a suppression of self-processing.
In the present study, we tested whether the “own name effect” can be seen in actors as they rehearse, actively in character, an extract of a play that would be performed live on stage, and whether the suppression of self (Brown et al., 2019) is also visible. To give a concrete example, if an actor named “Nica” is performing a simple motor task with another actor “Jacob,” we would expect Nica's pFC to respond if she hears a voice calling her own name (Nica) but not her partner's name (Jacob). However, if the same name-calls are made while Nica is playing the role of Titania in a scene with “Jacob” in the role of Bottom, and Nica has suppressed her sense of self to act as Titania, we might expect less engagement of pFC on hearing the name-call “Nica.” Her response to hearing “Jacob” should remain similar to the control condition. However, it is also possible that the impact of the complex social-motor task of acting would drown out any effect of name-calls on pFC, or that our fNIRS device is not sensitive enough to measure the relevant signals during acting. Thus, the present study aims to determine, as a proof of principle, whether responses to name-calls can be detected in adult actors and whether they change when acting.
Interpersonal Synchrony across Multiple Levels
As well as taking on a role as an individual, acting often requires a dynamic interaction between two or more people. Thus, theater neuroscience provides us with an opportunity to explore these interpersonal dynamics. There is increasing evidence that people engaged in social interactions coordinate their behavior, brains, and physiological signals across multiple levels. Brain synchrony across pairs of participants engaged in a coordinated task has been reported in many contexts (Cui, Bryant, & Reiss, 2012) including conversation (Jiang et al., 2012) and puzzle-solving (Fishburn et al., 2018). It seems likely that, during many joint actions, the neural activity in one brain can be coupled to the neural activity in the other person's brain both via verbal communication (e.g., speaker–listener during a conversation [Hirsch, Noah, Zhang, Dravida, & Ono, 2018; Stephens, Silbert, & Hasson, 2010]) and nonverbal communication (e.g., hand gestures, eye-to-eye contact, facial expressions [Noah et al., 2020]). Previous research has also shown that the brain activity itself can be influenced by the social signals expressed by the other person (Hasson, Ghazanfar, Galantucci, Garrod, & Keysers, 2012). Therefore, by capturing brain activity from two people at once, it is possible to quantify and localize any coherent activity across the two brains, which may help us understand the neural mechanisms of social coordination (Hamilton, 2021). Brain-to-brain coupling (i.e., intersubject synchrony) is typically quantified in terms of the correlation or cross-brain coherence occurring between the neural signals of the two brains (Astolfi et al., 2020; Cui et al., 2012) and can occur across the pFC (Noah et al., 2020; Jiang et al., 2012).
In separate studies, researchers have also quantified physiological synchrony between pairs of participants engaged in joint tasks. Previous studies suggest that interacting individuals might be more likely to become synchronized in heart rate (HR) and breathing rate (BR) as they coordinate their behaviors and share emotional states. For instance, Konvalinka et al. (2011) found that similar HR dynamics occurred in a religious fire-walking ritual only between performers and relevant spectators but not with irrelevant viewers. Concurrent patterns of HR changes were also found when building Lego models together (Fusaroli, Bjørndahl, Roepstorff, & Tylén, 2016). Similarly, Helm, Sbarra, and Ferrer (2012) further demonstrated that shared BR dynamics occur when romantic couples complete a series of tasks and share emotional arousal. In a recent review, Palumbo, Marraccini, Weyandt, and Wilder-Smith (2016) summarize how physiological synchrony can act as an index of the state of an interpersonal relationship.
Coordination between people is also seen in terms of behavior. Common forms include rocking the body together, dancing together, walking in step, keeping a beat together, and swinging the wrists in time (Richardson & Dale, 2005). Launay, Tarr, and Dunbar (2016) proposed that interpersonal synchronization occurs when individuals establish a stable relationship with others and show simultaneity in their actions, that is, their thoughts, feelings, and behaviors become synchronized.
These studies of interpersonal coordination have each focused on different measures (brain/heart/behavior), and few studies that we are aware of have examined coordination across multiple modalities. Here, we take advantage of our multimodal platform and present analytical methods to examine interpersonal coordination across all these different levels as a proof of principle. We use the common and well-established wavelet coherence method to quantify interpersonal coordination (Cui et al., 2012; Issartel, Marin, & Cadopi, 2007), comparing the average coherence level of true pairs of participants with that of pseudodyads created by shuffling the same data into pairs that did not actually interact. Our sample size does not permit strong hypotheses, but we can use our multimodal data to present analytical methods to test if interpersonal coordination is found in different measures (brain activity, HR, BR, and accelerometer), and if this coordination is greater for real dyads compared with pseudodyads. We can also explore coordination across different frequency bands, which may help us target specific effects in future studies.
Summary of This Study
To summarize, this study aims to take neuroscientific measurement into the world of theater. As an exploration of the usefulness of wearable neuroimaging in theater, we investigate the sense of self and interpersonal coordination in professional actors as they rehearse an extract from a play. We aim to push technological boundaries in combining multiple modalities of data capture, including fNIRS (closely looking at activation in prefrontal regions) and physiological and behavioral recordings. This proof-of-principle will open the way to future research in the domain of theater-neuroscience and other rich social interactions.
METHODS
Participants and Recording Sessions
Participants were recruited through a local theater company (Flute Theatre) based in London. This is a small theater group who perform both traditional Shakespeare plays and interactive adaptations of Shakespeare for individuals with autism and their families. During the period of the study, the six actors were also working together to rehearse a stage play, and all the actors were highly familiar with each other and with the core acting tasks used in this study. Six actors (three male; mean age 26.5 years) were recruited. Participants were healthy with no known psychiatric or neurological impairments. They were compensated for their participation. This study was approved by the University College London (UCL) Research Ethics Committee, and written informed consent was obtained from all participants. Data were collected over 4 days, with 19 sessions of data captured in total. Two actors took part in each session (always in the same pairs).
Studies of the neural mechanisms engaged in actors performing theater have never been performed before, and there is no prior work on which to base a formal power analysis. Our sample size of six participants (all the actors available) may seem small, but it was essential to work with actors who were familiar with this approach to Shakespeare and the specific pieces used in this study. Each participant took part in multiple data collection sessions giving a total of 19 data sets for analysis. Thus, for statistical purposes, our sample size is n = 19, which is comparable to previous infant studies of the sense of self in mPFC (Imafuku et al., 2014), fMRI studies of acting (Brown et al., 2019), and studies of aesthetic perception in performing arts (Calvo-Merino et al., 2008).
Experimental Tasks
During each recording session, actors performed three different tasks in short blocks presented in a pseudorandom order: two control tasks (Walking and Speaking) and an Acting task. In the Walking task, the two participants were instructed to move around the space at a walking pace without interacting with each other. Each block of Walking lasted 45 sec. In cognitive terms, this task requires whole body movement and a minimal degree of coordination to avoid bumping into the other person.
In the Speaking task, the two actors stood side-by-side facing the “audience” (at least 10 attentive people including other actors and the research team) and read lines from Shakespeare. The actors were given the text on a printed page and read alternate lines until the end of the text was reached or the time limit of 45 sec was up. The texts chosen were prologues from four Shakespeare plays (Romeo and Juliet, Henry V, Henry VIII, and Pericles). These were chosen because they have a strong rhythm in Shakespearean language but are not delivered by a particular character. In cognitive terms, this task requires speech and turn-taking but does not involve remembering lines, coordinating actions, or taking on a strong role.
In the Acting task, two different Shakespeare scenes were used, each requiring full engagement from the two actors. Both scenes were selected from Flute Theatre's adaptation of A Midsummer Night's Dream for children with autism, in which short excerpts of the play can be performed repeatedly to allow the children to engage in socially interactive behaviors. This repetition of short elements made the scenes particularly suitable as a cognitive task, and all the actors involved were very familiar with the scenes, having performed the play before. Thus, they can interpret these short scenes in the context of their deep understanding of the full play.
In the “Titania” scene, one actor has the role of the fairy queen Titania and the other has the role of Bottom the donkey, which is indicated by holding hands to the sides of the head as “ears.” Titania moves about the space until she can make eye contact with Bottom and then says “Doy-yo-yo-yoing; I love thee!” while making an exaggerated hand gesture. In response, Bottom becomes alarmed and turns his back on Titania. Titania then moves around the space to capture Bottom's gaze again, and the scene can repeat as many times as needed. In the “Cobweb” scene, one actor has the role of Cobweb the spider, whereas the other has the role of Bee. The two characters start on opposite sides of the stage, with the Bee moving about. When Cobweb throws out her hands to catch the Bee, the Bee freezes and then moves slowly toward Cobweb, in time with Cobweb's hand movements. Cobweb hugs the Bee, and the Bee screams and tilts his head as if dead. Cobweb then moves away to do a victory dance, carefully watching the Bee, but the Bee revives and moves back to the other side of the stage. The scene can repeat as many times as needed. For each of these two scenes, the actors were instructed to do “Cobweb” or “Titania” but could then select who took which role. They would perform the sequence 2 or 3 times before swapping roles and performing again until the time limit (120 sec) ran out. The actors found it easy to negotiate the roles and swap as needed without words or delays.
These short scenes were selected because they include strong social interaction (eye contact, hugs) and close interpersonal coordination. Each action sequence depends on the partner's actions to maintain the correct timing, as the actors create the characters. In cognitive terms, the acting task requires visuomotor coordination, social interaction, and careful executive control to make the interaction work, as well as the adoption of the particular role.
Name-call Events
In addition to the three basic tasks described above, we imposed “name-call” events on the actors to test how their pFC responds to hearing their own name (or their partner's name). Name-call stimuli were prerecorded on the experimental computer in the voice of one member of the experimental team, who also instructed the actors to start/end each task. At pseudorandom points during each trial, the actors would hear one of their names called out. Each name-call event acts as a “self-name” trial for one actor and as an “other-name” trial for their partner, allowing us to collect more events in the time available. Actors were told that they would hear names called out but should ignore it and continue with their tasks.
There were 17 name-call events in each experimental session: 12 during control blocks and 5 during acting blocks. Overall, the name-call events fall into a 2 × 2 factorial design with factors name (self, other) × task (control, acting). Examples of trial timings are given in Figure 1. This design allows us to test if participants' response to their own name changes when they are acting a role.
Data Acquisition
Data collection took place in a theater space (Bloomsbury theater studio, UCL). A sketch of the experimental area is shown in Figure 2A.
To capture the events in as much detail as possible and to track both participants' brain, behavior, and physiology, we used a multimodal wearable and wireless platform on both actors at the same time (Figure 2B). This included the following equipment: (1) Neuroimaging: a 22-channel wearable fNIRS system (LIGHTNIRS, Shimadzu) sampled brain hemodynamic/oxygenation changes over pFC at 13.33 Hz. Optode arrangement and channel configuration are shown in Figure 3A.
To place the fNIRS cap in a reliable way across participants, we used the 10–20 electrode placement system to locate Channel 19 in correspondence with the Fpz point (10% of the Nasion-Inion distance). The Montreal Neurological Institute (MNI) coordinates and anatomical locations of each channel are listed in Table 1. (2) Behavior: Actors wore full-body motion capture suits (Perception Neuron) that capture the location of the head and limbs with 18 inertial measurement unit (IMU)/magnetic markers at 120 Hz. Actors' movements were also recorded by means of an accelerometer placed in a wearable belt (EQ02 LifeMonitor, Equivital). Audio and video recordings of the whole room were captured, along with footage from a video camera (Sony Handycam HDR-CX405) used for behavioral coding of participants' performance. (3) Physiology: Changes in heart and respiration rates were monitored using a wearable belt (EQ02 LifeMonitor, Equivital) worn around the chest; the electrocardiogram was recorded at 256 Hz and respiration at 25.6 Hz, and these signals were used to compute heart and respiration rates (in bpm) at 1 Hz.
Table 1. MNI coordinates and anatomical locations of the fNIRS channels

| Ch. | x | y | z | BA and Anatomy (Probability) | Good Data Sets |
|---|---|---|---|---|---|
| 1 | 55 | 28 | 30 | 44, pars opercularis, part of Broca's area (0.33); 45, pars triangularis, Broca's area (0.67) | 9 |
| 2 | 44 | 52 | 24 | 45, pars triangularis, Broca's area (0.26); 46, dorsolateral prefrontal cortex (0.74) | 7 |
| 3 | 26 | 67 | 21 | 10, frontopolar area (0.90) | <6 |
| 4 | 7 | 70 | 21 | 10, frontopolar area (1.00) | <6 |
| 5 | −20 | 68 | 22 | 10, frontopolar area (0.96) | <6 |
| 6 | −40 | 55 | 23 | 46, dorsolateral prefrontal cortex (0.89) | 9 |
| 7 | −52 | 30 | 29 | 45, pars triangularis, Broca's area (0.82) | 13 |
| 8 | 63 | 16 | 16 | 6, premotor and supplementary motor cortex (0.40); 44, pars opercularis, part of Broca's area (0.44) | 9 |
| 9 | 54 | 44 | 6 | 45, pars triangularis, Broca's area (0.62); 46, dorsolateral prefrontal cortex (0.38) | 8 |
| 10 | 39 | 64 | 4 | 10, frontopolar area (0.81) | 6 |
| 11 | 18 | 73 | 5 | 10, frontopolar area (0.89) | <6 |
| 12 | −11 | 74 | 5 | 10, frontopolar area (0.92) | <6 |
| 13 | −34 | 65 | 3 | 10, frontopolar area (0.78); 11, orbitofrontal area (0.22) | 9 |
| 14 | −50 | 46 | 7 | 45, pars triangularis, Broca's area (0.47); 46, dorsolateral prefrontal cortex (0.53) | 10 |
| 15 | −60 | 18 | 17 | 6, premotor and supplementary motor cortex (0.22); 44, pars opercularis, part of Broca's area (0.49); 45, pars triangularis, Broca's area (0.27) | <6 |
| 16 | 57 | 35 | −4 | 45, pars triangularis, Broca's area (0.58) | 7 |
| 17 | 48 | 54 | −10 | 46, dorsolateral prefrontal cortex (0.59); 47, inferior prefrontal gyrus (0.38) | 13 |
| 18 | 30 | 68 | −9 | 11, orbitofrontal area (0.88) | 6 |
| 19 | 6 | 71 | −9 | 10, frontopolar area (0.35); 11, orbitofrontal area (0.64) | <6 |
| 20 | −23 | 68 | −10 | 11, orbitofrontal area (1.00) | 10 |
| 21 | −43 | 57 | −9 | 46, dorsolateral prefrontal cortex (0.54); 47, inferior prefrontal gyrus (0.30) | 6 |
| 22 | −54 | 37 | −4 | 45, pars triangularis, Broca's area (0.55); 46, dorsolateral prefrontal cortex (0.21) | <6 |
The anatomical areas (Brodmann's area [BA]) and the corresponding atlas-based probabilities for each channel are included; only probabilities greater than 20% are listed. The number of good data sets per channel is reported in the final column; channels excluded from the group analyses (fewer than six good data sets) are marked “<6.”
The timing of each data collection session was controlled by a program written in Cogent, which ran on a laptop. This program determined the order of the tasks performed by the actors and displayed a written instruction for the next task block on the screen. A member of the research team would tell the actors what to do, check they were ready, and then record the start time of that block. The Cogent program would play the name-call stimuli through loudspeakers during the task block as required and then display an END message when the task time limit was over. Again, the research team member would tell the actors to stop and prepare for the next block. The person who made the name-call voice recordings was the same person who gave the actors their task instructions during the study, so the actors were attuned to that person's voice throughout.
DATA ANALYSIS
Preprocessing
The fNIRS data were preprocessed using the Homer2 software package (Huppert, Diamond, Franceschini, & Boas, 2009) following the preprocessing pipeline described in Pinti, Scholkmann, Hamilton, Burgess, and Tachtsidis (2019). We first visually inspected the raw intensity signals and excluded from further analyses channels with a low signal-to-noise ratio caused by detector saturation, poor optical coupling (e.g., low photon counts, no HR component in the raw data), or substantial movement artifacts (e.g., head movements or the actors' exaggerated facial expressions, which particularly affected the channels over medial pFC). Channels excluded from the group analyses (fewer than six good data sets) are indicated in Table 1.
Raw data were then converted into changes in optical density (Homer2 function, hmrIntensity2OD). Motion artifacts were identified and corrected using the wavelet-based method (Homer2 function, hmrMotionCorrectWavelet; iqr = 1.5; Molavi & Dumont, 2012), and a band-pass filter ([0.005 0.4] Hz; Homer2 function, hmrBandpassFilt) was applied to remove high-frequency noise (e.g., heartbeat) and slow drifts. Preprocessed optical density signals were converted into concentration changes of ΔHbO2 and ΔHHb using the modified Beer–Lambert law (Homer2 function, hmrOD2Conc; fixed DPF = 6). To improve the reliability of our results, we combined the preprocessed ΔHbO2 and ΔHHb into a single activation signal by means of the correlation-based signal improvement method (Cui et al., 2012). This allows us to draw conclusions on a signal that includes information about both oxy- and deoxy-hemoglobin, with the aim of minimizing false positives in our statistical analyses (Tachtsidis & Scholkmann, 2016).
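As a minimal illustration of this last step (not the exact toolbox code; array and variable names are assumptions), the correlation-based signal improvement for one channel can be computed as:

```python
import numpy as np

def cbsi(hbo, hhb):
    """Correlation-based signal improvement (Cui et al.): combine preprocessed
    ΔHbO2 and ΔHHb time series (1-D arrays of equal length) into a single
    'activation' signal, assuming the functional parts of the two chromophores
    are anticorrelated."""
    hbo = hbo - hbo.mean()
    hhb = hhb - hhb.mean()
    alpha = hbo.std() / hhb.std()       # relative amplitude of the two chromophores
    return 0.5 * (hbo - alpha * hhb)    # noise-reduced activation signal

# Hypothetical usage for channel ch of one actor:
# activation[:, ch] = cbsi(delta_hbo2[:, ch], delta_hhb[:, ch])
```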
Preprocessed fNIRS activation signals were then used to run two separate analyses:

(1) Contrast effects analysis (see the Contrast Effects Analysis section): to localize the brain regions associated with the processing of hearing one's own or the other's name during different cognitive load conditions (acting, control/not acting);

(2) Brain-to-brain coherence (see the Brain-to-brain Coherence section): to quantify and localize where across-brain synchrony occurs between the two actors during different cognitive load conditions (acting, control/not acting).

HR, BR, and acceleration signals were used to compute interpersonal synchrony at the physiological and behavioral levels, respectively (see the Physiological and Behavioral Interpersonal Synchrony section). This was done to investigate whether synchronization of heart and breathing rates and of movements occurs between the two actors during naturalistic social interactions under different cognitive loads (acting, control/not acting).
Contrast Effects Analysis
First-level statistical analysis was carried out using a channel-wise general linear model (GLM; Friston et al., 1994) to fit the fNIRS activation signals, down-sampled to 1 Hz, using the SPM for fNIRS toolbox (Tak, Uga, Flandin, Dan, & Penny, 2016). For each participant, the design matrix included nine regressors modeling the following experimental conditions: (1) walking block; (2) speaking block; (3) acting (Cobweb) block; (4) acting (Titania) block; (5) control-self: self-name-call event during control blocks; (6) acting-self: self-name-call event during acting blocks; (7) control-other: other-name-call event during control blocks; (8) acting-other: other-name-call event during acting blocks; (9) fixes: unplanned pauses, such as researchers fixing a motion capture sensor if it fell off during the experiment. These event epochs were convolved with the canonical hemodynamic response function and used to fit the fNIRS activation signals. Single-subject beta values were estimated for each of the nine regressors.
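A minimal sketch of this first-level step is shown below, assuming event onsets and durations (in seconds) for the nine conditions and a 1-Hz activation signal per channel; the double-gamma HRF and the simple least-squares fit are standard illustrative choices, not the exact SPM for fNIRS implementation:

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(dt=1.0, length=30.0):
    """Double-gamma canonical HRF sampled every dt seconds (SPM-like parameters)."""
    t = np.arange(0.0, length, dt)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def design_matrix(conditions, n_scans, dt=1.0):
    """conditions: ordered mapping of condition name -> list of (onset_s, duration_s).
    Returns an (n_scans x n_conditions) matrix of HRF-convolved boxcar regressors."""
    hrf = canonical_hrf(dt)
    X = np.zeros((n_scans, len(conditions)))
    for j, epochs in enumerate(conditions.values()):
        box = np.zeros(n_scans)
        for onset, dur in epochs:
            i0 = int(round(onset / dt))
            i1 = max(int(round((onset + dur) / dt)), i0 + 1)
            box[i0:i1] = 1.0
        X[:, j] = np.convolve(box, hrf)[:n_scans]   # convolve events with the HRF
    return X

# Fit one channel's activation signal y (1-D array, length n_scans, at 1 Hz):
# X = np.column_stack([design_matrix(conditions, len(y)), np.ones(len(y))])  # add constant
# betas = np.linalg.lstsq(X, y, rcond=None)[0]      # one beta per regressor (+ intercept)
```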
Group-level statistical analysis was then performed on the beta estimates using one-sample t tests to compute the following contrasts:
Contrast 1 – Main Effect of name calling: [Acting-Self + Control-Self] > [Acting-Other + Control-Other];
Contrast 2 – Simple Effect of name calling while acting: (Acting-Self) > (Acting-Other);
Contrast 3 – Simple Effect of name calling while not acting: (Control-Self) > (Control-Other);
Contrast 4 – Acting-name calling interaction: [Acting-Self > Acting-Other] > [Control-Self > Control-Other];
Contrast 5 – Main Effect of acting: [Acting-Self + Acting-Other] > [Control-Self + Control-Other].
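The contrasts above can be written as weight vectors over the nine regressors and tested across sessions; the sketch below assumes the single-subject betas are stacked in an array of shape (n_sessions, 9, n_channels), with regressors ordered as in the Preprocessing section:

```python
import numpy as np
from scipy import stats

# 0-based regressor indices: 4 control-self, 5 acting-self, 6 control-other, 7 acting-other
contrasts = {
    "main effect of name calling":       np.array([0, 0, 0, 0,  1,  1, -1, -1, 0]),
    "self > other while acting":         np.array([0, 0, 0, 0,  0,  1,  0, -1, 0]),
    "self > other while not acting":     np.array([0, 0, 0, 0,  1,  0, -1,  0, 0]),
    "acting x name-calling interaction": np.array([0, 0, 0, 0, -1,  1,  1, -1, 0]),
    "main effect of acting":             np.array([0, 0, 0, 0, -1,  1, -1,  1, 0]),
}

def group_contrast(betas, c):
    """One-sample t test of the contrast value across sessions, per channel.
    betas: (n_sessions, 9, n_channels); c: (9,) contrast weights."""
    con = np.tensordot(betas, c, axes=([1], [0]))          # (n_sessions, n_channels)
    return stats.ttest_1samp(con, popmean=0, axis=0, nan_policy="omit")
```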
Brain-to-brain Coherence
First, fNIRS data channels were grouped into ROIs based on probabilistic anatomical locations listed in Table 1. Seven ROIs were formed in accordance with the most probable anatomical region of each channel, thus resulting in six ROIs consisting of three data channels and one ROI consisting of four data channels (Figure 3B). The seven ROIs are as follows: right inferior frontal gyrus (R-IFG), right dorsolateral prefrontal cortex, right lateral frontopolar cortex (R-FPC), medial frontopolar cortex, left lateral frontopolar cortex, left dorsolateral prefrontal cortex (L-DLPFC), and left inferior frontal gyrus.
The preprocessed channel fNIRS activation signals were averaged within each ROI to obtain the seven ROI-level fNIRS time series. Both participants within a dyad had to have at least one valid channel within an ROI for that ROI to be used in the analysis for that dyad. ROI-level fNIRS time series were then also averaged across ROIs to create an eighth fNIRS signal corresponding to the whole pFC ROI.
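A minimal sketch of this ROI-averaging step is given below; the channel-to-ROI mapping would be filled in from Table 1, and the array names and shapes are assumptions for illustration:

```python
import numpy as np

def roi_average(channel_data, valid, roi_channels):
    """channel_data: (n_channels, n_samples) activation signals for one actor.
    valid: boolean array flagging channels that passed quality checks.
    roi_channels: dict mapping ROI name -> list of 0-based channel indices (from Table 1).
    Returns a dict of ROI time series; an ROI with no valid channel is filled with NaN."""
    n_samples = channel_data.shape[1]
    out = {}
    for roi, chans in roi_channels.items():
        ok = [c for c in chans if valid[c]]
        out[roi] = channel_data[ok].mean(axis=0) if ok else np.full(n_samples, np.nan)
    # Eighth signal: whole-pFC ROI as the average of the available ROI time series
    out["pFC"] = np.nanmean(np.vstack(list(out.values())), axis=0)
    return out

# An ROI enters the dyad analysis only if both actors have at least one valid channel in it:
# usable = [r for r in roi_channels if not np.isnan(rois_a[r]).all() and not np.isnan(rois_b[r]).all()]
```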
Brain-to-brain coherence was computed between the fNIRS signals in each of the eight ROIs for each dyad using wavelet transform coherence (WTC), implemented in the wavelet coherence toolbox by Grinsted, Moore, and Jevrejeva (2004). More precisely, the continuous wavelet transform is first calculated for each single fNIRS time series; the modulus of the complex transform gives the amplitude, and its argument gives the phase. The cross-wavelet coherence is then calculated from the two wavelet transforms as the square of the smoothed cross-spectrum normalized by the individual smoothed power spectra (Torrence & Compo, 1998). Hence, WTC decomposes each actor's fNIRS time series into frequency components and highlights the local correlation between the two fNIRS time series of the dyad in the time–frequency space (Cui et al., 2012). Interpersonal neural synchrony as expressed by WTC is thus represented in the time–frequency space, that is, at different frequency components (y axis) and for the whole duration of the experiment (x axis). Here, brain-to-brain coherence was computed between pairs of corresponding ROIs of the two actors within a dyad (e.g., Dyad 1: Actor 1 R-IFG with Actor 2 R-IFG). A flow chart of the procedure is shown in Figure 4.
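The sketch below illustrates the idea of wavelet transform coherence in simplified form (Morlet CWT plus time- and scale-smoothing). It is not the Grinsted et al. (2004) MATLAB toolbox: the smoothing windows are illustrative assumptions, and the cone of influence and Monte Carlo significance testing are omitted.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter1d, uniform_filter1d

def morlet_cwt(x, scales, dt, omega0=6.0):
    """Continuous wavelet transform of x with a complex Morlet wavelet.
    For omega0 = 6 the scale is approximately equal to the Fourier period (in s)."""
    x = np.asarray(x, float) - np.mean(x)
    W = np.zeros((len(scales), len(x)), complex)
    for i, s in enumerate(scales):
        t = np.arange(-4.0 * s, 4.0 * s + dt, dt)                      # wavelet support
        psi = (np.pi ** -0.25) * np.exp(1j * omega0 * t / s - 0.5 * (t / s) ** 2)
        psi *= np.sqrt(dt / s)                                         # energy normalization
        W[i] = fftconvolve(x, psi, mode="same")                        # output matches len(x)
    return W

def smooth(A, scales, dt):
    """Smooth in time (Gaussian of width ~ one scale) and across scales (3-point boxcar)."""
    def _sm(part):
        tmp = np.empty_like(part)
        for i, s in enumerate(scales):
            tmp[i] = gaussian_filter1d(part[i], s / dt)
        return uniform_filter1d(tmp, size=3, axis=0)
    return _sm(A.real) + 1j * _sm(A.imag) if np.iscomplexobj(A) else _sm(A)

def wavelet_coherence(x, y, scales, dt):
    """Squared wavelet coherence of two equally sampled time series,
    returned as an (n_scales, n_samples) array with values in [0, 1]."""
    Wx, Wy = morlet_cwt(x, scales, dt), morlet_cwt(y, scales, dt)
    s_col = np.asarray(scales, float)[:, None]
    Sxy = smooth(Wx * np.conj(Wy) / s_col, scales, dt)
    Sxx = smooth(np.abs(Wx) ** 2 / s_col, scales, dt)
    Syy = smooth(np.abs(Wy) ** 2 / s_col, scales, dt)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

# Example: WTC between the R-IFG signals of the two actors in a dyad (1-Hz data),
# probing periods of ~5-200 s, i.e., roughly 0.005-0.2 Hz (illustrative values):
# scales = np.geomspace(5, 200, 40)
# wtc = wavelet_coherence(roi_a["R-IFG"], roi_b["R-IFG"], scales, dt=1.0)
```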
To estimate the significance of the interpersonal coherence values, we generated pseudodyads and computed the WTC. Pseudodyads were constructed by taking a real dyad, splitting one of the actors' fNIRS time series into equal halves, and then reconstructing the fNIRS signal by switching the halves. This means that in a pseudopair, Actor 1's fNIRS signal from minutes 0–20 might be compared with Actor 2's fNIRS signal from minutes 10–20 and 0–10 concatenated in that order. This preserves actor identity and all sample characteristics except the live interaction itself. Pseudopairs were created within dyad to ensure that the pseudopair WTC is based on the same optode locations and channels as the real pair WTC. This allowed us to avoid the issue of unevenly distributed missing data, as different channels were excluded for each actor. For both the real and pseudodyads and for each ROI and the whole pFC, we averaged the WTC values across time (Figure 4); two-sample t tests were then used to compare the average WTC values of each real dyad versus the corresponding pseudodyad at each frequency. We refer to task frequencies for those frequency components belonging to the range of our task timing, that is, ∼0.007–0.02 Hz; we refer to frequencies above (f > 0.02 Hz) or below (f < 0.007 Hz) this range as high frequencies and low frequencies, respectively.
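A sketch of the pseudodyad construction and the real-versus-pseudodyad comparison follows; array shapes and the coherence function passed in are assumptions carried over from the sketch above:

```python
import numpy as np
from scipy import stats

def make_pseudo(sig):
    """Swap the two halves of one actor's time series: signal characteristics are
    preserved, but the live temporal alignment with the partner is destroyed."""
    half = len(sig) // 2
    return np.concatenate([sig[half:], sig[:half]])

def mean_coherence_spectra(coherence_fn, sig_a, sig_b):
    """coherence_fn(sig_a, sig_b) -> (n_frequencies, n_samples) coherence matrix
    (e.g., the wavelet_coherence sketch above with scales and dt fixed).
    Returns time-averaged spectra for the real dyad and the within-dyad pseudopair."""
    real = coherence_fn(sig_a, sig_b).mean(axis=1)
    pseudo = coherence_fn(sig_a, make_pseudo(sig_b)).mean(axis=1)
    return real, pseudo

def compare_real_vs_pseudo(real_all, pseudo_all):
    """Two-sample t test at each frequency across dyads.
    real_all, pseudo_all: (n_dyads, n_frequencies) arrays of time-averaged coherence."""
    return stats.ttest_ind(real_all, pseudo_all, axis=0)
```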
Physiological and Behavioral Interpersonal Synchrony
Similarly to the procedure described in the Brain-to-brain Coherence section and Figure 4, WTC was computed between the HR (WTCHR), BR (WTCBR), and acceleration (WTCACC) signals of the two actors. This was done for each dyad across the whole experimental session. To estimate the significance of the physiological and behavioral interpersonal coherence values, we generated pseudodyads following the same methodology as for the brain-to-brain coherence (i.e., taking a real dyad, splitting one of the actors' HR, BR, or acceleration time series into equal halves, and then reconstructing the signal by switching the halves) and computed the WTC on the pseudodyads.
For WTCHR, WTCBR, and WTCACC of both the real and pseudodyads, we averaged the WTC values across time; two-sample t tests were then used to compare the average WTCHR, WTCBR, and WTCACC values of each real dyad versus the corresponding pseudodyad at each frequency. We refer to task frequencies for those frequency components belonging to the range of our task timing, that is, ∼0.007–0.02 Hz; we refer to frequencies above (f > 0.02 Hz) or below (f < 0.007 Hz) this range as high frequencies and low frequencies, respectively.
RESULTS
Contrast Effects Analysis
In this section, we present the results of the group-level GLM analysis carried out on the fNIRS activation signals of each actor individually. In particular, our design matrix included nine regressors (see Preprocessing section), which were used to fit the fNIRS data and estimate the β values for each actor. The β values were used to test how hearing one's own name or another name modulates activity in pFC while acting or not acting. t Test activation maps resulting from the contrasts listed in the Preprocessing section are shown in Figure 5. Group-averaged β values of the significant channels (p < .05) of the compared conditions are reported as well. The corresponding results using multilevel modeling for the group analysis are reported in Table A1 in the Appendix. We did not find a significant main effect of name-calling (Figure 5A) nor a simple effect of name-calling while acting (Figure 5B); that is, in these contrasts, responses to hearing one's own name did not differ significantly from responses to hearing the partner's name. While Channel 7 was close to significance for the main effect of name-calling contrast in the GLM analysis (Figure 5A), the mixed-model analysis revealed a significant effect (Table A1 in the Appendix).
However, when participants were not acting, their mPFC did respond to hearing their own names. Specifically, hearing self-name compared with other-name led to an increased signal in mPFC (Channel 20, Figure 5C; see Table 1) while not acting. In addition, the L-DLPFC showed a significant interaction effect between the task and name factors (Figure 5D). This result was confirmed by the mixed-model analysis (Table A1 in the Appendix), which also revealed an additional significant interaction in Channel 8. In this region, there was a strong response to self-name during control tasks but a suppression of the response to self-name when acting, with less sensitivity to other-name. Finally, we did not find a significant main effect of acting (Figure 5E).
Brain-to-brain Coherence
The real versus pseudodyads comparison revealed statistically significant (p < .05; FDR corrected for multiple comparisons) differences in brain-to-brain synchrony as indicated by WTC (Figure 6). Results are shown in Figure 6, where the black dots represent the frequencies at which the group-average WTC of the real dyads (red line) and the pseudodyads (blue line) differ significantly for each ROI. In particular, we have focused on the frequency range up to ∼0.1 Hz, which includes reasonable frequency components associated with hemodynamic changes. The mPFC and left pFC ROIs were excluded from this plot as fewer than six dyads provided good data. The brain-to-brain coherence of the real dyads (Figure 6, red line) was significantly higher than the brain-to-brain coherence of the pseudodyads (Figure 6, blue line) in all the ROIs, meaning that significant across-brain synchrony occurred between the actors. This can be mostly observed in the task frequency range (∼0.007–0.02 Hz, i.e., the gray shaded areas in Figure 6) in the right pFC ROIs, as marked by the black arrows. There are also effects at low frequencies (f < 0.007 Hz), possibly related to physiological processes.
Physiological and Behavioral Interpersonal Synchrony
To assess if there was a statistically significant coherence between the actors at the physiological and behavioral levels, two-sample t tests were used to compare the average WTCHR, WTCBR, and WTCACC values of each real dyad versus the corresponding pseudodyad for each frequency component. Results are presented in Figure 7. Here too, we focused on the frequency range up to ∼0.1 Hz, which includes frequency components that could be associated with hemodynamic changes in the fNIRS data. We found that real dyads had significantly higher (p < .05) WTCHR (Figure 7A, black dots), WTCBR (Figure 7B, black dots), and WTCACC (Figure 7C, black dots) with respect to the pseudodyads at different frequency bands.
DISCUSSION
Almost nothing is known about the neural mechanisms that occur when people take on a different role such as actors in the theater and how they coordinate their movements with others as part of a performance. Here, we used fNIRS in conjunction with physiological recordings and motion capture to examine brain activity patterns in pairs of actors rehearsing Shakespeare. We found that fNIRS was able to record pFC activity in actors while they rehearsed an extract of a play for live stage performance with differences found in pFC regions between experimental and control conditions. We also showed that multimodal data recorded in naturalistic settings can be used to quantify interpersonal coordination of brain, behavior, and physiology across pairs of actors. We discuss each of these results in turn and then consider the technological limitations on this kind of research and how they may be overcome in future to enable new research in theater-neuroscience.
Methodological Advances
The primary aim of this article was to determine if it is possible to record meaningful signals from the brains of actors as they rehearse a piece, using wearable fNIRS equipment. In this, we faced several challenges, including (a) designing a cognitive task that was feasible and meaningful, (b) capturing fNIRS signals during complex movements, (c) capturing body movements and physiological signals to allow appropriate interpretation of the fNIRS, and (d) finding appropriate methods to analyze this complex data set. We believe that our study succeeds in showing that these challenges can be tackled and that there are ways in which to capture some of the real-world richness of theater and acting with brain imaging devices. This builds on previous work with musicians (Omigie et al., 2015) and dancers (Calvo-Merino et al., 2008), extending performance neuroscience to the domain of theater.
A major challenge in real-world neuroscience lies in designing experiments that allow participants to engage in complex naturalistic tasks but still have enough experimental control for a robust analysis. Here, we opted to impose external events on an ongoing sequence of complex actions to balance experimental control with real-world behavior. That is, actors could perform their normal rehearsal while we imposed “name-call” events, and we could then analyze the data to identify brain responses to this socially meaningful stimulus.
To capture fNIRS signals and physiological signals during complex actions, we used two wearable Shimadzu LightNIRS devices in conjunction with physiological monitoring belts, wearable motion capture, and video recording. We developed methods to synchronize signals across the equipment and made extensive use of motion correction in our fNIRS data analysis. There was still some data loss, but we were able to capture enough data to complete a meaningful analysis. We demonstrate the application of two different analysis methods for this data set. First, an event-related GLM approach, similar to traditional models of fMRI data (Penny, Friston, Ashburner, Kiebel, & Nichols, 2006), was used to test if the pFC responds to name-calls. Second, a wavelet coherence approach was used to track patterns of similarity across brains, physiology, and behavior. Whereas this method has been used on separate data sets before (e.g., Hirsch et al., 2021; Quer, Daftari, & Rao, 2016), few studies have applied wavelet coherence across multiple data modalities in the same participants. Our results show that it is feasible to do this, and we discuss the effects we find in different frequency bands in more detail below. Overall, we believe that our work provides a proof-of-principle for the application of fNIRS to the domain of theater and the development of new experimental designs and analysis approaches for these data.
The Acting Self
Our cognitive intervention in this study was a name-call task where participants hear their own name or a control name while acting or not acting. Previous data suggest that acting might lead to a suppression of the sense of self (Brown et al., 2019). We also know that the pFC is engaged by self-related concepts (Northoff et al., 2006) and responds to hearing one's own name (Imafuku et al., 2014; Kampe et al., 2003). Our results are in line with these earlier studies. We find that, during control tasks, actors were more responsive to their own name than to their partner's name, and this effect was found in Channel 20, which was one of our closest good channels to mPFC. Unfortunately, data quality in most medial channels was poor, and these could not be analyzed. This may have been caused by exaggerated facial expressions with consequent eyebrow movements made by the actors while acting. Frowning and eyebrow-raising are in fact sources of motion artifacts in fNIRS signals (Noah et al., 2020; Yücel, Selb, Boas, Cash, & Cooper, 2014) and may be hard to avoid in naturalistic social interactions. The positive response to self-name calling in Channel 20 is consistent with prior literature showing self-name prioritization (Gronau et al., 2003; Shapiro et al., 1997) and suggests that fNIRS is able to detect a self-name response in typical adults. We go beyond these previous studies and show that self-name responses can be seen even when participants are engaged in other cognitive tasks such as walking around a space and speaking lines from Shakespeare. Both of these are complex coordination tasks but did not require participants to be in their character roles specific to the play that was being rehearsed.
We further found that when actors were engaged in the acting task, this effect was absent. In particular, Channel 6 in L-DLPFC showed an interaction between task and name-calling, such that there was a response to one's own name during control tasks, but this was suppressed during acting. This pattern is consistent with the hypothesis that acting involves a suppression of the sense of self: Brown et al. (2019) showed that, when participants in MRI were required to respond as if they were Juliet (or Romeo), they showed reduced activity of the mPFC and SFG (Kampe et al., 2003). Both the medial and lateral pFC have previously been associated with mentalizing, including imagination (Dupre, Luh, & Spreng, 2016; Trapp et al., 2014) and pretense (Whitehead, Marchant, Craik, & Frith, 2009; German, Niehaus, Roarty, Giesbrecht, & Miller, 2004). Our findings are consistent with these pFC results and show that wearable brain imaging technology can be used effectively, in place of laboratory-based technology such as fMRI, to investigate the sense of self in this creative context.
We do not know the technique the actors used to become their Midsummer Night's Dream characters, but their technique would likely involve an in-depth knowledge of the character in relation to other characters (Stanislavsky & Hapgood, 1949). Actors must use their imagination paired with this knowledge to build a profile for the character that spans before and after the events of the play (Kogan & Kogan, 2009). In this particular piece, the actors are performing short excerpts of Shakespeare that have been designed to be engaging for children with autism (Hunter, 2014), and the social dynamic between the two actors involved in each scene is a high priority. Both during our rehearsals and in Flute Theatre's work in general, the actors often multirole, or take on several different characters during the course of a performance. This means all actors have experienced the pieces from all different points of view and do not have a strong link to one particular character (e.g., Titania vs. Cobweb). This provided us with more flexibility for data collection in the current study, because all actors could perform in all roles, but also meant that we could not examine neural mechanisms of developing expertise in performing one particular character. The question of how an actor's sense of self and brain activity patterns change when the actor delves into a single role over an extended period is an important one for future research, and claims can only be made upon replication and adaptation of this study.
Coherence across Brains and Bodies
Our multilevel hyperscanning configuration allowed us to explore methods to evaluate if interpersonal coordination occurs between pairs of actors while engaged in a dynamic task involving acting. We investigated the feasibility of using wavelet coherence analysis between the fNIRS signals (brain-to-brain coherence), heart and respiration rate signals (physiological coherence), and acceleration data (behavioral coherence) of both actors to assess interpersonal synchrony in naturalistic experiments. In particular, to determine whether the observed coherence patterns are meaningfully different from chance, we compared the wavelet coherence values of real dyads versus pseudodyads.
In terms of brain-to-brain coherence, we found statistically significant (p < .05) interpersonal synchrony in our task frequency range (Figure 6) in R-IFG and R-FPC and at low frequencies across all ROIs. The coherence in the task frequency range further confirms the involvement of right prefrontal regions in coordinating one's own behavior with a partner. This could be understood in terms of the mutual prediction model of interpersonal coordination (Hamilton, 2021; Kingsbury et al., 2019). Coherence at low frequency ranges is harder to interpret. It could be related to the switches from one task to another, which involve substantial changes in interpersonal coordination and motor processing. However, very low frequency effects in fNIRS data are typically related to physiological changes such as autonomic regulation (Pinti, Cardone, & Merla, 2015). In addition, the very low frequency range might not be very informative, as the cutoff frequencies of the high-pass filters typically used in fNIRS studies (Pinti et al., 2019), including this work (i.e., 0.005 Hz), attenuate the very low frequency components of the fNIRS signal. This suggests that the brain-to-brain coherence that we found at low frequencies might not be related to cognitive processing.
Thanks to our multimodal data set, we were able to further investigate if any coherence occurred in the physiological signals. We found significant interpersonal synchrony especially in terms of HR (Figure 7A) and, to a lesser extent, in BR (Figure 7B). Interpersonal synchrony was significantly higher in the real dyads compared with pseudodyads in agreement with previous studies. This suggests that synchrony of HR and breathing is higher in real dyads because of their interaction and the coordination of their behaviors. This is also shown by significant coherence in their movements at all frequencies (Figure 7C), given that our task involved movements with different time dynamics. The finding that HRs, breathing, and behavior are coordinated is consistent with a large number of previous studies documenting interpersonal synchrony in some of these measures (Fusaroli et al., 2016; Helm et al., 2012; Konvalinka et al., 2011).
A critical question is then: How do the different types of interpersonal coordination relate to one another? For example, a simple model might propose that coordination of breathing (driven by speaking in a conversational pattern) causes the participants' HR and brain activity to coordinate. If that were the case, we would expect to see coordination of breathing at task-related frequencies in addition to heart and brain coordination at these frequencies. The lack of breathing coordination in the task-frequency band speaks against this model. A second model might suggest that movement coordination, as the participants walk and interact, drives coordination of heartbeats (because some actions are more or less energetic) and of brain activity. This is a plausible explanation for the low-frequency effects. Here, we see robust coordination in both acceleration and HR, together with a global brain coordination effect across the whole of the pFC. The indiscriminate nature of this effect suggests that the brain-to-brain coherence at the low frequencies may be of physiological origin rather than task related, and mostly driven by HR changes.
However, this explanation may not apply so well to the brain-coherence changes in the task-related frequencies in R-IFG and R-FPC (see black arrows in Figure 6A and 6B). Coherence here is in the 0.020- to 0.021-Hz range (approximately 50-sec period) but only in these two brain regions and not in the left inferior frontal gyrus or DLPFC. The same frequency range shows changes in acceleration but not in HR or breathing, although we must be cautious because the absence of a statistical effect cannot be strongly interpreted. This pattern of data suggests that the effects in R-IFG and R-FPC highlighted with arrows in Figure 6 may reflect the coordination of cognitive processes such as mutual prediction (Kingsbury et al., 2019). However, it is also possible that there are physiological effects here that we have not captured (Tachtsidis & Scholkmann, 2016). These are preliminary results on a small sample, and further study would be needed to confirm this. Our analysis highlights some interesting frequency bands to examine in future work and shows that this analytical framework can be used to quantify multilevel interpersonal synchrony.
Taken together, our preliminary results highlight that it is possible to gather multimodal hyperscanning data and investigate social interactions by means of interpersonal synchrony in an ecological setting. By extending this approach across multiple levels (brain, behavior, physiology), it will be possible to disentangle the factors that drive interpersonal coherence and gain a better understanding of its causes.
Limitations
The study reported here represents the first time that fNIRS has been used to quantify the neural processes involved in acting while participants are engaged in a dynamic performance. As such, it provides a proof-of-principle that this kind of research can be performed and opens the way to future work, but there are also several limitations to the project. First, we worked with a group of just six actors who performed in repeated sessions, giving us a total of 19 data collection sessions. This is a small sample size for neuroimaging research. With the same participants providing data repeatedly, multilevel modeling might seem appropriate, but unfortunately, the data set was too small for this approach to be viable, and models sometimes did not converge. Future studies with larger sample sizes could use multilevel data analysis methods.
Second, our strict data quality controls meant that we lost data for several sessions, and we were not able to analyze the medial prefrontal channels because we had fewer than six good data sets available. Future studies could use acting performances where actors are seated and perform fewer movements to obtain better data, but that would reduce the dynamic interactivity of the performance. In addition, we generated pseudodyads with a within-dyad approach, that is, by swapping the first and second halves of the signals for one participant within the same dyad. This could alter the temporal dynamics of the pseudosignals and discard session-wide trends that are not directly related to the social interaction, such as slow drifts arising from psychophysiological changes in the actors (e.g., the actor getting tired), and thus deflate the correlation in the pseudopairs. In our future work, we plan to adopt a between-dyads approach (i.e., correlate one actor's signal with the signal of an actor from another dyad), which minimizes the problem of missing data across dyads, especially the variable exclusion of fNIRS channels; both constructions are sketched below. Finally, given our small sample size, we were not able to apply a correction for multiple comparisons on our name-calling contrasts. However, the cross-brain and physiological coherence results, which include data from the whole session rather than single events, did pass a correction for multiple comparisons. Larger sample sizes and methods to reduce motion artifacts should mitigate these issues in future studies.
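The following base-R sketch illustrates the two pseudodyad constructions discussed above; the signal names and lengths are hypothetical placeholders, not the study data.

```r
## Pseudodyad construction sketch; x1 and x2 are signals from the two actors of
## one dyad, y1 is a signal from an actor in a different dyad (all hypothetical).
set.seed(2)
x1 <- rnorm(1000); x2 <- rnorm(1000); y1 <- rnorm(1000)

swap_halves <- function(x) {
  ## Within-dyad approach used here: swap the first and second halves of one
  ## actor's signal before pairing it with the partner's intact signal.
  half <- floor(length(x) / 2)
  c(x[(half + 1):length(x)], x[1:half])
}

pseudo_within  <- list(a = swap_halves(x1), b = x2)  # within-dyad pseudodyad
pseudo_between <- list(a = x1, b = y1)               # between-dyads pseudodyad
```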
In any neuroimaging study of natural behavior, there is the challenge of collecting detailed data on the behavior itself to understand the relationships between brain and behavior. Here, we used motion trackers and video to capture whole-body movement, but we did not have access to eye trackers or face cameras to capture gaze behavior and facial action cues. Our past experience is that eye trackers do not provide robust data in the context of these dynamic interactions, and increasing the number of devices worn by each actor also makes it more challenging for them to perform fully. However, in future work, it will be useful to obtain more detailed measures of behavior to understand the social dynamics that mediate coherence between brains (Hamilton, 2021).
Broader Implications
Applying cognitive neuroscience to the real world requires overcoming many different challenges. These include the challenge of designing experiments that balance freedom and realism against experimental control, the technical challenges of data collection, and the challenge of analyzing complex multimodal data sets. As a proof-of-principle paper, our study demonstrates that it is possible to overcome these challenges and opens the way for future studies of the neuroscience of theater and other dynamic complex social situations. We suggest that it could be valuable to study in more detail how acting might change the sense of self, both as an actor develops a character over a series of rehearsals and in the longer term when people attend drama school and learn to express a character through drama. It would also be interesting to examine the relationship between drama training for adults and theater workshops for children with autism (Hunter, 2014), which may function as a dynamic and engaging way to let children practice new social skills.
Our work examining interpersonal coherence has broader relevance outside the study of theater. This topic is of growing interest across social neuroscience, with many studies reporting coherence at the level of brain (Hirsch et al., 2018), heartbeat (Fusaroli et al., 2016), or behavior (Richardson & Dale, 2005). However, few studies have examined all these dimensions together. We show that it is possible to collect multimodal data on interpersonal coordination and use a wavelet coherence approach to analyze several different modalities. This makes it possible to compare the frequency bands where coherence is seen across modalities and potentially draw conclusions about the origins of interpersonal coherence. In particular, we highlight effects at a frequency of 0.020–0.021 Hz, where there is coherence in some brain regions, as a potential focus for future studies. Future studies of interpersonal coordination should record across multiple modalities to get a clearer understanding of the origins and causes of these patterns of coordination. In fact, wavelet coherence is a powerful method that can be applied to understand the relationship between signals from different modalities. Future work could investigate what drives the neural coupling by computing WTC between fNIRS and physiological signals (e.g., fNIRS vs. respiration) or between physiological signals (e.g., HR vs. acceleration).
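A cross-modal WTC of this kind requires the two signals to share a common time base; the sketch below resamples a faster fNIRS series onto a slower breathing-rate grid before computing coherence. As above, the biwavelet package, the sampling rates, and all signals are assumptions made for illustration.

```r
## Cross-modal coherence sketch (assumed sampling rates and placeholder data).
library(biwavelet)

set.seed(3)
fnirs_t <- seq(0, 1800, by = 0.1)              # fNIRS assumed sampled at 10 Hz
resp_t  <- seq(0, 1800, by = 1)                # breathing rate assumed at 1 Hz
fnirs   <- rnorm(length(fnirs_t))              # placeholder HbO channel
resp    <- rnorm(length(resp_t), 15, 2)        # placeholder breaths per minute

## Resample the fNIRS signal onto the slower 1-Hz grid by linear interpolation
fnirs_1hz <- approx(fnirs_t, fnirs, xout = resp_t)$y

## WTC between one actor's fNIRS channel and the same actor's breathing rate
wtc_cross <- wtc(cbind(resp_t, fnirs_1hz), cbind(resp_t, resp), nrands = 10)
```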
Last, this article demonstrates the value of interdisciplinary research for the purpose of deepening our understanding of human social behavior. Our project brought together a large team from theater, engineering, psychology, and neuroscience to develop research in a novel area. We hope this can be a blueprint for future research in these fields and will pave the way for a deeper understanding of how theater and acting can transform our social selves and our understanding of the social world.
APPENDIX
Channel | Contrast | F Value | df | p Value | Included Channel
---|---|---|---|---|---
1 | Acting > control | 1.0703 | 81.184 | .304 | Yes |
2 | Acting > control | 0.6338 | 63.158 | .429 | Yes |
3 | Acting > control | 0.1325 | 25 | .719 | No (n < 6) |
4 | Acting > control | 2.7557 | 24 | .110 | No (n < 6) |
5 | Acting > control | 0.0208 | 24 | .887 | No (n < 6) |
6 | Acting > control | 2.5544 | 82.473 | .114 | Yes |
7 | Acting > control | 1.986 | 121.41 | .161 | Yes |
8 | Acting > control | 3.2695 | 82.473 | .074 | Yes |
9 | Acting > control | 0.5533 | 72.208 | .459 | Yes |
10 | Acting > control | 0.2489 | 52.146 | .620 | Yes |
11 | Acting > control | 0.036 | 24 | .851 | No (n < 6) |
12 | Acting > control | 1.5912 | 24 | .219 | No (n < 6) |
13 | Acting > control | 0.1183 | 82.179 | .732 | Yes |
14 | Acting > control | 0.5493 | 91.676 | .461 | Yes |
15 | Acting > control | 0.8279 | 34.052 | .369 | No (n < 6) |
16 | Acting > control | 0.9206 | 63.385 | .341 | Yes |
17 | Acting > control | 0.7868 | 121.7 | .377 | Yes |
18 | Acting > control | 0.7868 | 121.7 | .377 | Yes |
19 | Acting > control | 1.0479 | 15 | .322 | No (n < 6) |
20 | Acting > control | 1.0479 | 15 | .322 | Yes |
21 | Acting > control | 1.3097 | 53.23 | .258 | Yes |
22 | Acting > control | 0.0018 | 34.077 | .967 | No (n < 6) |
1 | Interaction | 1.936 | 81.184 | .168 | Yes |
2 | Interaction | 0.3128 | 63.158 | .578 | Yes |
3 | Interaction | 0.1932 | 25 | .664 | No (n < 6) |
4 | Interaction | 2.643 | 24 | .117 | No (n < 6) |
5 | Interaction | 7.1337 | 24 | .013 | No (n < 6) |
*6 | Interaction | 10.802 | 82.473 | .001* | Yes |
7 | Interaction | 0.5697 | 121.41 | .452 | Yes |
**8 | Interaction | 6.2948 | 82.473 | .014** | Yes |
9 | Interaction | 0.1817 | 72.208 | .671 | Yes |
10 | Interaction | 0.4125 | 52.146 | .524 | Yes |
11 | Interaction | 0.0044 | 24 | .948 | No (n < 6) |
12 | Interaction | 3.7178 | 24 | .066 | No (n < 6) |
13 | Interaction | 2.0169 | 82.179 | .159 | Yes |
14 | Interaction | 0.3421 | 91.676 | .560 | Yes |
15 | Interaction | 6.9552 | 34.052 | .013 | No (n < 6) |
16 | Interaction | 0.692 | 63.385 | .409 | Yes |
17 | Interaction | 0.0903 | 121.7 | .764 | Yes |
18 | Interaction | 0.0903 | 121.7 | .764 | Yes |
19 | Interaction | 0.1281 | 15 | .725 | No (n < 6) |
20 | Interaction | 0.1281 | 15 | .725 | Yes |
21 | Interaction | 0.1918 | 53.23 | .663 | Yes |
22 | Interaction | 16.5215 | 34.077 | < .001 | No (n < 6)
1 | Self name > other | 0.0341 | 81.184 | .854 | Yes |
2 | Self name > other | 0.2547 | 63.158 | .616 | Yes |
3 | Self name > other | 0.0024 | 25 | .961 | No (n < 6) |
4 | Self name > other | 0.0614 | 24 | .806 | No (n < 6) |
5 | Self name > other | 0.1203 | 24 | .732 | No (n < 6) |
6 | Self name > other | 0.2964 | 82.473 | .588 | Yes |
7 | Self name > other | 4.6713 | 121.41 | .033* | Yes |
8 | Self name > other | 0.7552 | 82.473 | .387 | Yes |
9 | Self name > other | 0.1531 | 72.208 | .697 | Yes |
10 | Self name > other | 0.0213 | 52.146 | .884 | Yes |
11 | Self name > other | 1.59 | 24 | .219 | No (n < 6) |
12 | Self name > other | 7.932 | 24 | .010 | No (n < 6) |
13 | Self name > other | 0.0091 | 82.179 | .924 | Yes |
14 | Self name > other | 0.8987 | 91.676 | .346 | Yes |
15 | Self name > other | 0.137 | 34.052 | .714 | No (n < 6) |
16 | Self name > other | 2.9181 | 63.385 | .092 | Yes |
17 | Self name > other | 1.2649 | 121.7 | .263 | Yes |
18 | Self name > other | 1.2649 | 121.7 | .263 | Yes |
19 | Self name > other | 0.8346 | 15 | .375 | No (n < 6) |
20 | Self name > other | 0.8346 | 15 | .375 | Yes |
21 | Self name > other | 0.0504 | 53.23 | .823 | Yes |
22 | Self name > other | 2.0822 | 34.077 | .158 | No (n < 6) |
The model was fit in RStudio using lmer with the model beta ∼ act_no * self_name + (1|pptno), and results are reported from anova() applied to the fitted model. Channels with fewer than six contributing data sets were excluded from our main analysis; they are included here for completeness, shown in italics, and we do not highlight significant results in these channels. Boldface highlights significant effects: Single asterisks indicate channels showing a significant effect that replicates our primary analysis as illustrated in Figure 5. Double asterisks mark other channels with significant effects.
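For readers who want to reproduce this analysis structure, here is a minimal sketch of the per-channel model using the formula given above. The column names (beta, act_no, self_name, pptno) come from the footnote; the lmerTest package (assumed here because the table reports non-integer, Satterthwaite-style degrees of freedom) and the placeholder data are illustrative, not the study data.

```r
## Minimal sketch of the per-channel mixed model; placeholder data, not study data.
library(lmerTest)   # lmer() plus F tests with Satterthwaite df via anova()

set.seed(4)
dat <- data.frame(
  beta      = rnorm(120),                                  # channel-wise GLM betas
  act_no    = factor(rep(c("acting", "control"), 60)),     # acting vs. control event
  self_name = factor(rep(c("self", "other"), each = 60)),  # own name vs. other name
  pptno     = factor(rep(1:6, length.out = 120))           # participant identifier
)

m <- lmer(beta ~ act_no * self_name + (1 | pptno), data = dat)
anova(m)   # F value, df, and p value for each term, as tabulated above
```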
Reprint requests should be sent to Dwaynica A. Greaves, Institute of Cognitive Neuroscience, University College London, Alexandra House, WC1N 3AZ, London, United Kingdom, or via e-mail: [email protected].
Data Availability Statement
There is no IRB approval for data posting and sharing due to the small sample of unique participants.
Author Contributions
Dwaynica A. Greaves: Conceptualization; Data curation; Investigation; Methodology; Project administration; Writing—Original draft; Writing—Review & editing. Paola Pinti: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Project administration; Resources; Software; Validation; Visualization; Writing—Original draft; Writing—Review & editing. Sara Din: Conceptualization; Data curation; Investigation; Methodology; Resources; Validation; Writing—Original draft. Robert Hickson: Data curation; Formal analysis; Methodology; Software; Writing—Original draft. Mingyi Diao: Data curation; Formal analysis; Methodology; Software; Writing—Original draft. Charlotte Lange: Data curation; Investigation; Methodology; Resources; Validation. Priyasha Khurana: Data curation; Formal analysis. Kelly Hunter: Conceptualization; Methodology; Project administration; Resources. Ilias Tachtsidis: Methodology; Resources; Software; Supervision. Antonia F. de C. Hamilton: Conceptualization; Data curation; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Resources; Software; Supervision; Validation; Visualization; Writing—Original draft; Writing—Review & editing.
Diversity in Citation Practices
Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance.
REFERENCES
Author notes
D. A. Greaves and P. Pinti contributed equally to this study.