While viewing a video clip, we experience a wide variety of contents, from low-level features of the images to high-level ideas such as the storyline. Each change in our experience must be supported by some corresponding change in neurophysiological activity. Differentiation analysis, which quantifies the differences in brain activity by measuring the distances between observed brain states, was applied here to continuous high-density electroencephalographic data recorded while participants watched short video clips. These clips were manipulated in various ways to change the degree of meaningfulness of their contents. We found that neurophysiological differentiation mirrored phenomenal differentiation, being higher for meaningful clips and lower for phase-scrambled versions or random noise. The distinction between meaningful and meaningless clips was present even at the individual level, and moreover, differentiation values correlated with individual subjective reports of meaningfulness. Spatial and spectral breakdowns of the overall effect identified frontal and posterior regions of interest and highlighted specific roles for distinct frequency bands. Comparing the results with a multivariate decoding approach revealed that the two methods capture different aspects of brain activity and highlighted a crucial theoretical distinction between the level and pattern of activity. In future applications, differentiation analysis may be used to evaluate the subjective meaningfulness of stimuli when behavioral responses may be inadequate, as with disorders of consciousness.
As we experience the world, or when watching a movie, we encounter a variety of objects and situations. Each moment of experience can be uniquely identified by its contents, which can range from simple, low-level, visual features, such as oriented edges, to complex, high-level, invariant ideas, such as faces, places, danger, and so on. On the other hand, we can also encounter inputs that are objectively different but subjectively indistinguishable, such as television noise. If a change in the inputs results in a change in the corresponding experience, then this difference can be considered subjectively meaningful. These meaningful differences must be supported by distinct neural activity, and we further assume that the more distinct the experiences are, the more differentiated the supporting activity should be. Differentiation analysis (DA) quantifies how distinct the patterns of neurophysiological activity are during presentation of a given stimulus set (Mensen, Marshall, & Tononi, 2017). Differentiation will be zero if the activity pattern is consistent for the duration of the stimulus set, whereas high differentiation is achieved when the activity pattern changes substantially from moment to moment. Thus, applying DA to neural activity evoked by a set of stimuli can provide an objective measure of the neural differentiation that must underlie the subjective meaningfulness of that particular stimulus set.
In previous work, we employed fMRI to show that differentiation measures based on the complexity of neurophysiological activation were higher when viewing short “Charlie Chaplin” clips compared with watching a scrambled version of the clips and were lowest when watching television noise (Boly et al., 2015). This result was obtained despite similar overall levels of neurophysiological activation and although the stimulus differentiation (e.g., pixel-to-pixel variability) was actually higher in the case of television noise. Subsequent work has demonstrated that stimulus set meaningfulness can also be captured by electroencephalography (EEG) using DA (Mensen et al., 2017). Participants were presented with static images from a number of meaningful categories (animals, food, people, etc.) as well as three meaningless categories (noise, phase-scrambled images, and overlapping disks). By measuring the multivariate distances between the evoked neural responses to each image, we showed that neurophysiological differentiation was higher for the meaningful compared with meaningless images. Differentiation was significant at the individual level and was most related to the individual's rating of subjective differences, as opposed to stimulus novelty, predictability, or stimulus differentiation.
In this study, we apply DA to continuous EEG recordings obtained while viewing video clips to provide an objective measure of the subjective meaningfulness of a given stimulus set. Unlike static images, continuous stimuli provide a stimulus set that is more naturalistic and one to which the brain is especially well adapted. Moreover, movie clips offer a generally richer and more meaningful experience and trigger high-level ideas associated with the ongoing temporal development of a story as well as meaningful dialogue in the auditory domain. As with our fMRI study, random noise was used as the absolute baseline for subjective meaningfulness. Additional contrasting conditions were employed to examine the effect of stimulus novelty, spatial structure, temporal coherence, and visual complexity, and a breakdown of the DA was used to identify specific frequencies and channels that contribute to these effects. Finally, we used a decoding approach between conditions to examine the relationship between level of activity and differentiation.
Eight neurologically healthy participants took part in the experiment (all men, seven right-handed). All participants were given oral and written information about the study and completed an informed consent form before the experiment. The experiment was approved by the University of Wisconsin–Madison institutional review board.
Stimuli and Task
Participants were shown a series of 30-sec-long television advertisements. Forty-two distinct advertisements were collected. Advertisements were chosen because they are relatively short yet self-contained stories that present the participant with a large range of ideas and situations. Each clip was subsequently edited in five distinct ways. In the reverse condition, the clip was reversed in time such that scene transitions, as well as the statistical properties of the audio, remained unchanged; however, comprehension of the audio would be diminished, as would high-level ideas regarding the ongoing story. In the outlines condition, each frame of the movie was reduced to object outlines identified using an edge detection algorithm (Wolffsohn, Mukhopadhyay, & Rubinstein, 2007). This manipulation left the audio intact while limiting the richness of the visual scene. The shuffled condition took small sections of the movie (between 2 and 4 sec) and recombined the sections in random order. Although the visual and audio contents shown to the participant remained the same, abrupt transitions increased and the storyline was lost. The remaining two manipulations relied on phase scrambling (Honey, Kirchner, & VanRullen, 2008). This was achieved by separating each frame into its Fourier phase and magnitude components, adding a random phase between 0 and 2π, and then recombining the components. A phase-scrambled condition, in which the temporal order of frames was conserved, and a phase-time-scrambled condition, with randomized order of frames, were created. Finally, clips of television noise were made by combining a series of individual frames created by generating random values of red, green, and blue for each pixel.
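The phase-scrambling step can be sketched as follows. This is a minimal NumPy illustration on a single grayscale frame (the function name and frame size are our own; the actual stimuli were RGB video frames processed per Honey et al., 2008). One common way to add a random phase while keeping the inverse transform real-valued is to use a Hermitian-symmetric phase field, such as the phase of the FFT of white noise:

```python
import numpy as np

def phase_scramble(frame, rng=None):
    """Phase-scramble a 2-D image: keep the Fourier magnitude, add a
    random phase to each coefficient, and invert the transform."""
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fft2(frame)
    # A Hermitian-symmetric random phase field (the phase of the FFT of
    # white noise) keeps the inverse transform real-valued.
    random_phase = np.angle(np.fft.fft2(rng.standard_normal(frame.shape)))
    scrambled = np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + random_phase))
    return np.real(np.fft.ifft2(scrambled))

frame = np.random.default_rng(0).random((64, 64))  # stand-in for one frame
out = phase_scramble(frame)
```

Because only the phase is altered, the magnitude spectrum, and hence low-level statistics such as the spatial frequency content, are preserved while the image content becomes unrecognizable.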
During electrode preparation, participants repeatedly viewed 1 of the 42 clips, chosen randomly for each participant. This clip was presented in its original format for a minimum of 20 repetitions to create a habituated condition controlling for novelty. The same clip was then presented in all its edited formats so that participants could familiarize themselves with them. During the EEG recording, stimulus presentation was divided into blocks; in each block, the participant was presented with the habituated video, a novel video, a clip from each of the five edited conditions, and a television noise example. These eight conditions were presented in a random order for each block, with eight blocks per participant, for a total of 64 clips. Except for the habituated condition, all clips presented were distinct from one another, and each participant viewed a unique set of videos. After each video, participants were asked to rate the film on a scale from 1 to 6 to establish whether they thought it was interesting, meaningful, and understandable. Participants could take a break before the start of each video until they were ready and were encouraged to take additional breaks between blocks. During the breaks, the participants could ask any questions about the experiment or discuss the contents of the previous clip with the experimenter. This was done to reduce interference and mind-wandering related to the previous clip. Participants were asked to blink normally and to keep their eyes on a fixation cross overlaid on the video to reduce the influence of eye movements on the recording.
EEG Recording and Preprocessing
The EEG was recorded from 256 channels on the standard EGI net (hydrocel geodesic sensor) using the Net Amps 300 amplifier (Tucker, 1993). The central channel (Cz) was used as a reference. After a 0.03-Hz first-order high-pass filter was applied, the data were imported into EEGLAB for preprocessing (Delorme & Makeig, 2004). All participants were analyzed using the same general pipeline. Data were first down-sampled from the original 1000 to 250 Hz. A bandpass filter between 0.5 and 40 Hz was applied. The time series was then epoched from 2 sec after movie start until 2 sec before the end of the movie, leaving trials of 26 sec. Each trial was then manually examined for bad channels and artifacts. Bad channels were subsequently removed, and trials with large artifacts were excluded from further processing. Independent component analysis was then performed over the remaining channels and trials. Components were examined individually by plotting their time course, power spectra, temporal evolution, and topography using in-house but openly available visualization tools (github.com/CSC-UW/csc-eeg-tools). Components that corresponded to eye movements, blinks, heartbeat, muscle activity, and nonbiological artifacts were removed, and the time series was recomputed. The data set was then manually reexamined, and the remaining bad channels and trials were removed; on average, 7.1 trials were removed per participant. The removed channels were then recovered using spline interpolation (Perrin, Pernier, Bertrand, Giard, & Echallier, 1987); on average, 6.9 channels were interpolated per recording. Finally, channels were rereferenced to the average activity, and the original reference was reintroduced into the data set, yielding 257 channels and 6,500 samples for each trial. Topographical and spectral statistical analyses were performed using a nonparametric permutation procedure.
Univariate t values were calculated for each channel or frequency bin independently; then, threshold-free cluster enhancement was applied to take the activity of neighboring channels or bins into account, and a maximum-statistic permutation approach was used to determine the statistical significance of these enhanced values (Mensen & Khatami, 2013). This procedure strictly controls the family-wise error (FWE) rate of the multiple comparisons across channels and frequencies.
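Omitting the TFCE enhancement step, the maximum-statistic portion of this procedure can be sketched as follows (a simplified NumPy illustration with hypothetical variable names, not the published implementation):

```python
import numpy as np

def max_stat_p_values(t_obs, t_perm):
    """Family-wise corrected p values via the maximum-statistic method.

    t_obs: observed (enhanced) t values, one per channel/bin, shape (n,).
    t_perm: the same statistics recomputed under random relabelings,
            shape (n_permutations, n).
    Each observed value is compared against the distribution of the
    single largest absolute statistic from each permutation."""
    max_dist = np.abs(t_perm).max(axis=1)  # one maximum per permutation
    return np.array([(np.abs(t) <= max_dist).mean() for t in t_obs])

rng = np.random.default_rng(2)
t_obs = np.array([6.0, 0.5, 1.0])          # one clearly large value
t_perm = rng.normal(0, 1, (1000, 3))       # toy null distribution
p_fwe = max_stat_p_values(t_obs, t_perm)
```

Because each observed value is compared against the largest statistic per permutation, a value is deemed significant only if it is unlikely to arise anywhere across all channels and bins, which is what controls the FWE rate.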
When measuring the differentiation of the brain's responses to an ongoing stimulus (i.e., a movie), the general concept is to capture the number of distinct brain states that occur throughout this period and quantify how different these states are from each other. Practically, this can be achieved by assessing the state of the brain at various intervals within this period and measuring the differences between all these states. In previous work, neurophysiological differentiation was calculated as the mean Euclidean distance between the evoked responses after presentation of static images (Mensen et al., 2017). When viewing a continuous stimulus, there is no clear-cut event from which to obtain an ERP, so time-locked comparisons of signal amplitude are not feasible. The power spectral density (PSD) of a segment, on the other hand, can be compared without the need to time-lock to a specific event. Thus, to estimate neurophysiological differentiation of the ongoing activity, we split each trial into 1-sec segments and transformed the time series of each segment into its corresponding PSD using the fast Fourier transform. Using 1-sec segments gives an acceptable frequency resolution of 1-Hz bins, allows the underlying activity to be assumed approximately stationary, and creates a sufficient number of states to estimate differentiation for each trial. If neurophysiological differentiation follows from phenomenal differentiation, we expect the pattern of neural activity to remain stable during the meaningless clips and to be differentiated during meaningful clips. Stable neural activity should be characterized by similar spatial topographies and frequency spectra across states. Thus, we defined neurophysiological differentiation as the median Euclidean distance, in the high-dimensional space of channels and frequencies (from 1 to 40 Hz), between all states during a single clip (see Figure 1 for an example of state-by-state distances for two conditions).
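As a concrete sketch of this computation (a minimal NumPy illustration with our own function name, not the original analysis code; sampling rate, segment length, and frequency range follow the description above):

```python
import numpy as np

def differentiation(eeg, sfreq=250, fmin=1, fmax=40):
    """Median Euclidean distance between the per-segment PSDs of a trial.

    eeg: array of shape (n_channels, n_samples), one trial.
    Each 1-sec segment is transformed to its PSD; the flattened
    (channels x frequency bins) PSD of each segment is one 'state'."""
    seg_len = sfreq  # 1-sec segments give 1-Hz frequency bins
    n_segs = eeg.shape[1] // seg_len
    states = []
    for s in range(n_segs):
        seg = eeg[:, s * seg_len:(s + 1) * seg_len]
        psd = np.abs(np.fft.rfft(seg, axis=1)) ** 2
        states.append(psd[:, fmin:fmax + 1].ravel())  # keep 1-40 Hz
    states = np.array(states)
    # all pairwise Euclidean distances between states
    dists = [np.linalg.norm(states[i] - states[j])
             for i in range(n_segs) for j in range(i + 1, n_segs)]
    return float(np.median(dists))

trial = np.random.randn(257, 26 * 250)  # one 26-sec trial, 257 channels
nd = differentiation(trial)
```

A trial whose activity pattern never changes (identical PSDs in every segment) yields a differentiation of zero, while moment-to-moment changes in topography or spectrum increase the pairwise distances and hence the median.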
To perform DA, it is necessary to select an appropriate metric to evaluate the distances between system states. For the current analysis, we use the Euclidean distance between PSDs. Pattern-based measures such as the correlational distance are not appropriate to apply to PSDs without some form of transformation because the general shape (1/f) will be highly preserved across windows. Cross-validated distances such as the linear discriminant t value require multiple repetitions of the same stimuli (Walther et al., 2016). In theory, having multiple repetitions of the same stimuli is beneficial to reduce the influence of independent noise sources; in practice, however, it means showing fewer distinct clips from which to estimate neurophysiological differentiation. Given that the goal of DA is not to measure a specific pattern of activity for a given stimulus but to estimate how differentiated patterns of activity can be for a given stimulus set, we chose to maximize the number of distinct trials, and thus the cross-validated measures are inappropriate.
We explored the results further by measuring neurophysiological differentiation in each channel and for each frequency band. Although this approach might underestimate the overall differences in patterns, we can gain some insight into whether certain channels or frequencies contribute more to differentiation than others. To account for baseline levels of neurophysiological differentiation, for each participant, we calculated the ratio between the mean differentiation value of the meaningful movies over the meaningless ones. For ease of interpretation, we subtracted 1 from this value such that any positive value represented higher differentiation for meaningful clips. We also correlated the differentiation within each trial with the behavioral ratings of whether the participant thought the clip was interesting, meaningful, and understandable and their estimate of the number of unique experiences during the clip.
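With hypothetical per-trial values, the baseline normalization described above reduces to a one-line computation:

```python
import numpy as np

# Illustrative per-trial differentiation values for one participant
# (made-up numbers, not study data).
meaningful = np.array([4.2, 3.9, 4.5])
meaningless = np.array([2.1, 2.4, 2.0])

# Ratio of mean differentiation for meaningful over meaningless clips,
# minus 1, so that any positive value indicates higher differentiation
# for meaningful clips.
ratio = meaningful.mean() / meaningless.mean() - 1
```

Subtracting 1 simply re-centers the ratio at zero, so the sign of the value directly indicates the direction of the effect.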
Finally, multivariate pattern decoding was used to distinguish between meaningful and meaningless clips from the PSD. This was done to contrast DA to more commonly used power analyses. The LIBLINEAR toolbox was used in conjunction with ClassifyEEG for the multivariate decoding (Cauchoix, Crouzet, Fize, & Serre, 2016; Fan, Chang, Hsieh, Wang, & Lin, 2008). A binary linear classifier was used with a training data set of 75% of trials, using the other 25% to test the accuracy of the model. A “holdout” approach was used such that, for both training and testing, the same number of trials (between 15 and 18 depending on the participant) was taken from each of the two conditions. This ensures that chance accuracy is maintained at 50%, despite a higher number of meaningful trials in the full data set. Classification accuracy was cross-validated by repeating this process 50 times using different trials for training and testing and then taking the mean accuracy across all instances. Initially, all channels and frequencies were used as features, and then to further explore effects, channels and frequencies were analyzed independently.
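The balanced holdout scheme can be sketched as follows, substituting a simple nearest-centroid rule for the LIBLINEAR classifier used in the study (a minimal illustration with our own function and variable names):

```python
import numpy as np

def balanced_holdout_accuracy(X_a, X_b, n_reps=50, train_frac=0.75, rng=None):
    """Balanced holdout decoding between two conditions.

    X_a, X_b: arrays (n_trials, n_features) of features per condition.
    The same number of trials is drawn from each condition for training
    and testing, so chance accuracy stays at 50%. A nearest-centroid
    rule stands in here for a linear classifier."""
    rng = np.random.default_rng() if rng is None else rng
    n = min(len(X_a), len(X_b))            # equalize trial counts
    n_train = int(n * train_frac)
    accs = []
    for _ in range(n_reps):
        ia = rng.permutation(len(X_a))[:n]
        ib = rng.permutation(len(X_b))[:n]
        ca = X_a[ia[:n_train]].mean(axis=0)  # class centroids (training)
        cb = X_b[ib[:n_train]].mean(axis=0)
        correct = 0
        for x in X_a[ia[n_train:]]:
            correct += np.linalg.norm(x - ca) < np.linalg.norm(x - cb)
        for x in X_b[ib[n_train:]]:
            correct += np.linalg.norm(x - cb) < np.linalg.norm(x - ca)
        accs.append(correct / (2 * (n - n_train)))
    return float(np.mean(accs))

# Two well-separated toy conditions
rng = np.random.default_rng(0)
A = rng.normal(0, 1, (20, 10))
B = rng.normal(3, 1, (20, 10))
acc = balanced_holdout_accuracy(A, B, rng=rng)
```

Drawing equal trial counts per condition for both training and testing is what keeps chance at 50% even when the full data set contains more meaningful than meaningless trials; averaging over repeated random splits plays the role of the 50-fold cross-validation described above.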
The DA approach is most useful if it can be applied to the individual to assess the meaningfulness of a particular stimulus set, given that each individual may find distinct aspects of the clips more (or less) meaningful. To this end, we calculated individual significance by first measuring the differentiation of every trial, comparing the meaningful with meaningless trials by t value, relabeling the trials at random 5,000 times, recalculating the t value for each permutation, and finally comparing the t value of the original labeling to the distribution of permuted values to obtain a statistical significance p value for each participant separately. When examining the full data set (all channels and frequency bins), five of the eight participants had a significant difference between meaningful and meaningless clips (p ≤ .0002, the minimum possible p value given the number of permutations). This indicates that, for those participants, the t value of the original labeling exceeded that of every one of the 5,000 random relabelings. When focusing on the spatial ROIs described below, in both cases, seven of the eight participants showed significant individual results. Importantly, the participant with the sole nonsignificant individual result differed between the ROIs; that is, all participants showed significant individual results in at least one, and generally both, ROIs.
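The individual-level permutation test can be sketched as follows (a one-sided NumPy illustration with our own function names; the smallest attainable p value is on the order of 1/5,000, matching the p ≤ .0002 reported above):

```python
import numpy as np

def t_value(a, b):
    """Two-sample t value with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

def label_permutation_p(meaningful, meaningless, n_perm=5000, rng=None):
    """p value for one participant: relabel trials at random and compare
    the observed t value against the permutation distribution."""
    rng = np.random.default_rng() if rng is None else rng
    pooled = np.concatenate([meaningful, meaningless])
    n_m = len(meaningful)
    t_obs = t_value(meaningful, meaningless)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += t_value(perm[:n_m], perm[n_m:]) >= t_obs
    return (count + 1) / (n_perm + 1)  # never exactly zero

# Toy per-trial differentiation values for one participant
rng = np.random.default_rng(1)
p = label_permutation_p(rng.normal(2, 1, 40), rng.normal(0, 1, 24), rng=rng)
```

Because the trial labels, rather than the data, are shuffled, the test makes no distributional assumptions about the differentiation values themselves.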
Figure 2 shows the mean differentiation for each category, normalized by the mean differentiation across categories for each participant, as well as the corresponding group mean across all eight participants. At the group level, a one-way ANOVA indicated that there were significant differences between the conditions, F(7, 49) = 12.574, p < .001, effect size: η2 = .64. Post hoc t tests indicate that the habituated, t(7) = 2.954, p = .021, Cohen's d = 1.044; novel, t(7) = 4.685, p = .002, d = 1.656; and shuffled, t(7) = 6.904, p < .001, d = 2.441, conditions showed significantly higher differentiation than the mean, whereas phase-scrambled, t(7) = −4.064, p = .005, d = 1.437; phase time, t(7) = −3.288, p = .013, d = 1.162; and noise, t(7) = −3.852, p = .006, d = 1.362, showed significantly lower differentiation. Differentiation levels for the reverse, t(7) = 0.962, p = .368, d = 0.340, and outlines, t(7) = 1.330, p = .225, d = 0.470, conditions did not differ significantly from the mean. However, using full pairwise comparisons, all the meaningful conditions showed significantly higher differentiation when compared with any of the meaningless conditions (all ps < .05; see Supplementary Figure 1 for the full matrix).
When differentiation was calculated for each channel independently, all channels showed positive ratio values, indicative of higher differentiation for meaningful clips. Group statistics showed two spatial regions of significantly higher differentiation for meaningful clips than meaningless clips (see Figure 3). The most significant region was a frontal–central region (peak channel E52; t = 8.212, p < .001) consisting of 33 channels (p < .01). The second region, around central posterior channels (peak channel E146; t = 6.392, p = .018), consisted of 29 channels (p < .02). These two regions defined the spatial ROIs (individual results already described above).
Figure 4 shows the individual frequency band contributions to the overall differentiation value. Although most frequency bins showed a positive ratio between meaningful and meaningless clips, two specific bands showed significantly greater differentiation for meaningful clips: a low-frequency range between 1 and 6 Hz, peaking at 4 Hz (t = 7.422, p < .001), and a beta range from 13 to 17 Hz, peaking at 14 Hz (t = 6.374, p = .018). Notably, the alpha range from 9 to 10 Hz showed nonsignificant negative values. We further examined the breakdown of frequencies within both the frontal and posterior ROIs (see Figure 4). The frontal region showed increased differentiation at low frequencies (2-Hz peak: t = 7.254, p < .001) and in the beta range (14-Hz peak: t = 6.221, p = .014), whereas the posterior region showed significant peaks in the 14-Hz range (t = 11.809, p < .001) and around 37 Hz (t = 4.223, p = .042). The pattern across the individual conditions was largely consistent over these regions and peak frequencies, with a few notable exceptions. From the 2-Hz to the 14-Hz bin in the frontal region, there was an evident decrease in differentiation for the reverse and shuffled conditions, the two movie types with a form of temporal scrambling. Moreover, whereas novel standard movies and shuffled movies tended to be among the most highly differentiated, their differentiation decreased at higher frequencies in the posterior region.
Differentiation values for each trial were also correlated with each participant's ratings of whether they thought the movie was interesting, meaningful, or understandable and with their estimate of the number of experiences during the movie (see Figure 5). When the differentiation values over all channels and frequencies were considered, differentiation significantly correlated with ratings of interest, t(7) = 4.613, p = .002, d = 1.631; meaningfulness, t(7) = 5.435, p = .001, d = 1.922; and the number of experiences, t(7) = 5.417, p = .001, d = 1.915. When considering only differentiation in the frontal region, correlation values were more consistent across participants for all ratings (mean t value = 7.991). Although correlation values also improved for the posterior region, participants' estimates of the number of experiences were the most reliable predictor of neurophysiological differentiation (t = 10.280, p < .001, d = 3.634).
When the PSD over all channels and frequency bins was used, the linear classifier was able to decode meaningful and meaningless clips with a high mean accuracy of 93.7% (SE = 1.3%, range = 87.8%–99.5%). Figure 6 shows the decoding accuracy when each channel and each frequency bin were examined independently. Mean accuracy was reduced to 70.2% (SE = 2.9%) when channels were examined independently. The central posterior region showed the highest decoding accuracy across all frequencies, with a mean of 80.7% (peak at E151; SE = 1.5%). When examining the individual contributions of frequency bins (with all channels considered together), decoding accuracy was significantly above chance (mean accuracy = 74.6%, SE = 1.4%). Accuracy peaked in the range of 5–11 Hz (peak of 7 Hz at 84.0%, SE = 4.0%; t = 8.543, p = .008). Notably, this is the range that showed the lowest differentiation ratio between meaningful and meaningless movies. Finally, we also examined the frequency breakdown for both the frontal and posterior ROIs. Although many frequency bins in both regions still showed significantly above-chance accuracy at the group level (see Figure 6), accuracy was reduced compared with using all channels (frontal mean = 58.3%, SE = 0.8%; posterior mean = 56.2%, SE = 0.7%). This suggests that some interplay between frequencies and spatial locations is key to accurate decoding.
In a previous fMRI study, we showed that measures of Lempel–Ziv complexity of the BOLD signal can provide an objective estimate of the subjective meaningfulness of a stimulus set for simple movie clips (Boly et al., 2015). More recently, we introduced the method of DA and demonstrated how it could be applied to ERPs to measure the subjective meaningfulness of image sets (Mensen et al., 2017). Our current findings extend these earlier results by adapting DA to single trials of continuous EEG recordings while participants viewed (and listened to) video clips that were more or less subjectively meaningful. The results show that neurophysiological differentiation is significantly higher for video clips with subjectively meaningful content than for clips with little such content, even at the individual level. Moreover, by utilizing an appropriate set of contrasting conditions, we showed that DA was sensitive to both low- and high-level aspects of subjective meaningfulness in specific channels and frequencies. Differentiation scores for individual trials were significantly correlated with the participants' ratings of subjective meaningfulness and interest and, most consistently, with their estimate of the number of different experiences they had in the course of the clip.
The global measure of differentiation defines the state of the system as the multivariate PSD across all channels over short time windows. We could therefore apply DA without the need to lock to a specific temporal event (e.g., stimulus onset). We also investigated more restricted measures of differentiation by considering as system states specific frequency bands over all channels as well as individual channels over all frequencies. When the state of the system is defined by individual channels, the resulting topography for the ratio of differentiation between meaningful and meaningless movies showed a central posterior hot spot remarkably similar to our previous finding using static images (Mensen et al., 2017). This finding suggests that, although the video clips have a number of nonvisual concepts, visual perception nonetheless dominates the overall landscape of differentiation. In absolute terms, the largest meaningful-to-meaningless ratio was obtained in this posterior region. However, at the group level, statistical significance was greater in a frontocentral region, likely because of higher topographic consistency across individuals. This frontal region may be important for audiovisual integration: Frontal scalp activity in the high-beta range has been related to the McGurk effect (Keil, Müller, Ihssen, & Weisz, 2012), and frontal theta scalp activity was higher when broken audio was successfully understood with the aid of visual cues (Shahin, Kerlin, Bhat, & Miller, 2012). In both the posterior and frontal hot spots, the highest differentiation was observed for novel and shuffled movies.
Partitioning the frequency spectrum showed a distinct pattern of higher differentiation to meaningful clips in the ranges of 1–6 and 13–17 Hz. These frequency bands are in line with recent research showing the importance of theta activity for feed-forward activity and beta activity for feedback-related activity in the visual cortex (Bastos et al., 2015). The spectral breakdown of differentiation changed significantly when looking at frontal and posterior regions independently. In the frontal region, lower frequencies (∼4 and ∼13 Hz) dominate differentiation to meaningful movies. Both ranges are consistent with previous findings suggestive of their importance in multimodal integration (Keil et al., 2012; Shahin et al., 2012)—a key difference between the current study and our previous study examining differentiation to static images (Mensen et al., 2017). The pattern among meaningful clips also differed between these two frequencies. Temporally altered conditions (reversed and shuffled) showed a reduced level of differentiation at 13 Hz compared with 4 Hz, suggesting that this frequency may be more sensitive to storyline continuity and temporal ordering. The posterior region showed a broadband range of differentiation with an emphasis on higher frequencies. The importance of these higher frequencies, especially gamma (>30 Hz), for visual perception has been well documented in healthy (Martinovic & Busch, 2011) and clinical (Tan, Lana, & Uhlhaas, 2013) populations. These distinct patterns would indicate that the global differentiation values (over the entire spectra and scalp) are likely to reflect some specific interactions across channels and frequencies (Siegel, Donner, & Engel, 2012).
The most consistent attribute of these spectral breakdowns was a low differentiation ratio between meaningful and meaningless trials in the alpha range (8–12 Hz). This result contrasts with decoding accuracy, which was highest in these very frequencies. Previous research has shown an inverse relationship between posterior alpha activity and attention during passive viewing of television clips (Simons, Detenber, Cuthbert, Schwartz, & Reiss, 2003). On the other hand, the focus of spatial attention can be decoded using the topography of alpha power (Samaha, Sprague, & Postle, 2016; Foster, Anderson, Serences, Vogel, & Awh, 2015). The precise role of alpha activity in perception is controversial. Several studies have shown that the particular phase of ongoing alpha activity is predictive of whether a briefly presented stimulus will be perceived or not (Mathewson, Gratton, Fabiani, Beck, & Ro, 2009; van Dijk, Schoffelen, Oostenveld, & Jensen, 2008) or whether a transcranial magnetic pulse will elicit a phosphene (Dugué, Marque, & VanRullen, 2011; Romei et al., 2008). These findings and our results suggest that alpha activity may act as a gate to perception, modulated by attention, but that it does not contribute to phenomenal differentiation. A possible interpretation is that decoding is sensitive to the drop in overall attention during meaningless clips, which is reflected by changes in alpha power. On the other hand, decoding is also likely sensitive to shifts of attention between meaningful clips, which are reflected by changes in alpha topography. However, activity in the alpha range would not reflect changes in the content of perception, at least not to the extent measurable using EEG. The discrepancy between DA and the decoding results emphasizes that the two approaches are distinct and independent.
We systematically manipulated the movies to alter their meaningfulness in different ways. Whereas the outlines and reverse conditions showed lower differentiation ratios, the temporally shuffled condition was on par with the novel and habituated conditions. It is plausible that, as long as the audio and video sequence is intelligible for a few seconds, neurophysiological activity is just as differentiated. In this sense, the neurophysiological differences underlying the high-level experience of storyline continuity may not be easily captured using EEG. On the other hand, the spatial and frequency breakdowns suggest that there may be specific components of the EEG signal that are sensitive to these aspects (see categorical differences in the frontal ROI at 14 Hz or posterior ROI at 37 Hz in Figure 4). It should be emphasized, however, that DA does not aim to qualify or localize which particular aspects of a stimulus set will be meaningful to a participant. Indeed, the specific aspects of a stimulus set that are subjectively meaningful are likely to be partially unique, and differently localized, in distinct individuals (Charest, Kievit, Schmitz, Deca, & Kriegeskorte, 2014). The key assumption of DA is simply that different experiences are supported by different neural states. This assumption is consistent with the finding that, at the individual level, all participants showed significantly higher differentiation to meaningful movies in at least one of the ROIs, but not necessarily in both ROIs or in the analysis of the full data set.
In this study, DA was applied to continuous EEG recordings while participants watched movies in which the degree of subjective meaningfulness was expected to be relatively similar. Future studies should apply DA to cases where meaningfulness is more dependent on the specific expertise or history of individual participants. The multivariate approach used in DA is critical in these cases as it makes no a priori assumptions about localization or type of activity pattern that supports the particular experiences participants may have. A further important aspect of DA is that it requires no report, verbal or otherwise. DA may therefore be a useful tool to assess the subjective meaningfulness of stimuli, for example, in patients with disorders of consciousness.
We thank Melanie Boly, Larissa Albantakis, and William Mayner, for their fruitful discussions on the topic. We also thank Ben Jones and Francesca Siclari for their assistance with electrode preparation. This project was funded by the Swiss National Science Foundation (grant no. P300P3_158505), the Human Brain Project (EU-H2020-fetflagship-hbp-sga1-ga720270), the Luminous project (EU-H2020-fetopen-ga686764), the Templeton World Charities Foundation (grant TWCF0667/AB41), and Tiny Blue Dot, Inc. (grant MSN196438/AAC1335).
Reprint requests should be sent to Armand Mensen, Coma Science Group, GIGA-Consciousness, Avenue de l'Hopital, 11, Liège 4000, Belgium, or via e-mail: email@example.com.