Our moment-to-moment conscious experience is paced by transitions between states, each corresponding to a change in electromagnetic brain activity. A well-established analytical choice is to characterize these changes in the frequency domain, such that the transition from one state to another corresponds to a difference in the strength of oscillatory power, often in pre-defined, theory-driven frequency bands of interest. Nonetheless, recent computational advances allow us to explore new ways to characterize electromagnetic brain activity and its changes. Here, we assembled a set of multiple transformations aiming to describe time series in a multidimensional feature space. On an MEG dataset with 29 human participants, we tested how the features extracted in this way described some of the state transitions known to elicit prominent changes in the frequency spectrum, such as eyes-closed versus eyes-open resting state or the occurrence of visual stimulation. We then compared the informativeness of multiple sets of features by submitting them to a multivariate classifier (SVM). We found that the new features outperformed traditional ones in generalizing state classification across participants. Moreover, some of these new features yielded systematically better decoding accuracy than the power in canonical frequency bands, which has often been considered a landmark in defining these state changes. Critically, we replicated these findings, after pre-registration, in an independent EEG dataset (N = 210). In conclusion, the present work highlights the importance of enriching our perspective on the characteristics of electromagnetic brain activity by considering other features of the signal on top of power in theory-driven frequency bands of interest.

Our everyday experience unfolds in an endless sequence of states: attentive, hungry, drowsy, startled. Each of these states coincides with characteristic behavioral outcomes as well as with a complex pattern of electrophysiological brain activity.

Research has long focused on the spectral features of these internal “brain states’’. Seminal EEG works showed that the transitions from rest to being engaged in a task (Berger, 1929), or from wakefulness to sleep or anesthesia (Amzica & Steriade, 1998; Loomis et al., 1935) are well described by prominent changes in canonical frequency bands, such as alpha (8–13 Hz) and delta (1–4 Hz). Since then, brain oscillations have proven to be ubiquitous in characterizing diverse brain states (Wang, 2010), showing that shifts in oscillatory power pace movement planning and execution (Pfurtscheller et al., 2006; Pfurtscheller & Lopes da Silva, 1999), precede perceptual decision-making outcomes (Balestrieri & Busch, 2022; Ergenoglu et al., 2004; Samaha et al., 2020), and play a critical role in the maintenance of items in working memory (Lundqvist et al., 2016; Miller et al., 2018). A recent work (Greene et al., 2023) offered a useful operational definition where brain states i) are a product of a specified cognitive or physiological state, ii) are characterized by a widely distributed pattern of activity or coupling, and iii) affect the future physiology and/or behavior of the organism. Given these premises, brain states sit at the nexus between physiology and behavior, and understanding their widely distributed pattern of activity is a goal both ambitious and crucial.

One way to tackle the complexity of brain states is to focus on single domains and cortical areas. A well-studied domain is the human visual system, where neural activity is shaped by the complex interplay of thalamo-cortical and cortico-cortical rhythms (da Silva et al., 1973; Womelsdorf et al., 2014) pacing the transition from resting states to active ones (Pfurtscheller et al., 1996). Rhythmic activity is also posited to orchestrate both the flow of bottom-up information and its top-down modulation (Jensen, 2023; Jensen & Mazaheri, 2010; Michalareas et al., 2016; van Kerkoerle et al., 2014). This is not the only perspective: recent accounts have put the importance of oscillations for visual information processing to the test (Roelfsema, 2023) and highlight the importance of other phenomena, such as aperiodic transients, in the feedforward flow of information (Vinck et al., 2023).

Such a broadening of perspective is also methodologically grounded. In fact, various studies have highlighted the limitations of concentrating exclusively on spectral parameters in time series analysis, particularly within specific frequency bands. The estimation of changes in oscillatory power is susceptible to confounds, including changes in aperiodic components (Donoghue et al., 2020) and the non-sinusoidal nature of oscillatory phenomena (Cole & Voytek, 2017). Moreover, brain oscillations show clear signs of non-stationarity over the course of a typical recording (Benwell et al., 2019), appear as isolated bursts whose interpretation can be distorted when averaging trials (Jones, 2016), or display a non-zero mean that induces baseline shifts (Studenova et al., 2022). Furthermore, some features of time series (such as entropy or complexity) are not directly represented in the power spectrum. All these issues highlight the importance of expanding our analytic repertoire for the study of brain states beyond the traditional approaches based on oscillations in canonical frequency bands.

The aim of the present study is, thus, to investigate brain state decoding based on time series features in comparison to the power spectrum. We chose a set of states known for their emblematic and robust oscillatory patterns in the human visual cortex: absence of visual stimulation (resting state with eyes closed), resting state with eyes open, and task-related activity using a visual stimulation known to elicit strong gamma oscillations (Hoogenboom et al., 2010). We capitalized on previous works (Fulcher & Jones, 2017; Gemein et al., 2020; Lubba et al., 2019) to compose a novel feature space defining brain states along dimensions such as signal variability, complexity, aperiodic components, intrinsic timescales, and entropy. We inferred the informativeness of these features in discriminating brain states by evaluating their classification accuracy. We further compared the classification accuracy of this aggregated set of features against the benchmark defined by power in canonical frequency bands and found a consistent advantage of the novel features. This result, corroborated by a pre-registered replication on an independent EEG dataset, encourages the exploration of new ways to characterize brain states.

2.1 Study structure

This study consists of a main experiment and a replication experiment. In the main experiment, we analyzed the MEG data of 29 participants (29 males, mean age = 24.9, SD = 5.3). These participants performed a battery of tasks, including 5 min of resting state with eyes closed (EC), 5 min of resting state with eyes open (EO), and a speeded detection task introduced by Hoogenboom et al. (2010; see section 2.2).

Further, we aimed at a pre-registered replication of the EC/EO discrimination. For this purpose, we relied on an existing open EEG dataset (Babayan et al., 2019), in which 210 participants alternated short sessions of EC and EO resting state. Since the data are publicly available, data handling and preprocessing were performed by a member of the Painlab Munich group. The main authors who performed the pre-registered analysis (Münster’s research group) did not have access to the preprocessed data until the deposit of the pre-registration (link to osf). Therefore, the authors responsible for the analysis had no contact with the data beyond the publicly available information.

2.2 Main experiment data collection

All 29 subjects provided written informed consent prior to their participation in the study. The study protocol was approved by the ethics committee of the Medical Faculty of the University of Münster, and the study was conducted according to the Declaration of Helsinki. Participants were recruited from a pool of subjects who had previously participated in MRI experiments at the University of Münster. They hence also provided consent for the use of their MRI scans to allow source reconstruction of the MEG data.

Brain magnetic fields were recorded in a magnetically shielded room via a 275-channel whole-head system (OMEGA, CTF Systems Inc, Port Coquitlam, Canada). Data were continuously acquired at a sampling rate of 1200 Hz. Subjects were seated upright, and their head position was comfortably stabilized inside the MEG dewar using pads.

The visual stimuli were projected onto the back of a semi-transparent screen positioned approximately 90 cm in front of the subjects’ nasion using an Optoma EP783S DLP projector with a refresh rate of 60 Hz. The viewing angle ranged from -1.15 to 1.15 in the vertical direction and from -3.86 to 3.86 in the horizontal direction. The subjects’ alertness and compliance were verified by video monitoring.

Participants underwent 5 min of resting state with eyes closed (EC), 5 min of resting state with eyes open (EO), and a speeded visual detection task. The task was the same as that used by Hoogenboom et al. (2006). In brief, each trial started with the presentation of a fixation point. After 500 ms, the fixation point contrast was reduced by 40%, which served as a warning. After another 1500 ms, the fixation point was replaced by a foveal, circular sine wave grating (diameter: 5°; spatial frequency: 2 cycles/deg; contrast: 100%). The sine wave grating contracted toward the fixation point (velocity: 1.6 deg/s), and this contraction accelerated (velocity step to 2.2 deg/s) at an unpredictable moment between 50 and 3000 ms after stimulus onset. The subjects’ task was to press a button with their right index finger within 500 ms of this acceleration. If the button press occurred within 500 ms after the acceleration, positive visual feedback was given to the participants during a 1000 ms resting period; otherwise, negative feedback was given. The stimulus was turned off after a response was given or, in catch trials, 3000 ms after stimulus onset. The paradigm consisted of two blocks of 300 trials in total. At the end of each block, feedback with the percentage of correct responses was provided. The total duration of the paradigm was 7 min.

2.3 MEG preprocessing and source reconstruction

MEG data were processed using the FieldTrip toolbox (Oostenveld et al., 2011) for MATLAB 2021a (The MathWorks, Inc.) and in-house MATLAB routines. Continuous data were filtered offline with a fourth-order forward-reverse zero-phase Butterworth high-pass filter with a cutoff frequency of 0.5 Hz and downsampled to 256 Hz for computational efficiency. Independent components arising from heartbeats and eye movements (mean number of rejected components per block M = 5; SD = 5.3) were visually identified and removed using the runica ICA algorithm.
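
The filtering and downsampling step can be illustrated with a minimal Python/SciPy sketch (the authors used FieldTrip in MATLAB; the array shapes and synthetic data below are purely illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

fs = 1200                                   # original sampling rate (Hz)
rng = np.random.default_rng(0)
raw = rng.standard_normal((275, fs * 10))   # hypothetical 10 s of 275-channel MEG

# Fourth-order Butterworth high-pass at 0.5 Hz, applied forward and backward (zero phase)
b, a = butter(4, 0.5, btype="highpass", fs=fs)
filtered = filtfilt(b, a, raw, axis=-1)

# Downsample from 1200 Hz to 256 Hz (256/1200 = 16/75)
downsampled = resample_poly(filtered, up=16, down=75, axis=-1)
```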

We used individual T1-weighted MRIs to extract participants’ head model information. First, MRI images were coregistered with the MEG coordinate system via digitized markers in participants’ ear molds using the FieldTrip toolbox (Oostenveld et al., 2011). Subsequently, each subject’s cortical surface was reconstructed as a cortical sheet through the FreeSurfer analysis pipeline (recon-all command). Then, cortical sheets were further preprocessed with the Connectome Workbench wb_command (v1.1.1). This resulted in 32492 vertices (neural sources) per hemisphere. To further compute the forward projection matrices (leadfields), we generated a single-shell volume conduction model (Nolte, 2003). After computing the leadfields, source activity was estimated on the basis of linearly constrained minimum variance (LCMV) beamformer coefficients for each vertex of the cortical sheet (Van Veen et al., 1997). For this, the sensor covariance was computed across all trials, the lambda regularization parameter was set to 5%, and for each voxel we extracted one time series per dipole orientation. We further parcellated the cortical surface according to the HCP-MMP1 atlas, which includes 180 parcels per hemisphere (Glasser et al., 2016). To reduce dimensionality, within each parcel the source time series were concatenated across voxels and orientations and then submitted to a PCA (Chalas et al., 2022). In each parcel, we kept only the first principal component. We further constrained our analysis exclusively to a subset of 52 parcels marked as visual in the parcellation (see Supplementary Table S1).
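
As an illustration, the per-parcel dimensionality reduction can be sketched as follows (Python/scikit-learn; the number of vertices and samples below are hypothetical):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical parcel: 40 vertices x 3 dipole orientations, 1200 time samples
rng = np.random.default_rng(0)
source_ts = rng.standard_normal((40 * 3, 1200))

# Treat each vertex/orientation time series as one variable and keep the first
# principal component as the representative time series of the parcel
parcel_ts = PCA(n_components=1).fit_transform(source_ts.T).ravel()   # shape (1200,)
```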

In order to prepare the data for classification, we defined 1 s data segments in each condition (EC, EO, BSL, and VS). For the EC and EO conditions, we simply segmented the 5 min of resting state of each condition. The BSL segments consisted of the 1 s prestimulus window immediately preceding the visual stimulation in each trial, whereas for the VS segments we selected the time window from 1 to 2 s post-stimulus.

2.4 EEG preprocessing

The EEG data were preprocessed using DISCOVER-EEG, an automated preprocessing and feature extraction pipeline, to favor the objectivity of the subsequent results (Gil Ávila et al., 2023). Briefly, the publicly available dataset consisted of data from 215 subjects (Babayan et al., 2019). Two resting-state conditions were acquired in a single session for each participant: eyes open and eyes closed resting state. Eight minutes of each condition were acquired in blocks of 1 min, interleaving eyes open and eyes closed resting states. Out of the 215 recordings, one participant had a corrupted marker file, one participant had an empty marker file making it impossible to separate eyes open and eyes closed conditions, two participants had truncated data, and one participant had a marker file that did not code for the eyes closed condition, as disclosed in the original publication. In total, 210 recordings of the eyes closed condition and 211 recordings of the eyes open condition were available for preprocessing. Before preprocessing, the eight blocks of each condition were extracted and concatenated. Preprocessing was performed separately for eyes open and eyes closed files, keeping the same parameters for both conditions. The one subject missing the eyes closed condition was not included in the further analyses. Since we were interested in state changes occurring most prominently in visual cortices, we constrained our analysis to a set of 28 posterior channels: 'TP7', 'CP5', 'CP3', 'CP1', 'CP2', 'CP4', 'CP6', 'TP8', 'P7', 'P5', 'P3', 'P1', 'P2', 'P4', 'P6', 'P8', 'PO9', 'PO7', 'PO3', 'PO4', 'PO8', 'PO10', 'O1', 'O2', 'CPz', 'Pz', 'POz', 'Oz'.

2.5 Features selection and sets

All features were computed in MATLAB 2022b (The MathWorks, Inc.). The set of features named “TimeFeats” constitutes a novel set that merges many time series descriptors traditionally used in M-EEG analysis with descriptors so far unexplored in the M-EEG context, collapsed for the first time into a unitary set. We tried to keep the list as comprehensive as possible, while capping the number of features to keep computation tractable. A detailed description of all the features used can be found in Supplementary Table S2. They included the first four moments of probability distributions, measures of entropy (Waschke et al., 2019, 2021), and features previously used in M-EEG, such as the Hurst exponent, Hjorth parameters, and zero crossings (Gemein et al., 2020). Furthermore, we also included a set of general-purpose features called catch22 (Lubba et al., 2019), tailored to account for approximately 90% of the variance explained by the massive number of features in its parent set hctsa (Fulcher & Jones, 2017). This set includes descriptors of time series of various natures, including two features related to the power spectrum of the signal: the total power in the lowest fifth of frequencies in the Fourier power spectrum and its centroid. The set also included aperiodic components of the power spectrum, namely the 1/f slope and offset. Since the separation between periodic and aperiodic components can be problematic in data not averaged across repetitions, especially when the data segments are short (1 s) as in our case, we adopted the following procedure. In each parcel, peaks were found on the average spectrum across all repetitions using the FOOOF algorithm (Donoghue et al., 2020). The frequency ranges containing peaks, defined by the central peak frequency and bandwidth obtained from the FOOOF algorithm, were erased from the spectra of every repetition (1 s segments). Aperiodic estimates of the 1/f slope and offset were obtained by fitting regression lines on the log-log spectra, from 1 to 100 Hz, excluding the frequencies previously selected by FOOOF and their corresponding power values. A comparison of this method with the original FOOOF algorithm is provided in the Supplementary Information.
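
The peak-exclusion procedure for estimating the aperiodic slope and offset can be sketched in Python as follows. This is a minimal illustration only: the authors worked in MATLAB, and the exact definition of the excluded range as well as the synthetic spectra below are assumptions.

```python
import numpy as np
from fooof import FOOOF  # Python implementation of the FOOOF algorithm

def aperiodic_fit(freqs, avg_spectrum, seg_spectrum):
    """Fit 1/f slope and offset of one segment after erasing peak ranges found on the average spectrum."""
    fm = FOOOF(verbose=False)
    fm.fit(freqs, avg_spectrum, freq_range=[1, 100])
    keep = np.ones_like(freqs, dtype=bool)
    for cf, _, bw in fm.peak_params_:                 # central frequency, power, bandwidth
        keep &= ~((freqs >= cf - bw) & (freqs <= cf + bw))
    slope, offset = np.polyfit(np.log10(freqs[keep]), np.log10(seg_spectrum[keep]), deg=1)
    return slope, offset

# Toy example: a 1/f-like average spectrum and one noisier single-segment spectrum
freqs = np.arange(1.0, 101.0)
avg_spectrum = 1.0 / freqs
seg_spectrum = avg_spectrum * np.random.default_rng(0).uniform(0.5, 1.5, freqs.size)
print(aperiodic_fit(freqs, avg_spectrum, seg_spectrum))
```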

To have a benchmark of classification accuracy based uniquely on spectral power, we created two other sets of features. The first, hereafter “FreqBands”, contained power estimates averaged in the following frequency bands: delta (1–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz), low gamma (30–45 Hz), and high gamma (55–100 Hz). The second, named “fullFFT”, contained the full FFT spectrum from 1 to 100 Hz, at a frequency resolution of 1 Hz. Power spectra were computed in FieldTrip with the “mtmfft” method and Hanning tapering. The same features were computed on both the MEG and EEG datasets.
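
For illustration, band-averaged power from a Hann-tapered FFT of a 1 s segment can be computed with plain NumPy roughly as follows (the authors used FieldTrip’s “mtmfft”; the white-noise segment is only a stand-in):

```python
import numpy as np

fs = 256  # sampling rate after downsampling (Hz); a 1 s segment gives 1 Hz resolution
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "low_gamma": (30, 45), "high_gamma": (55, 100)}

def band_power(segment, fs, bands):
    """Average Hann-tapered FFT power within each frequency band."""
    tapered = segment * np.hanning(len(segment))
    freqs = np.fft.rfftfreq(len(segment), d=1 / fs)
    power = np.abs(np.fft.rfft(tapered)) ** 2
    return {name: power[(freqs >= lo) & (freqs <= hi)].mean()
            for name, (lo, hi) in bands.items()}

segment = np.random.default_rng(1).standard_normal(fs)  # hypothetical 1 s parcel segment
print(band_power(segment, fs, bands))
```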

The number of features differed between sets. The TimeFeats set contained 41 features, FreqBands six, and fullFFT one hundred, in all cases computed for each of the cortical parcels examined.

For the analyses comparing the classification accuracy of the different feature sets, TimeFeats were computed on the broadband signal (Figs. 1, 3, and 4). To get a better overview of the impact of specific features from TimeFeats in different frequency bands, we also computed every feature of TimeFeats on the signal band-passed in each frequency band (Fig. 2).

Fig. 1.

(A) Within-subject classification accuracy, pooled for all subjects. (B) Between-subjects classification accuracy. All subjects left out in every cross-validation fold are pooled together. (C) Across-subjects classification accuracy. Each repetition is one of the 10 folds of the cross-validation, containing data from all subjects pooled together. Feature counts refer to each of the parcels considered, before PCA (see section 2).

Fig. 2.

Across-subjects classification accuracy for each feature, both computed on the broadband signal and separately in each frequency band. The first 41 columns correspond to single features from the TimeFeats set. The column “power” includes power in each frequency band, and in all these frequency bands combined (broadband). The last column (“full_set”) includes all 41 time features combined, either computed on the band-passed signal within the frequency bands of interest or over the whole signal (broadband). Accuracy refers to the four-class classification: EC/EO/BSL/VS.


2.6 Classification and clustering analyses

We compared the overall informativeness of every feature set and of each single feature by computing classification accuracy, either for two classes (EC-EO, BSL-VS) or for all of them combined in a four-class (EC-EO-BSL-VS) classification task, handled with the one-vs-one scheme used by default in the Support Vector Classifier from sklearn. We chose to evaluate three different cross-validation approaches to define the training and test sets, all of which are widely used in cognitive and systems neuroscience research. They were defined as follows:

“within subject”: one classifier is cross-validated for each subject; population accuracy is the average accuracy over the left-out folds of the cross-validation. The folds were defined over contiguous segments of data in order to minimize effects of leakage from filters over short epochs.

“between subjects”: one classifier is trained on the concatenated data from all subjects minus one, and tested on the left-out participant (leave-one-out; see the sketch after this list). This means that none of the subjects in the testing set is represented in the training set. The procedure is repeated iteratively for all subjects, and accuracy is computed on all the left-out subjects. In the replication on the independent EEG dataset, given the large number of subjects and in order to cap computation time, we adopted a 5-fold partitioning approach instead of leave-one-out (thus diverging from our pre-registration).

“across subjects”: 90% of the data from each participant were concatenated across all participants to form the training set, whereas the remaining 10% from each participant were concatenated to compose the testing set. This means that the subjects in the testing set are also represented in the training set. The procedure was repeated for each fold, for 10 folds in total, in the analyses on the main dataset focused on the statistical comparisons between different feature sets. As in the within-subjects procedure, the folds were defined over contiguous segments of data in order to minimize effects of leakage from filters over short epochs. For the analyses providing descriptions of time features in different frequency bands (Fig. 2), evaluating the correlation and clustering between time features (Fig. 3), and replicating the result on the EEG dataset (Fig. 4), classification accuracy was computed only in one fold, with an 85%–15% split into train and test sets, chosen uniformly from the epochs of each participant. This procedure was chosen to limit computation time and ease the interpretability of the results.
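
As a minimal illustration of the between-subjects partition, a leave-one-subject-out loop can be written with scikit-learn as follows (synthetic data and feature dimensions are purely hypothetical; the within- and across-subjects schemes instead use contiguous folds within each subject):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score

# Hypothetical data: 5 subjects x 60 one-second segments, 10 features each
rng = np.random.default_rng(0)
X = rng.standard_normal((5 * 60, 10))
y = rng.integers(0, 2, size=5 * 60)          # e.g., EC vs. EO labels
subjects = np.repeat(np.arange(5), 60)       # subject identity of each segment

accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    accuracies.append(balanced_accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print(np.mean(accuracies))
```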

Regardless of the partition scheme, for each feature we created a matrix of N observations × M parcels. When considering a unitary set of features (e.g., TimeFeats), all the features were concatenated along the column dimension to generate a matrix of N observations × (M features × parcels). Both the single features and the feature sets underwent a standard preprocessing in Python sklearn to standardize values and facilitate the classifier’s convergence. Specifically, missing values along each column were first substituted with the column mean. Then, the values of each column were centered around the median and rescaled according to the interquartile range (RobustScaler in sklearn). Data were further squeezed to a [-1, 1] interval by applying the hyperbolic tangent function to further limit the effect of outliers. All standardization and imputation operations were performed on the training set, and the parameters obtained were then used to transform the test set in order to avoid leakage from the training to the test set. Finally, we applied a PCA aimed at reducing dimensionality before the SVM, selecting the number of components accounting for 90% of the total variance. The PCA was applied both when single features and when the whole set of features were submitted to classifiers. Nonetheless, in the case of feature sets, the procedure differed slightly between the within- and across-subjects schemes. For the latter, one PCA was run over sources, on each feature, before merging the features together. This means that all the features were submitted to the classifier, although each with a varying dimensionality. For the within-subjects condition, we observed that the best overall performance was obtained by applying a PCA over all features and sources. This approach, by mixing all features together, does not allow us to state exactly how many frequency bands/bins were actually represented in each single subject. An analogous approach was applied in the between-subjects scheme, since in that case the classification focused uniquely on the aggregated set of features. This PCA step, standard in many classification pipelines, is not to be confused with the previous PCA applied on the aggregated vertices and used to select a time series that best represented brain activity in each parcel (see section 2.3).

For the SVM, the default Gaussian kernel and C = 10 were chosen for all analyses, except where stated differently: in some cases, a linear kernel was chosen to reduce computation time. Classification accuracy was always computed as balanced accuracy (balanced_accuracy_score in sklearn) to account for potential label imbalance in the sample.
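
A simplified end-to-end version of this preprocessing and classification chain could look as follows in scikit-learn (a sketch only: it omits the per-feature PCA variant described above, and the synthetic data stand in for the real feature matrices):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import RobustScaler, FunctionTransformer
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
X_train, y_train = rng.standard_normal((400, 120)), rng.integers(0, 2, 400)
X_test, y_test = rng.standard_normal((100, 120)), rng.integers(0, 2, 100)

clf = make_pipeline(
    SimpleImputer(strategy="mean"),      # replace missing values with the column mean
    RobustScaler(),                      # center on the median, scale by the interquartile range
    FunctionTransformer(np.tanh),        # squeeze values to [-1, 1] to limit outliers
    PCA(n_components=0.9),               # keep components explaining 90% of the variance
    SVC(kernel="rbf", C=10),             # Gaussian kernel, C = 10
)
clf.fit(X_train, y_train)
print(balanced_accuracy_score(y_test, clf.predict(X_test)))
```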

In order to evaluate how the single features clustered together, we computed the distance between pairs of features as 1 − |r|, where r is the Pearson correlation coefficient. Hierarchical clustering was performed with complete linkage in scipy. This analysis was performed on the data from the 52 parcels concatenated across subjects.
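
In SciPy, this correlation-based clustering can be sketched as follows (the random matrix stands in for the real observations × features data):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
F = rng.standard_normal((5000, 41))      # hypothetical observations x features (41 TimeFeats)

r = np.corrcoef(F, rowvar=False)         # feature-by-feature Pearson correlations
dist = 1 - np.abs(r)                     # distance defined as 1 - |r|
np.fill_diagonal(dist, 0)                # remove numerical noise on the diagonal
Z = linkage(squareform(dist, checks=False), method="complete")   # complete-linkage clustering
```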

2.7 Statistical analyses

All inferential statistical analyses, including repeated-measures ANOVAs and t-tests, were performed in Python with pingouin (v0.5.3).
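
For example, a two-way repeated-measures ANOVA of this kind can be run in pingouin roughly as follows (the long-format table with columns "subject", "feature_set", "contrast", and "accuracy" is a hypothetical stand-in for the real accuracy values):

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical accuracies: 29 subjects x 3 feature sets x 2 classification contrasts
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(29), 6),
    "feature_set": np.tile(np.repeat(["TimeFeats", "FreqBands", "fullFFT"], 2), 29),
    "contrast": np.tile(["EC/EO", "BSL/VS"], 29 * 3),
    "accuracy": rng.uniform(0.7, 1.0, 29 * 6),
})

aov = pg.rm_anova(data=df, dv="accuracy", within=["feature_set", "contrast"], subject="subject")
print(aov)
```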

3.1 Benchmark of novel features’ accuracy

In the MEG dataset of 29 participants (see section 2.2), we evaluated the informativeness of the novel features (TimeFeats) by comparing their classification accuracy with the benchmarks offered either by the power in canonical frequency bands (FreqBands) or by the full spectral power (fullFFT), all of them based on source-localized MEG data. For both the classification between eyes closed versus eyes open (EC/EO) and baseline versus visual stimulation (BSL/VS), we adopted different partition schemes. In the “within” scheme, cross-validation is performed within each subject, yielding one classification accuracy value per subject, averaged over all cross-validation folds. In the “between” scheme, the data from all subjects minus one are concatenated together and the test accuracy is computed on the left-out subject. This procedure is repeated for all subjects, yielding one accuracy value for each. In the “across” scheme, 90% of the data from each subject is concatenated into the training set, and 10% into the test set. The procedure was repeated for 10 folds, making it possible to perform inferential statistics on the classification accuracies of the different feature sets. Results are shown in Figure 1.

Fig. 3.

Distance between features computed on broadband signal, represented as correlation matrix in the center. On the left, the dendrogram showing the hierarchical clustering of the features based on the distance. On top of the graph, the across-subjects classification accuracy for the four classes EC/EO/BSL/VS.

Fig. 4.

(A) Within-subject classification accuracy, pooled for all subjects. Every point corresponds to a subject. (B) Between-subjects classification accuracy. All subjects left out in each one of the five cross-validation folds are pooled together. (C) Across-subjects classification accuracy. Since all subjects are pooled in one train/test set, the analysis yields only one accuracy estimate per Feature set.


In all conditions, we obtained a classification accuracy strongly above chance, and often above .9. The highest accuracy overall was achieved in the “within subject” scheme (Fig. 1A), where TimeFeats scored on average .971 in EC/EO and .943 in BSL/VS. A repeated-measures ANOVA showed a significant effect of feature set (F(2, 56) = 58.565, p < .001, η2 = .209) and a significant interaction between feature set and classification type (F(2, 56) = 43.013, p < .001, η2 = .597). Nonetheless, the difference between the two top sets of features (FreqBands and TimeFeats) did not reach significance (EC/EO: t(28) = -.212, p = .905, d = -.010; BSL/VS: t(28) = -1.555, p = .157, d = -.16), likely because of the ceiling effect in accuracy in this partition scheme.

The “between subjects” scheme (Fig. 1B) yielded expectedly lower accuracy, although still strongly above chance, offering a proof of concept that all the features examined here also generalize to different subjects. Here too, the highest accuracy was reached by TimeFeats (EC/EO = .783; BSL/VS = .948), and a repeated-measures ANOVA disclosed a significant effect of feature set (F(2, 56) = 54.535, p < .001, η2 = .04) as well as an effect of classification type (F(2, 28) = 31.715, p < .001, η2 = .332). This last result is of interest, suggesting that overall the features generalize better between participants during a task than during resting state.

The second-best feature set in this case is fullFFT (EC/EO = .756; BSL/VS = .931). This set is likely favored over FreqBands due to its higher number of features, which the classifier can leverage better given the large number of observations coming from concatenating multiple subjects together, an advantage absent in the “within subject” classification scheme. The pairwise comparison of accuracy between fullFFT and TimeFeats still favored the latter (EC/EO: t(28) = -3.187, p = .004, d = .292; BSL/VS: t(28) = -2.912, p = .007, d = .171).

The last partition scheme, “across subjects”, consolidates the advantage of TimeFeats over the other two feature sets, in particular against fullFFT, the second-highest performer. Compared to the latter, in fact, TimeFeats yields significantly higher accuracy in both EC/EO (t(9) = -12.018, p < .001, d = 3.531) and BSL/VS (t(9) = -7.782, p < .001, d = 1.471).

3.2 Comparison of single features

Our analysis compared features grouped in sets, showing that the novel features computed in the time domain perform better than those based on spectral power in canonical frequency bands. Nonetheless, each of these sets pools together a great variety of features, whose specific contributions to the discrimination between brain states remain to be unveiled.

For this reason, we further evaluated how informative every single feature is, in every frequency band, for discriminating between all four states (EC/EO/BSL/VS) together (Fig. 2). This was achieved by computing classification accuracy for the four different states through a multiclass SVM. To reduce computation time, classification accuracy was computed only “across subjects”.

Once again, the full set of TimeFeats resulted in the best classification accuracy overall (Fig. 2: “full set” & “broadband” = .924). Interestingly, the second-best performer was a single feature named Mean Curve Length (MCL), defined as the average absolute value of the first derivative of the signal, which with a classification accuracy of .873 also exceeded the power in all frequency bands (.826).

The most informative frequency band for the classification is high gamma, a result to be expected given the states considered: both the transition from eyes closed to eyes open and the visual stimulation used in this experiment are characterized by an increase in broadband gamma power. What is intriguing is that, within the high gamma band, the classification accuracy scored by power (.764) was substantially lower than that of several other features describing the distribution of the signal around its mean, with standard deviation (std) as their best-known representative at an accuracy of .863.

We further sought the common organizational principles underlying the novel feature space by evaluating how the most relevant features clustered together (Fig. 3).

By combining the feature informativeness with the clusters formed on the basis of the distance between pairs of features, we highlight three central subsets of features. The first (highlighted in blue) pools together the features describing the shape and width of the statistical distribution of the data, such as std.

A second cluster, highlighted in purple in the figure, includes the median and the mean, describing the signal’s baseline shifts. This is likely due to slow drifts in the signal, which would make inter-block differences between EC and EO relevant.

Intriguingly, the best performing feature in the set (MCL, highlighted in saffron) is only weakly correlated with other features.

3.3 Replication study results

To corroborate the importance of the novel features gathered in the main study, we ran a pre-registered replication of the main results on an independent EEG dataset with 210 participants, for the classification of EC/EO (see section 2). Results are shown in Figure 4.

Although even in this dataset we observed a marked ceiling effect in accuracy for the “within” subjects classification, we report a significant effect of feature set (F(2, 418) = 42.959, p < .001, η2 = .02), with TimeFeats performing best by a close yet significant margin (TimeFeats = .995; FreqBands & fullFFT = .993; TimeFeats vs FreqBands: t(209) = -6.982, p < .001, d = .27). These results were replicated, even more substantially, in the between-subjects condition, where the lack of a ceiling effect allows a better quantification of the accuracy gain (TimeFeats = .804; FreqBands = .782; fullFFT = .783; TimeFeats vs fullFFT: t(4) = -6.48, p = .003, d = 1.597), and in the across-subjects condition as well (TimeFeats = .822; FreqBands = .797; fullFFT = .784). All in all, we were able to confirm the relevance of our novel set of features in two independent datasets from different recording modalities.

In the present work, we characterized states of cortical activity, such as the presence or absence of visual stimulation, by means of a novel multidimensional feature space. By computing the states’ classification accuracy, we found that some of these features, largely neglected in canonical analyses of brain time series, are highly informative for setting brain states apart. Critically, their contribution often exceeds the benchmark defined by classic features such as power changes in canonical frequency bands of interest. Thus, our study paves the way for brain state characterization beyond spectral parameterization.

Particularly informative is the group of features that describes the distribution of the signal around its mean, with the standard deviation (std) as its best representative. While previous works have already highlighted the importance of distributional properties in time series classification (Henderson et al., 2023; Shafiei et al., 2020), the comparison between std and total spectral power is of particular interest here: on the one hand, there is a direct relationship between the two, since the variance of a time series with zero mean is the mean of the squared values of the time series, which is the same as the total power obtained by summing across the FFT power excluding the DC component. On the other hand, especially in some frequency bands, like gamma, std is more informative than power itself. This could be a result of the different algorithms for computing power versus std in frequency bands, with the former relying on the FFT and the latter band-passing the signal and then computing its variance: the FFT estimate of power in frequency bands necessarily contains information (such as aperiodic components of the spectrum) that is perhaps not informative for the current classification.
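
This variance/total-power correspondence (Parseval's theorem) can be verified numerically with a short NumPy check (an illustrative aside, not part of the original analyses):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
x -= x.mean()                                        # enforce zero mean so the DC bin vanishes

var_time = np.mean(x ** 2)                           # variance of a zero-mean signal
X = np.fft.fft(x)
var_freq = np.sum(np.abs(X[1:]) ** 2) / len(x) ** 2  # normalized FFT power, DC bin excluded

print(np.allclose(var_time, var_freq))               # True: the two quantities coincide
```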

In this regard, it is of particular interest that, in our dataset, aperiodic components such as the 1/f exponent or offset do not seem particularly informative. This seems in contrast with the established relationship between the 1/f exponent and different cortical excitability regimes (Gao et al., 2017), which should have differentiated between the brain states examined. One likely explanation for the poor informativeness of aperiodic components, as well as of slower rhythmic components, is the short duration (1 s) of the time windows considered here. Previous works focusing on resting-state data showed that stable power estimates require segments between 30 and 120 s (Wiesman et al., 2022), and such an estimate was confirmed also for several of the novel features computed here (Shafiei et al., 2023). This makes the high classification accuracies obtained in the present study even more remarkable, especially when the novel features are considered.

Some of these features, such as Mean Curve Length (MCL), might have been particularly informative compared to others because of the short time window considered. In fact, MCL is computed by taking the average absolute value of the first derivative of the signal. It therefore quantifies the overall absolute change or variability in the signal, irrespective of the sign of change (increase or decrease). A second interesting interpretation of MCL stems from the fact that, since the process of first-order differentiation removes a large part of the 1/f component of the signal (Demanuele et al., 2007), MCL can be seen as a time-domain approximation of true oscillatory amplitude. MCL has also recently been associated with complexity metrics such as fractal dimension (Yahyaei & Esat Özkurt, 2022), although in the current work it outperformed several measures of entropy and complexity.
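
Concretely, MCL reduces to a one-line computation on a discrete signal (a minimal sketch; the 1 s segment below is synthetic):

```python
import numpy as np

def mean_curve_length(x):
    """Mean Curve Length: average absolute value of the first difference of the signal."""
    return np.mean(np.abs(np.diff(x)))

segment = np.random.default_rng(0).standard_normal(256)   # hypothetical 1 s segment at 256 Hz
print(mean_curve_length(segment))
```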

One might still argue that these new features do not provide adequate insights into the underlying neurophysiology. In fact, while we can capitalize on decades of research on biophysical models capable of, at least in part, explaining the mechanisms underlying brain oscillations (Abeysuriya et al., 2018; Wang, 2010; Womelsdorf et al., 2014), the study of non-periodic features of electromagnetic brain activity is still in its infancy. Nonetheless, it is rapidly growing and constantly providing novel insights into the biophysical meaning of these features. A recent work (Medel et al., 2023) complemented the association between 1/f slope and cortical excitability (Gao et al., 2017) by showing an inverse, monotonic relationship between 1/f and a measure of signal complexity. Likewise, the importance of variability in the neural signal in determining and predicting behavior is increasingly acknowledged (Waschke et al., 2021, for a review), with multiple links between metrics like variability or complexity and arousal being disclosed (Waschke et al., 2019). Furthermore, by using a similar multidimensional feature approach, Shafiei et al. (2023) recently described a link between the variation of neurophysiological time dynamics and multiple micro-architectural, structural features.

Moreover, these novel features can increase our understanding of brain oscillations as well. In fact, it is plausible that the list of best performing features could change according to the length of the time window on which the features are computed, leading to crucial insights on the characteristics of brain oscillations at multiple timescales. This is also a central aspect of brain states (Greene et al., 2023), since the behavioral readout of brain states exists at different timescales, from milliseconds to days. In this perspective, a richer characterization of brain time series acquires even more importance.

By providing insight into brain states, the novel features discussed here can find applications in a variety of contexts. A successful example is the algorithm for sleep stage classification by Vallat and Walker (2021), which yielded improved accuracy by merging estimates of power in different frequency bands with several features included in the present work, from standard descriptive statistics to nonlinear features such as fractal dimension, permutation entropy, and the Hjorth parameters of mobility and complexity. This means that novel features can find their relevance directly in clinical practice. Furthermore, the computational simplicity of some of these features, such as MCL, makes them particularly suitable for applications where feature extraction has to happen in real time, such as Brain Computer Interfaces (BCIs).

We are not suggesting that the high informativeness of our time series features necessarily generalizes to the classification of other brain states or outperforms spectral features in general. But we believe that our study illustrates the importance of considering time series features (in addition to brain rhythms) when analyzing neural activity and its changes with brain state: First, single time series features can outperform spectral power across the entire frequency range. Second, time series features can capture aspects of neural activity that are not represented in the power spectrum. Third, time series features can provide a more parsimonious, consistent, or robust representation of diagnostic information in neural activity compared to the power spectrum. Fourth, features showing high performance in a given context can point to underlying neural mechanisms that can be tested in computational models or dedicated experimental studies.

Future developments should take into account coupling between sensors or areas, which is another defining characteristic of brain states (Greene et al., 2023). This would allow us, on the one hand, to consider the information contained in the phase of oscillations. On the other hand, it would also allow us to leverage recent work exploring novel methods to characterize pairwise interactions (Cliff et al., 2023), extending our repertoire for understanding brain states even further.

In conclusion, we advocate for the exploration of novel ways to describe electromagnetic brain activity. Some of these new features are more informative for distinguishing between different brain states than canonical ones based on oscillatory power in predefined frequency bands. These results suggest that a multivariate, data-driven, and assumption-free approach to brain time series analysis is a powerful way to study brain states and their transitions.

The pre-registration for the replication experiment can be found at OSF.

Code is available at https://github.com/ElioBalestrieri/MV-eye

Data are available at https://osf.io/mfz3p

Conceptualization: J.G., E.B. Methodology: E.B., J.G., and N.C. Validation: E.B., J.G., C.G.Á., and M.P. Formal Analysis: E.B., J.G. Investigation: N.C. Data Curation: N.C., C.G.Á. Writing—Original Draft: E.B., J.G. Writing—Review & Editing: E.B., N.C., C.S., J.F., C.G.Á., U.D., M.P., and J.G. Visualization: E.B. Supervision: J.G. Project Administration: J.G. Funding Acquisition: J.G.

The authors declare no competing interests.

This work was supported by the Deutsche Forschungsgemeinschaft (DFG), GR 2024/11-1 (J.G.). We acknowledge support from the Open Access Publication Fund of the University of Münster.

Supplementary material for this article is available with the online version here: https://doi.org/10.1162/imag_a_00499.

Abeysuriya, R. G., Hadida, J., Sotiropoulos, S. N., Jbabdi, S., Becker, R., Hunt, B. A. E., Brookes, M. J., & Woolrich, M. W. (2018). A biophysical model of dynamic balancing of excitation and inhibition in fast oscillatory large-scale networks. PLoS Computational Biology, 14(2), e1006007. https://doi.org/10.1371/journal.pcbi.1006007
Amzica, F., & Steriade, M. (1998). Electrophysiological correlates of sleep delta waves. Electroencephalography and Clinical Neurophysiology, 107(2), 69–83. https://doi.org/10.1016/s0013-4694(98)00051-0
Babayan, A., Erbey, M., Kumral, D., Reinelt, J. D., Reiter, A. M. F., Röbbig, J., Schaare, H. L., Uhlig, M., Anwander, A., Bazin, P. L., Horstmann, A., Lampe, L., Nikulin, V. V., Okon-Singer, H., Preusser, S., Pampel, A., Rohr, C. S., Sacher, J., … Villringer, A. (2019). A mind-brain-body dataset of MRI, EEG, cognition, emotion, and peripheral physiology in young and old adults. Scientific Data, 6, 180308. https://doi.org/10.1038/sdata.2018.308
Balestrieri, E., & Busch, N. A. (2022). Spontaneous alpha-band oscillations bias subjective contrast perception. The Journal of Neuroscience, 42(25), 5058–5069. https://doi.org/10.1523/jneurosci.1972-21.2022
Benwell, C. S. Y., London, R. E., Tagliabue, C. F., Veniero, D., Gross, J., Keitel, C., & Thut, G. (2019). Frequency and power of human alpha oscillations drift systematically with time-on-task. Neuroimage, 192, 101–114. https://doi.org/10.1016/j.neuroimage.2019.02.067
Berger, H. (1929). Über das elektroenkephalogramm des menschen. Archiv für Psychiatrie und Nervenkrankheiten, 87(1), 527–570. https://doi.org/10.1007/bf01797193
Chalas, N., Daube, C., Kluger, D. S., Abbasi, O., Nitsch, R., & Gross, J. (2022). Multivariate analysis of speech envelope tracking reveals coupling beyond auditory cortex. NeuroImage, 258, 119395. https://doi.org/10.1016/j.neuroimage.2022.119395
Cliff, O. M., Bryant, A. G., Lizier, J. T., Tsuchiya, N., & Fulcher, B. D. (2023). Unifying pairwise interactions in complex dynamics. Nature Computational Science, 3(10), 883–893. https://doi.org/10.1038/s43588-023-00519-x
Cole, S. R., & Voytek, B. (2017). Brain oscillations and the importance of waveform shape. Trends in Cognitive Sciences, 21(2), 137–149. https://doi.org/10.1016/j.tics.2016.12.008
Demanuele, C., James, C. J., & Sonuga-Barke, E. J. (2007). Distinguishing low frequency oscillations within the 1/f spectral behaviour of electromagnetic brain signals. Behavioral and Brain Functions, 3, 62. https://doi.org/10.1186/1744-9081-3-62
Donoghue, T., Haller, M., Peterson, E. J., Varma, P., Sebastian, P., Gao, R., Noto, T., Lara, A. H., Wallis, J. D., Shestyuk, A., & Voytek, B. (2020). Parameterizing neural power spectra into periodic and aperiodic components. Nature Neuroscience, 23(12), 1655–1665. https://doi.org/10.1038/s41593-020-00744-x
Ergenoglu, T., Demiralp, T., Bayraktaroglu, Z., Ergen, M., Beydagi, H., & Uresin, Y. (2004). Alpha rhythm of the EEG modulates visual detection performance in humans. Brain Research. Cognitive Brain Research, 20(3), 376–383. https://doi.org/10.1016/j.cogbrainres.2004.03.009
Fulcher, B. D., & Jones, N. S. (2017). hctsa: A computational framework for automated time-series phenotyping using massive feature extraction. Cell Systems, 5(5), 527–531.e3. https://doi.org/10.1016/j.cels.2017.10.001
Gao, R., Peterson, E. J., & Voytek, B. (2017). Inferring synaptic excitation/inhibition balance from field potentials. Neuroimage, 158, 70–78. https://doi.org/10.1016/j.neuroimage.2017.06.078
Gemein, L. A. W., Schirrmeister, R. T., Chrabąszcz, P., Wilson, D., Boedecker, J., Schulze-Bonhage, A., Hutter, F., & Ball, T. (2020). Machine-learning-based diagnostics of EEG pathology. Neuroimage, 220, 117021. https://doi.org/10.1016/j.neuroimage.2020.117021
Gil Ávila, C., Bott, F. S., Tiemann, L., Hohn, V. D., May, E. S., Nickel, M. M., Zebhauser, P. T., Gross, J., & Ploner, M. (2023). DISCOVER-EEG: An open, fully automated EEG pipeline for biomarker discovery in clinical neuroscience. Scientific Data, 10(1), 613. https://doi.org/10.1038/s41597-023-02525-0
Glasser, M. F., Coalson, T. S., Robinson, E. C., Hacker, C. D., Harwell, J., Yacoub, E., Ugurbil, K., Andersson, J., Beckmann, C. F., Jenkinson, M., Smith, S. M., & Van Essen, D. C. (2016). A multi-modal parcellation of human cerebral cortex. Nature, 536(7615), 171–178. https://doi.org/10.1038/nature18933
Greene, A. S., Horien, C., Barson, D., Scheinost, D., & Constable, R. T. (2023). Why is everyone talking about brain state? Trends in Neurosciences, 46(7), 508–524. https://doi.org/10.1016/j.tins.2023.04.001
Henderson, T., Bryant, A. G., & Fulcher, B. D. (2023). Never a dull moment: Distributional properties as a baseline for time-series classification. arXiv. https://doi.org/10.48550/arXiv.2303.17809
Hoogenboom, N., Schoffelen, J.-M., Oostenveld, R., & Fries, P. (2010). Visually induced gamma-band activity predicts speed of change detection in humans. Neuroimage, 51(3), 1162–1167. https://doi.org/10.1016/j.neuroimage.2010.03.041
Hoogenboom, N., Schoffelen, J.-M., Oostenveld, R., Parkes, L. M., & Fries, P. (2006). Localizing human visual gamma-band activity in frequency, time and space. Neuroimage, 29(3), 764–773. https://doi.org/10.1016/j.neuroimage.2005.08.043
Jensen, O. (2023). Gating by alpha band inhibition revised: A case for a secondary control mechanism. PsyArXiv. https://doi.org/10.31234/osf.io/7bk32
Jensen, O., & Mazaheri, A. (2010). Shaping functional architecture by oscillatory alpha activity: Gating by inhibition. Frontiers in Human Neuroscience, 4, 186. https://doi.org/10.3389/fnhum.2010.00186
Jones, S. R. (2016). When brain rhythms aren’t “rhythmic”: Implication for their mechanisms and meaning. Current Opinion in Neurobiology, 40, 72–80. https://doi.org/10.1016/j.conb.2016.06.010
van Kerkoerle, T., Self, M. W., Dagnino, B., Gariel-Mathis, M.-A., Poort, J., van der Togt, C., & Roelfsema, P. R. (2014). Alpha and gamma oscillations characterize feedback and feedforward processing in monkey visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 111(40), 14332–14341. https://doi.org/10.1073/pnas.1402773111
Loomis, A. L., Harvey, E. N., & Hobart, G. (1935). Potential rhythms of the cerebral cortex during sleep. Science, 81(2111), 597–598. https://doi.org/10.1126/science.81.2111.597
Lubba, C. H., Sethi, S. S., Knaute, P., Schultz, S. R., Fulcher, B. D., & Jones, N. S. (2019). catch22: CAnonical Time-series CHaracteristics. Data Mining and Knowledge Discovery, 33(6), 1821–1852. https://doi.org/10.1007/s10618-019-00647-x
Lundqvist, M., Rose, J., Herman, P., Brincat, S. L., Buschman, T. J., & Miller, E. K. (2016). Gamma and beta bursts underlie working memory. Neuron, 90(1), 152–164. https://doi.org/10.1016/j.neuron.2016.02.028
Medel, V., Irani, M., Crossley, N., Ossandón, T., & Boncompte, G. (2023). Complexity and 1/f slope jointly reflect brain states. Scientific Reports, 13(1), 21700. https://doi.org/10.1038/s41598-023-47316-0
Michalareas, G., Vezoli, J., van Pelt, S., Schoffelen, J.-M., Kennedy, H., & Fries, P. (2016). Alpha-beta and gamma rhythms subserve feedback and feedforward influences among human visual cortical areas. Neuron, 89(2), 384–397. https://doi.org/10.1016/j.neuron.2015.12.018
Miller, E. K., Lundqvist, M., & Bastos, A. M. (2018). Working memory 2.0. Neuron, 100(2), 463–475. https://doi.org/10.1016/j.neuron.2018.09.023
Nolte, G. (2003). The magnetic lead field theorem in the quasi-static approximation and its use for magnetoencephalography forward calculation in realistic volume conductors. Physics in Medicine and Biology, 48(22), 3637–3652. https://doi.org/10.1088/0031-9155/48/22/002
Oostenveld, R., Fries, P., Maris, E., & Schoffelen, J.-M. (2011). FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience, 2011, 156869. https://doi.org/10.1155/2011/156869
Pfurtscheller, G., Brunner, C., Schlögl, A., & Lopes da Silva, F. H. (2006). Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks. Neuroimage, 31(1), 153–159. https://doi.org/10.1016/j.neuroimage.2005.12.003
Pfurtscheller, G., & Lopes da Silva, F. H. (1999). Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clinical Neurophysiology, 110(11), 1842–1857. https://doi.org/10.1016/s1388-2457(99)00141-8
Pfurtscheller, G., Stancák, A., & Neuper, C. (1996). Event-related synchronization (ERS) in the alpha band—An electrophysiological correlate of cortical idling: A review. International Journal of Psychophysiology, 24(1–2), 39–46. https://doi.org/10.1016/s0167-8760(96)00066-9
Roelfsema, P. R. (2023). Solving the binding problem: Assemblies form when neurons enhance their firing rate-they don’t need to oscillate or synchronize. Neuron, 111(7), 1003–1019. https://doi.org/10.1016/j.neuron.2023.03.016
Samaha, J., Iemi, L., Haegens, S., & Busch, N. A. (2020). Spontaneous brain oscillations and perceptual decision-making. Trends in Cognitive Sciences, 24(8), 639–653. https://doi.org/10.1016/j.tics.2020.05.004
Shafiei, G., Fulcher, B. D., Voytek, B., Satterthwaite, T. D., Baillet, S., & Misic, B. (2023). Neurophysiological signatures of cortical micro-architecture. Nature Communications, 14(1), 6000. https://doi.org/10.1038/s41467-023-41689-6
Shafiei, G., Markello, R. D., Vos de Wael, R., Bernhardt, B. C., Fulcher, B. D., & Misic, B. (2020). Topographic gradients of intrinsic dynamics across neocortex. eLife, 9, e62116. https://doi.org/10.7554/elife.62116
da Silva, F. H., van Lierop, T. H., Schrijer, C. F., & van Leeuwen, W. S. (1973). Organization of thalamic and cortical alpha rhythms: Spectra and coherences. Electroencephalography and Clinical Neurophysiology, 35(6), 627–639. https://doi.org/10.1016/0013-4694(73)90216-2
Studenova, A. A., Villringer, A., & Nikulin, V. V. (2022). Non-zero mean alpha oscillations revealed with computational model and empirical data. PLoS Computational Biology, 18(7), e1010272. https://doi.org/10.1371/journal.pcbi.1010272
Vallat, R., & Walker, M. P. (2021). An open-source, high-performance tool for automated sleep staging. eLife, 10, e70092. https://doi.org/10.7554/elife.70092
Van Veen, B. D., van Drongelen, W., Yuchtman, M., & Suzuki, A. (1997). Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Transactions on Bio-Medical Engineering, 44(9), 867–880. https://doi.org/10.1109/10.623056
Vinck, M., Uran, C., Spyropoulos, G., Onorato, I., Broggini, A. C., Schneider, M., & Canales-Johnson, A. (2023). Principles of large-scale neural interactions. Neuron, 111(7), 987–1002. https://doi.org/10.1016/j.neuron.2023.03.015
Wang, X.-J. (2010). Neurophysiological and computational principles of cortical rhythms in cognition. Physiological Reviews, 90(3), 1195–1268. https://doi.org/10.1152/physrev.00035.2008
Waschke, L., Kloosterman, N. A., Obleser, J., & Garrett, D. D. (2021). Behavior needs neural variability. Neuron, 109(5), 751–766. https://doi.org/10.1016/j.neuron.2021.01.023
Waschke, L., Tune, S., & Obleser, J. (2019). Local cortical desynchronization and pupil-linked arousal differentially shape brain states for optimal sensory performance. eLife, 8, e51501. https://doi.org/10.7554/elife.51501
Wiesman, A. I., da Silva Castanheira, J., & Baillet, S. (2022). Stability of spectral estimates in resting-state magnetoencephalography: Recommendations for minimal data duration with neuroanatomical specificity. Neuroimage, 247, 118823. https://doi.org/10.1016/j.neuroimage.2021.118823
Womelsdorf, T., Valiante, T. A., Sahin, N. T., Miller, K. J., & Tiesinga, P. (2014). Dynamic circuit motifs underlying rhythmic gain control, gating and integration. Nature Neuroscience, 17(8), 1031–1039. https://doi.org/10.1038/nn.3764
Yahyaei, R., & Esat Özkurt, T. (2022). Mean curve length: An efficient feature for brainwave biometrics. Biomedical Signal Processing and Control, 76, 103664. https://doi.org/10.1016/j.bspc.2022.103664
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
