Abstract

Human rhythmic movements spontaneously synchronize with auditory rhythms at various frequency ratios. The emergence of more complex relationships—for instance, frequency ratios of 1:2 and 1:3—is enhanced by adding a congruent accentuation pattern (binary for 1:2 and ternary for 1:3), resulting in a 1:1 movement–accentuation relationship. However, this benefit of accentuation on movement synchronization appears to be stronger for the ternary pattern than for the binary pattern. Here, we investigated whether this difference in accent-induced movement synchronization may be related to a difference in the neural tracking of these accentuation profiles. Accented and control unaccented auditory sequences were presented to participants who concurrently produced finger taps at their preferred frequency, and spontaneous movement synchronization was measured. EEG was recorded during passive listening to each auditory sequence. The results revealed that enhanced movement synchronization with ternary accentuation was accompanied by enhanced neural tracking of this pattern. Larger EEG responses at the accentuation frequency were found for the ternary pattern compared with the binary pattern. Moreover, the amplitude of accent-induced EEG responses was positively correlated with the magnitude of accent-induced movement synchronization across participants. Altogether, these findings show that the dynamics of spontaneous auditory–motor synchronization is strongly driven by the multi-time-scale sensory processing of auditory rhythms, highlighting the importance of considering neural responses to rhythmic sequences for understanding and enhancing synchronization performance.

INTRODUCTION

Human movements are highly sensitive to surrounding auditory events such as the presence of external rhythms (e.g., repetitive sounds or music). Without any intention to do so, an individual's periodic movements can become transiently synchronized with an auditory sequence, a phenomenon referred to as spontaneous or unintentional auditory–motor synchronization. Unintentional synchronization has been revealed for a large repertoire of human movements, including tapping, walking, running, clapping, and postural sway (Coste, Salesse, Gueugnon, Marin, & Bardy, 2018; Hattori, Tomonaga, & Matsuzawa, 2015; Van Dyck et al., 2015; Peckel, Pozzo, & Bigand, 2014; Repp & Su, 2013; Demos, Chaffin, Begosh, Daniels, & Marsh, 2012; Bernardi et al., 2009), and for a wide range of auditory stimuli, including simple metronomes, music, and even subliminal rhythmic stimuli (Varlet, Williams, & Keller, 2020; Coste et al., 2018; Schurger, Faivre, Cammoun, Trovó, & Blanke, 2017; Van Dyck et al., 2015).

Although previous studies mostly focused on the occurrence of synchronization at a simple 1:1 frequency relationship between the movement and the auditory stimulus (i.e., one movement cycle for one stimulus cycle), human movements can also entrain to more complex frequency ratios such as 1:2 and 1:3 (Bouvet, Varlet, Dalla Bella, Keller, & Bardy, 2019; Bouvet, Varlet, Dalla Bella, Keller, Zelic, et al., 2019; Varlet, Williams, Bouvet, & Keller, 2018; Peckel et al., 2014). The occurrence and dynamic stability of these frequency ratios, however, are lower than those of the 1:1 frequency ratio (Bouvet, Varlet, Dalla Bella, Keller, Zelic, et al., 2019), as predicted by the mathematical concept of the Farey tree (Cvitanovic, Shraiman, & Söderberg, 1985; Hardy & Wright, 1979). The Farey tree orders frequency ratios from the most complex and least stable to the simplest and most stable (e.g., 1:3 < 1:2 < 1:1). The most stable ratios tend to spontaneously emerge and be consistently maintained, as reported in previous studies on bimanual synchronization (Peper, Beek, & van Wieringen, 1995a, 1995b; Treffner & Turvey, 1993; Kelso & De Guzman, 1988) and locomotion–respiration coupling (e.g., Hoffmann & Bardy, 2015).

Interestingly, the occurrence of such various frequency ratios can be facilitated by adding a congruent accentuation pattern to the auditory sequences (Bouvet, Varlet, Dalla Bella, Keller, & Bardy, 2019). Accentuation corresponds to an acoustic event that is salient in comparison to its surrounding context (Bouwer, Burgoyne, Odijk, Honing, & Grahn, 2018; London, 2012; Ellis & Jones, 2009; Dawe, Platt, & Racine, 1993, 1995; Palmer & Krumhansl, 1990) and can arise from features of the auditory stimulus (Lerdahl & Jackendoff, 1983). For example, modulating the intensity of specific events in the signal (e.g., accents on vs. off the beat) can enhance beat perception (Bouwer et al., 2018; Fujioka, Ross, & Trainor, 2015; Schaefer, Vlek, & Desain, 2011; Grahn & Rowe, 2009; Drake, 1993) and intentional movement synchronization (Etani, Miura, Okano, Shinya, & Kudo, 2019; Repp, 2005). Moreover, we recently reported (Bouvet, Varlet, Dalla Bella, Keller, & Bardy, 2019) that spontaneous movement synchronization at 1:2 and 1:3 frequency ratios is facilitated when auditory sequences are binary and ternary accented, respectively. Enhanced synchronization at 1:2 and 1:3 frequency ratios might result from accent-induced modulation of attention, which could serve as a foundation for auditory–motor interaction. This possibility is captured by the Dynamic Attending Theory (Drake, Jones, & Baruch, 2000; Jones, 1976), according to which attention can be modeled as a self-sustained oscillator whose dynamics can be influenced by other oscillators representing, for instance, periodic stimuli or periodic movements. Consistent with such a process, EEG recordings have captured enhanced responses at frequencies related to movement and to frequency components of the auditory stimulus (Chemin, Mouraux, & Nozaradan, 2014; Nozaradan, Zerouali, Peretz, & Mouraux, 2013; Nozaradan, Peretz, & Mouraux, 2012; Nozaradan, Peretz, Missal, & Mouraux, 2011). Subcortical and cortical mechanisms, supporting lower- and higher-level processing of auditory rhythms, respectively, have been suggested to underpin the occurrence of these periodicities in neural activity (Nozaradan, Schönwiesner, Keller, Lenc, & Lehmann, 2018; Rajendran, Harper, Garcia-Lazaro, Lesica, & Schnupp, 2017). Here, we hypothesized that accentuation may result in neural periodicity that fosters the emergence of spontaneous movement synchronization. Accent-induced neural periodicity congruent with an individual's movement timing could facilitate synchronization by simplifying complex relations between the movement and an auditory sequence to a more stable 1:1 accentuation–movement relationship.

Importantly, we recently showed that the benefit of congruent accentuation patterns on spontaneous movement synchronization was stronger for 1:3 ternary accentuation than for 1:2 binary accentuation (Bouvet, Varlet, Dalla Bella, Keller, & Bardy, 2019). The origin of this difference remains unclear. A possible explanation is that ternary accentuation might drive neural activity to a greater degree than binary accentuation because of a difference in perceptual contrast between the two accentuation patterns. The binary structure entails a symmetric alternation of accented and unaccented sounds, whereas the ternary structure is composed of one accented sound followed by two unaccented sounds before the pattern repeats, resulting in greater perceptual contrast. Repetitions induce stimulus-specific adaptation of the auditory perceptual system, which is interrupted by the second unaccented sound in the ternary accentuation pattern (Antunes & Malmierca, 2014). Such a difference might result in larger neural responses to ternary accentuation than to binary accentuation, despite identical accentuation frequencies and amplitudes, and might underpin enhanced accent-induced movement synchronization.

In the current study, we used EEG to measure neural responses to unaccented and accented auditory sequences and to determine whether the magnitude of the neural tracking of binary and ternary accentuation patterns differs. The term “neural tracking” here does not commit to any particular underlying mechanism, or form of entrainment, that would explain how the brain tracks periodicities in auditory sequences. Whether the tracking of periodicities in auditory rhythms is underpinned by dynamic modulations of evoked neural responses and/or endogenous neural oscillations is currently debated (Doelling, Assaneo, Bevilacqua, Pesaran, & Poeppel, 2019; Lenc, Keller, Varlet, & Nozaradan, 2018a, 2019; Rajendran & Schnupp, 2019; Novembre & Iannetti, 2018), but this level of neurophysiological detail is beyond the scope of the current study. Furthermore, we measured the occurrence of spontaneous movement synchronization toward 1:2 and 1:3 frequency ratios to assess whether individual differences in accent-induced movement synchronization are associated with differences in accent-induced EEG responses. We assessed participants' neural responses to the accentuation patterns during passive listening trials with unaccented and accented auditory metronomes. During movement trials, we asked participants to produce cyclic finger taps at their preferred frequency while listening to the auditory sequences. We predicted that the amplitude of accent-induced EEG responses would be larger for ternary accentuation than for binary accentuation and that the amplitude of accent-induced EEG responses would predict the magnitude of accent-induced movement synchronization.

METHODS

Participants

Nineteen right-handed participants (10 women) with a mean age of 27.48 years (SD = 4.2 years, range = 23–39 years) and various levels of musical training (mean = 7.84 years, SD = 9.42 years, range = 0–30 years) were tested in this experiment. All participants were selected based on a preferred movement tempo between 1.6 and 2 Hz, which corresponds to the average preferred tempo reported in previous studies (1.65–2 Hz; Bouvet, Varlet, Dalla Bella, Keller, & Bardy, 2019; Bouvet, Varlet, Dalla Bella, Keller, Zelic, et al., 2019; Large, 2008; Moelants, 2002; Collyer, Broadbent, & Church, 1994; Fraisse, 1974, 1982). Participants were prescreened for this criterion using an online tapping test (www.all8.com/tools/bpm.htm) with the instruction to “tap with your right index finger, wrist on a flat surface, on any key of your keyboard at your most comfortable tapping frequency that you can maintain for at least 60 taps.” None of the participants had a history of hearing impairment, and all provided written informed consent before the experiment. This study was approved by the human research ethics committee of Western Sydney University.

Apparatus and Stimuli

Participants were seated comfortably in a chair with their forearms resting on customized armrests configured to perform an air-tapping task, that is, vertical flexion–extension of the right index finger without tactile contact. A sensor was fixed on the participant's right index finger to record the vertical oscillations at a 240-Hz sampling rate with a Polhemus Liberty motion tracker (Polhemus Ltd.). EEG was recorded at a sampling rate of 1024 Hz using a Biosemi Active-Two system (Biosemi) with 64 Ag–AgCl electrodes placed over the participant's scalp according to the International 10–20 system. Four additional external electrodes were placed over the participant's face to control for blinks and horizontal eye movements, and two others were placed on the participant's mastoids to be used as reference.

The auditory metronomes were presented to the participant using MATLAB software (Version 2014B, The MathWorks, Inc.) via in-ear headphones (ER-1, Etymotic Research). To replicate the 1:2 and 1:3 accented congruent conditions of our previous experiment (Bouvet, Varlet, Dalla Bella, Keller, & Bardy, 2019), stimuli were presented at two different tempi, ensuring that the time interval between accents was the same across accent patterns. Given that the average preferred tempo of our participants was around 1.8 Hz, in line with previous studies (Large, 2008), we used 3.6-Hz auditory sequences with binary accentuation and 5.4-Hz sequences with ternary accentuation to examine spontaneous movement synchronization at 1:2 and 1:3 frequency ratios, as illustrated in Figure 1. These combinations resulted in binary and ternary accentuations at 1.8 Hz, corresponding to participants' preferred tempo. To test the effect of accentuation, 3.6- and 5.4-Hz sequences were also presented without accentuation. The four resulting sequences were composed of repetitions of a 50-msec drumbeat with 10-msec fade-in and fade-out and lasted 60 sec. The auditory sequences were presented at a comfortable intensity kept constant across participants, and accentuation was produced by doubling the amplitude of the sound: unaccented and accented sounds were presented at 60 and 66 dB, respectively. In the frequency domain, the accentuation patterns produced clear peaks and corresponding harmonics specific to the binary and ternary accentuations, as shown in Figure 1. A VIEWPixx monitor (VPixx Technologies) was positioned in front of the participant. A fixation cross was displayed at the center of the monitor throughout the trials to control for the participant's eye movements. The monitor was also used to provide instructions for the different conditions.
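
For illustration, the construction of these stimuli can be sketched as follows in Python (this is not the original MATLAB implementation; the 44.1-kHz sampling rate and the 200-Hz tone standing in for the drumbeat sample are assumptions):

```python
import numpy as np

def make_sequence(event_rate_hz, accent_period, fs=44100, duration_s=60.0):
    """Isochronous sequence of 50-msec bursts with every Nth event accented.

    accent_period = 2 gives the binary pattern, 3 the ternary pattern,
    so accents occur at event_rate_hz / accent_period = 1.8 Hz here.
    """
    # 50-msec burst with 10-msec linear fade-in and fade-out
    n_burst = int(0.050 * fs)
    t = np.arange(n_burst) / fs
    burst = np.sin(2 * np.pi * 200 * t)  # placeholder for the drumbeat sample
    n_fade = int(0.010 * fs)
    envelope = np.ones(n_burst)
    envelope[:n_fade] = np.linspace(0.0, 1.0, n_fade)
    envelope[-n_fade:] = np.linspace(1.0, 0.0, n_fade)
    burst *= envelope

    sequence = np.zeros(int(duration_s * fs))
    onsets = np.arange(0.0, duration_s, 1.0 / event_rate_hz)
    for i, onset in enumerate(onsets):
        start = int(round(onset * fs))
        stop = min(start + n_burst, len(sequence))
        gain = 2.0 if i % accent_period == 0 else 1.0  # accent = doubled amplitude (+6 dB)
        sequence[start:stop] += gain * burst[:stop - start]
    return sequence

binary_sequence = make_sequence(3.6, accent_period=2)   # accents at 1.8 Hz
ternary_sequence = make_sequence(5.4, accent_period=3)  # accents at 1.8 Hz
```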

Figure 1. 

Frequency-domain amplitude spectra with baseline subtraction for the different auditory sequences presented to participants and for the corresponding EEG responses to the binary (left) and ternary (right) 1.8-Hz accentuated sequences (orange) and unaccented sequences (black). EEG spectra correspond to the amplitude averaged across all participants in the seven selected fronto-central electrodes.

Procedure

Before EEG preparation, participants were briefed regarding the EEG and motion capture technology but were not informed about the different accentuation patterns. The four different auditory sequences (i.e., unaccented and accented at 3.6 and 5.4 Hz) were presented in two types of trials—tapping and listening trials. In the tapping trials, participants tapped in the air with their right index finger at their most comfortable frequency following the instruction: “Try to adopt a tempo that you could maintain for a long period of time without fatigue.” They were instructed to do their best to maintain this frequency even when auditory stimuli were presented. Each tapping trial began with a period of silence that randomly lasted between 5 and 10 sec to ensure that participants found their preferred tempo before the beginning of the sequence. Participants' tapping was also measured in control trials in which auditory stimuli were not presented. During the listening trials, participants were asked to stay still. In all trials, participants were instructed to stay relaxed and to keep their eyes on the fixation cross.

Each participant performed 54 trials in total, including six trials for each of the nine different conditions (i.e., five tapping conditions: four auditory stimuli and one silence control, plus four listening conditions). Six blocks with nine trials, one for each condition, were presented to each participant in a randomized order (within and between blocks). After EEG preparation and instruction delivery, the experiment lasted about 1 hr. The participants were debriefed at the end of the experiment. Most of them reported interference from the auditory sequences on their capacity to maintain a steady movement tempo, but none of them reported that they intentionally synchronized with the sequences.

Data Analysis

Spontaneous Movement Synchronization

Vertical position data for movements of participants' right index fingers were extracted, centered on zero, and band-pass filtered with 0.1- and 10-Hz cutoff frequencies. Movement peaks (flexions) were determined using the peakdet function in MATLAB, which identifies local minima and maxima (Billauer, 2012). The times of maximal flexion of the right index finger were then used to compute the percentage of occurrence of synchrony relative to the expected 1:2 or 1:3 frequency ratios using the Index of Stability analysis (Bouvet, Varlet, Dalla Bella, Keller, & Bardy, 2019; Bouvet, Varlet, Dalla Bella, Keller, Zelic, et al., 2019; Zelic, Varoqui, Kim, & Davis, 2017). This method measures the occurrence of stable frequency ratios between two oscillatory components by considering the local modes of the relationship emerging between the two time series.
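
As a rough stand-in for this pipeline (the published Index of Stability algorithm is more elaborate), the Python sketch below filters the movement signal, detects flexion peaks, and classifies each inter-tap interval by its local movement:stimulus frequency ratio; the tolerance, the minimum inter-tap distance, and the use of position minima as flexions are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

def ratio_occurrence(position, fs, stim_freq_hz, target_ratio, tol=0.1):
    """Percentage of inter-tap intervals whose local movement:stimulus frequency
    ratio falls within tol of the target ratio (e.g., 1/2 or 1/3)."""
    # 0.1-10 Hz band-pass filtering, as applied to the movement data
    sos = butter(4, [0.1, 10.0], btype="bandpass", fs=fs, output="sos")
    pos = sosfiltfilt(sos, position - np.mean(position))
    # flexion peaks approximated as local minima of the vertical position
    peaks, _ = find_peaks(-pos, distance=int(0.25 * fs))
    intervals = np.diff(peaks) / fs                   # inter-tap intervals (sec)
    local_ratio = (1.0 / intervals) / stim_freq_hz    # movement freq / stimulus freq
    hits = np.abs(local_ratio - target_ratio) < tol * target_ratio
    return 100.0 * np.mean(hits)

# e.g., occurrence of 1:2 synchrony with the 3.6-Hz sequence:
# ratio_occurrence(finger_position, fs=240, stim_freq_hz=3.6, target_ratio=1/2)
```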

Before focusing more specifically on the effect of accentuation, we first examined the spontaneous movement synchronization occurring in unaccented and accented conditions separately. An ANOVA with the factors Accentuation presence (accented vs. unaccented) and Frequency ratio (1:2 vs. 1:3) was conducted on the percentage of occurrence at 1:2 and 1:3 ratios after subtraction of the incidental synchrony (percentage at these ratios in control trials without auditory stimuli).

The benefit of accentuation on spontaneous movement synchronization was then compared between the binary accented and ternary accented conditions by subtracting the mean percentage of occurrence for unaccented sequences from the mean percentage of occurrence for accented sequences in the corresponding condition. Data for accent-induced synchrony at the expected ratios (i.e., 1:2 and 1:3) were submitted to a paired-sample t test comparing the two accentuation patterns (binary vs. ternary) and to one-sample t tests against 0. Statistical analyses were carried out using R (Version 3.4.1; www.R-project.org).
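
These behavioral contrasts can be expressed in a few lines; the sketch below uses Python with placeholder data rather than the R code actually used:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 19  # participants

# placeholder per-participant percentages of occurrence (replace with real data)
occ_12_accented = rng.normal(20, 5, n)
occ_12_unaccented = rng.normal(19, 5, n)
occ_13_accented = rng.normal(18, 5, n)
occ_13_unaccented = rng.normal(13, 5, n)

# accent-induced synchrony = accented minus unaccented, per frequency-ratio condition
gain_binary = occ_12_accented - occ_12_unaccented
gain_ternary = occ_13_accented - occ_13_unaccented

print(stats.ttest_rel(gain_binary, gain_ternary))  # binary vs. ternary accentuation
print(stats.ttest_1samp(gain_binary, 0.0))         # binary accent-induced gain vs. 0
print(stats.ttest_1samp(gain_ternary, 0.0))        # ternary accent-induced gain vs. 0
```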

EEG Preprocessing

EEG processing was carried out with the FieldTrip toolbox (Oostenveld, Fries, Maris, & Schoffelen, 2011) in MATLAB. EEG data were first high-pass filtered using a fourth-order Butterworth filter with a cutoff frequency of 0.1 Hz and notch filtered to remove 50-Hz electrical power contamination and its harmonics. After removal of eye movement and blink artifacts via independent component analysis using the FastICA algorithm, as implemented in FieldTrip, EEG data were segmented into 60-sec epochs for each passive listening trial. Each listening trial considered for the EEG analysis was checked for uninstructed finger movements and discarded when necessary (<1% of trials). The data were rereferenced to the mastoids and then averaged across trials in the time domain, separately for each condition and participant, to increase the signal-to-noise ratio (Lenc, Keller, Varlet, & Nozaradan, 2018b; Handy, 2005).
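
A minimal Python sketch of this preprocessing chain is given below (the FieldTrip calls are not reproduced; ICA-based artifact removal is omitted, and the mastoid channel indices are assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, iirnotch, filtfilt

def preprocess_trial(eeg, fs=1024, mastoid_idx=(64, 65)):
    """eeg: channels x samples array for one 60-sec listening trial,
    including the two mastoid channels at mastoid_idx."""
    # 0.1-Hz high-pass (fourth-order Butterworth, zero-phase)
    sos = butter(4, 0.1, btype="highpass", fs=fs, output="sos")
    eeg = sosfiltfilt(sos, eeg, axis=-1)
    # notch filters at 50 Hz and harmonics to remove power-line contamination
    for f0 in (50.0, 100.0, 150.0, 200.0):
        b, a = iirnotch(f0, Q=30.0, fs=fs)
        eeg = filtfilt(b, a, eeg, axis=-1)
    # (ICA-based removal of eye movement and blink artifacts would go here)
    # re-reference to the average of the two mastoid channels
    return eeg - eeg[list(mastoid_idx)].mean(axis=0, keepdims=True)

# trials: list of preprocessed arrays for one condition and participant;
# averaging across trials in the time domain increases the signal-to-noise ratio
# condition_average = np.mean(np.stack(trials), axis=0)
```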

Frequency-Domain Analysis of the EEG Responses

EEG data from the 60-sec passive listening trials were transformed into the frequency domain using a fast Fourier transform, resulting in an amplitude spectrum (μV) ranging from 0 to 512 Hz with a frequency resolution of ≈0.0167 Hz (i.e., 1/60 s). To reduce background noise in the spectrum, the average amplitude of the surrounding frequency bins (from the second to the fifth bin on each side) was subtracted from the amplitude at each frequency bin (Varlet, Nozaradan, Nijhuis, & Keller, 2020; Nozaradan et al., 2012; Mouraux et al., 2011). These noise-subtracted amplitudes were then used to measure EEG responses at 1.8 Hz, corresponding to the frequency of the auditory accentuation in the two rhythmic sequences, and at 3.6 and 5.4 Hz, corresponding to the first two harmonics of 1.8 Hz, which also showed responses to accentuation (see Figure 1). Including harmonics with significant amplitude is critical for measuring global amplitude changes at the accentuation frequency, as these harmonics capture deviations from a perfectly sinusoidal 1.8-Hz waveform. To compare the amplitude of EEG responses between binary and ternary accentuation while controlling for the difference in tempo between the two rhythmic sequences (i.e., 3.6 vs. 5.4 Hz), EEG responses to unaccented sequences at 1.8, 3.6, and 5.4 Hz were subtracted from EEG responses to accented sequences in the corresponding condition. The resulting amplitudes at 1.8, 3.6, and 5.4 Hz were then averaged, separately for binary and ternary accentuations.
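
The noise subtraction and the extraction of the accent-induced response can be sketched as follows (Python rather than the original MATLAB code; the input is assumed to be the trial-averaged waveform of a single electrode or of the fronto-central electrode average):

```python
import numpy as np

def noise_subtracted_spectrum(x, fs):
    """Amplitude spectrum of a trial-averaged 60-sec waveform, with the mean
    amplitude of the 2nd-5th neighbouring bins on each side subtracted at
    each frequency bin."""
    n = len(x)
    amp = np.abs(np.fft.rfft(x)) * 2.0 / n            # single-sided amplitude (uV)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)            # resolution = 1/(60 s) ~ 0.0167 Hz
    clean = np.empty_like(amp)
    for k in range(len(amp)):
        neighbours = np.r_[max(k - 5, 0):max(k - 1, 0), k + 2:min(k + 6, len(amp))]
        clean[k] = amp[k] - amp[neighbours].mean()
    return freqs, clean

def accent_response(accented, unaccented, fs, targets=(1.8, 3.6, 5.4)):
    """Accent-induced response: accented minus unaccented amplitude, averaged
    over the accentuation frequency and its first two harmonics."""
    freqs, amp_acc = noise_subtracted_spectrum(accented, fs)
    _, amp_unacc = noise_subtracted_spectrum(unaccented, fs)
    idx = [int(np.argmin(np.abs(freqs - f))) for f in targets]
    return float(np.mean(amp_acc[idx] - amp_unacc[idx]))
```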

In line with previous studies on auditory perception (Nozaradan, Mouraux, & Cousineau, 2017; Näätänen, Paavilainen, Rinne, & Alho, 2007), the average of a selection of seven fronto-central EEG electrodes, namely, Fz, FCz, Cz, F1, F2, FC1, and FC2, was considered for statistical analyses. The accent-induced EEG responses were submitted to a two-tailed paired-sample t test to evaluate the reliability of the difference between the two accentuation patterns (binary vs. ternary) and to one-sample t tests to evaluate differences from zero.

Time-Domain Analysis of the EEG Responses

To further examine the neural response to accentuation, EEG data were also analyzed in the time domain. EEG data were band-pass filtered using a fourth-order Butterworth filter with cutoff frequencies of 0.03 and 30 Hz, in line with previous research (Edagawa & Kawasaki, 2017). The 60-sec passive listening trials were segmented into ≈0.56-sec epochs (i.e., two interonset intervals for 3.6 Hz and three interonset intervals for 5.4 Hz), which were then averaged across participants for each condition. To compare EEG time-domain responses to accentuation in the 1:2 binary and 1:3 ternary conditions, we also examined EEG responses to accented sequences after subtraction of EEG responses to unaccented sequences in the corresponding frequency ratio condition.
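
A minimal sketch of this epoching step, under the assumption that each trial is a single-channel 60-sec waveform:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def epoch_average(x, fs, stim_rate_hz, events_per_epoch):
    """Band-pass (0.03-30 Hz) a 60-sec listening trial and average consecutive
    epochs spanning events_per_epoch interonset intervals (two for the 3.6-Hz
    sequence, three for the 5.4-Hz sequence, i.e., ~0.556 sec in both cases)."""
    sos = butter(4, [0.03, 30.0], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, x)
    n_epoch = int(round(events_per_epoch * fs / stim_rate_hz))
    n_keep = (len(x) // n_epoch) * n_epoch
    return x[:n_keep].reshape(-1, n_epoch).mean(axis=0)

# binary_epoch = epoch_average(trial_3_6hz, fs=1024, stim_rate_hz=3.6, events_per_epoch=2)
# ternary_epoch = epoch_average(trial_5_4hz, fs=1024, stim_rate_hz=5.4, events_per_epoch=3)
```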

We used cluster-based permutation testing to determine significant differences between accented and unaccented responses (Oostenveld et al., 2011). Point-by-point paired-sample t tests were used to compare accented and unaccented responses, and point-by-point one-sample t tests were used to compare the difference between accented and unaccented responses against 0. We then identified clusters of adjacent time points exceeding the critical t value of a parametric two-sided test (alpha level = .05) and computed the magnitude of each cluster as the sum of the absolute t values it contained. One thousand random permutations of each participant's waveforms were calculated to obtain a reference distribution of maximum cluster magnitude. Clusters in the observed data were considered significant if their magnitude exceeded the 95th percentile of this permutation distribution (i.e., p < .05).
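
The logic of this test can be sketched as follows for a single channel (a simplified version of the FieldTrip routine; here the permutations are implemented as random sign flips of each participant's difference waveform, which is equivalent to permuting condition labels within participants):

```python
import numpy as np
from scipy import stats

def cluster_permutation_test(diff, n_perm=1000, alpha=0.05, seed=0):
    """Cluster-based permutation test on paired differences (participants x time).

    Clusters are runs of adjacent time points whose one-sample t value exceeds
    the two-sided critical t; cluster magnitude is the sum of |t| within the run.
    Randomly sign-flipping participants' difference waveforms builds the null
    distribution of the maximum cluster magnitude."""
    rng = np.random.default_rng(seed)
    n_participants = diff.shape[0]
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=n_participants - 1)

    def cluster_magnitudes(data):
        t = stats.ttest_1samp(data, 0.0, axis=0).statistic
        masses, current = [], 0.0
        for exceeds, t_abs in zip(np.abs(t) > t_crit, np.abs(t)):
            if exceeds:
                current += t_abs
            elif current:
                masses.append(current)
                current = 0.0
        if current:
            masses.append(current)
        return masses

    observed = cluster_magnitudes(diff)
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n_participants, 1))
        null_max[i] = max(cluster_magnitudes(diff * flips), default=0.0)

    threshold = np.percentile(null_max, 95)
    p_values = [float(np.mean(null_max >= m)) for m in observed]
    return observed, p_values, threshold
```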

Relationship between Accent-induced Synchrony and EEG Response

To determine whether neural activity at the frequency of the accentuation patterns was directly related to participants' accent-induced spontaneous movement synchronization, Spearman correlation analyses between accent-induced movement synchrony and accent-induced EEG responses were conducted separately for the binary and ternary conditions.
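
In Python, this analysis amounts to the following (placeholder values shown; in the study, there was one accent-induced EEG value and one accent-induced synchrony value per participant and condition):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 19  # participants

# placeholder per-participant accent-induced measures (replace with real data)
eeg_gain_ternary = rng.normal(0.10, 0.05, n)                      # uV
sync_gain_ternary = 100 * eeg_gain_ternary + rng.normal(0, 3, n)  # %

rho, p = spearmanr(eeg_gain_ternary, sync_gain_ternary)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```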

RESULTS

Accent-induced Movement Synchronization

The ANOVA with the factors Accentuation presence (accented vs. unaccented) and Frequency ratio (1:2 vs. 1:3) on the percentage of occurrence at 1:2 and 1:3 ratios, after subtraction of the incidental synchrony (percentages in control trials), indicated no main effect of Accentuation presence, F(1, 18) = 2.86, p = .11, ηG² = .013, or Frequency ratio, F(1, 18) = 3.82, p = .07, ηG² = .021, but a significant interaction between the two factors, F(1, 18) = 5.41, p = .03, ηG² = .018. Pairwise comparisons with Bonferroni correction (six in total) revealed a larger percentage of occurrence for the 1:2 than the 1:3 ratio in unaccented conditions, t(33.91) = 2.97, p = .03, d = .52. These results show that spontaneous synchronization with unaccented sequences was stronger toward the 1:2 than the 1:3 frequency ratio, as displayed in Figure 2.

Figure 2. 

Spontaneous movement synchronization toward 1:2 and 1:3 frequency ratios for accented and unaccented auditory sequences. The data presented correspond to the percentages of occurrence after subtraction of the incidental synchrony (percentages in the control trials). Error bars represent Cousineau–Morey within-subject 95% confidence intervals. The asterisk represents p < .05.

When focusing more specifically on the influence of the accentuation pattern, the results indicated greater accent-induced spontaneous movement synchrony (i.e., accented–unaccented difference in percentage of occurrence at the expected frequency ratio [1:2 or 1:3]) for the ternary accentuation than the binary accentuation, as illustrated in Figure 3A. The paired-sample t test on accent-induced synchrony data indicated a significant difference between the binary and ternary accentuation patterns, t(18) = −2.33, p = .03, d = .53. Moreover, one-sample t tests against 0 revealed no significant difference for the binary accentuation pattern, t(18) = −0.31, p = .76, d = .07, but a significant difference for the ternary accentuation pattern, t(18) = 2.23, p = .04, d = .51. These results show that only ternary accentuation strengthens participants' spontaneous movement synchronization.

Figure 3. 

(A) Accent-induced movement synchrony for the binary and ternary accentuation patterns, obtained by subtracting the percentage of occurrence of the expected ratio when listening to unaccented sequences from that when listening to accented sequences. (B) Accent-induced EEG responses to binary and ternary accentuation patterns. EEG responses correspond to the mean of the baseline-subtracted amplitudes at the accentuation frequency and its harmonics (1.8, 3.6, and 5.4 Hz), exclusive to accentuation (after subtraction of EEG responses to unaccented sequences), averaged across seven fronto-central electrodes, in accordance with the corresponding grand-averaged topographical maps. (C) Correlations between accent-induced EEG response and accent-induced movement synchrony for binary accentuation (blue) and ternary accentuation (red). The colored diagonals indicate the lines of best fit. Error bars represent Cousineau–Morey within-subject 95% confidence intervals. Asterisks represent p < .05. t Tests and correlation analyses were deemed appropriate after checking that they yielded equivalent results when the percentage-of-occurrence outlier (>3 SD from the mean) was removed.

Accent-induced EEG Responses

Frequency-Domain Response

As seen in Figure 3B, accent-induced EEG responses (1.8 Hz and its harmonics), obtained by subtracting EEG responses to unaccented sequences from EEG responses to accented sequences, differed between the binary accentuation and ternary accentuation conditions. The paired t test on these data indicated significantly larger responses to ternary accentuation compared to binary accentuation, t(18) = −2.45, p = .02, d = .56. Furthermore, one-sample t tests against 0 revealed a significant difference for the binary accent-induced EEG response, t(18) = 2.12, p = .047, d = .49, and for the ternary accent-induced EEG response, t(18) = 4.47, p < .001, d = 1.03. There were significant accent-induced EEG responses to both binary and ternary accentuations, but the amplitude of the response was larger for ternary accentuation. Topographic maps of these accent-induced EEG responses displayed at the top of Figure 3B are consistent with differences in amplitude that may originate from auditory regions.

Time-Domain Response

Figures 4 and 5 present EEG time-domain responses to the binary and ternary accentuation patterns with the corresponding control unaccented conditions. The difference between accented and unaccented conditions for the ternary accentuation pattern is numerically greater than the difference observed for the binary accentuation pattern. After subtraction of the unaccented responses from the accented responses (Figures 4 and 5, bottom), significant positive and negative deflections were found (i.e., significant clusters as indicated by gray-shaded sections), peaking between the unaccented (black vertical bar) and accented (orange vertical bar) sound, especially for ternary accentuation. Topographic maps of the amplitude difference between the two conditions averaged within interonset intervals are consistent with this enhancement of the response to accentuation occurring in auditory regions.

Figure 4. 

Time-domain EEG responses to the 3.6-Hz binary accented sequence and the corresponding unaccented controls in the seven selected fronto-central electrodes. Bold vertical bars represent unaccented (gray) and accented (orange) auditory sounds in ≈556-msec epochs. The top section of the figure depicts the average amplitude in the unaccented (black) and accented (orange) conditions. The bottom section represents the contrast between the accented and unaccented conditions with the corresponding topographic maps averaged within interonset intervals. Shaded areas represent Cousineau–Morey within-participant 95% confidence intervals, and sections shaded in gray correspond to significant clusters (p < .05) after random permutations.

Figure 5. 

Time-domain EEG responses to 5.4-Hz ternary accented sequences and their corresponding unaccented controls in the seven selected fronto-central electrodes. Bold vertical bars represent unaccented (gray) and accented (orange) auditory sounds in ≈556-msec epochs. The top section of the figure depicts the average amplitude in the unaccented (black) and accented (orange) conditions. The bottom section represents the contrast between the accented and unaccented conditions with the corresponding topographic maps averaged within interonset intervals. Shaded areas represent Cousineau–Morey within-participant 95% confidence intervals, and sections shaded in gray correspond to significant clusters (p < .05) after random permutations.

Relationship between Accent-induced EEG Responses and Movement Synchrony

As seen in Figure 3C, correlation analyses revealed a significant correlation between the magnitude of participants' accent-induced EEG (frequency-domain) responses and accent-induced movement synchrony for the ternary accentuation pattern, r² = .26, p = .03, but not for the binary accentuation pattern, r² = .015, p = .62. These results indicate that the larger the magnitude of participants' EEG responses to ternary accentuation, the greater the facilitation of spontaneous movement synchronization with ternary accentuation.

DISCUSSION

This study investigated the neural activity underlying the spontaneous synchronization of human movements to accented auditory sequences. The amplitude of accent-induced EEG responses and movement synchronization was examined for auditory sequences with congruent binary and ternary accentuation patterns (i.e., in a matched 1:1 relationship with movement frequency) to determine whether differences in EEG responses could explain the differences in spontaneous movement synchronization occurring across the two accentuation patterns and across participants.

The results corroborated recent evidence showing a greater benefit of ternary accentuation than binary accentuation on an individual's movement synchronization (Bouvet, Varlet, Dalla Bella, Keller, & Bardy, 2019). The synchronization enhancement induced by binary accentuation did not reach statistical significance in the current study. This negative finding may be because of higher variability in participants' preferred movement frequency as compared to Bouvet, Varlet, Dalla Bella, Keller, and Bardy (2019). It might also be because of the higher degree of synchronization without accentuation for the 1:2 than the 1:3 ratio, in line with the predictions of the Farey tree and previous studies (Bouvet, Varlet, Dalla Bella, Keller, & Bardy, 2019; Peper et al., 1995a; Cvitanovic et al., 1985), which might leave less room for accent-induced enhancement. Together, these results confirm the beneficial effects of accentuation on auditory–motor synchronization (Etani et al., 2019; Repp, 2005; Semjen & Vos, 2002) and the weaker effect of binary accentuation on spontaneous movement synchronization that we previously found (Bouvet, Varlet, Dalla Bella, Keller, & Bardy, 2019). Although participants spontaneously synchronized more toward 1:2 than 1:3 without accentuation, these accentuation effects, at both neural and behavioral levels, contrast with previous work suggesting that humans have a general preference for binary patterns in perception and coordination (Collier & Wright, 1995; Povel, 1981; Fraisse, 1956), encouraging further exploration in future studies.

Our observation of the influence of the accentuation pattern is consistent with the subdivision benefit. In intentional bimanual, locomotor, and auditory–motor synchronization (Roerdink et al., 2009; Kudo, Park, Kay, & Turvey, 2006; Repp, 2003; Fink, Foo, Jirsa, & Kelso, 2000), the addition of a cue between the intervals of the rhythm to be synchronized with (i.e., frequency × 2 for duple subdivision) improves movement performance and extends the range of frequencies with which an individual can synchronize. However, the subdivision benefit was not observed for triple subdivision (Repp, 2003) or for duple subdivision in spontaneous auditory–motor synchronization (Varlet, Williams, Bouvet, et al., 2018). Interestingly, Repp (2003) hypothesized that, if the target sound of the triple subdivision was accented, coordination might be enhanced. Therefore, by adding an accentuation pattern to a monotone isochronous auditory sequence, we reproduced the duple subdivision benefit and observed a triple subdivision benefit for the first time in spontaneous auditory–motor coordination.

EEG recordings revealed significant responses to both accentuation patterns during passive listening. Notably, these responses were larger for ternary accentuation than for binary accentuation, in line with participants' accent-induced movement synchronization. Frequency-domain analyses revealed larger responses at the accentuation frequency (1.8 Hz and its harmonics) for the ternary accentuation compared to the binary accentuation. This difference was also suggested by the time-domain data, as indicated by an accent-induced periodic modulation of larger amplitude for the ternary pattern compared to the binary pattern. For ternary accentuation, this modulation peaked after the second unaccented sound, which is assumed to reflect the contrast structure of the rhythmic sequence: the second unaccented sound breaks the accented–unaccented repetition, consequently releasing the auditory perceptual system from stimulus-specific adaptation (Antunes & Malmierca, 2014). Although the modulation induced by binary accentuation is of lower amplitude, similar dynamics can be observed, with larger amplitude occurring before the accented sound. Interestingly, spontaneous movement synchronization benefits from accentuation even though accented sounds do not seem to drive the largest EEG responses in the current study. These results encourage further exploration, as larger responses to accented sounds have been reported previously (Fujioka et al., 2015; Schaefer et al., 2011; Fujioka, Zendel, & Ross, 2010). Such differences could originate from variations in the characteristics of the auditory sequences presented in these studies, including their frequency as well as the form and magnitude of the accentuation. In other words, because of the relatively fast rates of the acoustic events used here, the various components of the neural response to the events may have been highly overlapping over time. This resulted in a complex waveform in which the absolute latencies of the components may not be meaningful because of various conduction delays, potentially leading to phase shifts of these components as captured by surface EEG. Overcoming this limitation of time-domain analysis, the frequency analysis used here did not assume a particular shape or latency of the response and was thus helpful in demonstrating that the enhanced accent-induced movement synchronization occurring with ternary accentuation originates from enhanced neural tracking of this accentuation pattern.

This interpretation is further supported by the observation of a positive correlation between ternary accent-induced EEG responses and movement synchrony. Participants who exhibited larger neural responses to the ternary accentuation during passive listening also exhibited stronger spontaneous movement synchronization to ternary accentuation. These findings demonstrate the link between the magnitude of the neural tracking of auditory rhythms and the strength of movement synchronization occurring spontaneously with these rhythms. Thus, these results further demonstrate the relevance of EEG activity for understanding individual differences in movement synchronization with auditory rhythms, extending previous research on intentional synchronization showing that the amplitude of an individual's EEG responses predicts their capacity to move in synchrony at specific frequencies (Nozaradan, Peretz, & Keller, 2016; Chemin et al., 2014; Nozaradan et al., 2012).

Together, our results suggest that the lower benefit of binary accentuation on spontaneous movement synchronization is because of weaker neural tracking of this pattern. Although significant EEG responses to binary accentuation were found in the current study, the amplitude of those responses might have been insufficient to support spontaneous movement synchronization. As noted above, it is also possible that the higher degree of synchronization without accentuation for the 1:2 than the 1:3 ratio might have contributed to limiting the benefit of binary accentuation. It can be assumed that a certain amplitude threshold of EEG responses to accentuation is necessary for accent-induced spontaneous movement synchronization to be observed. Future research is needed to further explore the causal relationship between neural tracking and motor response by testing whether manipulating the intensity of the accentuation enhances EEG responses to accentuation and, in turn, accentuation-induced movement synchronization, including for binary patterns.

It would also be informative in future research to further examine the exact nature of the mechanisms underlying periodic EEG responses at the stimulus frequency and how they are modulated by accentuation. Modulations in evoked responses and/or endogenous neural oscillations have been discussed as possible underlying mechanisms (Doelling et al., 2019; Lenc et al., 2018a; Novembre & Iannetti, 2018). Distinguishing these two mechanisms from EEG recordings is challenging, especially under conditions such as those in this study, where fast stimulus rates result in large overlap between successive evoked responses. Nevertheless, with slower auditory sequences (Fujioka et al., 2010, 2015; Schaefer et al., 2011), future studies could address these questions more specifically and further the understanding of how accentuation modulates neural activity and spontaneous movement synchronization.

To conclude, the results of the current study demonstrate that differences in spontaneous auditory–motor synchronization across accentuation patterns and participants might be because of differences in the neural tracking of auditory rhythms while listening without overtly moving to the rhythms. The amplitude of EEG responses can explain the difference in movement synchronization between the binary and ternary accentuation patterns as well as individual differences between participants. Our results extend our understanding of the interaction between perception and action by showing that the magnitude of sensory auditory responses modulates the strength of spontaneous movement synchronization. These findings encourage future research to further consider interindividual and intraindividual differences in perceptual responses to understand spontaneous auditory–motor synchronization. Addressing these considerations could help to improve interventions that use auditory–motor synchronization and accentuation for enhancing motor performance in medical (Dalla Bella, 2020; Cochen de Cock et al., 2018; Dalla Bella, Dotov, Bardy, & de Cock, 2018; Benoit et al., 2014) and sport (Pfleiderer, Steidl-Müller, Schiltges, & Raschner, 2019; Bood, Nijssen, Van Der Kamp, & Roerdink, 2013) settings.

Acknowledgments

This work was supported by an Australian Research Council Discovery project grant (DP170104322) and by EnTimeMent, a project funded by the European Union (H2020-FETPROACT-2018-0, Contract #824160).

Reprint requests should be sent to Manuel Varlet, The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith NSW 2751, Australia, or via e-mail: m.varlet@westernsydney.edu.au.

REFERENCES

Antunes, F. M., & Malmierca, M. S. (2014). An overview of stimulus-specific adaptation in the auditory thalamus. Brain Topography, 27, 480–499.
Benoit, C.-E., Dalla Bella, S., Farrugia, N., Obrig, H., Mainka, S., & Kotz, S. A. (2014). Musically cued gait-training improves both perceptual and motor timing in Parkinson's disease. Frontiers in Human Neuroscience, 8, 494.
Bernardi, L., Porta, C., Casucci, G., Balsamo, R., Bernardi, N. F., Fogari, R., et al. (2009). Dynamic interactions between musical, cardiovascular, and cerebral rhythms in humans. Circulation, 119, 3171–3180.
Billauer, E. (2012). Peakdet: Peak detection using MATLAB.
Bood, R. J., Nijssen, M., Van Der Kamp, J., & Roerdink, M. (2013). The power of auditory–motor synchronization in sports: Enhancing running performance by coupling cadence with the right beats. PLoS One, 8, e70758.
Bouvet, C. J., Varlet, M., Dalla Bella, S., Keller, P. E., & Bardy, B. G. (2019). Accent-induced stabilization of spontaneous auditory–motor synchronization. Psychological Research, 84, 2196–2209.
Bouvet, C. J., Varlet, M., Dalla Bella, S., Keller, P. E., Zelic, G., & Bardy, B. G. (2019). Preferred frequency ratios for spontaneous auditory–motor synchronization: Dynamical stability and hysteresis. Acta Psychologica, 196, 33–41.
Bouwer, F. L., Burgoyne, J. A., Odijk, D., Honing, H., & Grahn, J. A. (2018). What makes a rhythm complex? The influence of musical training and accent type on beat perception. PLoS One, 13, e0190322.
Chemin, B., Mouraux, A., & Nozaradan, S. (2014). Body movement selectively shapes the neural representation of musical rhythms. Psychological Science, 25, 2147–2159.
Cochen De Cock, V., Dotov, D. G., Ihalainen, P., Bégel, V., Galtier, F., Lebrun, C., et al. (2018). Rhythmic abilities and musical training in Parkinson's disease: Do they help? NPJ Parkinson's Disease, 4, 8.
Collier, G. L., & Wright, C. E. (1995). Temporal rescaling of simple and complex ratios in rhythmic tapping. Journal of Experimental Psychology: Human Perception and Performance, 21, 602–627.
Collyer, C. E., Broadbent, H. A., & Church, R. M. (1994). Preferred rates of repetitive tapping and categorical time production. Perception & Psychophysics, 55, 443–453.
Coste, A., Salesse, R. N., Gueugnon, M., Marin, L., & Bardy, B. G. (2018). Standing or swaying to the beat: Discrete auditory rhythms entrain stance and promote postural coordination stability. Gait & Posture, 59, 28–34.
Cvitanovic, P., Shraiman, B., & Söderberg, B. (1985). Scaling laws for mode lockings in circle maps. Physica Scripta, 32, 263.
Dalla Bella, S. (2020). The use of rhythm in rehabilitation for patients with movement disorders. In L. Cuddy, S. Belleville, & A. Moussard (Eds.), Music and the aging brain (pp. 383–406). Cambridge: Academic Press.
Dalla Bella, S., Dotov, D., Bardy, B., & de Cock, V. C. (2018). Individualization of music-based rhythmic auditory cueing in Parkinson's disease. Annals of the New York Academy of Sciences, 1423, 308–317.
Dawe, L. A., Platt, J. R., & Racine, R. J. (1993). Harmonic accents in inference of metrical structure and perception of rhythm patterns. Perception & Psychophysics, 54, 794–807.
Dawe, L. A., Platt, J. R., & Racine, R. J. (1995). Rhythm perception and differences in accent weights for musicians and nonmusicians. Perception & Psychophysics, 57, 905–914.
Demos, A. P., Chaffin, R., Begosh, K. T., Daniels, J. R., & Marsh, K. L. (2012). Rocking to the beat: Effects of music and partner's movements on spontaneous interpersonal coordination. Journal of Experimental Psychology: General, 141, 49–53.
Doelling, K. B., Assaneo, M. F., Bevilacqua, D., Pesaran, B., & Poeppel, D. (2019). An oscillator model better predicts cortical entrainment to music. Proceedings of the National Academy of Sciences, U.S.A., 116, 10113–10121.
Drake, C. (1993). Reproduction of musical rhythms by children, adult musicians, and adult non-musicians. Perception & Psychophysics, 53, 25–33.
Drake, C., Jones, M. R., & Baruch, C. (2000). The development of rhythmic attending in auditory sequences: Attunement, referent period, focal attending. Cognition, 77, 251–288.
Edagawa, K., & Kawasaki, M. (2017). Beta phase synchronization in the frontal–temporal–cerebellar network during auditory-to-motor rhythm learning. Scientific Reports, 7, 42721.
Ellis, R. J., & Jones, M. R. (2009). The role of accent salience and joint accent structure in meter perception. Journal of Experimental Psychology: Human Perception and Performance, 35, 264–280.
Etani, T., Miura, A., Okano, M., Shinya, M., & Kudo, K. (2019). Accent stabilizes 1:2 sensorimotor synchronization of rhythmic knee flexion–extension movement in upright stance. Frontiers in Psychology, 10, 888.
Fink, P. W., Foo, P., Jirsa, V. K., & Kelso, J. A. (2000). Local and global stabilization of coordination by sensory information. Experimental Brain Research, 134, 9–20.
Fraisse, P. (1956). Les structures rhythmiques: Étude psychologique. Paris-Bruxelles: Publications Universitaires de Louvain.
Fraisse, P. (1974). Cues in sensori-motor synchronization. In L. E. Scheving, F. Halberg, & J. E. Pauly (Eds.), Chronobiology (pp. 517–522). Tokyo: Igaku Shoin.
Fraisse, P. (1982). Rhythm and tempo. In D. Deutsch (Ed.), The psychology of music (pp. 149–180). Orlando, FL: Academic Press.
Fujioka, T., Ross, B., & Trainor, L. J. (2015). Beta-band oscillations represent auditory beat and its metrical hierarchy in perception and imagery. Journal of Neuroscience, 35, 15187–15198.
Fujioka, T., Zendel, B. R., & Ross, B. (2010). Endogenous neuromagnetic activity for mental hierarchy of timing. Journal of Neuroscience, 30, 3458–3466.
Grahn, J. A., & Rowe, J. B. (2009). Feeling the beat: Premotor and striatal interactions in musicians and non-musicians during beat perception. Journal of Neuroscience, 29, 7540–7548.
Handy, T. C. (2005). Event-related potentials: A methods handbook. Cambridge, MA: MIT Press.
Hardy, G. H., & Wright, E. M. (1979). An introduction to the theory of numbers. Oxford: Oxford University Press.
Hattori, Y., Tomonaga, M., & Matsuzawa, T. (2015). Distractor effect of auditory rhythms on self-paced tapping in chimpanzees and humans. PLoS One, 10, e0130682.
Hoffmann, C. P., & Bardy, B. G. (2015). Dynamics of the locomotor–respiratory coupling at different frequencies. Experimental Brain Research, 233, 1551–1561.
Jones, M. R. (1976). Time, our lost dimension: Toward a new theory of perception, attention, and memory. Psychological Review, 83, 323–355.
Kelso, J. A. S., & De Guzman, C. (1988). Order in time: How cooperation between the hands informs the design of the brain. In H. Haken (Ed.), Neural and synergetic computers (pp. 180–196). Berlin, Germany: Springer.
Kudo, K., Park, H., Kay, B. A., & Turvey, M. T. (2006). Environmental coupling modulates the attractors of rhythmic coordination. Journal of Experimental Psychology: Human Perception and Performance, 32, 599–609.
Large, E. W. (2008). Resonating to musical rhythm: Theory and experiment. In S. Grondin (Ed.), The psychology of time (pp. 189–231). Bingley, UK: Emerald Publishers.
Lenc, T., Keller, P. E., Varlet, M., & Nozaradan, S. (2018a). Reply to Novembre and Iannetti: Conceptual and methodological issues. Proceedings of the National Academy of Sciences, U.S.A., 115, E11004.
Lenc, T., Keller, P. E., Varlet, M., & Nozaradan, S. (2018b). Neural tracking of the musical beat is enhanced by low-frequency sounds. Proceedings of the National Academy of Sciences, U.S.A., 115, 8221–8226.
Lenc, T., Keller, P. E., Varlet, M., & Nozaradan, S. (2019). Reply to Rajendran and Schnupp: Frequency tagging is sensitive to the temporal structure of signals. Proceedings of the National Academy of Sciences, U.S.A., 116, 2781–2782.
Lerdahl, F., & Jackendoff, R. (1983). A generative theory of tonal music. Cambridge, MA: MIT Press.
London, J. (2012). Hearing in time: Psychological aspects of musical meter (2nd ed.). Oxford, UK: Oxford University Press.
Moelants, D. (2002). Preferred tempo reconsidered. In C. Stevens, D. Burnham, G. McPherson, E. Schubert, & J. Renwick (Eds.), Proceedings of the 7th International Conference on Music Perception and Cognition, Sydney, Australia (pp. 580–583). Adelaide, South Australia: Causal Productions.
Mouraux, A., Iannetti, G. D., Colon, E., Nozaradan, S., Legrain, V., & Plaghki, L. (2011). Nociceptive steady-state evoked potentials elicited by rapid periodic thermal stimulation of cutaneous nociceptors. Journal of Neuroscience, 31, 6079–6087.
Näätänen, R., Paavilainen, P., Rinne, T., & Alho, K. (2007). The mismatch negativity (MMN) in basic research of central auditory processing: A review. Clinical Neurophysiology, 118, 2544–2590.
Novembre, G., & Iannetti, G. D. (2018). Tagging the musical beat: Neural entrainment or event-related potentials? Proceedings of the National Academy of Sciences, U.S.A., 115, E11002–E11003.
Nozaradan, S., Mouraux, A., & Cousineau, M. (2017). Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences. Journal of Neurophysiology, 118, 243–253.
Nozaradan, S., Peretz, I., & Keller, P. E. (2016). Individual differences in rhythmic cortical entrainment correlate with predictive behavior in sensorimotor synchronization. Scientific Reports, 6, 20612.
Nozaradan, S., Peretz, I., Missal, M., & Mouraux, A. (2011). Tagging the neuronal entrainment to beat and meter. Journal of Neuroscience, 31, 10234–10240.
Nozaradan, S., Peretz, I., & Mouraux, A. (2012). Selective neuronal entrainment to the beat and meter embedded in a musical rhythm. Journal of Neuroscience, 32, 17572–17581.
Nozaradan, S., Schönwiesner, M., Keller, P. E., Lenc, T., & Lehmann, A. (2018). Neural bases of rhythmic entrainment in humans: Critical transformation between cortical and lower-level representations of auditory rhythm. European Journal of Neuroscience, 47, 321–332.
Nozaradan, S., Zerouali, Y., Peretz, I., & Mouraux, A. (2013). Capturing with EEG the neural entrainment and coupling underlying sensorimotor synchronization to the beat. Cerebral Cortex, 25, 736–747.
Oostenveld, R., Fries, P., Maris, E., & Schoffelen, J. M. (2011). FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience, 2011, 156869.
Palmer, C., & Krumhansl, C. L. (1990). Mental representations for musical meter. Journal of Experimental Psychology: Human Perception and Performance, 16, 728–741.
Peckel, M., Pozzo, T., & Bigand, E. (2014). The impact of the perception of rhythmic music on self-paced oscillatory movements. Frontiers in Psychology, 5, 1037.
Peper, C. E., Beek, P. J., & van Wieringen, P. C. (1995a). Frequency-induced phase transitions in bimanual tapping. Biological Cybernetics, 73, 301–309.
Peper, C. E., Beek, P. J., & van Wieringen, P. C. (1995b). Multifrequency coordination in bimanual tapping: Asymmetrical coupling and signs of supercriticality. Journal of Experimental Psychology: Human Perception and Performance, 21, 1117–1138.
Pfleiderer, L. M., Steidl-Müller, L., Schiltges, J., & Raschner, C. (2019). Effects of synchronous, auditory stimuli on running performance and heart rate. Current Issues in Sport Science, 4, 5.
Povel, D.-J. (1981). Internal representation of simple temporal patterns. Journal of Experimental Psychology: Human Perception and Performance, 7, 3–18.
Rajendran, V. G., Harper, N. S., Garcia-Lazaro, J. A., Lesica, N. A., & Schnupp, J. W. (2017). Midbrain adaptation may set the stage for the perception of musical beat. Proceedings of the Royal Society of London, Series B, Biological Sciences, 284, 20171455.
Rajendran, V. G., & Schnupp, J. W. (2019). Frequency tagging cannot measure neural tracking of beat or meter. Proceedings of the National Academy of Sciences, U.S.A., 116, 2779–2780.
Repp, B. H. (2003). Rate limits in sensorimotor synchronization with auditory and visual sequences: The synchronization threshold and the benefits and costs of interval subdivision. Journal of Motor Behavior, 35, 355–370.
Repp, B. H. (2005). Rate limits of on-beat and off-beat tapping with simple auditory rhythms: 2. The roles of different kinds of accent. Music Perception, 23, 165–188.
Repp, B. H., & Su, Y.-H. (2013). Sensorimotor synchronization: A review of recent research (2006–2012). Psychonomic Bulletin & Review, 20, 403–452.
Roerdink, M., Lamoth, C. J. C., van Kordelaar, J., Elich, P., Konijnenbelt, M., Kwakkel, G., et al. (2009). Rhythm perturbations in acoustically paced treadmill walking after stroke. Neurorehabilitation and Neural Repair, 23, 668–678.
Schaefer, R. S., Vlek, R. J., & Desain, P. (2011). Decomposing rhythm processing: Electroencephalography of perceived and self-imposed rhythmic patterns. Psychological Research, 75, 95–106.
Schurger, A., Faivre, N., Cammoun, L., Trovó, B., & Blanke, O. (2017). Entrainment of voluntary movement to undetected auditory regularities. Scientific Reports, 7, 14867.
Semjen, A., & Vos, P. G. (2002). The impact of metrical structure on performance stability in bimanual 1:3 tapping. Psychological Research, 66, 50–59.
Treffner, P. J., & Turvey, M. T. (1993). Resonance constraints on rhythmic movement. Journal of Experimental Psychology: Human Perception and Performance, 19, 1221–1237.
Van Dyck, E., Moens, B., Buhmann, J., Demey, M., Coorevits, E., Dalla Bella, S., et al. (2015). Spontaneous entrainment of running cadence to music tempo. Sports Medicine - Open, 1, 15.
Varlet, M., Nozaradan, S., Nijhuis, P., & Keller, P. E. (2020). Neural tracking and integration of ‘self’ and ‘other’ in improvised interpersonal coordination. Neuroimage, 206, 116303.
Varlet, M., Williams, R., Bouvet, C., & Keller, P. E. (2018). Single (1:1) vs. double (1:2) metronomes for the spontaneous entrainment and stabilisation of human rhythmic movements. Experimental Brain Research, 236, 3341–3350.
Varlet, M., Williams, R., & Keller, P. E. (2020). Effects of pitch and tempo of auditory rhythms on spontaneous movement entrainment and stabilisation. Psychological Research, 84, 568–584.
Zelic, G., Varoqui, D., Kim, J., & Davis, C. (2017). A flexible and accurate method to estimate the mode and stability of spontaneous coordinated behaviors: The index-of-stability (IS) analysis. Behavior Research Methods, 50, 182–194.