Abstract
Perceiving speech requires the integration of different speech cues, such as formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (TACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. In contrast to our initial predictions, we found that gamma TACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs interhemispheric integration of speech cues, possibly because the applied stimulation perturbs an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying anti-phase delta TACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency.
INTRODUCTION
Our ability to discriminate and assign meaning to speech sounds relies on the identification and integration of spectrotemporal cues carried by the acoustic speech signal. A common approach to study the mechanisms underlying speech cue integration is to present distinct cues to different ears (dichotic listening) and to investigate the conditions under which they give rise to the subjective experience of an integrated, unified speech sound (Preisig & Sjerps, 2019; Mathiak, Hertrich, Lutzenberger, & Ackermann, 2001; Liberman & Mattingly, 1989; Rand, 1974). Although the auditory nerve projects from each ear to both cerebral hemispheres, processing of acoustic input is initially dominant in the neural pathway, including the auditory cortex, contralateral to the ear of presentation (Pollmann, Maertens, von Cramon, Lepsien, & Hugdahl, 2002; Sparks & Geschwind, 1968; Kimura, 1967; for reviews, see Hugdahl & Westerhausen, 2016; Westerhausen & Hugdahl, 2008). Therefore, the unification of the binaurally presented speech cues requires interhemispheric integration, that is, the grouping and fusion of cues that are initially processed by different cerebral hemispheres. Moreover, processing of speech and language, for example, phoneme recognition, is dominant in the left hemisphere (Mesgarani, Cheung, Johnson, & Chang, 2014; Giraud & Poeppel, 2012; Chang et al., 2010; Obleser, Zimmermann, Van Meter, & Rauschecker, 2007; Jäncke, 2002; Zatorre & Belin, 2001). Thus, the integration of binaurally presented speech cues may require interhemispheric transfer of information from the right to the left auditory cortex via the corpus callosum, as described in the so-called callosal relay model (Steinmann et al., 2014, 2018; Bayazıt, Oniz, Hahn, Güntürkün, & Ozgören, 2009; Westerhausen, Grüner, Specht, & Hugdahl, 2009; Jäncke, 2002).
The interhemispheric transfer and integration of sensory information has been suggested to be facilitated through phase synchronization between neural oscillations in the two hemispheres (Fell & Axmacher, 2011; Fries, 2005). Hence, interhemispheric phase synchronization may play a crucial role in the integration of dichotic speech cues. In support of this idea, Steinmann et al. (2014) have shown modulation of interhemispheric gamma (30–100 Hz) phase synchronization during dichotic speech listening. More precisely, increased gamma functional connectivity was observed in a condition requiring transfer of speech cues for phoneme recognition, and this connectivity was directed from the right to the left secondary auditory cortex (Steinmann et al., 2018). Thus, interhemispheric gamma-band phase synchronization in the posterior superior temporal cortex appears to play a role in the interhemispheric integration of speech.
It is still unclear whether this synchronization contributes functionally to speech integration or merely results from it (Zaehle, Lenz, Ohl, & Herrmann, 2010). Moreover, it is unclear whether its role is limited to oscillations in the gamma range. Besides their role in interhemispheric integration of speech, cortical oscillations in the lower gamma range (25–40 Hz) are also important for the processing of phonetic information such as formant transitions or voicing (Rufener, Oechslin, Zaehle, & Meyer, 2016; Giraud & Poeppel, 2012; Shamir, Ghitza, Epstein, & Kopell, 2009; Poeppel, 2003). Slower oscillations in the delta and theta (∼1–8 Hz) bands overlap with intelligibility-relevant temporal fluctuations in the acoustic speech signal and may contribute functionally to the processing of syllabic information during diotic speech perception (Riecke, Formisano, Sorger, Başkent, & Gaudrain, 2018; Zoefel, Archer-Boyd, & Davis, 2018; Keitel, Ince, Gross, & Kayser, 2017; Rimmele, Zion Golumbic, Schröger, & Poeppel, 2015; Gross et al., 2013; Luo & Poeppel, 2007; for a comprehensive review, see Kösem & van Wassenhove, 2017). Thus, these slow oscillations may contribute to diotic speech perception, but there is as yet no evidence that they play a role in interhemispheric integration.
In this study, we investigated the mechanisms underlying dichotic speech cue integration. We tested the hypothesis that interhemispheric phase synchronization plays a functional role in interhemispheric speech integration. We experimentally manipulated interhemispheric phase synchronization by applying transcranial alternating current stimulation (TACS) simultaneously above the auditory cortex in the lateral superior temporal lobe of each hemisphere. To functionally couple the two regions, we fixed the phase of TACS across the two stimulation sites (in-phase condition). Conversely, to functionally decouple the two regions, we reversed the phase of TACS at one site (anti-phase condition; Preisig, Sjerps, Kösem, & Riecke, 2019; Saturnino, Madsen, Siebner, & Thielscher, 2017). This approach has already been successfully applied to modulate bistable perception in the visual domain (Helfrich et al., 2014). To test for a specific role of gamma oscillations in interhemispheric speech sound integration, we applied TACS at 40 Hz (gamma condition). Furthermore, we included 3.125-Hz TACS (delta condition) and sham stimulation as control conditions to allow us to establish the frequency specificity of the putative effect of gamma TACS on interhemispheric speech integration.
Interhemispheric speech integration was assessed using a dichotic listening task. An ambiguous speech sound (“base,” perceptually intermediate between the syllables /ga/ and /da/) was presented to the participants' right ear and a disambiguating acoustic cue (“chirp,” which was either a low or high third formant, F3) to their left ear. Interhemispheric integration of the base and the chirp is reflected by an increased number of /ga/ reports in the low-F3 condition and an increased number of /da/ reports in the high-F3 condition (Preisig & Sjerps, 2019).
We predicted that interhemispheric phase synchronization (in-phase condition) would significantly increase interhemispheric speech integration, as reflected by an increased number of /ga/ reports in the low-F3 condition and an increased number of /da/ reports in the high-F3 condition, compared with interhemispheric phase desynchronization (anti-phase condition). Furthermore, we predicted that functional coupling of bilateral auditory cortices in the gamma, but not delta frequency band, would strengthen interhemispheric speech integration, compared with sham stimulation.
METHODS
Participants
Thirty-six right-handed volunteers (M = 22.56 years, SD = 2.93; 14 men) participated in the study. All participants had normal or corrected-to-normal visual acuity. The participants reported no history of neurological, psychiatric, or hearing disorders, and all had normal hearing (hearing thresholds of less than 25 dB HL at 250, 500, 750, 1000, 1500, 3000, and 4000 Hz, tested on both ears separately using pure tone audiometry) and no threshold difference between the left and the right ear larger than 5 dB for any of the tested frequencies. All participants gave written informed consent before the experiment. Ethical approval to conduct this study was provided by the local ethics committee (CMO region Arnhem-Nijmegen). This study was conducted in accordance with the principles of the latest version of the Declaration of Helsinki.
Electric Stimulation
Electric currents were applied through two high-density electrode configurations, each consisting of concentric rubber electrodes: a central circular electrode (radius = 1.25 cm) and a surrounding ring electrode (inner radius = 3.5 cm, outer radius = 4.8 cm). Each electrode configuration was connected to a separate battery-driven transcranial current stimulator (Neuroconn, Ilmenau, Germany), similar to previous two-channel approaches (Ten Oever et al., 2016; Riecke, Formisano, Herrmann, & Sack, 2015). The electrode configurations were centered according to the international 10–20 system over CP5 (above the left cerebral hemisphere) and CP6 (above the right cerebral hemisphere; see Figure 1). These scalp locations were chosen to produce relatively strong currents in the target auditory speech areas (i.e., the left and right lateral superior temporal lobe), as suggested by prior electric field simulations on a standard head model using the SimNIBS toolbox (Thielscher, Antunes, & Saturnino, 2015).
TACS was applied at a frequency in the low gamma band (40 Hz) or in the delta band (3.125 Hz); the latter matched the timescale of the syllabic envelope, that is, the duration of the syllable matched the length of a half cycle of the delta TACS (160 msec). Before starting the actual experiment, we ensured that all participants tolerated the TACS well. TACS intensity was adjusted individually to the point at which the participant reported feeling comfortable or being uncertain about the presence of the current (1.4 ± 0.1 mA peak-to-peak, mean ± SD across participants). Impedance was kept below 10 kΩ. The average current density was 0.2 mA/cm² at the center electrode and 0.06 mA/cm² at the concentric ring electrode. Stimulation was ramped on and off over the first and last 10 sec of each experimental block using raised cosine ramps.
The timing of the electric and auditory stimuli was controlled using a multichannel D/A converter (National Instruments, sampling rate: 11 kHz) and Datastreamer software (Ten Oever et al., 2016). Visual stimulation and response recording were controlled using Presentation software (Version 18.0, Neurobehavioral Systems, Inc., Berkeley, CA).
Behavioral Pretest
Interhemispheric speech sound integration was assessed by simultaneously presenting an ambiguous base and a disambiguating chirp (F3) to the right and left ear, respectively. The chirp supported either a /da/ (high F3 ∼ 2.9 kHz) or a /ga/ (low F3 ∼ 2.5 kHz) interpretation of the ambiguous base. Because perceptual category boundaries may vary across individuals, a pretest was used to define participant-specific ambiguous base stimuli for the main experiment. The pretest included the presentation of nine stimuli of the /da/–/ga/ continuum, each 16 times, in random order. To make this pretest most similar to the main experiment, syllables of the /da/–/ga/ continuum were presented to the right ear, and a single F3 chirp (identical to the F3 component in the ambiguous base stimulus) to the left ear. Subjective category boundaries were estimated by assessing individual psychometric curves and identifying the point at which participants reported perceiving the stimulus as /da/ or /ga/ in ∼50% of the trials. The stimulus associated with this individual category boundary was then used as the base stimulus for the subsequent main experiment (Preisig & Sjerps, 2019). Further detail concerning stimulus creation is reported in a previous publication using the same materials (Preisig & Sjerps, 2019).
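For illustration, the boundary estimation can be sketched as follows (a minimal R sketch; the data frame and column names are hypothetical, not the code used in the study):

```r
# Sketch: estimate the subjective /da/-/ga/ category boundary from the pretest.
# Assumes a data frame 'pretest' with one row per trial: 'step' (continuum
# step, 1-9) and 'resp_ga' (1 = /ga/ response, 0 = /da/ response).
fit <- glm(resp_ga ~ step, data = pretest, family = binomial)

# The 50% point of the fitted logistic curve lies where the linear predictor
# equals 0, that is, at step = -intercept / slope.
boundary <- -coef(fit)[1] / coef(fit)[2]

# The continuum step closest to this boundary serves as the ambiguous base.
base_step <- round(boundary)
```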
Experimental Design and Task
The experiment included four stimulation conditions and sham stimulation. Electric stimulation was applied at one of two frequencies: 40 Hz or 3.125 Hz. Each of these frequency conditions was presented in two interhemispheric phase synchronization conditions: (A) “In-phase stimulation” was applied with a phase lag of 0° between the central electrodes placed over the left and the right auditory speech areas (i.e., bilateral superior temporal lobe) to induce interhemispheric synchronization. (B) “Anti-phase stimulation” was applied with a relative phase lag of 180° to induce interhemispheric desynchronization (Preisig et al., 2019; Saturnino et al., 2017; see Figure 1). During “sham stimulation” (placebo), the onset ramp was followed immediately by an offset ramp; that is, no electric stimulation was applied during the actual experiment. The ramp was repeated at the end of the block.
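The construction of the two stimulation channels can be illustrated as follows (a minimal R sketch under hypothetical variable names and block duration; the actual waveforms were generated via the D/A converter described above):

```r
# Sketch: two-channel TACS waveforms with a 0 deg (in-phase) or 180 deg
# (anti-phase) interhemispheric lag and 10-sec raised cosine on/off ramps.
fs  <- 11000                 # D/A sampling rate (Hz)
f   <- 40                    # TACS frequency: 40 or 3.125 Hz
dur <- 300                   # block duration in sec (illustrative value)
amp <- 0.7                   # amplitude in mA (i.e., 1.4 mA peak-to-peak)
lag <- pi                    # 0 = in-phase, pi = anti-phase

t     <- seq(0, dur - 1 / fs, by = 1 / fs)
left  <- amp * sin(2 * pi * f * t)         # channel centered on CP5
right <- amp * sin(2 * pi * f * t + lag)   # channel centered on CP6

# 10-sec raised cosine ramps at block onset and offset
n    <- 10 * fs
ramp <- rep(1, length(t))
ramp[1:n] <- (1 - cos(pi * (0:(n - 1)) / n)) / 2
ramp[(length(t) - n + 1):length(t)] <- rev(ramp[1:n])
left  <- left * ramp
right <- right * ramp
# Sham: the onset ramp is immediately followed by the offset ramp.
```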
The experiment consisted of 10 experimental blocks. Each block consisted of 48 trials containing the ambiguous base stimulus (for which the F3 frequency was set at the participant-specific subjective category boundary value obtained in the pretest) and 12 trials containing unambiguous base stimuli (which contained an F3 component that supported a clear /da/ interpretation [∼2.9 kHz] or a clear /ga/ interpretation [∼2.5 kHz]) presented to the right ear. The ambiguous base stimulus was paired with a disambiguating F3 chirp presented to the left ear (24 trials with the high F3 chirp and 24 trials with the low F3 chirp). In the unambiguous stimuli, a chirp with the same F3 frequency as the base was presented to the left ear. Unambiguous stimuli did not require interhemispheric integration for disambiguation because participants could readily identify these stimuli based on monaural input alone, that is, the unambiguous base stimulus presented to the right ear. The first half of the experiment included five blocks of trials: gamma in-phase TACS, gamma anti-phase TACS, delta in-phase TACS, delta anti-phase TACS, and sham. The order of these five blocks was reversed in the second half of the experiment. The order of all blocks was pseudorandomized such that blocks of the same TACS frequency followed one another, and the order was counterbalanced across participants (see the sketch below). This pseudorandomization scheme was used to account for potential cross-frequency carryover effects (Vossen, Gross, & Thut, 2015).
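The constraint that same-frequency blocks be adjacent, with the second half mirroring the first, can be sketched as follows (illustrative R code; the condition labels and the exact counterbalancing across participants are our own assumptions):

```r
# Sketch: build a 10-block order in which blocks of the same TACS frequency
# are adjacent and the second half reverses the first.
pairs <- list(c("gamma_in", "gamma_anti"), c("delta_in", "delta_anti"))
pairs <- lapply(sample(pairs), sample)    # shuffle frequency order and phase order
first_half  <- append(unlist(pairs), "sham", after = 2 * sample(0:2, 1))
block_order <- c(first_half, rev(first_half))
```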
After each block, participants were asked to rate the subjective strength of any sensations induced by the stimulation on a visual analogue scale from 0 cm (no subjective sensations) to 10 cm (strong subjective sensations). Although sensation ratings were relatively low in all conditions, TACS blocks (M = 2.64, SD = 1.38) were rated significantly higher than sham blocks (M = 1.77, SD = 1.46), t(25) = 3.11, p < .01. However, even though participants rated TACS and sham blocks differently (Zoefel, Allard, Anil, & Davis, 2020; Turi et al., 2019), it is unlikely that this influenced our main results, as we found no association between sensation ratings and behavioral performance across stimulation conditions, Pearson's r(128) = −.14, p = .10.
The auditory stimuli were presented with an average ISI of 3.5 sec. The exact ISI was set so that the syllable onset occurred at one of six predefined, equidistant TACS phases (TACS/syllable onset lag: 30°, 90°, 150°, 210°, 270°, 330°). This made it possible to compensate for individual differences in the optimal relative TACS/syllable timing (Zoefel, Davis, Valente, & Riecke, 2019; Riecke et al., 2018; Riecke, Formisano, et al., 2015; Riecke, Sack, & Schroeder, 2015), with the aim of improving the detectability of putative stimulation effects in the group-level analysis. In this study, we did not observe any effect of TACS/syllable onset lag (Figure 2); thus, the behavioral data were pooled across the six TACS/syllable onset lags for each stimulation condition. Every stimulus was preceded by a fixation cross presented 600 msec before auditory stimulus onset. At 1450 msec after the fixation cross onset, the response options /ga/ and /da/ were presented (one above and one below the fixation cross, falling within a visual angle of 9.43°). The participants indicated their response by pressing the corresponding response button with their left index finger.1 Participants were instructed to respond as accurately and as quickly as possible. The position (up vs. down) of the response options was counterbalanced across participants.
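The phase-dependent onset timing can be illustrated as follows (an R sketch; the function and variable names are hypothetical):

```r
# Sketch: delay a trial's nominal onset so that the syllable starts at one of
# six equidistant TACS phases.
onset_for_phase <- function(t_nominal, f, phase_deg) {
  phase_now <- (360 * f * t_nominal) %% 360       # TACS phase at nominal onset
  t_nominal + ((phase_deg - phase_now) %% 360) / (360 * f)
}

target_phases <- c(30, 90, 150, 210, 270, 330)
# e.g., a 40-Hz trial scheduled nominally 3.5 sec after the previous one:
onset_for_phase(t_nominal = 3.5, f = 40, phase_deg = sample(target_phases, 1))
```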
Data Analysis
In a first step, we assessed the reliability of the categorical judgments of individual participants on unambiguous endpoint trials (base and chirp stimuli with the same F3 endpoint, supporting the interpretation of /ga/ or /da/) collected during the sham blocks. For each participant, we tested with a chi-square test whether the proportion of /ga/ responses differed between /ga/ and /da/ endpoint stimuli. Based on this criterion, the data of four participants were excluded from further analyses because their classification accuracy did not significantly exceed chance level. One additional participant was excluded because of a technical error during the experiment. Thus, the final data set included data from 31 participants (M = 22.63 years, SD = 3.20, 12 men).
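This reliability check can be sketched as follows (an R sketch with hypothetical names):

```r
# Sketch: per-participant reliability check on unambiguous endpoint trials
# from the sham blocks. Assumes a data frame 'endpoints' with 'stimulus'
# (/da/ or /ga/ endpoint) and 'resp_ga' (1 = /ga/ response, 0 = /da/).
tab     <- table(endpoints$stimulus, endpoints$resp_ga)
test    <- chisq.test(tab)
exclude <- test$p.value >= .05  # exclude if responses do not track the endpoints
```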
Two dependent variables were analyzed: the categorical response on each individual trial (0 = /da/; 1 = /ga/) and the proportion of responses consistent with the presented F3 chirp (i.e., those in which interhemispheric integration occurred), per condition. These variables were computed based on participants' responses to the stimuli requiring interhemispheric integration, that is, the stimuli composed of an ambiguous base and a disambiguating F3 chirp. For each stimulation condition, the proportion of integrated trials was calculated per TACS/syllable onset lag, and these values were concatenated to build a behavioral time series. To compensate for individual differences (Figure 2), the maximum (best lag) of the time series was subsequently aligned across individuals. Because we did not observe any effect of TACS/syllable onset lag, the behavioral time series were pooled across the six TACS/syllable onset lags for each stimulation condition. Statistical analyses were conducted in R (Version 3.3.3) using parametric tests (the normality assumption was fulfilled: the dependent variable was normally distributed in each condition, Shapiro–Wilk tests, ps > .19). Linear mixed-effect models were used to analyze categorical responses, and repeated-measures ANOVAs were used to test for a stimulation effect, an interhemispheric phase effect, a frequency effect, and their interactions. Post hoc comparisons were conducted using paired t tests, and false discovery rate (FDR) corrections for multiple comparisons were applied (Benjamini & Hochberg, 1995).
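The alignment of the behavioral time series and the FDR correction can be sketched as follows (an R sketch; 'series' and 'p_values' are hypothetical names):

```r
# Sketch: align a participant's six-lag time series to their best lag via a
# circular shift; 'series' is a length-6 vector of integration proportions.
align_to_best <- function(series) {
  best <- which.max(series)
  c(series[best:length(series)], series[seq_len(best - 1)])
}

# Benjamini-Hochberg FDR correction of post hoc p values:
p_fdr <- p.adjust(p_values, method = "BH")
```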
RESULTS
The average classification accuracy (%) for unambiguous stimuli (extreme points from the /ga/–/da/ continuum) during sham blocks was high (M = 90.59, SD = 8.74). For trials that required interhemispheric integration, participants integrated the information from the F3 chirp on average in 73.4 ± 9.7% (mean ± SEM) of the trials that included an ambiguous base stimulus. First, we tested whether participants' responses to ambiguous base stimuli were influenced by the frequency of the disambiguating F3 chirp presented to the contralateral ear. For this analysis, we only included sham blocks. We observed that participants gave on average 34.12 ± 10.08% (mean ± SEM) /ga/ responses to ambiguous bases combined with the high (∼2.9 kHz) F3 chirp and 80.92 ± 9.14% (mean ± SEM) /ga/ responses to ambiguous bases combined with the low (∼2.5 kHz) F3 chirp. To confirm that the chirp F3 frequency influenced participants' responses (0 = /da/; 1 = /ga/), a logistic linear mixed-effect model with the fixed factor “chirp type” (levels: high F3 = −1; low F3 = 1) and by-participant random intercepts and slopes was fitted to the data. The analysis revealed a main effect of chirp type (B = 2.733, z = 13.052, p < .001). This result indicates that interhemispheric speech integration occurred (i.e., the participants integrated the chirp and the contralateral ambiguous base) during the sham blocks, as we expected (Figure 3A).
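A model of this form can be fitted with the lme4 package in R (a sketch; the data frame and column names are hypothetical):

```r
# Sketch: logistic mixed-effects model of single-trial responses from the
# sham blocks. Assumes a data frame 'sham' with one row per trial: 'resp_ga'
# (0 = /da/, 1 = /ga/), 'chirp' (-1 = high F3, 1 = low F3), and 'subject'.
library(lme4)
m <- glmer(resp_ga ~ chirp + (1 + chirp | subject),
           data = sham, family = binomial)
summary(m)  # the fixed effect of 'chirp' indexes interhemispheric integration
```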
Figure 3B shows participants' average performance for each stimulation condition and the sham condition. In the stimulation conditions, overall performance ranged on average between 71.75% and 73.98%, whereas in the sham condition, it was significantly better (76.16%; average difference: 3.46%), t(30) = 2.89, p = .007, d = 0.37. To test whether the strength of this general stimulation effect depended on the frequency or phase synchrony of the TACS, a two-way repeated-measures ANOVA with the within-subject factors Stimulation Frequency (40 Hz, 3.125 Hz) and Interhemispheric Phase Synchronization (in-phase, anti-phase) was conducted on the difference in interhemispheric integration of the chirp relative to sham stimulation; that is, the difference between performance in each stimulation condition and the sham condition entered the analysis. Contrary to our predictions, this analysis revealed no significant Stimulation Frequency × Interhemispheric Phase Synchronization interaction, F(1, 30) = 2.59, p = .12, ηp² = .01, and no main effect of Stimulation Frequency, F(1, 30) = 0.12, p = .73, ηp² = .0004, or Interhemispheric Phase Synchronization, F(1, 30) = 0.20, p = .66, ηp² = .0008. The absence of a main effect of Interhemispheric Phase Synchronization indicates that in-phase and anti-phase stimulation did not differ significantly.
To identify the specific TACS conditions under which the stimulation effect occurred, a one-way repeated-measures ANOVA with the within-subject factor Stimulation Condition (sham, in-phase 40 Hz, anti-phase 40 Hz, in-phase 3.125 Hz, anti-phase 3.125 Hz) was conducted on the proportion of integrated trials. This analysis revealed a significant main effect of Stimulation Condition, F(4, 120) = 2.99, p = .02, ηp² = .02. Pairwise comparisons revealed significantly reduced performance in the in-phase 40-Hz condition, t(30) = −2.78, p = .049, FDR-corrected, d = −0.37, and the anti-phase 3.125-Hz condition, t(30) = −2.76, p = .049, FDR-corrected, d = −0.40, compared with sham stimulation, but not in the anti-phase 40-Hz condition, t(30) = −1.44, p = .32, FDR-corrected, d = −0.22, or the in-phase 3.125-Hz condition, t(30) = −2.16, p = .13, FDR-corrected, d = −0.29. These results indicate that the bihemispheric TACS modulated interhemispheric speech integration.
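These analyses can be sketched in R as follows (illustrative; assumes a long-format data frame 'd' with hypothetical column names):

```r
# Sketch: one-way repeated-measures ANOVA and FDR-corrected pairwise t tests.
# 'd' has one row per participant and condition: 'subject', 'cond' (sham,
# in40, anti40, in3, anti3), and 'prop' (proportion of integrated trials).
d$subject <- factor(d$subject)
d$cond    <- factor(d$cond)
summary(aov(prop ~ cond + Error(subject / cond), data = d))

# Pairwise paired t tests, Benjamini-Hochberg corrected; rows must be ordered
# by subject within each condition for the pairing to be correct.
pairwise.t.test(d$prop, d$cond, paired = TRUE, p.adjust.method = "BH")
```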
DISCUSSION
In this study, we tested the hypothesis that interhemispheric phase synchronization facilitates interhemispheric speech integration. To this end, we applied TACS simultaneously above listeners' left and right auditory speech areas (either in-phase or anti-phase) to synchronize or desynchronize the two areas and measured the effect on interhemispheric speech integration. Previous electrophysiological evidence (Steinmann et al., 2014, 2018) suggests that interhemispheric integration of speech may be causally related to phase synchronization of bilateral auditory speech areas in the gamma frequency band; no such effect has been reported for the delta frequency band. Thus, we predicted that functional coupling of bilateral auditory speech areas in the gamma, but probably not the delta, frequency band would strengthen interhemispheric speech integration, compared with functionally decoupling them.
Our results show a reduction of interhemispheric integration under gamma TACS compared with sham stimulation. This reduction was significant when gamma TACS was applied in-phase above the two cerebral hemispheres. We also observed a significant reduction when anti-phase delta TACS was applied. We found no significant difference between in-phase and anti-phase conditions for either gamma or delta TACS. Although we found a general reduction of performance during TACS versus sham stimulation, we observed no main effect or interaction in an overall ANOVA comparing these reductions across the different TACS conditions. However, the observed pattern of significant (in-phase gamma TACS, anti-phase delta TACS) and nonsignificant (anti-phase gamma TACS, in-phase delta TACS) changes in speech perception relative to sham stimulation suggests that TACS modulated interhemispheric speech cue integration.
Contrary to our prediction, in-phase, not anti-phase, gamma TACS perturbed interhemispheric speech cue integration. This finding implies that full interhemispheric phase synchronization (0° difference) at 40 Hz is not beneficial for interhemispheric speech cue integration. This observation could be related to interindividual differences in interhemispheric auditory transfer times (Henshall et al., 2012). The strongest interhemispheric integration may occur when the gamma phase in the two hemispheres differs in a manner commensurate with individual auditory transfer times. This notion is supported by findings showing that the auditory event-related N100 to dichotically presented syllables occurs at a different latency over the right versus the left auditory cortex (Eichele, Nordby, Rimol, & Hugdahl, 2005). The reported lag is on average 15 msec, which closely matches the half-cycle duration of our gamma TACS (12.5 msec). In line with this, a recent study found that anti-phase TACS applied at 40 Hz does not affect response laterality during dichotic listening (Meier et al., 2019). Critically, the authors showed in a follow-up analysis that anti-phase gamma TACS led to a reduction of interhemispheric integration, that is, a shift in response laterality toward the right ear, only in participants with intrinsic gamma phase asymmetries closer to 0°. These results correspond well with our finding that anti-phase gamma TACS, which imposes an interhemispheric lag of 12.5 msec, did not significantly perturb speech cue integration. Our current experimental design does not allow further testing of this idea; this may be done in future studies that parametrically manipulate interhemispheric phase asynchrony in multiple steps across the gamma cycle.
Other studies have reported that bilateral 40-Hz TACS perturbs phonemic processing, decreasing the discriminability of syllables with different voice onset times in young adults (Rufener, Zaehle, Oechslin, & Meyer, 2016) but increasing it in older adults (Rufener, Oechslin, et al., 2016) and in dyslexic individuals (Rufener, Krauel, Meyer, Heinze, & Zaehle, 2019). Therefore, we cannot rule out that our gamma TACS also affected local phoneme processing. We positioned our electrodes to preferentially stimulate cortical speech areas in the lateral superior temporal lobe; therefore, we believe that the observed effect originates from these areas. However, because our design did not include control regions, we cannot exclude that spreading current stimulated other regions that also contributed to the effect. In addition, gamma TACS might have affected the deployment of attentional resources, considering that unilateral 40-Hz TACS may affect performance on dichotic working memory tasks (Wöstmann, Vosskuhl, Obleser, & Herrmann, 2018).
Surprisingly, our results suggest that not only gamma-phase coupling but also delta-phase coupling plays a role in interhemispheric speech cue integration. Our observation that anti-phase delta TACS perturbed behavioral performance suggests that this type of stimulation disrupts cross-lateral transfer of speech cues as well. Previous studies using dichotic stimulus presentation did not report phase coupling in this frequency band (Steinmann et al., 2014, 2018). Therefore, we speculate that anti-phase TACS may have caused a difference in neural excitability between the hemispheres during the processing of the binaural input: When the current was positive over one site, it was negative over the contralateral site, and vice versa. This may have been particularly relevant for the delta TACS condition, in which the applied current matched the timescale of the syllabic envelope. Increased neural excitability in one hemisphere and decreased excitability in the other may have resulted in an interhemispheric difference in the effectiveness with which the dichotic syllabic components (chirp or ambiguous base) were processed. Indeed, transcranial direct current stimulation has been shown to have polarity-specific effects on temporal and spectral processing of auditory input (Heimrath, Kuehne, Heinze, & Zaehle, 2014; Schaal, Williamson, & Banissy, 2013; Zaehle, Beretta, Jäncke, Herrmann, & Sandmann, 2011; Vines, Schnider, & Schlaug, 2006).
An important additional consideration is that anti-phase delta TACS may disrupt interhemispheric cross-frequency dynamics between delta and gamma oscillations during speech perception (Giraud & Poeppel, 2012). Coupling of these frequency bands could be of particular relevance for interhemispheric integration, because regions in the left and right auditory cortex may be differently tuned with respect to these frequency bands, with a relative leftward dominance of low-gamma neural oscillations and/or a rightward dominance of slow-frequency oscillations (Flinker, Doyle, Mehta, Devinsky, & Poeppel, 2019; Bouton et al., 2018; Giraud & Poeppel, 2012; Saoud et al., 2012; Poeppel, 2003). In addition, there is evidence that right-hemispheric auditory processing may be tuned for spectral information (Preisig & Sjerps, 2019; Bouton et al., 2018) and left-hemispheric auditory processing may be tuned for temporal information (Flinker et al., 2019; Saoud et al., 2012), a theoretical framework originally formulated in the asymmetric sampling theory (Poeppel, 2003; for a similar framework, see Zatorre & Belin, 2001). In a previous study, we found that the laterality of initial chirp sound processing, that is, the ear of presentation, did not influence participants' perceptual decisions (Preisig & Sjerps, 2019). However, stimulus laterality influenced the processing speed of integration. Thus, we cannot rule out that the ear of presentation contributes to the observed TACS effect. Our current experimental design does not allow further testing of this idea; this may be done in future studies that apply cross-frequency delta–gamma TACS across the hemispheres and present the chirp to either the left or the right ear.
In summary, our results indicate that both gamma and delta TACS affect interhemispheric speech integration, but in different ways. The induced perturbations imply that interhemispheric phase coupling plays a functional role in interhemispheric speech integration.
Acknowledgments
This work was supported by the Swiss National Science Foundation (P2BEP3_168728 /PP00P1_163726) and the Janggen-Pöhn Stiftung. The authors would like to thank Brigit Knudsen, Iris Schmits, and Sarah Kemp for their assistance.
Reprint requests should be sent to Basil C. Preisig, Donders Institute for Brain, Cognition and Behaviour, Radboud University, P.O. Box 9101, Nijmegen, Gelderland 6500 HB, The Netherlands, or via e-mail: [email protected].
Note
1. Responses were given with the left index finger to engage the right motor cortex, in line with a related ongoing neuroimaging study examining speech processing in the left cerebral hemisphere.