Perceiving speech requires the integration of different speech cues, such as formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (TACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. Contrary to our initial predictions, we found that gamma TACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs interhemispheric integration of speech cues, possibly because the stimulation disrupts an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying antiphasic delta TACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency.

Our ability to discriminate and assign meaning to speech sounds relies on the identification and integration of spectrotemporal cues carried by the acoustic speech signal. A common approach to studying the mechanisms underlying speech cue integration is to present distinct cues to different ears (dichotic listening) and to investigate the conditions under which they give rise to the subjective experience of an integrated, unified speech sound (Preisig & Sjerps, 2019; Mathiak, Hertrich, Lutzenberger, & Ackermann, 2001; Liberman & Mattingly, 1989; Rand, 1974). Although the auditory nerve projects from each ear to both cerebral hemispheres, processing of acoustic input is initially dominated by the neural pathway, including the auditory cortex, that is contralateral to the ear of presentation (Pollmann, Maertens, von Cramon, Lepsien, & Hugdahl, 2002; Sparks & Geschwind, 1968; Kimura, 1967; for reviews, see Hugdahl & Westerhausen, 2016; Westerhausen & Hugdahl, 2008). Therefore, the unification of the binaurally presented speech cues requires interhemispheric integration, that is, the grouping and fusion of cues that are initially processed by different cerebral hemispheres. Moreover, processing of speech and language, for example, phoneme recognition, is dominant in the left hemisphere (Mesgarani, Cheung, Johnson, & Chang, 2014; Giraud & Poeppel, 2012; Chang et al., 2010; Obleser, Zimmermann, Van Meter, & Rauschecker, 2007; Jäncke, 2002; Zatorre & Belin, 2001). Thus, the integration of binaurally presented speech cues may require interhemispheric transfer of information from the right to the left auditory cortex via the corpus callosum, as described in the so-called callosal relay model (Steinmann et al., 2014, 2018; Bayazıt, Oniz, Hahn, Güntürkün, & Ozgören, 2009; Westerhausen, Grüner, Specht, & Hugdahl, 2009; Jäncke, 2002).

The interhemispheric transfer and integration of sensory information has been suggested to be facilitated by phase synchronization between neural oscillations in the two hemispheres (Fell & Axmacher, 2011; Fries, 2005). Hence, interhemispheric phase synchronization may play a crucial role in the integration of dichotic speech cues. In support of this idea, Steinmann et al. (2014) showed modulation of interhemispheric gamma (30–100 Hz) phase synchronization during dichotic speech listening. More precisely, increased gamma functional connectivity was observed in a condition requiring transfer of speech cues for phoneme recognition, and this connectivity was directed from the right to the left secondary auditory cortex (Steinmann et al., 2018). Together, these findings suggest that interhemispheric gamma-band phase synchronization in the posterior superior temporal cortex contributes to the interhemispheric integration of speech.

It is still unclear whether this synchronization contributes functionally to speech integration or merely results from it (Zaehle, Lenz, Ohl, & Herrmann, 2010). Moreover, it is unclear whether its role is limited to oscillations in the gamma range. Besides their role in interhemispheric integration of speech, cortical oscillations in the lower gamma range (25–40 Hz) are also important for the processing of phonetic information such as formant transitions or voicing (Rufener, Oechslin, Zaehle, & Meyer, 2016; Giraud & Poeppel, 2012; Shamir, Ghitza, Epstein, & Kopell, 2009; Poeppel, 2003). Slower oscillations in the delta and theta (∼1–8 Hz) bands overlap with intelligibility-relevant temporal fluctuations in the acoustic speech signal and may contribute functionally to the processing of syllabic information during diotic speech perception (Riecke, Formisano, Sorger, Başkent, & Gaudrain, 2018; Zoefel, Archer-Boyd, & Davis, 2018; Keitel, Ince, Gross, & Kayser, 2017; Rimmele, Zion Golumbic, Schröger, & Poeppel, 2015; Gross et al., 2013; Luo & Poeppel, 2007; for a comprehensive review, see Kösem & Wassenhove, 2017). Thus, these slow oscillations may contribute to diotic speech perception, but there is no evidence that they play a role in interhemispheric integration.

In this study, we investigated the mechanisms underlying dichotic speech cue integration. We tested the hypothesis that interhemispheric phase synchronization plays a functional role in interhemispheric speech integration. We experimentally manipulated interhemispheric phase synchronization by applying transcranial alternating current stimulation (TACS) simultaneously above the auditory cortex in the lateral superior temporal lobe of each hemisphere. To functionally couple the two regions, we fixed the phase of TACS across the two stimulation sites (in-phase condition). Conversely, to functionally decouple the two regions, we reversed the phase of TACS at one site (anti-phase condition; Preisig, Sjerps, Kösem, & Riecke, 2019; Saturnino, Madsen, Siebner, & Thielscher, 2017). This approach has been successfully applied to modulate bistable perception in the visual domain (Helfrich et al., 2014). To test for a specific role of gamma oscillations in interhemispheric speech sound integration, we applied TACS at 40 Hz (gamma condition). Furthermore, we included 3.125-Hz TACS (delta condition) and sham stimulation as control conditions to establish the frequency specificity of any putative effect of gamma TACS on interhemispheric speech integration.

Interhemispheric speech integration was assessed using a dichotic listening task. An ambiguous speech sound (“base,” perceptually intermediate between the syllables /ga/ and /da/) was presented to the participants' right ear and a disambiguating acoustic cue (“chirp,” which was either a low or high third formant, F3) to their left ear. Interhemispheric integration of the base and the chirp is reflected by an increased number of /ga/ reports in the low-F3 condition and an increased number of /da/ reports in the high-F3 condition (Preisig & Sjerps, 2019).

We predicted that interhemispheric phase synchronization (in-phase condition) would significantly increase interhemispheric speech integration, as reflected by an increased number of /ga/ reports in the low-F3 condition and an increased number of /da/ reports in the high-F3 condition, compared with interhemispheric phase desynchronization (anti-phase condition). Furthermore, we predicted that functional coupling of bilateral auditory cortices in the gamma, but not the delta, frequency band would strengthen interhemispheric speech integration, compared with sham stimulation.

Participants

Thirty-six right-handed volunteers (M = 22.56 years, SD = 2.93; 14 men) participated in the study. All participants had normal or corrected-to-normal visual acuity. The participants reported no history of neurological, psychiatric, or hearing disorders, and all had normal hearing (hearing thresholds of less than 25 dB HL at 250, 500, 750, 1000, 1500, 3000, and 4000 Hz, tested on both ears separately using pure tone audiometry) and no threshold difference between the left and the right ear larger than 5 dB for any of the tested frequencies. All participants gave written informed consent before the experiment. Ethical approval to conduct this study was provided by the local ethics committee (CMO region Arnhem-Nijmegen). This study was conducted in accordance with the principles of the latest version of the Declaration of Helsinki.

Electric Stimulation

Electric currents were applied through two high-density electrode configurations, each consisting of concentric rubber electrodes: a central circular electrode (radius = 1.25 cm) and a surrounding ring electrode (inner radius = 3.5 cm, outer radius = 4.8 cm). Each electrode configuration was connected to a separate battery-driven transcranial current stimulator (Neuroconn, Ilmenau, Germany), similar to previous two-channel approaches (Ten Oever et al., 2016; Riecke, Formisano, Herrmann, & Sack, 2015). The electrode configurations were centered according to the international 10–20 system over CP5 (above the left cerebral hemisphere) and CP6 (above the right cerebral hemisphere; see Figure 1). These scalp locations were chosen to produce relatively strong currents in the target regions, that is, the auditory speech areas in the left and right lateral superior temporal lobe, as suggested by prior electric field simulations on a standard head model using the SimNIBS toolbox (Thielscher, Antunes, & Saturnino, 2015).

Figure 1. 

Dual-site high-density TACS setup. Left: The electrode configurations were centered according to the international 10–20 system over CP5 (above the left cerebral hemisphere) and CP6 (above the right cerebral hemisphere). Right: The interhemispheric phase synchrony was manipulated using in-phase TACS (0° phase lag between stimulation sites, dotted line) and anti-phase (180° phase lag, dotted line) TACS. The colors represent the polarity (positive = red; negative = blue) of the current for the time stamp highlighted by the dotted line. RH = right hemisphere; LH = left hemisphere.


TACS was applied at a frequency in the low gamma band (40 Hz) or in the delta band (3.125 Hz); the latter frequency was matched to the timescale of the syllabic envelope, that is, the duration of the syllable corresponded to a half cycle of the delta TACS (160 msec). Before starting the actual experiment, we ensured that all participants tolerated the TACS well. TACS intensity was adjusted individually to the level at which the participant reported feeling comfortable or being uncertain about the presence of the current (1.4 ± 0.1 mA peak-to-peak, mean ± SD across participants). Impedance was kept below 10 kΩ. The average current density was 0.2 mA/cm² at the center electrode and 0.06 mA/cm² at the concentric ring electrode. Stimulation was ramped over the first and the last 10 sec of each experimental block using raised cosine ramps.
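For illustration, the following minimal sketch (our own, not the authors' stimulation code) generates such a ramped sinusoidal current. The frequency, 10-sec raised-cosine ramps, and 11-kHz sampling rate follow the values reported above; the block duration, function name, and overall structure are assumptions.

```python
import numpy as np

def tacs_waveform(freq_hz, duration_s, peak_to_peak_ma, ramp_s=10.0, fs=11000):
    """Sinusoidal TACS current with raised-cosine on/off ramps (illustrative sketch)."""
    t = np.arange(0, duration_s, 1.0 / fs)
    current = (peak_to_peak_ma / 2.0) * np.sin(2 * np.pi * freq_hz * t)

    # Raised-cosine envelope: 0 -> 1 over the first ramp_s seconds,
    # 1 -> 0 over the last ramp_s seconds, 1 in between.
    envelope = np.ones_like(t)
    n_ramp = int(ramp_s * fs)
    ramp_up = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    envelope[:n_ramp] = ramp_up
    envelope[-n_ramp:] = ramp_up[::-1]
    return t, current * envelope

# Example: a 300-s block (assumed duration) of 40-Hz gamma TACS at 1.4 mA peak-to-peak.
t, i_gamma = tacs_waveform(freq_hz=40.0, duration_s=300.0, peak_to_peak_ma=1.4)
```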

The timing of the electric and auditory stimuli was controlled using a multichannel D/A converter (National Instruments, sampling rate: 11 kHz) and Datastreamer software (Ten Oever et al., 2016). Visual stimulation and response recording were controlled using Presentation software (Version 18.0, Neurobehavioral Systems, Inc., Berkeley, CA).

Behavioral Pretest

Interhemispheric speech sound integration was assessed by simultaneously presenting an ambiguous base and a disambiguating chirp (F3) to the right and left ear, respectively. The chirp supported either a /da/ (high F3, ∼2.9 kHz) or a /ga/ (low F3, ∼2.5 kHz) interpretation of the ambiguous base. Because perceptual category boundaries may vary across individuals, a pretest was used to define participant-specific ambiguous base stimuli for the main experiment. The pretest comprised nine stimuli of the /da/–/ga/ continuum, each presented 16 times in random order. To make this pretest as similar as possible to the main experiment, syllables of the /da/–/ga/ continuum were presented to the right ear, and a single F3 chirp (identical to the F3 component in the ambiguous base stimulus) to the left ear. Subjective category boundaries were estimated by assessing individual psychometric curves and identifying the point at which participants reported perceiving the stimulus as /da/ or /ga/ in ∼50% of the trials. The stimulus associated with this individual category boundary was then used as the base stimulus for the subsequent main experiment (Preisig & Sjerps, 2019). Further details concerning stimulus creation are reported in a previous publication using the same materials (Preisig & Sjerps, 2019).
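A minimal sketch of such a boundary estimate is given below, assuming a two-parameter logistic psychometric function; the response proportions, function names, and fitting choice are illustrative and not taken from the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical pretest data: 9 continuum steps, 16 presentations each,
# and the observed proportion of /ga/ responses per step.
steps = np.arange(1, 10)  # /da/ ... /ga/ continuum indices
p_ga = np.array([0.06, 0.12, 0.19, 0.31, 0.50, 0.69, 0.81, 0.94, 1.00])

def logistic(x, x0, k):
    """Two-parameter logistic psychometric function."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, steps, p_ga, p0=[5.0, 1.0])

# x0 is the 50% point, i.e., the participant-specific category boundary;
# the continuum step closest to it would serve as the ambiguous base stimulus.
boundary_step = int(round(x0))
```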

Experimental Design and Task

The experiment included four stimulation conditions and sham stimulation. Electric stimulation was applied at one of two frequencies, 40 Hz or 3.125 Hz. Each of these frequency conditions was presented in two interhemispheric phase synchronization conditions: (A) "In-phase stimulation" was applied with a phase lag of 0° between the central electrodes placed over the left and the right auditory speech areas (i.e., bilateral superior temporal lobe) to induce interhemispheric synchronization. (B) "Anti-phase stimulation" was applied with a relative phase lag of 180° to induce interhemispheric desynchronization (Preisig et al., 2019; Saturnino et al., 2017; see Figure 1). During "sham stimulation" (placebo), the onset ramp was followed immediately by an offset ramp, that is, no electric stimulation was applied during the actual experiment; this ramp sequence was repeated at the end of the block.
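The following sketch illustrates the two-channel phase manipulation; only the 0° versus 180° interhemispheric lag logic is taken from the text, and all names, the amplitude, and the duration are assumptions.

```python
import numpy as np

def dual_site_currents(freq_hz, duration_s, amp_ma, phase_lag_deg, fs=11000):
    """Currents for the left (CP5) and right (CP6) montage.

    phase_lag_deg = 0 corresponds to the in-phase condition, 180 to the
    anti-phase condition. For sham, the onset ramp would be followed
    immediately by an offset ramp instead of continuous stimulation.
    """
    t = np.arange(0, duration_s, 1.0 / fs)
    left = amp_ma * np.sin(2 * np.pi * freq_hz * t)
    right = amp_ma * np.sin(2 * np.pi * freq_hz * t + np.deg2rad(phase_lag_deg))
    return left, right

# In-phase 40-Hz gamma condition (0.7 mA amplitude assumed):
l_in, r_in = dual_site_currents(40.0, 300.0, 0.7, phase_lag_deg=0)
# Anti-phase 3.125-Hz delta condition:
l_anti, r_anti = dual_site_currents(3.125, 300.0, 0.7, phase_lag_deg=180)
```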

The experiment consisted of 10 experimental blocks. Each block contained 48 trials with the ambiguous base stimulus (whose F3 frequency was set to the participant-specific subjective category boundary obtained in the pretest) and 12 trials with unambiguous base stimuli (which contained an F3 component that supported a clear /da/ interpretation [∼2.9 kHz] or a /ga/ interpretation [∼2.5 kHz]), all presented to the right ear. The ambiguous base stimulus was paired with a disambiguating F3 chirp presented to the left ear (24 trials with the high F3 chirp and 24 trials with the low F3 chirp). In the unambiguous stimuli, a chirp with the same F3 frequency as the base was presented to the left ear. Unambiguous stimuli did not require interhemispheric integration for disambiguation because participants could readily identify these stimuli based on monaural input alone, that is, the unambiguous base stimulus presented to the right ear. The first half of the experiment included five blocks of trials: gamma in-phase TACS, gamma anti-phase TACS, delta in-phase TACS, delta anti-phase TACS, and sham. The order of these first five blocks was reversed in the second half of the experiment. The order of all blocks was pseudorandomized, such that blocks of the same TACS frequency followed each other, and counterbalanced across participants. This pseudorandomization scheme was used to account for potential cross-frequency carryover effects (Vossen, Gross, & Thut, 2015).

After each block, participants rated the subjective strength of any sensations induced by the stimulation on a visual analogue scale from 0 cm (no subjective sensations) to 10 cm (strong subjective sensations). Although sensation ratings were relatively low in all conditions, TACS blocks (M = 2.64, SD = 1.38) were rated significantly higher than sham blocks (M = 1.77, SD = 1.46), t(25) = 3.11, p < .01. However, even though participants rated TACS and sham blocks differently (Zoefel, Allard, Anil, & Davis, 2020; Turi et al., 2019), this difference is unlikely to have influenced our main results, as we found no association between sensation ratings and behavioral performance across stimulation conditions, Pearson's R(128) = −0.14, p = .10.

The auditory stimuli were presented with an average ISI of 3.5 sec. The exact ISI was set so that the syllable onset occurred at one of six predefined, equidistant TACS phases (TACS/syllable onset lag: 30°, 90°, 150°, 210°, 270°, 330°). This allowed us to compensate for individual differences in the optimal relative timing of TACS and syllable presentation (Zoefel, Davis, Valente, & Riecke, 2019; Riecke et al., 2018; Riecke, Formisano, et al., 2015; Riecke, Sack, & Schroeder, 2015), with the aim of improving the detectability of putative stimulation effects in the group-level analysis. In this study, we did not observe any effect of TACS/syllable onset lag (Figure 2); thus, the behavioral data were pooled across the six TACS/syllable onset lags for each stimulation condition. Every stimulus was preceded by a fixation cross presented 600 msec before auditory stimulus onset. At 1450 msec after fixation cross onset, the response options /ga/ and /da/ were presented (one above and one below the fixation cross, falling within a visual angle of 9.43°). Participants indicated their response by pressing the corresponding response button with their left index finger.1 They were instructed to respond as accurately and as fast as possible. The position (up vs. down) of the response options was counterbalanced across participants.
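One way to schedule onsets at a target TACS phase is sketched below. It assumes that the TACS waveform is sin(2πft), that is, 0° phase at t = 0, and that the nominal ISI is 3.5 sec; the function name and scheduling logic are ours, not the authors' implementation.

```python
import numpy as np

def next_onset_time(earliest_s, target_phase_deg, tacs_freq_hz):
    """Earliest time >= earliest_s at which the TACS phase equals the target phase."""
    period = 1.0 / tacs_freq_hz
    target_t_within_cycle = (target_phase_deg / 360.0) * period
    n_cycles = np.ceil((earliest_s - target_t_within_cycle) / period)
    return n_cycles * period + target_t_within_cycle

# Example: schedule a syllable roughly 3.5 s after the previous one so that
# its onset falls at the 90-deg lag of the 40-Hz TACS cycle.
previous_onset = 10.0  # hypothetical onset time of the preceding stimulus (s)
onset = next_onset_time(earliest_s=previous_onset + 3.5,
                        target_phase_deg=90, tacs_freq_hz=40.0)
```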

Figure 2. 

The phase angle histograms show the distribution of the participants' best TACS/syllable onset lag in the stimulation condition labeled above each histogram (in-phase 40 Hz, anti-phase 40 Hz, sham, in-phase 3.125 Hz, anti-phase 3.125 Hz). The average best lag between TACS and syllable presentation was 87° ± 12° (mean ± SEM) across participants. The distribution of the participants' best lag pooled across stimulation conditions did not deviate significantly from uniformity (Rayleigh test for nonuniformity of circular data, z = 1.733, p = .177), suggesting that the best lag varied substantially across participants. Moreover, the best lags in the different stimulation conditions were uncorrelated (ps > .08), indicating that participants' best lag varied across stimulation conditions. To compensate for these individual differences in optimal relative TACS/syllable timing, the best lag was aligned across participants, and the remaining phase bins were phase-wrapped, separately for each participant and stimulation condition (for details, see Experimental Design and Task section). Initial analysis of the aligned data for an effect of TACS/syllable onset lag revealed no significant result; therefore, the six TACS/syllable onset lags were pooled in all subsequent analyses.


Data Analysis

In a first step, we assessed the reliability of the categorical judgments of individual participants on unambiguous endpoint trials (base and chirp stimuli with the same F3 endpoint, supporting the interpretation of /ga/ or /da/) collected during the sham blocks. To this end, we tested for each participant with a chi-square test whether the proportion of /ga/ responses differed between /ga/ and /da/ endpoint stimuli. Based on this criterion, the data of four participants were excluded from further analyses because their classification accuracy did not significantly exceed chance level. One additional participant was excluded because of a technical error during the experiment. Thus, the final data set included data from 31 participants (M = 22.63 years, SD = 3.20, 12 men).
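A minimal sketch of such a per-participant inclusion check is shown below; the counts, function, and variable names are hypothetical, and only the chi-square criterion itself is from the text.

```python
import numpy as np
from scipy.stats import chi2_contingency

def passes_inclusion(counts_da_endpoint, counts_ga_endpoint, alpha=0.05):
    """Per-participant inclusion check on sham endpoint trials.

    counts_* are (n /da/ responses, n /ga/ responses) for the unambiguous
    /da/ and /ga/ endpoint stimuli, respectively. A participant is retained
    if the proportion of /ga/ responses differs significantly between the
    two endpoint stimuli.
    """
    table = np.array([counts_da_endpoint, counts_ga_endpoint])
    _, p, _, _ = chi2_contingency(table)
    return p < alpha

# Example: 2 /ga/ responses to /da/ endpoints, 27 /ga/ responses to /ga/ endpoints.
keep = passes_inclusion(counts_da_endpoint=(28, 2), counts_ga_endpoint=(3, 27))
```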

Two dependent variables were analyzed: the categorical response on each individual trial (0 = /da/; 1 = /ga/) and the proportion of responses consistent with the presented F3 chirp (i.e., those in which interhemispheric integration occurred), per condition. These variables were computed based on participants' responses to the stimuli requiring interhemispheric integration, that is, the stimuli composed of an ambiguous base and a disambiguating F3 chirp. For each stimulation condition, the proportion of integrated trials was calculated per TACS/syllable onset lag, and these values were concatenated to build a behavioral time series. To compensate for individual differences (Figure 2), the maximum (best lag) of the time series was subsequently aligned across individuals. Because we did not observe any effect of TACS/syllable onset lag, the behavioral time series were pooled across the six TACS/syllable onset lags for each stimulation condition. Statistical analyses were conducted in R (Version 3.3.3) using parametric tests (the normality assumption was fulfilled: the dependent variable was normally distributed in each condition, Shapiro–Wilk tests, ps > .19). Linear mixed-effect models were used to analyze categorical responses, and repeated-measures ANOVAs were used to test for a stimulation effect, an interhemispheric phase effect, a frequency effect, and their interactions. Post hoc comparisons were conducted using paired t tests, and false discovery rate (FDR) corrections for multiple comparisons were applied (Benjamini & Hochberg, 1995).
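The best-lag alignment amounts to a circular shift of the six-bin behavioral time series, as sketched below (our own illustration of the described step; bin ordering follows the six onset lags of 30°–330°, and the example values are hypothetical).

```python
import numpy as np

def align_best_lag(prop_integrated):
    """Circularly shift a 6-bin behavioral time series so that the bin with the
    highest proportion of integrated trials (the 'best lag') lands in bin 0.
    Applied separately per participant and stimulation condition."""
    prop_integrated = np.asarray(prop_integrated, dtype=float)
    best = int(np.argmax(prop_integrated))
    return np.roll(prop_integrated, -best)

# Example: one participant, one condition, proportions per TACS/syllable onset lag.
series = [0.70, 0.78, 0.72, 0.69, 0.74, 0.71]
aligned = align_best_lag(series)   # best lag now in the first bin
pooled = aligned.mean()            # pooled across the six onset lags
```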

The average classification accuracy (%) for the unambiguous stimuli (extreme points from the /ga/–/da/ continuum) during sham blocks was high (M = 90.59, SD = 8.74). For trials that required interhemispheric integration, participants integrated the information from the F3 chirp on average in 73.4 ± 9.7% (mean ± SEM) of the trials that included an ambiguous base stimulus. First, we tested whether participants' responses to ambiguous base stimuli were influenced by the frequency of the disambiguating F3 chirp presented to the contralateral ear. For this analysis, we only included sham blocks. We observed that participants gave on average 34.12 ± 10.08% (mean ± SEM) /ga/ responses to ambiguous bases combined with the high (∼2.9 kHz) F3 chirp and 80.92 ± 9.14% (mean ± SEM) /ga/ responses to ambiguous bases combined with the low (∼2.5 kHz) F3 chirp. To confirm that the chirp F3 frequency influenced participants' responses (0 = /da/; 1 = /ga/), a logistic linear mixed-effect model with the fixed factor "chirp type" (levels: high F3 = −1; low F3 = 1) and by-participant random intercepts and slopes was fitted to the data. The analysis revealed a main effect of chirp type (B = 2.733, z = 13.052, p < .001). This result indicates that interhemispheric speech integration occurred (i.e., the participants integrated the chirp and the contralateral ambiguous base) during the sham blocks, as we expected (Figure 3A).
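In standard notation, this model can be written as follows (our formulation; it is consistent with the reported fixed effect of chirp type and by-participant random intercepts and slopes, but the symbols are ours):

```latex
\operatorname{logit} P(\text{response}_{ij} = \text{/ga/}) =
  (\beta_0 + u_{0j}) + (\beta_1 + u_{1j})\,\text{chirp}_{ij},
\qquad \text{chirp}_{ij} \in \{-1, +1\}, \quad
(u_{0j}, u_{1j})^{\top} \sim \mathcal{N}(\mathbf{0}, \Sigma),
```

where i indexes trials, j indexes participants, β1 corresponds to the reported effect of chirp type (B = 2.733), and Σ is the covariance matrix of the by-participant random intercepts and slopes.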

Figure 3. 

(A) The proportion of /ga/ responses (mean ± SEM) as a function of chirp type (high F3, low F3) in the sham condition. (B) Participants' average performance (mean ± SEM across participants) is shown for each stimulation condition relative to sham (gray reference line, shaded area represents SEM across participants). Dots represent the data points of single participants.


Figure 3B shows participants' average performance for each stimulation condition and the sham condition. In the stimulation conditions, overall performance ranged on average between 71.75% and 73.98%, whereas in the sham condition it was significantly better (76.16%; average difference: 3.46%), t(30) = 2.89, p = .007, d = 0.37. To test whether the strength of this general stimulation effect depended on the frequency or phase synchrony of the TACS, a two-way repeated-measures ANOVA with the within-subject factors Stimulation Frequency (40 Hz, 3.125 Hz) and Interhemispheric Phase Synchronization (in-phase, anti-phase) was conducted on the difference in interhemispheric integration of the chirp between each stimulation condition and sham stimulation, that is, delta values relative to sham were entered into the analysis. Contrary to our predictions, this analysis revealed no significant Stimulation Frequency × Interhemispheric Phase Synchronization interaction, F(1, 30) = 2.59, p = .12, ηp² = .01, and no main effects of Stimulation Frequency, F(1, 30) = 0.12, p = .73, ηp² = .0004, or Interhemispheric Phase Synchronization, F(1, 30) = 0.20, p = .66, ηp² = .0008. The lack of a main effect of Interhemispheric Phase Synchronization implies no significant difference between in-phase and anti-phase stimulation.
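The analysis was run in R; an equivalent sketch of this 2 × 2 repeated-measures ANOVA on the sham-referenced delta values is shown below in Python, with hypothetical data and column names of our choosing.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per participant x stimulation
# condition, with the performance difference relative to sham ('delta').
df = pd.DataFrame({
    "participant": [1] * 4 + [2] * 4 + [3] * 4,
    "frequency":   ["40Hz", "40Hz", "3.125Hz", "3.125Hz"] * 3,
    "phase":       ["in", "anti", "in", "anti"] * 3,
    "delta":       [-0.05, -0.02, -0.03, -0.04,
                    -0.01, -0.03, -0.02, -0.05,
                    -0.04, -0.02, -0.05, -0.03],
})

# 2 x 2 repeated-measures ANOVA: Stimulation Frequency x Interhemispheric
# Phase Synchronization on the difference from sham.
res = AnovaRM(df, depvar="delta", subject="participant",
              within=["frequency", "phase"]).fit()
print(res)
```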

To identify the specific TACS conditions under which the stimulation effect occurred, a one-way repeated-measures ANOVA with the within-subject factor Stimulation Condition (sham, in-phase 40 Hz, anti-phase 40 Hz, in-phase 3.125 Hz, anti-phase 3.125 Hz) was conducted on the proportion of integrated trials. This analysis revealed a significant main effect of Stimulation Condition, F(4, 120) = 2.99, p = .02, ηp² = .02. Pairwise comparisons revealed significantly reduced performance in the in-phase 40-Hz condition, t(30) = −2.78, p = .049, FDR-corrected, d = −0.37, and the anti-phase 3.125-Hz condition, t(30) = −2.76, p = .049, FDR-corrected, d = −0.40, compared with sham stimulation, but not in the anti-phase 40-Hz condition, t(30) = −1.44, p = .32, FDR-corrected, d = −0.22, or the in-phase 3.125-Hz condition, t(30) = −2.16, p = .13, FDR-corrected, d = −0.29. These results indicate that the bihemispheric TACS modulated interhemispheric speech integration.
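The post hoc comparison logic (paired t tests against sham with Benjamini–Hochberg FDR correction across the four comparisons) can be sketched as follows; the simulated proportions are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import multipletests

# Hypothetical proportions of integrated trials (31 participants) per condition.
conditions = ["in_40", "anti_40", "in_3.125", "anti_3.125"]
rng = np.random.default_rng(0)
sham = rng.uniform(0.70, 0.80, size=31)
tacs = {c: sham - rng.uniform(0.0, 0.06, size=31) for c in conditions}

# Paired t tests of each TACS condition against sham, then
# Benjamini-Hochberg FDR correction across the four comparisons.
pvals = [ttest_rel(tacs[c], sham).pvalue for c in conditions]
reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
```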

In this study, we tested the hypothesis that interhemispheric phase synchronization facilitates interhemispheric speech integration. To test this, we applied TACS simultaneously above listeners' left and right auditory speech areas (either in-phase or anti-phase) to synchronize or desynchronize the two areas and measured the effect on interhemispheric speech integration. Based on previous evidence from electrophysiological studies (Steinmann et al., 2014, 2018), we reasoned that interhemispheric integration of speech might be causally related to phase synchronization of bilateral auditory speech areas in the gamma frequency band; no such effect has been reported for the delta frequency band. Thus, we predicted that functional coupling of bilateral auditory speech areas in the gamma, but probably not the delta, frequency band would strengthen interhemispheric speech integration, compared with functionally decoupling them.

Our results show a reduction of interhemispheric integration under gamma TACS compared with sham stimulation. This reduction was significant when gamma TACS was applied in-phase above the two cerebral hemispheres. We also observed a significant reduction when anti-phase delta TACS was applied. We found no significant difference between the in-phase and anti-phase conditions for either gamma or delta TACS. Although we found a general reduction of performance during TACS versus sham stimulation, we observed no main effect or interaction in an overall ANOVA comparing these reductions across the different TACS conditions. However, the observed pattern of significant (in-phase gamma TACS, anti-phase delta TACS) and nonsignificant (anti-phase gamma TACS, in-phase delta TACS) changes in speech perception relative to sham stimulation strongly suggests that TACS modulated interhemispheric speech cue integration.

Contrary to our prediction, in-phase, not anti-phase, gamma TACS perturbed interhemispheric speech cue integration. This finding implies that full interhemispheric phase synchronization (0° difference) at 40 Hz is not beneficial for interhemispheric speech cue integration. This observation could be related to interindividual differences in interhemispheric auditory transfer times (Henshall et al., 2012). The strongest interhemispheric integration may occur when the gamma phase in the two hemispheres differs in a manner commensurate with individual auditory transfer times. This notion is supported by findings showing that the auditory event-related N100 to dichotically presented syllables occurs at a different latency over the right versus the left auditory cortex (Eichele, Nordby, Rimol, & Hugdahl, 2005). The reported lag is on average 15 msec, which closely matches the half-cycle duration of our gamma TACS (12.5 msec). In line with this, a recent study found that anti-phase TACS applied at 40 Hz does not affect response laterality during dichotic listening (Meier et al., 2019). Critically, a follow-up analysis showed that only in participants with intrinsic gamma-phase asymmetries close to 0° did anti-phase gamma TACS reduce interhemispheric integration, that is, shift response laterality toward the right ear. These results correspond well with our finding that anti-phase gamma TACS, which imposes an interhemispheric lag of 12.5 msec, may not perturb speech cue integration. Our current experimental design does not allow us to test this idea further; this may be done in future studies that parametrically manipulate interhemispheric phase asynchrony in multiple steps across the gamma cycle.

Other studies have reported that bilateral 40-Hz TACS perturbs phonemic processing, decreasing the discriminability of syllables with different VOTs in young adults (Rufener, Zaehle, Oechslin, & Meyer, 2016) but increasing it in older adults (Rufener, Oechslin, et al., 2016) and in dyslexic individuals (Rufener, Krauel, Meyer, Heinze, & Zaehle, 2019). Therefore, we cannot rule out that our gamma TACS also affected local phoneme processing. We positioned our electrodes to preferentially stimulate cortical speech areas in the lateral superior temporal lobe; therefore, we believe that the observed effect originates from these areas. However, because our design did not include control regions, we cannot exclude that other regions were stimulated by spreading current and also contributed to the effect. In addition, gamma TACS might have affected the deployment of attentional resources, considering that unilateral 40-Hz TACS may affect performance on dichotic working memory tasks (Wöstmann, Vosskuhl, Obleser, & Herrmann, 2018).

Surprisingly, our results suggest that not only gamma-phase coupling but also delta-phase coupling plays a role in interhemispheric speech cue integration. Our observation that anti-phase delta TACS perturbed behavioral performance suggests that this type of stimulation disrupts cross-lateral transfer of speech cues as well. Previous studies using dichotic stimulus presentation did not report phase coupling in this frequency band (Steinmann et al., 2014, 2018). Therefore, we speculate that anti-phase TACS may have caused a difference in neural excitability between the hemispheres during the processing of the binaural input: When the current was positive over one site, it was negative over the contralateral site, and vice versa. This may have been particularly relevant for the delta TACS condition, in which the applied current matched the syllabic envelope. Increased neural excitability in one hemisphere and decreased excitability in the other may have resulted in an interhemispheric difference in the effectiveness with which the dichotic syllabic components (chirp or ambiguous base) were processed. Indeed, transcranial direct current stimulation has been shown to have polarity-specific effects on temporal and spectral processing of auditory input (Heimrath, Kuehne, Heinze, & Zaehle, 2014; Schaal, Williamson, & Banissy, 2013; Zaehle, Beretta, Jäncke, Herrmann, & Sandmann, 2011; Vines, Schnider, & Schlaug, 2006).

An important additional consideration is that anti-phase delta TACS may disrupt interhemispheric cross-frequency dynamics between delta and gamma oscillations during speech perception (Giraud & Poeppel, 2012). Coupling of these frequency bands could be of particular relevance for interhemispheric integration, because regions in the left and right auditory cortex may be differently tuned with respect to these frequency bands, with a relative leftward dominance of low-gamma neural oscillations and/or a rightward dominance of slow-frequency oscillations (Flinker, Doyle, Mehta, Devinsky, & Poeppel, 2019; Bouton et al., 2018; Giraud & Poeppel, 2012; Saoud et al., 2012; Poeppel, 2003). In addition, there is evidence that right-hemispheric auditory processing may be tuned for spectral information (Preisig & Sjerps, 2019; Bouton et al., 2018) and left-hemispheric auditory processing for temporal information (Flinker et al., 2019; Saoud et al., 2012), a theoretical framework originally formulated in the asymmetric sampling theory (Poeppel, 2003; for a similar framework, see Zatorre & Belin, 2001). In a previous study, we found that the laterality of initial chirp sound processing, that is, the ear of presentation, did not influence participants' perceptual decisions (Preisig & Sjerps, 2019). However, stimulus laterality influenced the processing speed of integration. Thus, we cannot rule out that the ear of presentation contributed to the observed TACS effect. Our current experimental design does not allow us to test this idea further; this may be done in future studies applying interhemispheric cross-frequency delta–gamma TACS while presenting the chirp to the left or the right ear.

In summary, our results indicate that both gamma and delta TACS affect interhemispheric speech integration, but in different ways. The induced perturbations imply that interhemispheric phase coupling plays a functional role in interhemispheric speech integration.

This work was supported by the Swiss National Science Foundation (P2BEP3_168728 /PP00P1_163726) and the Janggen-Pöhn Stiftung. The authors would like to thank Brigit Knudsen, Iris Schmits, and Sarah Kemp for their assistance.

Reprint requests should be sent to Basil C. Preisig, Donders Institute for Brain, Cognition and Behaviour, Radboud University, P.O. Box 9101, Nijmegen, Gelderland 6500 HB, The Netherlands, or via e-mail: [email protected].

1. To activate the right motor cortex, in line with a related ongoing neuroimaging study examining speech processing in the left cerebral hemisphere.

References

Bayazıt, O., Oniz, A., Hahn, C., Güntürkün, O., & Ozgören, M. (2009). Dichotic listening revisited: Trial-by-trial ERP analyses reveal intra- and interhemispheric differences. Neuropsychologia, 47, 536–545.
Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B: Methodological, 57, 289–300.
Bouton, S., Chambon, V., Tyrand, R., Guggisberg, A. G., Seeck, M., Karkar, S., et al. (2018). Focal versus distributed temporal cortex activity for speech sound category assignment. Proceedings of the National Academy of Sciences, U.S.A., 115, E1299–E1308.
Chang, E. F., Rieger, J. W., Johnson, K., Berger, M. S., Barbaro, N. M., & Knight, R. T. (2010). Categorical speech representation in human superior temporal gyrus. Nature Neuroscience, 13, 1428–1432.
Eichele, T., Nordby, H., Rimol, L. M., & Hugdahl, K. (2005). Asymmetry of evoked potential latency to speech sounds predicts the ear advantage in dichotic listening. Cognitive Brain Research, 24, 405–412.
Fell, J., & Axmacher, N. (2011). The role of phase synchronization in memory processes. Nature Reviews Neuroscience, 12, 105–118.
Flinker, A., Doyle, W. K., Mehta, A. D., Devinsky, O., & Poeppel, D. (2019). Spectrotemporal modulation provides a unifying framework for auditory cortical asymmetries. Nature Human Behaviour, 3, 393–403.
Fries, P. (2005). A mechanism for cognitive dynamics: Neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9, 474–480.
Giraud, A. L., & Poeppel, D. (2012). Cortical oscillations and speech processing: Emerging computational principles and operations. Nature Neuroscience, 15, 511–517.
Gross, J., Hoogenboom, N., Thut, G., Schyns, P., Panzeri, S., Belin, P., et al. (2013). Speech rhythms and multiplexed oscillatory sensory coding in the human brain. PLoS Biology, 11, e1001752.
Heimrath, K., Kuehne, M., Heinze, H. J., & Zaehle, T. (2014). Transcranial direct current stimulation (tDCS) traces the predominance of the left auditory cortex for processing of rapidly changing acoustic information. Neuroscience, 261, 68–73.
Helfrich, R. F., Knepper, H., Nolte, G., Strüber, D., Rach, S., Herrmann, C. S., et al. (2014). Selective modulation of interhemispheric functional connectivity by HD-tACS shapes perception. PLoS Biology, 12, e1002031.
Henshall, K. R., Sergejew, A. A., McKay, C. M., Rance, G., Shea, T. L., Hayden, M. J., et al. (2012). Interhemispheric transfer time in patients with auditory hallucinations: An auditory event-related potential study. International Journal of Psychophysiology, 84, 130–139.
Hugdahl, K., & Westerhausen, R. (2016). Speech processing asymmetry revealed by dichotic listening and functional brain imaging. Neuropsychologia, 93, 466–481.
Jäncke, L. (2002). Does "callosal relay" explain ear advantage in dichotic monitoring? Laterality: Asymmetries of Body, Brain and Cognition, 7, 309–320.
Keitel, A., Ince, R. A. A., Gross, J., & Kayser, C. (2017). Auditory cortical delta-entrainment interacts with oscillatory power in multiple fronto-parietal networks. Neuroimage, 147, 32–42.
Kimura, D. (1967). Functional asymmetry of the brain in dichotic listening. Cortex, 3, 163–178.
Kösem, A., & Wassenhove, V. v. (2017). Distinct contributions of low- and high-frequency neural oscillations to speech comprehension. Language, Cognition and Neuroscience, 32, 536–544.
Liberman, A. M., & Mattingly, I. G. (1989). A specialization for speech perception. Science, 243, 489–494.
Luo, H., & Poeppel, D. (2007). Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. Neuron, 54, 1001–1010.
Mathiak, K., Hertrich, I., Lutzenberger, W., & Ackermann, H. (2001). Neural correlates of duplex perception: A whole-head magnetencephalography study. NeuroReport, 12, 501–506.
Meier, J., Nolte, G., Schneider, T. R., Engel, A. K., Leicht, G., & Mulert, C. (2019). Intrinsic 40Hz-phase asymmetries predict tACS effects during conscious auditory perception. PLoS One, 14, e0213996.
Mesgarani, N., Cheung, C., Johnson, K., & Chang, E. F. (2014). Phonetic feature encoding in human superior temporal gyrus. Science, 343, 1006–1010.
Obleser, J., Zimmermann, J., Van Meter, J., & Rauschecker, J. P. (2007). Multiple stages of auditory speech perception reflected in event-related fMRI. Cerebral Cortex, 17, 2251–2257.
Poeppel, D. (2003). The analysis of speech in different temporal integration windows: Cerebral lateralization as 'asymmetric sampling in time'. Speech Communication, 41, 245–255.
Pollmann, S., Maertens, M., von Cramon, D. Y., Lepsien, J., & Hugdahl, K. (2002). Dichotic listening in patients with splenial and nonsplenial callosal lesions. Neuropsychology, 16, 56–64.
Preisig, B. C., & Sjerps, M. J. (2019). Hemispheric specializations affect interhemispheric speech sound integration during duplex perception. Journal of the Acoustical Society of America, 145, EL190–EL196.
Preisig, B. C., Sjerps, M. J., Kösem, A., & Riecke, L. (2019). Dual-site high-density 4 Hz transcranial alternating current stimulation applied over auditory and motor cortical speech areas does not influence auditory-motor mapping. Brain Stimulation, 12, 775–777.
Rand, T. C. (1974). Dichotic release from masking for speech. Journal of the Acoustical Society of America, 55, 678–680.
Riecke, L., Formisano, E., Herrmann, C. S., & Sack, A. T. (2015). 4-Hz transcranial alternating current stimulation phase modulates hearing. Brain Stimulation, 8, 777–783.
Riecke, L., Formisano, E., Sorger, B., Başkent, D., & Gaudrain, E. (2018). Neural entrainment to speech modulates speech intelligibility. Current Biology, 28, 161–169.
Riecke, L., Sack, A. T., & Schroeder, C. E. (2015). Endogenous delta/theta sound-brain phase entrainment accelerates the buildup of auditory streaming. Current Biology, 25, 3196–3201.
Rimmele, J. M., Zion Golumbic, E., Schröger, E., & Poeppel, D. (2015). The effects of selective attention and speech acoustics on neural speech-tracking in a multi-talker scene. Cortex, 68, 144–154.
Rufener, K. S., Krauel, K., Meyer, M., Heinze, H.-J., & Zaehle, T. (2019). Transcranial electrical stimulation improves phoneme processing in developmental dyslexia. Brain Stimulation, 12, 930–937.
Rufener, K. S., Oechslin, M. S., Zaehle, T., & Meyer, M. (2016). Transcranial alternating current stimulation (tACS) differentially modulates speech perception in young and older adults. Brain Stimulation, 9, 560–565.
Rufener, K. S., Zaehle, T., Oechslin, M. S., & Meyer, M. (2016). 40 Hz-transcranial alternating current stimulation (tACS) selectively modulates speech perception. International Journal of Psychophysiology, 101, 18–24.
Saoud, H., Josse, G., Bertasi, E., Truy, E., Chait, M., & Giraud, A. L. (2012). Brain-speech alignment enhances auditory cortical responses and speech perception. Journal of Neuroscience, 32, 275–281.
Saturnino, G. B., Madsen, K. H., Siebner, H. R., & Thielscher, A. (2017). How to target inter-regional phase synchronization with dual-site transcranial alternating current stimulation. Neuroimage, 163, 68–80.
Schaal, N. K., Williamson, V. J., & Banissy, M. J. (2013). Anodal transcranial direct current stimulation over the supramarginal gyrus facilitates pitch memory. European Journal of Neuroscience, 38, 3513–3518.
Shamir, M., Ghitza, O., Epstein, S., & Kopell, N. (2009). Representation of time-varying stimuli by a network exhibiting oscillations on a faster time scale. PLoS Computational Biology, 5, e1000370.
Sparks, R., & Geschwind, N. (1968). Dichotic listening in man after section of neocortical commissures. Cortex, 4, 3–16.
Steinmann, S., Leicht, G., Ertl, M., Andreou, C., Polomac, N., Westerhausen, R., et al. (2014). Conscious auditory perception related to long-range synchrony of gamma oscillations. Neuroimage, 100, 435–443.
Steinmann, S., Meier, J., Nolte, G., Engel, A. K., Leicht, G., & Mulert, C. (2018). The callosal relay model of interhemispheric communication: New evidence from effective connectivity analysis. Brain Topography, 31, 218–226.
Ten Oever, S., de Graaf, T. A., Bonnemayer, C., Ronner, J., Sack, A. T., & Riecke, L. (2016). Stimulus presentation at specific neuronal oscillatory phases experimentally controlled with tACS: Implementation and applications. Frontiers in Cellular Neuroscience, 10, 240.
Thielscher, A., Antunes, A., & Saturnino, G. B. (2015). Field modeling for transcranial magnetic stimulation: A useful tool to understand the physiological effects of TMS? In Proceedings of the Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE (pp. 222–225).
Turi, Z., Csifcsák, G., Boayue, N. M., Aslaksen, P., Antal, A., Paulus, W., et al. (2019). Blinding is compromised for transcranial direct current stimulation at 1 mA for 20 min in young healthy adults. European Journal of Neuroscience, 50, 3261–3268.
Vines, B. W., Schnider, N. M., & Schlaug, G. (2006). Testing for causality with transcranial direct current stimulation: Pitch memory and the left supramarginal gyrus. NeuroReport, 17, 1047–1050.
Vossen, A., Gross, J., & Thut, G. (2015). Alpha power increase after transcranial alternating current stimulation at alpha frequency (α-tACS) reflects plastic changes rather than entrainment. Brain Stimulation, 8, 499–508.
Westerhausen, R., Grüner, R., Specht, K., & Hugdahl, K. (2009). Functional relevance of interindividual differences in temporal lobe callosal pathways: A DTI tractography study. Cerebral Cortex, 19, 1322–1329.
Westerhausen, R., & Hugdahl, K. (2008). The corpus callosum in dichotic listening studies of hemispheric asymmetry: A review of clinical and experimental evidence. Neuroscience & Biobehavioral Reviews, 32, 1044–1054.
Wöstmann, M., Vosskuhl, J., Obleser, J., & Herrmann, C. S. (2018). Opposite effects of lateralised transcranial alpha versus gamma stimulation on auditory spatial attention. Brain Stimulation, 11, 752–758.
Zaehle, T., Beretta, M., Jäncke, L., Herrmann, C. S., & Sandmann, P. (2011). Excitability changes induced in the human auditory cortex by transcranial direct current stimulation: Direct electrophysiological evidence. Experimental Brain Research, 215, 135–140.
Zaehle, T., Lenz, D., Ohl, F. W., & Herrmann, C. S. (2010). Resonance phenomena in the human auditory cortex: Individual resonance frequencies of the cerebral cortex determine electrophysiological responses. Experimental Brain Research, 203, 629–635.
Zatorre, R. J., & Belin, P. (2001). Spectral and temporal processing in human auditory cortex. Cerebral Cortex, 11, 946–953.
Zoefel, B., Allard, I., Anil, M., & Davis, M. H. (2020). Perception of rhythmic speech is modulated by focal bilateral transcranial alternating current stimulation. Journal of Cognitive Neuroscience, 32, 226–240.
Zoefel, B., Archer-Boyd, A., & Davis, M. H. (2018). Phase entrainment of brain oscillations causally modulates neural responses to intelligible speech. Current Biology, 28, 401–408.
Zoefel, B., Davis, M. H., Valente, G., & Riecke, L. (2019). How to test for phasic modulation of neural and behavioural responses. Neuroimage, 202, 116175.