Sonja A. Kotz: 1-13 of 13 journal articles
Journal of Cognitive Neuroscience (2015) 27 (9): 1697–1707.
Published: 01 September 2015
Abstract
The temporal structure of behavior provides information that allows the tracking of temporal regularity in the sensory and sensorimotor domains. In turn, temporal regularity allows the generation of predictions about upcoming events and the adjustment of behavior accordingly. These mechanisms are essential to ensure behavior beyond the level of mere reaction. However, efficient temporal processing is required to establish adequate internal representations of temporal structure. The current study used two simple paradigms, namely, finger-tapping at a regular self-chosen rate (spontaneous motor tempo) and ERPs of the EEG (EEG/ERP) recorded during attentive listening to temporally regular and irregular “oddball” sequences, to explore the capacity to encode and use temporal regularity in production and perception. The results show that specific aspects of the ability to time a regular sequence of events in production covary with the ability to time a regular sequence in perception, probably pointing toward the engagement of domain-general mechanisms.
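The production-side measure above, spontaneous motor tempo, boils down to the mean inter-tap interval of self-paced finger taps and its variability. A minimal sketch of that computation, with hypothetical tap times that are illustrative only and not data or parameters from the study:

```python
import numpy as np

# Hypothetical tap onsets (seconds) from one spontaneous tapping block.
tap_times = np.array([0.00, 0.61, 1.20, 1.83, 2.41, 3.02, 3.64, 4.22])

# Inter-tap intervals characterize the self-chosen (spontaneous) motor tempo.
itis = np.diff(tap_times)

mean_iti = itis.mean()                # mean inter-tap interval (s)
cv = itis.std(ddof=1) / mean_iti      # coefficient of variation: tapping consistency

print(f"spontaneous motor tempo: {mean_iti * 1000:.0f} ms, CV = {cv:.3f}")
```

A lower coefficient of variation indicates more stable self-paced timing, which is the kind of production measure the abstract relates to timing in perception.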
Journal of Cognitive Neuroscience (2015) 27 (4): 798–818.
Published: 01 April 2015
Abstract
Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face–voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face–voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 time window showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face–voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective—one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.
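The reported N1 and P2 congruency effects amount to comparing mean ERP amplitudes between congruent and incongruent trials within component-specific time windows. A minimal sketch on simulated epochs; the electrode, windows, sampling rate, and trial counts are assumptions for illustration, not the study's parameters:

```python
import numpy as np

fs = 500                                # sampling rate (Hz), assumed
times = np.arange(-0.1, 0.6, 1 / fs)    # epoch from -100 to 600 ms

rng = np.random.default_rng(0)
# Simulated single-trial epochs (trials x time samples) at one auditory electrode.
congruent = rng.normal(0.0, 5.0, (80, times.size))
incongruent = rng.normal(0.0, 5.0, (80, times.size))

def mean_amplitude(epochs, t0, t1):
    """Average across trials, then take the mean amplitude (in µV) within [t0, t1]."""
    erp = epochs.mean(axis=0)
    window = (times >= t0) & (times <= t1)
    return erp[window].mean()

# Conventional (assumed) windows: N1 at about 80-120 ms, P2 at about 150-250 ms.
for component, (t0, t1) in {"N1": (0.08, 0.12), "P2": (0.15, 0.25)}.items():
    effect = mean_amplitude(incongruent, t0, t1) - mean_amplitude(congruent, t0, t1)
    print(f"{component} congruency effect (incongruent - congruent): {effect:+.2f} µV")
```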
Journal of Cognitive Neuroscience (2014) 26 (7): 1403–1417.
Published: 01 July 2014
Abstract
The aim of the current study was to shed further light on control processes that shape semantic access and selection during speech production. These processes have been linked to differential cortical activation in the left inferior frontal gyrus (IFG) and the left middle temporal gyrus (MTG); however, the particular function of these regions is not yet completely elucidated. We applied transcranial direct current stimulation to the left IFG and the left MTG (or sham stimulation) while participants named pictures in the presence of associatively related, categorically related, or unrelated distractor words. This direct modulation of target regions can help to better delineate the functional role of these regions in lexico-semantic selection. Independent of stimulation, the data show interference (i.e., longer naming latencies) with categorically related distractors and facilitation (i.e., shorter naming latencies) with associatively related distractors. Importantly, stimulation location interacted with the associative effect. Whereas the semantic interference effect did not differ between IFG, MTG, and sham stimulations, the associative facilitation effect was diminished under MTG stimulation. Analyses of latency distributions suggest that this pattern results from a response reversal: associative facilitation occurred for faster responses, whereas associative interference emerged for slower responses under MTG stimulation. This reduction of the associative facilitation effect under transcranial direct current stimulation may be caused by an unspecific overactivation in the lexicon or by the promotion of competition among associatively related representations. Taken together, the results suggest that the MTG is especially involved in the processes underlying associative facilitation and that semantic interference and associative facilitation are linked to differential activation in the brain.
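The latency-distribution result can be pictured with a simple quantile comparison: the associative effect is computed separately for faster and slower portions of the naming-latency distribution, so a reversal from facilitation to interference becomes visible. A hypothetical sketch; the latencies, quantiles, and the exact distributional method are illustrative assumptions, not the study's data or analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical naming latencies (ms) under MTG stimulation, one array per condition.
associative = rng.normal(700, 80, 60)    # associatively related distractors
unrelated = rng.normal(710, 120, 60)     # unrelated distractors

# Compare the two conditions quantile by quantile across the latency distribution.
for q in (0.1, 0.3, 0.5, 0.7, 0.9):
    effect = np.quantile(associative, q) - np.quantile(unrelated, q)
    label = "facilitation" if effect < 0 else "interference"
    print(f"{int(q * 100)}th percentile: {effect:+.0f} ms ({label})")
```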
Journal of Cognitive Neuroscience (2013) 25 (8): 1383–1395.
Published: 01 August 2013
Abstract
Under adverse listening conditions, speech comprehension profits from the expectancies that listeners derive from the semantic context. However, the neurocognitive mechanisms of this semantic benefit are unclear: How are expectancies formed from context and adjusted as a sentence unfolds over time under various degrees of acoustic degradation? In an EEG study, we modified auditory signal degradation by applying noise-vocoding (severely degraded: four-band, moderately degraded: eight-band, and clear speech). Orthogonal to that, we manipulated the extent of expectancy: strong or weak semantic context (±con) and context-based typicality of the sentence-last word (high or low: ±typ). This allowed calculation of two distinct effects of expectancy on the N400 component of the evoked potential. The sentence-final N400 effect was taken as an index of the neural effort of automatic word-into-context integration; it varied in peak amplitude and latency with signal degradation and was not reliably observed in response to severely degraded speech. Under clear speech conditions in a strong context, typical and untypical sentence completions seemed to fulfill the neural prediction, as indicated by N400 reductions. In response to moderately degraded signal quality, however, the formed expectancies appeared more specific: Only typical (+con +typ), but not the less typical (+con −typ), context–word combinations led to a decrease in the N400 amplitude. The results show that adverse listening “narrows,” rather than broadens, the expectancies about the perceived speech signal: limiting the perceptual evidence forces the neural system to rely on signal-driven expectancies, rather than more abstract expectancies, while a sentence unfolds over time.
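Noise-vocoding, the degradation method used here, discards the spectral fine structure of speech and keeps only each frequency band's slow amplitude envelope, which then modulates band-limited noise; fewer bands mean stronger degradation. A minimal sketch of the general technique, with band edges, filter order, and envelope extraction chosen as common defaults rather than the parameters reported in the study:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=7000.0):
    """Split speech into n_bands log-spaced bands, extract each band's amplitude
    envelope, and use it to modulate band-limited noise; sum the modulated bands."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    noise = np.random.default_rng(0).standard_normal(len(speech))
    vocoded = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)        # band-limited speech
        envelope = np.abs(hilbert(band))       # slow amplitude envelope
        carrier = sosfiltfilt(sos, noise)      # band-limited noise carrier
        vocoded += envelope * carrier
    return vocoded / np.max(np.abs(vocoded))   # normalize to avoid clipping

# Usage with a synthetic 1 s test signal at 16 kHz; four bands degrade more than eight.
fs = 16000
t = np.arange(fs) / fs
test_signal = np.sin(2 * np.pi * 150 * t) * (1 + np.sin(2 * np.pi * 3 * t))
four_band = noise_vocode(test_signal, fs, n_bands=4)
eight_band = noise_vocode(test_signal, fs, n_bands=8)
```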
Journal of Cognitive Neuroscience (2012) 24 (3): 698–706.
Published: 01 March 2012
Abstract
Forward predictions are crucial in motor action (e.g., catching a ball or being tickled) but may also apply to sensory or cognitive processes (e.g., listening to distorted speech or to a foreign accent). According to the “internal forward model,” the cerebellum generates predictions about somatosensory consequences of movements. These predictions simulate motor processes and prepare respective cortical areas for anticipated sensory input. Currently, there is very little evidence that a cerebellar forward model also applies to other sensory domains. In the current study, we address this question by examining the role of the cerebellum when auditory stimuli are anticipated as a consequence of a motor act. We applied an N100 suppression paradigm and compared the ERP response to self-initiated sounds with the ERP response to externally produced sounds. We hypothesized that sensory consequences of self-initiated sounds are precisely predicted and should lead to an N100 suppression compared with externally produced sounds. Moreover, if the cerebellum is involved in the generation of a motor-to-auditory forward model, patients with focal cerebellar lesions should not display an N100 suppression effect. Compared with healthy controls, patients showed a largely attenuated N100 suppression effect. The current results suggest that the cerebellum forms not only motor-to-somatosensory predictions but also motor-to-auditory predictions. This extends the cerebellar forward model to other sensory domains such as audition.
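The N100 suppression effect described above is simply the difference in N1 amplitude between the average ERP to self-initiated sounds and the average ERP to externally produced sounds. A minimal sketch on simulated averages; the electrode, time window, and peak definition are conventional assumptions, not taken from the study:

```python
import numpy as np

fs = 500
times = np.arange(-0.1, 0.4, 1 / fs)    # epoch from -100 to 400 ms around sound onset

rng = np.random.default_rng(2)
# Simulated averaged ERPs (µV) at a fronto-central electrode, one value per sample.
erp_self = rng.normal(0.0, 1.0, times.size)       # sounds triggered by the participant
erp_external = rng.normal(0.0, 1.0, times.size)   # identical sounds, externally produced

# N1 amplitude: most negative value within an assumed 70-130 ms post-onset window.
window = (times >= 0.07) & (times <= 0.13)
n1_self = erp_self[window].min()
n1_external = erp_external[window].min()

# Suppression: the N1 to self-initiated sounds is less negative than to external sounds.
suppression = n1_self - n1_external
print(f"N100 suppression effect: {suppression:+.2f} µV (positive = attenuated N1)")
```

An attenuated suppression effect, as reported for the cerebellar patients, would show up here as a value near zero.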
Journal of Cognitive Neuroscience (2009) 21 (9): 1693–1708.
Published: 01 September 2009
Abstract
Many studies refer to the relevance of metric cues in speech segmentation during language acquisition and adult language processing. However, the on-line use (i.e., time-locking the unfolding of a sentence to EEG) of metric stress patterns that are manifested by the succession of stressed and unstressed syllables during auditory syntactic processing has not been investigated. This is surprising, as both processes rely on abstract rules that allow the building up of expectancies about which element will occur next and at which point in time. Participants listened to metrically regular sentences that could either be correct, syntactically incorrect, metrically incorrect, or doubly incorrect. They judged either syntactic correctness or metric homogeneity in two different sessions. We provide the first event-related potential evidence that the metric structure of a given language is processed in two stages, as evidenced by a biphasic pattern of an early frontal negativity and a late posterior positivity. This pattern is comparable to the biphasic pattern reported in syntactic processing. However, metric cues are processed earlier than syntactic cues during the first stage (LAN), whereas both processes seem to interact at a later integrational stage (P600). The present results substantiate the important impact of metric cues during auditory syntactic language processing.
Journal of Cognitive Neuroscience (2008) 20 (7): 1207–1219.
Published: 01 July 2008
Abstract
Neurolinguistic research utilizing event-related brain potentials (ERPs) typically relates syntactic phrase structure processing to an early automatic processing stage around 150 to 200 msec, whereas morphosyntactic processing is associated with a later and somewhat more attention-dependent processing stage between 300 and 500 msec. However, recent studies have challenged this position by reporting highly automatic ERP effects for morphosyntax in the 100 to 200 msec time range. The present study aimed at determining the factors that could contribute to such shifts in latency and automaticity. In two experiments varying the degree of attention, German phrase structure and morphosyntactic violations were compared in conditions in which the locality of the violated syntactic relation, as well as the violation point and the acoustic properties of the speech stimuli, were strictly controlled for. A negativity between 100 and 300 msec after the violation point occurred in response to both types of syntactic violations and independently of the allocation of attentional resources. These findings suggest that the timing and automaticity of ERP effects reflecting specific syntactic subprocesses are influenced to a larger degree by methodological than by linguistic factors, and thus, need to be regarded as relative rather than fixed to temporally successive processing stages.
Journal of Cognitive Neuroscience (2007) 19 (4): 594–604.
Published: 01 April 2007
Abstract
It is still a matter of debate whether initial analysis of speech is independent of contextual influences or whether meaning can modulate word activation directly. Utilizing event-related brain potentials (ERPs), we tested the neural correlates of speech recognition by presenting sentences that ended with incomplete words, such as To light up the dark she needed her can-. Immediately following the incomplete words, subjects saw visual words that (i) matched form and meaning, such as candle; (ii) matched meaning but not form, such as lantern; (iii) matched form but not meaning, such as candy; or (iv) mismatched form and meaning, such as number. We report ERP evidence for two distinct cohorts of lexical tokens: (a) a left-lateralized effect, the P250, differentiates form-matching words (i, iii) and form-mismatching words (ii, iv); (b) a right-lateralized effect, the P220, differentiates words that match in form and/or meaning (i, ii, iii) from mismatching words (iv). Lastly, fully matching words (i) reduce the amplitude of the N400. These results accommodate bottom-up and top-down accounts of human speech recognition. They suggest that neural representations of form and meaning are activated independently early on and are integrated at a later stage during sentence comprehension.
Journal of Cognitive Neuroscience (2007) 19 (3): 386–400.
Published: 01 March 2007
Abstract
The present study investigated the automaticity of morphosyntactic processes and processes of syntactic structure building using event-related brain potentials. Two experiments were conducted, which contrasted the impact of local subject-verb agreement violations (Experiment 1) and word category violations (Experiment 2) on the mismatch negativity, an early event-related brain potential component reflecting automatic auditory change detection. The two violation types were realized in two-word utterances comparable with regard to acoustic parameters and structural complexity. The grammaticality of the utterances modulated the mismatch negativity response in both experiments, suggesting that both types of syntactic violations were detected automatically within 200 msec after the violation point. However, the topographical distribution of the grammaticality effect varied as a function of violation type, which indicates that the brain mechanisms underlying the processing of subject-verb agreement and word category information may be functionally distinct even at this earliest stage of syntactic analysis. The findings are discussed against the background of studies investigating syntax processing beyond the level of two-word utterances.
Journal of Cognitive Neuroscience (2005) 17 (10): 1593–1610.
Published: 01 October 2005
Abstract
We report three reaction time (RT)/event-related brain potential (ERP) semantic priming lexical decision experiments that explore the following in relation to L1 activation during L2 processing: (1) the role of L2 proficiency, (2) the role of sentence context, and (3) the locus of L1 activations (orthographic vs. semantic). All experiments used German (L1) homonyms translated into English (L2) to form prime-target pairs (pine-jaw for Kiefer) to test whether the L1 caused interference in an all-L2 experiment. Both RTs and ERPs were measured on targets. Experiment 1 revealed reversed priming in the N200 component and RTs for low-proficiency learners, but only RT interference for high-proficiency participants. Experiment 2 showed that once the words were processed in sentence context, the low-proficiency participants still showed reversed N200 and RT priming, whereas the high-proficiency group showed no effects. Experiment 3 tested native English speakers with the words in sentence context and showed a null result comparable to the high-proficiency group. Based on these results, we argue that cognitive control relating to translational activation is modulated by (1) L2 proficiency, as the early interference in the N200 was observed only for low-proficiency learners, and (2) sentence context, as it helps high-proficiency learners control L1 activation. As reversed priming was observed in the N200 and not the N400 component, we argue that (3) the locus of the L1 activations was orthographic. Implications in terms of bilingual word recognition and the functional role of the N200 ERP component are discussed.
Journal of Cognitive Neuroscience (2004) 16 (4): 541–552.
Published: 01 May 2004
Abstract
Behavioral evidence suggests that spoken word recognition involves the temporary activation of multiple entries in a listener's mental lexicon. This phenomenon can be demonstrated in cross-modal word fragment priming (CMWP). In CMWP, an auditory word fragment (prime) is immediately followed by a visual word or pseudoword (target). Experiment 1 investigated ERPs for targets presented in this paradigm. Half of the targets were congruent with the prime (e.g., in the prime-target pair: AM-AMBOSS [anvil]), half were not (e.g., AM-PENSUM [pensum]). Lexical entries of the congruent targets should receive activation from the prime. Thus, lexical identification of these targets should be facilitated. An ERP effect named P350, two frontal negative ERP deflections, and the N400 were sensitive to prime-target congruency. In Experiment 2, the relation of the formerly observed ERP effects to processes in a modality-independent mental lexicon was investigated by presenting primes visually. Only the P350 effect could be replicated across different fragment lengths. Therefore, the P350 is discussed as a correlate of lexical identification in a modality-independent mental lexicon.
Journal of Cognitive Neuroscience (2003) 15 (8): 1135–1148.
Published: 15 November 2003
Abstract
The present study investigated the interaction of emotional prosody and word valence during emotional comprehension in men and women. In a prosody-word interference task, participants listened to positive, neutral, and negative words that were spoken with a happy, neutral, and angry prosody. Participants were asked to rate word valence while ignoring emotional prosody, or vice versa. Congruent stimuli were responded to faster and more accurately than incongruent emotional stimuli. This behavioral effect was more salient for the word valence task than for the prosodic task and was comparable between men and women. The event-related potentials (ERPs) revealed a smaller N400 amplitude for congruent as compared to emotionally incongruent stimuli. This ERP effect, however, was significant only for the word valence judgment and only for female listeners. The present data suggest that the word valence judgment was more difficult and more easily influenced by task-irrelevant emotional information than the prosodic task in both men and women. Furthermore, although emotional prosody and word valence may have a similar influence on an emotional judgment in both sexes, ERPs indicate sex differences in the underlying processing. Women, but not men, show an interaction between prosody and word valence during a semantic processing stage.
Journal of Cognitive Neuroscience (2001) 13 (3): 370–388.
Published: 01 April 2001
Abstract
Procedural learning of spatio-motor and phoneme sequences was investigated in patients with Broca's and Wernicke's aphasia and age-matched controls. In Experiment 1, participants performed a standard serial reaction task (SRT) in which they manually responded to a repeating sequence of stimulus locations. Both Broca's and Wernicke's aphasics showed intact sequence learning, as indicated by a reliable response time (RT) cost when the repeating sequence was switched to a random sequence. In Experiment 2, Broca's aphasics and controls performed a new serial search task (SST), which allowed us to investigate the learning of a spatio-motor sequence and a phoneme sequence independently from each other. On each trial, four letters were presented visually, followed by a single auditorily presented letter. Participants had to press one of four response keys to indicate the location of the auditory letter in the visual display. The arrangement of the visual letters was changed from trial to trial such that either the key-presses or the auditory letters followed a repeating pattern, while the other sequence was random. While controls learned both the key-press and the phoneme sequences, Broca's aphasics were selectively impaired in learning the phoneme sequence. This dissociation between learning of spatio-motor and phoneme sequences supports the assumption that partially separable brain systems are involved in procedural learning of different types of sequential structures.
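The sequence-learning index from Experiment 1 is the response-time cost that appears when the repeating sequence is switched to a random one. A minimal sketch with hypothetical trial data; the values and trial counts are illustrative, not the study's:

```python
import numpy as np

# Hypothetical per-trial response times (ms) from the final repeating-sequence
# block and the subsequent random block of the serial reaction task.
rt_repeating = np.array([412, 398, 405, 421, 390, 402, 415, 399])
rt_random = np.array([476, 462, 488, 455, 470, 481, 468, 474])

# Sequence learning is indexed by the RT cost of switching to a random order:
# the larger the cost, the more the repeating sequence had been learned.
rt_cost = rt_random.mean() - rt_repeating.mean()
print(f"RT cost of switching to a random sequence: {rt_cost:.0f} ms")
```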