Olaf Hauk: 1-8 of 8 results
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2020) 32 (3): 403–425.
Published: 01 March 2020
Abstract
Semantically ambiguous words challenge speech comprehension, particularly when listeners must select a less frequent (subordinate) meaning at disambiguation. Using combined magnetoencephalography (MEG) and EEG, we measured neural responses associated with distinct cognitive operations during semantic ambiguity resolution in spoken sentences: (i) initial activation and selection of meanings in response to an ambiguous word and (ii) sentence reinterpretation in response to subsequent disambiguation to a subordinate meaning. Ambiguous words elicited an increased neural response approximately 400–800 msec after their acoustic offset compared with unambiguous control words in left frontotemporal MEG sensors, corresponding to sources in bilateral frontotemporal brain regions. This response may reflect increased demands on processes by which multiple alternative meanings are activated and maintained until later selection. Disambiguating words heard after an ambiguous word were associated with marginally increased neural activity over bilateral temporal MEG sensors and a central cluster of EEG electrodes, which localized to similar bilateral frontal and left temporal regions. This later neural response may reflect effortful semantic integration or elicitation of prediction errors that guide reinterpretation of previously selected word meanings. Across participants, the amplitude of the ambiguity response showed a marginal positive correlation with comprehension scores, suggesting that sentence comprehension benefits from additional processing around the time of an ambiguous word. Better comprehenders may have increased availability of subordinate meanings, perhaps due to higher quality lexical representations and reflected in a positive correlation between vocabulary size and comprehension success.
Frontal Cortex Supports the Early Structuring of Multiple Solution Steps in Symbolic Problem-solving
Journal of Cognitive Neuroscience (2017) 29 (1): 114–124.
Published: 01 January 2017
Abstract
Abstract problem-solving relies on a sequence of cognitive steps involving phases of task encoding, the structuring of solution steps, and their execution. On the neural level, metabolic neuroimaging studies have associated a frontal-parietal network with various aspects of executive control during numerical and nonnumerical problem-solving. We used EEG–MEG to assess whether frontal cortex contributes specifically to the early structuring of multiple solution steps. Basic multiplication (“3 × 4” vs. “3 × 24”) was compared with an arithmetic sequence rule (“first add the two digits, then multiply the sum with the smaller digit”) on two complexity levels. This allowed us to dissociate the demands of early solution-step structuring from those of early task encoding. Structuring demands were high for conditions that required multiple steps, that is, complex multiplication and the two arithmetic sequence conditions, but low for easy multiplication, which mostly relied on direct memory retrieval. Increased right frontal activation in time windows between 300 and 450 msec was observed only for conditions that required multiple solution steps. General task encoding demands, operationalized by problem size (one-digit vs. two-digit numbers), did not predict these early frontal effects. In contrast, parietal effects occurred as a function of problem size, irrespective of structuring demands, in early phases of task encoding between 100 and 300 msec. We propose that frontal cortex subserves domain-general processes of problem-solving, such as the structuring of multiple solution steps, whereas parietal cortex supports number-specific early encoding processes that vary as a function of problem size.
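The two task rules contrasted in this abstract can be made concrete with a short sketch (the function names are illustrative, not from the study):

```python
def multiply(a, b):
    # Basic multiplication condition: "3 x 4" (easy, direct fact
    # retrieval) vs. "3 x 24" (complex, multiple solution steps)
    return a * b

def sequence_rule(a, b):
    # Arithmetic sequence rule as stated in the abstract:
    # "first add the two digits, then multiply the sum with the smaller digit"
    return (a + b) * min(a, b)

print(multiply(3, 4))       # -> 12
print(sequence_rule(3, 4))  # (3 + 4) * 3 -> 21
```

Note that the sequence rule always requires at least two solution steps, whereas easy multiplication can be answered in a single retrieval, which is exactly the structuring contrast the study exploits.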
Journal of Cognitive Neuroscience (2016) 28 (8): 1098–1110.
Published: 01 August 2016
Abstract
Arithmetic problem-solving can be conceptualized as a multistage process ranging from task encoding over rule and strategy selection to step-wise task execution. Previous fMRI research suggested a frontal–parietal network involved in the execution of complex numerical and nonnumerical tasks, but evidence is lacking on the particular contributions of frontal and parietal cortices across time. In an arithmetic task paradigm, we evaluated individual participants' “retrieval” and “multistep procedural” strategies on a trial-by-trial basis and contrasted them in time-resolved analyses using combined EEG and MEG. Retrieval strategies relied on direct retrieval of arithmetic facts (e.g., 2 + 3 = 5). Procedural strategies required multiple solution steps (e.g., 12 + 23 = 12 + 20 + 3 or 23 + 10 + 2). Evoked source analyses revealed independent activation dynamics within the first second of problem-solving in brain areas previously described as one network, such as the frontal–parietal cognitive control network: The right frontal cortex showed the earliest effects of strategy selection for multistep procedural strategies around 300 msec, before parietal cortex activated around 700 msec. In time–frequency source power analyses, memory retrieval and multistep procedural strategies were differentially reflected in theta, alpha, and beta frequencies: Stronger beta and alpha desynchronizations emerged for procedural strategies in right frontal, parietal, and temporal regions as a function of executive demands. Arithmetic fact retrieval was reflected in right prefrontal increases in theta power. Our results demonstrate differential brain dynamics within frontal–parietal networks across the time course of a problem-solving process, and analyses of different frequency bands allowed us to disentangle the cortical regions supporting the underlying memory and executive functions.
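The retrieval versus procedural distinction can be sketched as follows; this is a minimal illustration whose decomposition mirrors the abstract's example 12 + 23 = 12 + 20 + 3 (helper names are hypothetical):

```python
def retrieval(a, b):
    # Retrieval strategy: a single lookup of a stored arithmetic fact,
    # e.g. 2 + 3 = 5 (modeled here simply as the answer itself)
    return a + b

def procedural(a, b):
    # Multistep procedural strategy: decompose the second operand into
    # tens and units, e.g. 12 + 23 = 12 + 20 + 3
    tens, units = (b // 10) * 10, b % 10
    intermediate = a + tens      # 12 + 20 = 32
    return intermediate + units  # 32 + 3 = 35

print(retrieval(2, 3))     # -> 5
print(procedural(12, 23))  # -> 35
```

Both functions return the same sum; what differs is the number of intermediate steps, which is the executive-demand contrast the EEG/MEG analyses track over time.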
Journal of Cognitive Neuroscience (2015) 27 (9): 1738–1751.
Published: 01 September 2015
Abstract
Visual word recognition is often described as automatic, but the functional locus of top–down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions—in the vicinity of the putative visual word form area—around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.
Journal of Cognitive Neuroscience (2010) 22 (9): 2027–2041.
Published: 01 September 2010
Abstract
It has been claimed that semantic dementia (SD), the temporal variant of fronto-temporal dementia, is characterized by an across-the-board deficit affecting all types of conceptual knowledge. We here confirm this generalized deficit but also report differential degrees of impairment in processing specific semantic word categories in a case series of SD patients (N = 11). Within the domain of words with strong visually grounded meaning, the patients' lexical decision accuracy was more impaired for color-related than for form-related words. Likewise, within the domain of action verbs, the patients' performance was worse for words referring to face movements and speech acts than for words semantically linked to actions performed with the hand and arm. Psycholinguistic properties were matched between the stimulus groups entering these contrasts; an explanation for the differential degrees of impairment must therefore involve semantic features of the words in the different conditions. Furthermore, this specific pattern of deficits cannot be captured by classic category distinctions such as nouns versus verbs or living versus nonliving things. Evidence from previous neuroimaging research indicates that color- and face/speech-related words, respectively, draw most heavily on anterior-temporal and inferior-frontal areas, the structures most affected in SD. Our account combines (a) the notion of an anterior-temporal amodal semantic “hub” to explain the profound across-the-board deficit in SD word processing, with (b) a semantic topography model of category-specific circuits whose cortical distributions reflect semantic features of the words and concepts represented.
Journal of Cognitive Neuroscience (2010) 22 (5): 998–1010.
Published: 01 May 2010
Abstract
There are two views about morphology, the aspect of language concerned with the internal structure of words. One view holds that morphology is a domain of knowledge with a specific type of neurocognitive representation supported by specific brain mechanisms lateralized to left fronto-temporal cortex. The alternate view characterizes morphological effects as a by-product of the correlation between form and meaning, with no brain area predicted to subserve morphological processing per se. Here we provided evidence from Arabic that morphemes do have specific memory traces, which differ as a function of their functional properties. In an MMN study, we showed that the abstract consonantal root, which conveys semantic meaning (similarly to monomorphemic content words in English), elicits an MMN starting from 160 msec after the deviation point, whereas the abstract vocalic word pattern, which plays a range of grammatical roles, elicits an MMN response starting from 250 msec after the deviation point. Topographically, the root MMN has a symmetric fronto-central distribution, whereas the word pattern MMN lateralizes significantly to the left, indicating stronger involvement of left peri-sylvian areas. In languages with rich morphologies, morphemic processing thus seems to be supported by distinct neural networks, providing evidence for a specific neuronal basis for morphology as part of the cerebral language machinery.
Journal of Cognitive Neuroscience (2007) 19 (3): 525–542.
Published: 01 March 2007
Abstract
Concepts are composed of features related to different sensory and motor modalities such as vision, sound, and action. It is a matter of controversy whether conceptual features are represented in sensory-motor areas reflecting the specific learning experience during acquisition. In order to address this issue, we assessed the plasticity of conceptual representations by training human participants with novel objects under different training conditions. These objects were assigned to categories such that for one class of categories, the overall shape was diagnostic for category membership, whereas for the other class, a detail feature affording a particular action was diagnostic. During training, participants were asked to either make an action pantomime toward the detail feature of the novel object or point to it. In a categorization task at test, we assessed the neural correlates of the acquired conceptual representations by measuring electrical brain activity. Here, we show that the same object is differentially processed depending on the sensory-motor interactions during knowledge acquisition. Only in the pantomime group did we find early activation in frontal motor regions and later activation in occipito-parietal visual-motor regions. In the pointing training group, these effects were absent. These results show that action information contributes to conceptual processing depending on the specific learning experience. In line with modality-specific theories of conceptual memory, our study suggests that conceptual representations are established by the learning-based formation of cell assemblies in sensory-motor areas.
Journal of Cognitive Neuroscience (2003) 15 (5): 747–758.
Published: 01 May 2003
Abstract
A sound turned off for a short moment can be perceived as continuous if the silent gap is filled with noise. The neural mechanisms underlying this “continuity illusion” were investigated using the mismatch negativity (MMN), an event-related potential reflecting the perception of a sudden change in an otherwise regular stimulus sequence. The MMN was recorded in four conditions using an oddball paradigm. The standards consisted of 500-Hz, 120-msec tone pips that were either physically continuous (Condition 1) or were interrupted by a 40-msec silent gap (Condition 2). The deviants consisted of the interrupted tone, but with the silent gap filled by a burst of bandpass-filtered noise. The noise either occupied the same frequency region as the tone and elicited the continuity illusion (Conditions 1a and 2a), or occupied a remote frequency region and did not elicit the illusion (Conditions 1b and 2b). We predicted that, if the continuity illusion is determined before MMN generation, then, other things being equal, the MMN should be larger when the deviants were perceived as continuous and the standards as interrupted, or vice versa, than when both were perceived as continuous or both as interrupted. Consistent with this prediction, we observed an interaction between standard type and noise frequency region, with the MMN being larger in Condition 1a than in Condition 1b, but smaller in Condition 2a than in Condition 2b. Because the subjects were instructed to ignore the tones and watch a silent movie during the recordings, the results indicate that the continuity illusion can occur outside the focus of attention. Furthermore, the latency of the MMN (less than approximately 200 msec postdeviance onset) places an upper limit on the stage of neural processing responsible for the illusion.