Milena Rabovsky
Journal of Cognitive Neuroscience (2024) 36 (6): 1048–1070.
Published: 01 June 2024
Abstract
Prediction errors drive implicit learning in language, but the specific mechanisms underlying these effects remain debated. This issue was addressed in an EEG study manipulating the context of a repeated unpredictable word (repetition of the complete sentence or repetition of the word in a new sentence context) and sentence constraint. For the manipulation of sentence constraint, unexpected words were presented either in high-constraint (eliciting a precise prediction) or low-constraint sentences (not eliciting any specific prediction). Repetition-induced reduction of N400 amplitudes and of power in the alpha/beta frequency band was larger for words repeated with their sentence context as compared with words repeated in a new low-constraint context, suggesting that implicit learning happens not only at the level of individual items but additionally improves sentence-based predictions. These processing benefits for repeated sentences did not differ between constraint conditions, suggesting that sentence-based prediction update might be proportional to the amount of unpredicted semantic information, rather than to the precision of the prediction that was violated. In addition, the consequences of high-constraint prediction violations, as reflected in a frontal positivity and increased theta band power, were reduced with repetition. Overall, our findings suggest a powerful and specific adaptation mechanism that allows the language system to quickly adapt its predictions when unexpected semantic information is processed, irrespective of sentence constraint, and to reduce potential costs of strong predictions that were violated.
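The alpha/beta power reduction reported above is the kind of measure typically obtained by averaging spectral power within roughly the 8–30 Hz band per condition. As a rough illustrative sketch only (not the authors' analysis pipeline; the epoch arrays, sampling rate, and band limits below are hypothetical), such band power could be computed with SciPy's Welch estimator:

```python
import numpy as np
from scipy.signal import welch

def band_power(epochs, fs, band=(8.0, 30.0)):
    """Mean spectral power per epoch within a frequency band.

    epochs: array of shape (n_epochs, n_samples), one EEG channel
    fs:     sampling rate in Hz
    band:   (low, high) band edges in Hz, e.g. 8-30 Hz for alpha/beta
    """
    freqs, psd = welch(epochs, fs=fs, nperseg=min(256, epochs.shape[-1]), axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)  # one value per epoch

# Hypothetical usage: a repetition-induced power reduction would show up
# as lower mean band power for repeated-context than new-context epochs.
# rep = band_power(repeated_epochs, fs=500)
# new = band_power(new_context_epochs, fs=500)
# print(rep.mean() - new.mean())
```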
Journal of Cognitive Neuroscience (2022) 34 (12): 2297–2310.
Published: 01 November 2022
Abstract
The functional significance of the two prominent language-related ERP components N400 and P600 is still under debate. It has recently been suggested that one important dimension along which the two vary is in terms of automaticity versus attentional control, with N400 amplitudes reflecting more automatic and P600 amplitudes reflecting more controlled aspects of sentence comprehension. The availability of executive resources necessary for controlled processes depends on sustained attention, which fluctuates over time. Here, we thus tested whether P600 and N400 amplitudes depend on the level of sustained attention. We reanalyzed EEG and behavioral data from a sentence processing task by Sassenhagen and Bornkessel-Schlesewsky [The P600 as a correlate of ventral attention network reorientation. Cortex, 66, A3–A20, 2015], which included sentences with morphosyntactic and semantic violations. Participants read sentences phrase by phrase and indicated whether a sentence contained any type of anomaly as soon as they had the relevant information. To quantify the varying degrees of sustained attention, we extracted a moving reaction time coefficient of variation over the entire course of the task. We found that the P600 amplitude was significantly larger during periods of low reaction time variability (high sustained attention) than in periods of high reaction time variability (low sustained attention). In contrast, the amplitude of the N400 was not affected by reaction time variability. These results thus suggest that the P600 component is sensitive to sustained attention whereas the N400 component is not, which provides independent evidence for accounts suggesting that P600 amplitudes reflect more controlled and N400 amplitudes reflect more automatic aspects of sentence comprehension.
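The moving reaction time coefficient of variation used here (SD divided by mean within a sliding window of trials, with low values indexing high sustained attention) can be sketched in a few lines. This is a minimal illustration under assumed parameters (the window size and median split are hypothetical), not the authors' exact implementation:

```python
import numpy as np

def moving_rt_cv(rts, window=20):
    """Centered moving coefficient of variation (SD / mean) over a
    trial-wise reaction time series; higher values = more variable RTs,
    i.e., lower sustained attention."""
    rts = np.asarray(rts, dtype=float)
    cv = np.full(rts.shape, np.nan)
    half = window // 2
    for i in range(half, len(rts) - half):
        win = rts[i - half:i + half + 1]
        cv[i] = win.std(ddof=1) / win.mean()
    return cv

# Hypothetical usage: split trials into high/low sustained attention
# rts = ...            # per-trial reaction times in seconds
# cv = moving_rt_cv(rts)
# high_attention = cv <= np.nanmedian(cv)  # low RT variability
```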
Journal of Cognitive Neuroscience (2019) 31 (8): 1216–1226.
Published: 01 August 2019
Abstract
It is becoming increasingly established that information from long-term memory can influence early perceptual processing, a finding in line with recent theoretical approaches to cognition such as the predictive coding framework. Nevertheless, the impact of semantic knowledge on conscious perception and the temporal dynamics of such an influence remain unclear. To address this question, we presented pictures of novel objects to participants as the second of two targets in an attentional blink paradigm. We found that associating newly acquired semantic knowledge with objects increased overall conscious detection in comparison to objects associated with minimal knowledge, while controlling for object familiarity. Additionally, event-related brain potentials revealed a corresponding modulation beginning 100 msec after stimulus presentation in the P1 component. Furthermore, the size of this modulation was correlated with participants' subjective reports of conscious perception. These findings suggest that semantic knowledge can shape the contents of consciousness by affecting early stages of perceptual processing.
Journal of Cognitive Neuroscience (2012) 24 (4): 990–1005.
Published: 01 April 2012
Abstract
Recent evidence suggests that conceptual knowledge modulates early visual stages of object recognition. The present study investigated whether similar modulations can also be observed for the recognition of object names, that is, for symbolic representations with only arbitrary relationships between their visual features and the corresponding conceptual knowledge. In a learning paradigm, we manipulated the amount of information provided about initially unfamiliar visual objects while controlling for perceptual stimulus properties and exposure. In a subsequent test session with electroencephalographic recordings, participants performed several tasks on either the objects or their written names. For objects as well as names, knowledge effects were observed as early as about 120 msec in the P1 component of the ERP, reflecting perceptual processing in extrastriate visual cortex. These knowledge-dependent modulations of early stages of visual word recognition suggest that information about word meanings may modulate the perception of arbitrarily related visual features surprisingly early.