Search results for Marcia Grabowecky: 1–6 of 6
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2020) 32 (9): 1654–1671.
Published: 01 September 2020
Abstract
Sensory systems utilize temporal structure in the environment to build expectations about the timing of forthcoming events. We investigated the effects of rhythm-based temporal expectation on auditory responses measured with EEG recorded from the frontocentral sites implicated in auditory processing. By manipulating temporal expectation and the interonset interval (IOI) of tones, we examined how neural responses adapted to auditory rhythm and reacted to stimuli that violated the rhythm. Participants passively listened to the tones while watching a silent nature video. In Experiment 1 (n = 22), in the long-IOI block, tones were frequently presented (80%) with a 1.7-sec IOI and infrequently presented (20%) with a 1.2-sec IOI, generating unexpectedly early tones that violated temporal expectation. Conversely, in the short-IOI block, tones were frequently presented with a 1.2-sec IOI and infrequently presented with a 1.7-sec IOI, generating late tones. We analyzed the tone-evoked N1–P2 amplitude of ERPs and intertrial phase clustering in the theta–alpha band. The results provided evidence of strong delay-dependent adaptation effects (short-term, sensitive to IOI), weak cumulative adaptation effects (long-term, driven by tone repetition over time), and robust temporal-expectation violation effects over and above the adaptation effects. Experiment 2 (n = 22) repeated Experiment 1 with shorter IOIs of 1.2 and 0.7 sec. Overall, we found evidence of strong delay-dependent adaptation effects, weak cumulative adaptation effects (which may most efficiently accumulate at the tone presentation rate of ∼1 Hz), and robust temporal-expectation violation effects that substantially boost auditory responses to the extent of overriding the delay-dependent adaptation effects, likely through mechanisms involved in exogenous attention.
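The intertrial phase clustering (ITPC) measure analyzed in this study has a standard form: the length of the mean resultant vector of per-trial phases at a target frequency, ranging from 0 (phases scattered across trials) to 1 (identical phase on every trial). A minimal sketch on synthetic data, assuming a generic single-frequency implementation; all parameters and variable names here are illustrative, not taken from the study:

```python
import numpy as np

def intertrial_phase_clustering(trials, fs, freq):
    """ITPC at one frequency.

    trials: array of shape (n_trials, n_samples), one epoch per row.
    Returns a value in [0, 1]: 1 = identical phase on every trial,
    near 0 = phases uniformly scattered across trials.
    """
    n = trials.shape[1]
    t = np.arange(n) / fs
    # Project each trial onto a complex sinusoid at the target frequency
    # (one Fourier coefficient per trial), keeping only its phase.
    coeffs = trials @ np.exp(-2j * np.pi * freq * t)
    unit_phases = np.exp(1j * np.angle(coeffs))
    # Length of the mean resultant vector across trials.
    return np.abs(unit_phases.mean())

# Synthetic demo: 50 one-second trials at 250 Hz with a 6 Hz (theta) component.
rng = np.random.default_rng(0)
fs, f0 = 250, 6.0
t = np.arange(fs) / fs
# Phase-locked condition: same phase every trial, plus noise.
locked = np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.5, (50, t.size))
# Non-locked condition: random phase on every trial.
scattered = np.array([np.sin(2 * np.pi * f0 * t + p)
                      for p in rng.uniform(0, 2 * np.pi, 50)])

print(intertrial_phase_clustering(locked, fs, f0))     # near 1
print(intertrial_phase_clustering(scattered, fs, f0))  # near 0
```

Because ITPC discards amplitude, it indexes phase consistency across trials separately from the evoked N1–P2 amplitude measure.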
Journal of Cognitive Neuroscience (2017) 29 (3): 435–447.
Published: 01 March 2017
Abstract
The perceptual system integrates synchronized auditory–visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory–visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory–visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory–visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory–visual synchrony contribute to reading comprehension.
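The reported effect sizes can be unpacked with one line of arithmetic: for a single predictor, variance explained is the squared Pearson correlation, so 16% and 25% correspond to correlations of roughly .40 and .50. A quick check (generic arithmetic, not the study's data):

```python
import math

# Variance explained (r^2) -> correlation magnitude (|r|)
for r2 in (0.16, 0.25):
    print(f"variance explained {r2:.0%} -> r = {math.sqrt(r2):.2f}")
# prints:
# variance explained 16% -> r = 0.40
# variance explained 25% -> r = 0.50
```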
Journal of Cognitive Neuroscience (2011) 23 (8): 1875–1886.
Published: 01 August 2011
Abstract
Frequency-following and frequency-doubling neurons are ubiquitous in both striate and extrastriate visual areas. However, responses from these two types of neural populations have not been effectively compared in humans because previous EEG studies have not successfully dissociated responses from these populations. We devised a light–dark flicker stimulus that unambiguously distinguished these responses as reflected in the first and second harmonics in the steady-state visual evoked potentials. These harmonics revealed the spatial and functional segregation of frequency-following (the first harmonic) and frequency-doubling (the second harmonic) neural populations. Spatially, the first and second harmonics in steady-state visual evoked potentials exhibited divergent posterior scalp topographies for a broad range of EEG frequencies. The scalp maximum was medial for the first harmonic and contralateral for the second harmonic, a divergence not attributable to absolute response frequency. Functionally, voluntary visual–spatial attention strongly modulated the second harmonic but had negligible effects on the simultaneously elicited first harmonic. These dissociations suggest an intriguing possibility that frequency-following and frequency-doubling neural populations may contribute complementary functions to resolve the conflicting demands of attentional enhancement and signal fidelity—the frequency-doubling population may mediate substantial top–down signal modulation for attentional selection, whereas the frequency-following population may simultaneously preserve relatively undistorted sensory qualities regardless of the observer's cognitive state.
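The first- and second-harmonic responses described here are conventionally read off the amplitude spectrum of the steady-state signal at 1× and 2× the flicker frequency. A minimal sketch on a synthetic steady-state response, assuming a plain FFT-based analysis; the flicker frequency, amplitudes, and recording parameters below are illustrative, not those of the study:

```python
import numpy as np

def harmonic_amplitudes(signal, fs, f_stim, n_harmonics=2):
    """Amplitude-spectrum values at integer multiples of the stimulus
    frequency, read off the FFT of a steady-state response."""
    n = signal.size
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / n  # scale so a pure sine of
    freqs = np.fft.rfftfreq(n, 1 / fs)              # amplitude A reads as A
    return [spectrum[np.argmin(np.abs(freqs - k * f_stim))]
            for k in range(1, n_harmonics + 1)]

# 4 s of synthetic response to a 7.5 Hz flicker: a frequency-following
# component at 1f plus a frequency-doubling component at 2f.
fs, f_stim = 500, 7.5
t = np.arange(4 * fs) / fs
resp = (1.0 * np.sin(2 * np.pi * f_stim * t)
        + 0.6 * np.sin(2 * np.pi * 2 * f_stim * t))

h1, h2 = harmonic_amplitudes(resp, fs, f_stim)
print(round(h1, 2), round(h2, 2))  # recovers the 1f and 2f amplitudes, 1.0 and 0.6
```

The 4-sec window makes 7.5 Hz and 15 Hz land on exact FFT bins (0.25 Hz resolution), so the two harmonics separate cleanly, mirroring how the first and second harmonics can be attributed to distinct neural populations.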
Journal of Cognitive Neuroscience (2003) 15 (3): 462–474.
Published: 01 April 2003
Abstract
Studies in healthy individuals and split-brain patients have shown that the representation of facial information from the left visual field (LVF) is better than the representation of facial information from the right visual field (RVF). To investigate the neurophysiological basis of this LVF superiority in face perception, we recorded event-related potentials (ERPs) to centrally presented face stimuli in which relevant facial information is present bilaterally (B faces) or only in the left (L faces) or the right (R faces) visual field. Behavioral findings showed best performance for B faces and, in line with the LVF superiority, better performance for L than R faces. Evoked potentials to B, L, and R faces at 100–150 msec poststimulus showed no evidence of asymmetric transfer of information between the hemispheres at early stages of visual processing, suggesting that this factor is not responsible for the LVF superiority. Neural correlates of the LVF superiority, however, were manifested in a shorter latency of the face-specific N170 component to L than R faces and in a larger amplitude to L than R faces at 220–280 and 400–600 msec over both hemispheres. These ERP amplitude differences between L and R faces covaried across subjects with the extent to which the face-specific N170 component was larger over the right than the left hemisphere. We conclude that the two hemispheres exchange information symmetrically at early stages of face processing and together generate a shared facial representation, which is better when facial information is directly presented to the right hemisphere (RH; L faces) than to the left hemisphere (LH; R faces) and best when both hemispheres receive facial information (B faces).
Journal of Cognitive Neuroscience (1997) 9 (3): 295–317.
Published: 01 May 1997
Abstract
An earlier report described a patient (RM) with bilateral parietal damage who showed severe binding problems between shape and color and shape and size (Friedman-Hill, Robertson, & Treisman, 1995). When shown two different-colored letters, RM reported a large number of illusory conjunctions (ICs) combining the shape of one letter with the color of the other, even when he was looking directly at one of them and had as long as 10 sec to respond. The lesions also produced severe deficits in locating and reaching for objects, and difficulty in seeing more than one object at a time, resulting in a neuropsychological diagnosis of Balint's syndrome or dorsal simultanagnosia. The pattern of deficits supported predictions of Treisman's Feature Integration Theory (FIT) that the loss of spatial information would lead to binding errors. They further suggested that the spatial information used in binding depends on intact parietal function. In the present paper we extend these findings and examine other deficits in RM that would be predicted by FIT. We show that: (1) Object individuation is impaired, making it impossible for him to correctly count more than one or two objects, even when he is aware that more are present. (2) Visual search for a target defined by a conjunction of features (requiring binding) is impaired, while the detection of a target defined by a unique feature is not. Search for the absence of a feature (O among Qs) is also severely impaired, while search for the presence (Q among Os) is not. Feature absence can only be detected when all the present features are bound to the nontarget items. (3) RM's deficits cannot be attributed to a general binding problem: binding errors were far more likely with simultaneous presentation, where spatial information was required, than with sequential presentation, where time could be used as the medium for binding. (4) Selection for attention was severely impaired, whether it was based on the position of a marker or on some other feature (color). (5) Spatial information seems to exist that RM cannot access, suggesting that feature binding relies on a relatively late stage where implicit spatial information is made explicitly accessible. The data converge to support our conclusions that explicit spatial knowledge is necessary for the perception of accurately bound features, for accurate attentional selection, and for accurate and rapid search for a conjunction of features in a multi-item display. It is obviously necessary for directing attention to spatial locations, but the consequences of impairments in this ability seem also to affect object selection, object individuation, and feature integration. Thus, the functional effects of parietal damage are not limited to the spatial and attentional problems that have long been described in patients with Balint's syndrome. Damage to parietal areas also affects object perception through damage to spatial representations that are fundamental for spatial awareness.
Journal of Cognitive Neuroscience (1993) 5 (3): 288–302.
Published: 01 July 1993
Abstract
Preattentive processes such as perceptual grouping are thought to be important in the initial guidance of visual attention and may also operate in unilateral neglect by contributing to the definition of a task-appropriate reference frame. We explored this question with a visual search task in which patients with unilateral visual neglect (5 with right-, 2 with left-hemisphere damage) searched a diamond-shaped matrix for a conjunction target that shared one feature with each of two distractor elements. Additional grouping stimuli appeared as flanks either on the left, right, or both sides of the central matrix, and significantly changed performance in the search task. As expected, when flanks appeared only on the ipsilesional side a decrement in search performance was observed, but the further addition of contralesional flanks actually reduced the decrement and returned performance to near baseline levels. These data suggest that flanking stimuli on the neglected contralesional side of visual space can influence the reference frame by grouping with task-relevant stimuli, and that this preattentive influence can be preserved in patients with unilateral visual neglect.