Elia Formisano
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2020) 32 (11): 2145–2158.
Published: 01 November 2020
Abstract
When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio–video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported which phoneme they heard. Reports reflected phoneme biases in the preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of the corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insular, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli at test. Activity levels in several ROIs also covaried with the strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.
Journal of Cognitive Neuroscience (2017) 29 (6): 980–990.
Published: 01 June 2017
Abstract
In many everyday listening situations, an otherwise audible sound may go unnoticed amid multiple other sounds. This auditory phenomenon, called informational masking (IM), is sensitive to visual input and involves early (50–250 msec) activity in the auditory cortex (the so-called awareness-related negativity). It is still unclear whether and how the timing of visual input influences the neural correlates of IM in auditory cortex. To address this question, we obtained simultaneous behavioral and neural measures of IM from human listeners in the presence of a visual input stream and varied the asynchrony between the visual stream and the rhythmic auditory target stream (in-phase, antiphase, or random). Results show effects of cross-modal asynchrony on both target detectability (RT and sensitivity) and the awareness-related negativity measured with EEG, effects that were driven primarily by antiphasic audiovisual stimuli. The neural effect was limited to the interval shortly before listeners' behavioral report of the target. Our results indicate that the relative timing of visual input can influence the IM of a target sound in the human auditory cortex. They further show that this audiovisual influence occurs early during the perceptual buildup of the target sound. In summary, these findings provide novel insights into the interplay of IM and multisensory processing in the human brain.
Journal of Cognitive Neuroscience (2013) 25 (8): 1332–1342.
Published: 01 August 2013
Abstract
Lesion studies in neglect patients have inspired two competing models of spatial attention control, namely, Heilman's "hemispatial" theory and Kinsbourne's "opponent processor" model. Both assume a functional asymmetry between the two hemispheres but propose very different mechanisms. Neuroimaging studies have identified a bilateral dorsal frontoparietal network underlying voluntary shifts of spatial attention. However, lateralization of attentional processes within this network has not been consistently reported. In the current study, we aimed to provide direct evidence concerning the functional asymmetry of the right and left FEF during voluntary shifts of spatial attention. To this end, we applied fMRI-guided neuronavigation to disrupt individual FEF activation foci with a longer-lasting inhibitory patterned TMS protocol, followed by a spatial cueing task. Our results indicate that right FEF stimulation impaired the ability to shift spatial attention toward both hemifields, whereas the effects of left FEF stimulation were limited to the contralateral hemifield. These results provide strong direct evidence for right-hemispheric dominance in spatial attention within frontal cortex, supporting Heilman's "hemispatial" theory. This complements previous TMS studies that generally conform to Kinsbourne's "opponent processor" model after disruption of parietal cortex, and we therefore propose that the two theories are not mutually exclusive.