Aina Puce: 1–5 of 5 results
Journal Articles
Journal of Cognitive Neuroscience (2024) 1–24.
Published: 22 March 2024
Abstract
The two visual pathway description of Ungerleider and Mishkin [Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge, MA: MIT Press, 1982] changed the course of late 20th-century systems and cognitive neuroscience. Here, I reexamine our laboratory's work through the lens of the new third visual pathway proposed by Pitcher and Ungerleider [Evidence for a third visual pathway specialized for social perception. Trends in Cognitive Sciences, 25, 100–110, 2021]. I also briefly review the literature on brain responses to static and dynamic visual displays and to visual stimulation involving multiple individuals, and I compare existing models of social information processing for the face and body. In this context, I examine how the posterior STS might generate unique social information relative to other brain regions that also respond to social stimuli. I discuss some of the existing challenges in assessing how information flow progresses between structures in the proposed functional pathways, and how some stimulus types and experimental designs may have complicated data interpretation and model generation. I also note a series of outstanding questions for the field. Finally, I examine the idea of expanding the third visual pathway to include aspects of previously proposed “lateral” visual pathways. Doing so would yield a more general entity for processing motion/action (i.e., “[inter]action”) that deals with interactions between people, as well as between people and objects. Within this framework, I briefly discuss potential hemispheric biases in function and the different forms of neuropsychological impairment created by focal lesions in the posterior brain, to help situate various brain regions within an expanded [inter]action pathway.
Journal Articles
Journal of Cognitive Neuroscience (2011) 23 (8): 2079–2101.
Published: 01 August 2011
Abstract
In contrast to visual object processing, relatively little is known about how the human brain processes everyday real-world sounds, transforming highly complex acoustic signals into representations of meaningful events or auditory objects. We recently reported a fourfold cortical dissociation for representing action (nonvocalization) sounds correctly categorized as having been produced by human, animal, mechanical, or environmental sources. However, it was unclear how consistent those network representations were across individuals, given potential differences between each participant's degree of familiarity with the studied sounds. Moreover, it was unclear what, if any, auditory perceptual attributes might further distinguish the four conceptual sound-source categories, potentially revealing what might drive the cortical network organization for representing acoustic knowledge. Here, we used functional magnetic resonance imaging to test participants before and after extensive listening experience with action sounds, and tested for cortices that might be sensitive to each of three different high-level perceptual attributes relating to how a listener associates or interacts with the sound source. These included the sound's perceived concreteness, effectuality (ability to be affected by the listener), and spatial scale. Despite some variation of networks for environmental sounds, our results verified the stability of a fourfold dissociation of category-specific networks for real-world action sounds both before and after familiarity training. Additionally, we identified cortical regions parametrically modulated by each of the three high-level perceptual sound attributes. We propose that these attributes contribute to the network-level encoding of category-specific acoustic knowledge representations.
Journal Articles
Journal of Cognitive Neuroscience (2004) 16 (2): 189–203.
Published: 01 March 2004
Abstract
Several brain imaging studies have identified a region of fusiform gyrus (FG) that responds more strongly to faces than to common objects. The precise functional role of this fusiform face area (FFA) is, however, a matter of dispute. We sought to distinguish among three hypotheses concerning FFA function: face specificity, individuation, and expert individuation. According to the face-specificity hypothesis, the FFA is specialized for face processing. Alternatively, the FFA may be specialized for individuating visually similar items within a category (the individuation hypothesis) or for individuating within categories with which a person has expertise (the expert-individuation hypothesis). Our results from two experiments supported the face-specificity hypothesis. Greater FFA activation to faces than to Lepidoptera, another homogeneous object class, occurred during both free viewing and individuation, with similar FFA activation to Lepidoptera and common objects (Experiment 1). Furthermore, during individuation of Lepidoptera, 83% of activated FG voxels were outside the face FG region and only 15% of face FG voxels were activated. This pattern of results suggests that distinct areas may individuate faces and Lepidoptera. In Experiment 2, we tested Lepidoptera experts using the same experimental design. Again, the results supported the face-specificity hypothesis. Activation to faces in the FFA was greater than to both Lepidoptera and objects, with little overlap between FG areas activated by faces and Lepidoptera. Our results suggest that distinct populations of neurons in human FG may be tuned to the features needed to individuate the members of different object classes, as has been reported in monkey inferotemporal cortex, and that the FFA contains neurons tuned for individuating faces.
Journal Articles
Journal of Cognitive Neuroscience (1997) 9 (5): 605–610.
Published: 01 October 1997
Abstract
The perception of faces is sometimes regarded as a specialized task involving discrete brain regions. In an attempt to identify face-specific cortex, we used functional magnetic resonance imaging (fMRI) to measure activation evoked by faces presented in a continuously changing montage of common objects or in a similar montage of nonobjects. Bilateral regions of the posterior fusiform gyrus were activated by faces viewed among nonobjects, but when viewed among objects, faces activated only a focal right fusiform region. To determine whether this focal activation would occur for another category of familiar stimuli, subjects viewed flowers presented among nonobjects and objects. While flowers among nonobjects evoked bilateral fusiform activation, flowers among objects evoked no activation. These results demonstrate that both faces and flowers activate large and partially overlapping regions of inferior extrastriate cortex. A smaller region, located primarily in the right lateral fusiform gyrus, is activated specifically by faces.
Journal Articles
Journal of Cognitive Neuroscience (1996) 8 (6): 551–565.
Published: 01 November 1996
Abstract
Event-related potentials (ERPs) associated with face perception were recorded with scalp electrodes from normal volunteers. Subjects performed a visual target detection task in which they mentally counted the number of occurrences of pictorial stimuli from a designated category such as butterflies. In separate experiments, target stimuli were embedded within a series of other stimuli including unfamiliar human faces and isolated face components, inverted faces, distorted faces, animal faces, and other nonface stimuli. Human faces evoked a negative potential at 172 msec (N170), which was absent from the ERPs elicited by other animate and inanimate nonface stimuli. N170 was largest over the posterior temporal scalp and was larger over the right than the left hemisphere. N170 was delayed when faces were presented upside-down, but its amplitude did not change. When presented in isolation, eyes elicited an N170 that was significantly larger than that elicited by whole faces, while noses and lips elicited small negative ERPs about 50 msec later than N170. Distorted human faces, in which the locations of inner face components were altered, elicited an N170 similar in amplitude to that elicited by normal faces. However, faces of animals, human hands, cars, and items of furniture did not evoke N170. N170 may reflect the operation of a neural mechanism tuned to detect (as opposed to identify) human faces, similar to the “structural encoder” suggested by Bruce and Young (1986). A similar function has been proposed for the face-selective N200 ERP recorded from the middle fusiform and posterior inferior temporal gyri using subdural electrodes in humans (Allison, McCarthy, Nobre, Puce, & Belger, 1994c). However, the differential sensitivity of N170 to eyes in isolation suggests that N170 may reflect the activation of an eye-sensitive region of cortex. The voltage distribution of N170 over the scalp is consistent with a neural generator located in the occipitotemporal sulcus lateral to the fusiform/inferior temporal region that generates N200.