Search results for Nick E. Barraclough (1–3 of 3)
Journal Articles
Jeannette A. M. Lorteije, Nick E. Barraclough, Tjeerd Jellema, Mathijs Raemaekers, Jacob Duijnhouwer ...
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2011) 23 (6): 1533–1548.
Published: 01 June 2011
Abstract
To investigate form-related activity in motion-sensitive cortical areas, we recorded cell responses to animate implied motion in macaque middle temporal (MT) and medial superior temporal (MST) cortex and investigated these areas using fMRI in humans. In the single-cell studies, we compared responses to static images of human or monkey figures walking or running left or right with responses to the same figures standing or sitting still. We also investigated whether the view of the animate figure (facing left or right) that elicited the highest response was correlated with the preferred direction for moving random dot patterns. First, figures were presented inside the cell's receptive field. Subsequently, figures were presented at the fovea while a dynamic noise pattern was presented at the cell's receptive field location. The results show that MT neurons did not discriminate between figures on the basis of the implied motion content. Instead, response preferences for implied motion correlated with preferences for low-level visual features such as orientation and size. No correlation was found between the preferred view of figures implying motion and the preferred direction for moving random dot patterns. Similar findings were obtained in a smaller population of MST cortical neurons. Testing human MT+ responses with fMRI further corroborated the notion that low-level stimulus features might explain implied motion activation in human MT+. Together, these results suggest that prior human imaging studies demonstrating animate implied motion processing in area MT+ can best be explained by sensitivity for low-level features rather than sensitivity for the motion implied by animate figures.
Journal of Cognitive Neuroscience (2009) 21 (9): 1805–1819.
Published: 01 September 2009
Abstract
Prolonged exposure to visual stimuli, or adaptation, often results in an adaptation “aftereffect,” which can profoundly distort our perception of subsequent visual stimuli. This technique has been commonly used to investigate mechanisms underlying our perception of simple visual stimuli and, more recently, of static faces. We tested whether humans would adapt to movies of hands grasping and placing different weight objects. After adapting to hands grasping light or heavy objects, subsequently perceived objects appeared relatively heavier or lighter, respectively. The aftereffects increased logarithmically with adaptation action repetition and decayed logarithmically with time. Adaptation aftereffects also indicated that perception of actions relies predominantly on view-dependent mechanisms. Adapting to one action significantly influenced the perception of the opposite action. These aftereffects can only be explained by adaptation of mechanisms that take into account the presence/absence of the object in the hand. We then tested whether evidence on action processing mechanisms obtained using visual adaptation techniques is consistent with underlying neural processing. We recorded monkey superior temporal sulcus (STS) single-cell responses to hand actions. Cells sensitive to grasping or placing typically responded well to the opposite action; cells also responded during different phases of the actions. Cell responses were sensitive to the view of the action and were dependent upon the presence of the object in the scene. We show here that action processing mechanisms established using visual adaptation parallel the neural mechanisms revealed during recording from monkey STS. Visual adaptation techniques can thus be usefully employed to investigate brain mechanisms underlying action perception.
Journal of Cognitive Neuroscience (2005) 17 (3): 377–391.
Published: 01 March 2005
Abstract
Processing of complex visual stimuli comprising facial movements, hand actions, and body movements is known to occur in the superior temporal sulcus (STS) of humans and nonhuman primates. The STS is also thought to play a role in the integration of multimodal sensory input. We investigated whether STS neurons coding the sight of actions also integrated the sound of those actions. For 23% of neurons responsive to the sight of an action, the sound of that action significantly modulated the visual response. The sound of the action increased or decreased the visually evoked response for an equal number of neurons. In the neurons whose visual response was increased by the addition of sound (but not those neurons whose responses were decreased), the audiovisual integration was dependent upon the sound of the action matching the sight of the action. These results suggest that neurons in the STS form multisensory representations of observed actions.