1-4 of 4
Timothy L. Hodgson
Journal Articles
Eye Movements in the “Morris Maze” Spatial Working Memory Task Reveal Deficits in Strategic Planning
Journal of Cognitive Neuroscience (2019) 31 (4): 497–509.
Published: 01 April 2019
Abstract
Analysis of eye movements can provide insights into processes underlying performance of cognitive tasks. We recorded eye movements in healthy participants and people with idiopathic Parkinson disease during a token foraging task based on the spatial working memory component of the widely used Cambridge Neuropsychological Test Automated Battery. Participants selected boxes (using a mouse click) to reveal hidden tokens. Tokens were never hidden under a box where one had been found before, such that memory had to be used to guide box selections. A key measure of performance in the task is between-search errors (BSEs), in which a box where a token has been found is selected again. Eye movements were found to be most commonly directed toward the next box to be clicked on, but fixations also occurred at rates higher than expected by chance on boxes farther ahead or back along the search path. Looking ahead and looking back in this way were found to correlate negatively with BSEs and were significantly reduced in patients with Parkinson disease. Refixating boxes where tokens had already been found correlated with BSEs and the severity of Parkinson disease symptoms. It is concluded that eye movements can provide an index of cognitive planning in the task. Refixations on locations where a token has been found may also provide a sensitive indicator of visuospatial memory integrity. Eye movement measures derived from the spatial working memory task may prove useful in the assessment of executive functions as well as neurological and psychiatric diseases in the future.
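The between-search error measure can be made concrete with a short illustration. The sketch below is not the task software used in the study; the function name, box identifiers, and example trial are hypothetical, and it assumes only the rule stated in the abstract (a token is never re-hidden under a box where one has already been found, so reselecting such a box counts as a BSE).

```python
# Minimal sketch (hypothetical, not the authors' code): scoring between-search
# errors (BSEs) in a CANTAB-style token foraging task. A BSE is counted whenever
# the participant reselects a box where a token has already been found.

def count_between_search_errors(selections, token_locations):
    """selections: ordered list of box IDs clicked by the participant.
    token_locations: set of box IDs hiding a token on this trial."""
    found = set()          # boxes where a token has already been found
    bse = 0
    for box in selections:
        if box in found:   # returning to an already-emptied box is a BSE
            bse += 1
        elif box in token_locations:
            found.add(box)
    return bse

# Example: boxes numbered 0-7, tokens hidden under boxes 2, 5 and 7.
clicks = [1, 2, 3, 2, 5, 0, 7]          # the second click on box 2 is a BSE
print(count_between_search_errors(clicks, {2, 5, 7}))  # -> 1
```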
Journal Articles
Journal of Cognitive Neuroscience (2011) 23 (12): 3734–3745.
Published: 01 December 2011
Abstract
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation-related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive but erroneous prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words, the brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.
Journal Articles
Journal of Cognitive Neuroscience (2007) 19 (1): 13–24.
Published: 01 January 2007
Abstract
Many complex tasks require us to flexibly switch between behavioral rules, associations, and strategies. The prefrontal cerebral cortex is thought to be critical to the performance of such behaviors, although the relative contributions of different components of this structure and of associated subcortical regions are not fully understood. We used functional magnetic resonance imaging to measure brain activity during a simple task that required repeated reversals of a rule linking a colored cue and a left/right motor response. Each trial comprised three discrete events separated by variable delay periods. A colored cue instructed which response was to be executed, followed by a go signal that told the subject to execute the response and a feedback instruction that indicated whether to “hold” or “flip” the rule linking the colored cue and response. The design allowed us to determine which brain regions were recruited by the specific demands of preparing a rule-contingent motor response, executing such a response, evaluating the significance of the feedback, and reconfiguring stimulus-response (SR) associations. The results indicate that an increase in neural activity occurs within the anterior cingulate gyrus under conditions in which SR associations are labile. In contrast, lateral frontal regions are activated by unlikely/unexpected perceptual events regardless of their significance for behavior. A network of subcortical structures, including the mediodorsal nucleus of the thalamus and the striatum, were the only regions showing activity that was exclusively correlated with the neurocognitive demands of reversing SR associations. We conclude that lateral frontal regions act to evaluate the behavioral significance of perceptual events, whereas medial frontal-thalamic circuits are involved in monitoring and reconfiguring SR associations when necessary.
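The trial logic described here (a colored cue mapped to a left/right response, with “hold”/“flip” feedback that keeps or reverses the mapping) can be sketched as a small simulation. This is an illustrative sketch only, not the stimulus code used in the experiment; the cue colors, trial count, and function name are assumptions.

```python
# Hypothetical sketch of the hold/flip rule-reversal trial structure:
# a colored cue demands a left/right response under the current SR rule,
# and "flip" feedback reverses the rule for subsequent trials.

import random

def run_trials(n_trials=10, seed=0):
    rng = random.Random(seed)
    # Current SR rule: which response each colored cue demands (assumed colors).
    rule = {"red": "left", "green": "right"}
    for t in range(n_trials):
        cue = rng.choice(list(rule))             # colored cue event
        response = rule[cue]                     # rule-contingent response
        feedback = rng.choice(["hold", "flip"])  # feedback instruction event
        if feedback == "flip":                   # reconfigure SR associations
            rule = {c: ("right" if r == "left" else "left")
                    for c, r in rule.items()}
        print(f"trial {t}: cue={cue:5s} response={response:5s} feedback={feedback}")

run_trials()
```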
Journal Articles
Journal of Cognitive Neuroscience (2000) 12 (5): 894–907.
Published: 01 September 2000
Abstract
In this paper, we describe a novel approach to the study of problem solving involving the detailed analysis of natural scanning eye movements during the “one-touch” Tower-of-London (TOL) task. We showed subjects a series of pictures depicting two arrangements of colored balls in pockets within the upper and lower halves of a computer display. The task was to plan (but not to execute) the shortest movement sequence required to rearrange the balls in one half of the display (the Workspace) to match the arrangement in the opposite half (the Goalspace) and to indicate the minimum number of moves required for problem solution. We report that subjects are more likely to look towards the Goalspace in the initial period after picture presentation, but bias gaze towards the Workspace during the middle of trials. Towards the end of a trial, subjects are once again more likely to fixate the Goalspace. This pattern is found regardless of whether the subjects solve problems by rearranging the balls in the lower or upper visual field, demonstrating that this strategy correlates with discrete phases in problem solving. A second experiment showed that efficient planners direct their gaze selectively towards the problem-critical balls in the Workspace. In contrast, individuals who make errors spend more time looking at irrelevant items and are strongly influenced by the movement strategy needed to solve the preceding problem. We conclude that efficient solution of the TOL requires the capacity to generate and flexibly shift between control sets, including those underlying ocular scanning. The roles of working memory and the prefrontal cerebral cortex in the task are discussed.
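The “minimum number of moves” judgment at the heart of the one-touch TOL can be illustrated with a small solver. The sketch below is not the task software from the study; it assumes the classic three-pocket layout with capacities 3, 2, and 1 and uses breadth-first search over ball arrangements to return the length of the shortest move sequence.

```python
# Illustrative sketch (assumed pocket layout, not the study's task code):
# breadth-first search for the minimum number of moves needed to transform
# a Workspace arrangement into the Goalspace arrangement. Each pocket is a
# tuple of colored balls from bottom to top.

from collections import deque

CAPACITIES = (3, 2, 1)   # assumed pocket sizes in the classic TOL layout

def min_moves(start, goal):
    start, goal = tuple(map(tuple, start)), tuple(map(tuple, goal))
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        state, depth = queue.popleft()
        if state == goal:
            return depth
        for src in range(3):
            if not state[src]:
                continue
            for dst in range(3):
                if dst == src or len(state[dst]) >= CAPACITIES[dst]:
                    continue
                # Move the top ball of pocket `src` onto pocket `dst`.
                nxt = [list(p) for p in state]
                nxt[dst].append(nxt[src].pop())
                nxt = tuple(map(tuple, nxt))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return None  # unreachable if the two arrangements use different balls

# Example: the red ball is trapped under the blue ball, so reaching the goal
# takes three moves (blue aside, red to its target, blue back).
print(min_moves([("red", "blue"), ("green",), ()],
                [("blue",), ("green",), ("red",)]))  # -> 3
```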