Daniel Bullock
1-7 of 7
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2024) 36 (12): 2667–2686.
Published: 01 December 2024
Amygdalar Excitation of Hippocampal Interneurons Can Lead to Emotion-driven Overgeneralization of Context
Abstract
Context is central to cognition: Detailed contextual representations enable flexible adjustment of behavior via comparison of the current situation with prior experience. Emotional experiences can greatly enhance contextual memory. However, sufficiently intense emotional signals can have the opposite effect, leading to weaker or less specific memories. How can emotional signals have such intensity-dependent effects? A plausible mechanistic account has emerged from recent anatomical data on the impact of the amygdala on the hippocampus in primates. In hippocampal CA3, the amygdala formed potent synapses on pyramidal neurons, calretinin (CR) interneurons, and parvalbumin (PV) interneurons. CR interneurons are known to disinhibit pyramidal neuron dendrites, whereas PV neurons provide strong perisomatic inhibition. This potentially counterintuitive connectivity, enabling the amygdala to both enhance and inhibit CA3 activity, may provide a mechanism that can boost or suppress memory in an intensity-dependent way. To investigate this possibility, we simulated this connectivity pattern in a spiking network model. Our simulations revealed that moderate amygdala input can enrich CA3 representations of context through disinhibition via CR interneurons, but strong amygdalar input can impoverish CA3 activity through simultaneous excitation and feedforward inhibition via PV interneurons. Our model revealed an elegant circuit mechanism that mediates an affective "inverted U" phenomenon: There is an optimal level of amygdalar input that enriches hippocampal context representations, but on either side of this zone, representations are impoverished. This circuit mechanism helps explain why excessive emotional arousal can disrupt contextual memory and lead to overgeneralization, as seen in severe anxiety and posttraumatic stress disorder.
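The inverted-U logic described in this abstract can be illustrated with a minimal rate-model sketch. This is not the authors' spiking network; the gains, the saturating disinhibition, and the threshold-recruited PV inhibition are all illustrative assumptions chosen only to show how a saturating boost minus a supra-threshold inhibitory term yields an optimum at intermediate drive.

```python
import numpy as np

def ca3_context_richness(amygdala_drive,
                         w_cr=1.0,          # hypothetical CR disinhibitory gain
                         w_pv=0.6,          # hypothetical PV feedforward gain
                         pv_threshold=1.0): # drive at which PV cells engage
    """Net CA3 pyramidal activation as a function of amygdalar drive.

    CR-mediated disinhibition grows with input but saturates; PV-mediated
    perisomatic inhibition is recruited only above a threshold and then
    grows steeply, producing an inverted U.
    """
    disinhibition = w_cr * np.tanh(amygdala_drive)  # saturating boost
    inhibition = w_pv * np.maximum(0.0, amygdala_drive - pv_threshold) ** 2
    return np.maximum(0.0, disinhibition - inhibition)

drives = np.linspace(0.0, 3.0, 61)
richness = ca3_context_richness(drives)
best = drives[np.argmax(richness)]  # an intermediate drive level is optimal
```

Sweeping the drive shows richness rising, peaking at a moderate input level, and collapsing for strong input, mirroring the described affective "inverted U."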
Journal of Cognitive Neuroscience (2010) 22 (7): 1504–1529.
Published: 01 July 2010
Neural Representations and Mechanisms for the Performance of Simple Speech Sequences
Abstract
Speakers plan the phonological content of their utterances before their release as speech motor acts. Using a finite alphabet of learned phonemes and a relatively small number of syllable structures, speakers are able to rapidly plan and produce arbitrary syllable sequences that fall within the rules of their language. The class of computational models of sequence planning and performance termed competitive queuing models has followed K. S. Lashley [The problem of serial order in behavior. In L. A. Jeffress (Ed.), Cerebral mechanisms in behavior (pp. 112–136). New York: Wiley, 1951] in assuming that inherently parallel neural representations underlie serial action, and this idea is increasingly supported by experimental evidence. In this article, we developed a neural model that extends the existing DIVA model of speech production in two complementary ways. The new model includes paired structure and content subsystems [cf. MacNeilage, P. F. The frame/content theory of evolution of speech production. Behavioral and Brain Sciences, 21, 499–511, 1998] that provide parallel representations of a forthcoming speech plan, as well as mechanisms for interfacing these phonological planning representations with learned sensorimotor programs to enable stepping through multisyllabic speech plans. On the basis of previous reports, the model's components are hypothesized to be localized to specific cortical and subcortical structures, including the left inferior frontal sulcus, the medial premotor cortex, the basal ganglia, and the thalamus. The new model, called gradient order DIVA, thus fills a void in current speech research by providing formal mechanistic hypotheses about both phonological and phonetic processes that are grounded in neuroanatomy and physiology. This framework also generates predictions that can be tested in future neuroimaging and clinical case studies.
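The competitive queuing idea this abstract builds on can be sketched in a few lines. The sketch below is a generic CQ scheme, not the gradient order DIVA model itself: a parallel "plan" layer holds all items simultaneously with earlier items more active (a primacy gradient), and a choice layer repeatedly selects the most active item and then suppresses it, converting parallel activation into serial order. The example activation values are hypothetical.

```python
import numpy as np

def competitive_queuing(gradient):
    """Recover serial order from a parallel activation gradient.

    Repeatedly pick the most active plan item (winner-take-all choice
    layer), then suppress it in the plan ("delete after select").
    """
    plan = np.array(gradient, dtype=float)
    order = []
    while np.any(plan > 0):
        winner = int(np.argmax(plan))  # choice layer selects the peak
        order.append(winner)
        plan[winner] = 0.0             # suppress the selected item
    return order

# Hypothetical three-syllable plan: activation level encodes intended order.
activations = [0.9, 0.6, 0.3]
assert competitive_queuing(activations) == [0, 1, 2]
```

Because order is carried by relative activation rather than by item identity, the same mechanism reorders output whenever the gradient changes, which is what makes the representation "inherently parallel" in Lashley's sense.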
Journal of Cognitive Neuroscience (2009) 21 (8): 1611–1627.
Published: 01 August 2009
Target Selection by the Frontal Cortex during Coordinated Saccadic and Smooth Pursuit Eye Movements
Abstract
Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth-pursuit eye movements. In particular, the saccadic and smooth-pursuit systems often interact to choose the same target and to maximize its visibility through time. How do multiple brain regions interact, including frontal cortical areas, to decide the choice of a target among several competing moving stimuli? How is target selection information that is created by a bias (e.g., electrical stimulation) transferred from one movement system to another? These saccade–pursuit interactions are clarified by a new computational neural model, which describes interactions among motion processing areas: the middle temporal area, the middle superior temporal area, the frontal pursuit area, and the dorsal lateral pontine nucleus; saccade specification, selection, and planning areas: the lateral intraparietal area, the frontal eye fields, the substantia nigra pars reticulata, and the superior colliculus; the saccadic generator in the brain stem; and the cerebellum. Model simulations explain a broad range of neuroanatomical and neurophysiological data. These results are in contrast with the simplest parallel model, which posits no interactions between saccades and pursuit other than common-target selection and recruitment of shared motoneurons. Actual tracking episodes in primates reveal multiple systematic deviations from predictions of the simplest parallel model, which are explained by the current model.
Journal of Cognitive Neuroscience (2005) 17 (4): 668–686.
Published: 01 April 2005
How Position, Velocity, and Temporal Information Combine in the Prospective Control of Catching: Data and Model
Abstract
The cerebral cortex contains circuitry for continuously computing properties of the environment and one's body, as well as relations among those properties. The success of complex perceptuomotor performances requires integrated, simultaneous use of such relational information. Ball catching is a good example as it involves reaching and grasping of visually pursued objects that move relative to the catcher. Although integrated neural control of catching has received sparse attention in the neuroscience literature, behavioral observations have led to the identification of control principles that may be embodied in the involved neural circuits. Here, we report a catching experiment that refines those principles via a novel manipulation. Visual field motion was used to perturb velocity information about balls traveling on various trajectories relative to a seated catcher, with various initial hand positions. The experiment produced evidence for a continuous, prospective catching strategy, in which hand movements are planned based on gaze-centered ball velocity and ball position information. Such a strategy was implemented in a new neural model, which suggests how position, velocity, and temporal information streams combine to shape catching movements. The model accurately reproduces the main and interaction effects found in the behavioral experiment and provides an interpretation of recently observed target motion-related activity in the motor cortex during interceptive reaching by monkeys. It functionally interprets a broad range of neurobiological and behavioral data, and thus contributes to a unified theory of the neural control of reaching to stationary and moving targets.
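The "continuous, prospective catching strategy" described here can be illustrated with a toy controller. This is an assumption-laden sketch, not the paper's neural model: the look-ahead time `tau`, the gain, and the first-order extrapolation of ball motion are all illustrative stand-ins for the idea that hand commands are shaped continuously by gaze-centered ball position and velocity.

```python
import numpy as np

def prospective_hand_step(hand, ball_pos, ball_vel, tau=0.3, gain=0.5):
    """One update of a continuous, prospective catching controller.

    The hand is driven toward a predicted ball position a short
    look-ahead time tau into the future, combining gaze-centered ball
    position and velocity (the two cues manipulated in the study).
    """
    predicted = ball_pos + tau * ball_vel    # first-order extrapolation
    return hand + gain * (predicted - hand)  # close a fraction of the gap

# Hypothetical 2-D episode: ball moves rightward, hand starts at the origin.
hand = np.array([0.0, 0.0])
ball = np.array([0.2, 0.5])
vel = np.array([1.0, 0.0])
for _ in range(20):
    hand = prospective_hand_step(hand, ball, vel)
    ball = ball + 0.05 * vel  # ball advances between control updates
```

Because the command depends on predicted rather than current ball position, the hand leads the ball along its motion axis while converging onto its height, which is the signature of prospective (rather than purely reactive) control.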
Journal of Cognitive Neuroscience (1998) 10 (4): 425–444.
Published: 01 July 1998
A Cortico-Spinal Model of Reaching and Proprioception under Multiple Task Constraints
Abstract
A model of cortico-spinal trajectory generation for voluntary reaching movements is developed to functionally interpret a broad range of behavioral, physiological, and anatomical data. The model simulates how arm movements achieve their remarkable efficiency and accuracy in response to widely varying positional, speed, and force constraints. A key issue in arm movement control is how the brain copes with such a wide range of movement contexts. The model suggests how the brain may set automatic and volitional gating mechanisms to vary the balance of static and dynamic feedback information to guide the movement command and to compensate for external forces. For example, with increasing movement speed, the system shifts from a feedback position controller to a feedforward trajectory generator with superimposed dynamics compensation. Simulations of the model illustrate how it reproduces the effects of elastic loads on fast movements, endpoint errors in Coriolis fields, and several effects of muscle tendon vibration, including tonic and antagonist vibration reflexes, position and movement illusions, effects of obstructing the tonic vibration reflex, and reaching undershoots caused by antagonist vibration.
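The speed-dependent shift from feedback position control to feedforward trajectory generation can be sketched with a gated blend. The sigmoidal gate, the gains, and the scalar one-dimensional setting are illustrative assumptions, not the model's actual gating circuitry; the sketch only shows how a single speed-driven gate trades off the two control modes while keeping feedback available for force compensation.

```python
def movement_command(target, position, velocity_plan, speed,
                     speed_scale=1.0):
    """Blend feedback and feedforward control as a function of speed.

    A gate g grows with commanded speed: slow movements are dominated
    by static position feedback; fast movements are dominated by a
    feedforward velocity plan, with residual feedback still able to
    compensate for external forces.
    """
    g = speed / (speed + speed_scale)  # 0 (slow) -> 1 (fast) gate
    feedback = target - position       # position-error controller
    return (1.0 - g) * feedback + g * velocity_plan

# Same target and plan, different speeds (hypothetical units):
slow = movement_command(1.0, 0.0, 2.0, speed=0.1)
fast = movement_command(1.0, 0.0, 2.0, speed=10.0)
```

At low speed the command approximates the position error; at high speed it approximates the planned velocity, matching the described shift between controller regimes.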
Journal of Cognitive Neuroscience (1994) 6 (4): 341–358.
Published: 01 July 1994
Neural Representations for Sensorimotor Control. III. Learning a Body-Centered Representation of a Three-Dimensional Target Position
Abstract
A neural model is described of how the brain may autonomously learn a body-centered representation of a three-dimensional (3-D) target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation—otherwise known as a parcellated distributed representation—of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of nonfoveated target position to learn a visuomotor representation of both foveated and nonfoveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors, which act as error signals that drive the learning process as well as control the on-line merging of multimodal information.
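The opponent combination described in the abstract (sums and differences of the two eyes' signals defining angular and vergence coordinates) reduces, for the horizontal dimension, to two lines of arithmetic. The sign convention below (angles in degrees, positive = rightward rotation) is an assumption for illustration; the model itself works with outflow corollary discharges rather than raw angles.

```python
def head_centered_from_eye_positions(theta_left, theta_right):
    """Opponent combination of the two eyes' horizontal positions.

    The sum of the (signed) eye angles gives the cyclopean angular
    direction of a foveated target; their difference gives its
    vergence, which shrinks toward zero with target distance. A near
    target straight ahead pulls the left eye rightward (+) and the
    right eye leftward (-).
    """
    version = (theta_left + theta_right) / 2.0  # angular coordinate
    vergence = theta_left - theta_right         # distance-related coordinate
    return version, vergence

# Near target straight ahead: eyes converge symmetrically.
v_near, g_near = head_centered_from_eye_positions(5.0, -5.0)
```

Because the same pair of signals yields both coordinates through opposite-signed combinations, the two stages of opponent processing can share inputs while encoding direction and distance independently.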
Journal of Cognitive Neuroscience (1993) 5 (4): 408–435.
Published: 01 October 1993
A Self-Organizing Neural Model of Motor Equivalent Reaching and Tool Use by a Multijoint Arm
Abstract
This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
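The spatial-to-motor direction transform at the heart of this description can be illustrated with a standard kinematic sketch. The DIRECT model learns this mapping through motor babbling; as a stand-in, the code below uses an analytic Jacobian pseudoinverse for a hypothetical redundant planar 3-joint arm, which is enough to show the motor-equivalence property: clamping a joint (zeroing its Jacobian column) still lets the remaining joints realize the same spatial direction.

```python
import numpy as np

def jacobian(angles, lengths=(1.0, 1.0, 1.0)):
    """End-effector Jacobian of a planar 3-joint arm (2-D endpoint)."""
    L = np.asarray(lengths)
    phi = np.cumsum(angles)  # absolute orientation of each link
    J = np.zeros((2, len(L)))
    for i in range(len(L)):
        J[0, i] = -np.sum(L[i:] * np.sin(phi[i:]))  # dx/d(theta_i)
        J[1, i] = np.sum(L[i:] * np.cos(phi[i:]))   # dy/d(theta_i)
    return J

def motor_direction(angles, spatial_dir, clamped=None):
    """Map a desired spatial direction to joint rotations (DIRECT-style).

    The pseudoinverse stands in for the model's learned transform;
    clamping a joint zeroes its Jacobian column, so that joint is
    excluded and the others compensate.
    """
    J = jacobian(angles)
    if clamped is not None:
        J[:, clamped] = 0.0
    return np.linalg.pinv(J) @ spatial_dir

angles = np.array([0.3, 0.4, 0.2])
want = np.array([0.0, 1.0])  # move the endpoint straight up
dq_free = motor_direction(angles, want)
dq_clamped = motor_direction(angles, want, clamped=1)  # middle joint frozen
```

Both joint-rotation vectors produce the same endpoint direction, with `dq_clamped` leaving the frozen joint untouched, which is the "many different combinations of joints" behavior the abstract describes.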