Stephen Grossberg
1-9 of 9 results
Journal of Cognitive Neuroscience (2012) 24 (5): 1031–1054.
Published: 01 May 2012
Abstract
Spatial learning and memory are important for navigation and formation of episodic memories. The hippocampus and medial entorhinal cortex (MEC) are key brain areas for spatial learning and memory. Place cells in hippocampus fire whenever an animal is located in a specific region in the environment. Grid cells in the superficial layers of MEC provide inputs to place cells and exhibit remarkably regular hexagonal spatial firing patterns. They also exhibit a gradient of spatial scales along the dorsoventral axis of the MEC, with neighboring cells at a given dorsoventral location having different spatial phases. A neural model shows how a hierarchy of self-organizing maps, each obeying the same laws, responds to realistic rat trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with unimodal firing fields that fit neurophysiological data about their development in juvenile rats. The hippocampal place fields represent much larger spaces than the grid cells to support navigational behaviors. Both the entorhinal and hippocampal self-organizing maps amplify and learn to categorize the most energetic and frequent co-occurrences of their inputs. Top-down attentional mechanisms from hippocampus to MEC help to dynamically stabilize these spatial memories in both the model and neurophysiological data. Spatial learning through MEC to hippocampus occurs in parallel with temporal learning through lateral entorhinal cortex to hippocampus. These homologous spatial and temporal representations illustrate a kind of “neural relativity” that may provide a substrate for episodic learning and memory.
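As a rough illustration of the self-organizing-map learning step described above (not the paper's actual grid- and place-cell equations), the sketch below shows winner-take-all categorization with an instar weight update; the map sizes and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_categories = 50, 10        # hypothetical sizes, not from the paper
W = rng.uniform(0.0, 0.1, size=(n_categories, n_inputs))
lr = 0.05

def som_step(x, W, lr):
    """One winner-take-all categorization and instar learning step.

    The winning category is the one whose weights best match the input
    (the most energetic co-occurrence); only its weights move toward x,
    so frequent input patterns come to be amplified and categorized.
    """
    activation = W @ x                 # bottom-up match for each category
    winner = int(np.argmax(activation))
    W[winner] += lr * (x - W[winner])  # instar rule: track the input pattern
    return winner

# Repeatedly presenting correlated inputs tunes one category to them.
x = rng.random(n_inputs)
for _ in range(100):
    som_step(x + 0.01 * rng.standard_normal(n_inputs), W, lr)
```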
Journal of Cognitive Neuroscience (2009) 21 (8): 1611–1627.
Published: 01 August 2009
Abstract
Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth-pursuit eye movements. In particular, the saccadic and smooth-pursuit systems interact to often choose the same target, and to maximize its visibility through time. How do multiple brain regions interact, including frontal cortical areas, to decide the choice of a target among several competing moving stimuli? How is target selection information that is created by a bias (e.g., electrical stimulation) transferred from one movement system to another? These saccade–pursuit interactions are clarified by a new computational neural model, which describes interactions between motion processing areas: the middle temporal area, the medial superior temporal area, the frontal pursuit area, and the dorsal lateral pontine nucleus; saccade specification, selection, and planning areas: the lateral intraparietal area, the frontal eye fields, the substantia nigra pars reticulata, and the superior colliculus; the saccadic generator in the brain stem; and the cerebellum. Model simulations explain a broad range of neuroanatomical and neurophysiological data. These results are in contrast with the simplest parallel model with no interactions between saccades and pursuit other than common-target selection and recruitment of shared motoneurons. Actual tracking episodes in primates reveal multiple systematic deviations from predictions of the simplest parallel model, which are explained by the current model.
Journal of Cognitive Neuroscience (2001) 13 (1): 102–120.
Published: 01 January 2001
Abstract
Smooth pursuit eye movements (SPEMs) are eye rotations that are used to maintain fixation on a moving target. Such rotations complicate the interpretation of the retinal image, because they nullify the retinal motion of the target, while generating retinal motion of stationary objects in the background. This poses a problem for the oculomotor system, which must track the stabilized target image while suppressing the optokinetic reflex, which would move the eye in the direction of the retinal background motion (opposite to the direction in which the target is moving). Similarly, the perceptual system must estimate the actual direction and speed of moving objects in spite of the confounding effects of the eye rotation. This paper proposes a neural model to account for the ability of primates to accomplish these tasks. The model simulates the neurophysiological properties of cell types found in the superior temporal sulcus of the macaque monkey, specifically the medial superior temporal (MST) region. These cells process signals related to target motion and background motion, and receive an efference copy of eye velocity during pursuit movements. The model focuses on the interactions between cells in the ventral and dorsal subdivisions of MST, which are hypothesized to process target velocity and background motion, respectively. The model explains how these signals can be combined to account for behavioral data about pursuit maintenance and perceptual data from human studies, including the Aubert-Fleischl phenomenon and the Filehne Illusion, thereby clarifying the functional significance of neurophysiological data about these MST cell properties. It is suggested that the connectivity used in the model may represent a general strategy used by the brain in analyzing the visual world.
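The two illusions mentioned above are often summarized by a simple linear combination of retinal motion with an underweighted efference copy of eye velocity. The sketch below illustrates that textbook account, not the MST network itself; the gain value is an assumption.

```python
def perceived_velocity(retinal_slip, eye_velocity, gain=0.8):
    """Perceived world velocity as retinal slip plus a scaled efference copy.

    gain < 1 models an undervalued eye-velocity signal; 0.8 is illustrative.
    """
    return retinal_slip + gain * eye_velocity

eye = 10.0  # deg/s pursuit

# Aubert-Fleischl phenomenon: a pursued target (zero retinal slip) appears
# slower than its true 10 deg/s.
print(perceived_velocity(0.0, eye))    # 8.0

# Filehne Illusion: a stationary background (retinal slip = -eye) appears to
# drift slowly opposite to the pursuit direction instead of standing still.
print(perceived_velocity(-eye, eye))   # -2.0
```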
Journal of Cognitive Neuroscience (1998) 10 (4): 425–444.
Published: 01 July 1998
Abstract
A model of cortico-spinal trajectory generation for voluntary reaching movements is developed to functionally interpret a broad range of behavioral, physiological, and anatomical data. The model simulates how arm movements achieve their remarkable efficiency and accuracy in response to widely varying positional, speed, and force constraints. A key issue in arm movement control is how the brain copes with such a wide range of movement contexts. The model suggests how the brain may set automatic and volitional gating mechanisms to vary the balance of static and dynamic feedback information to guide the movement command and to compensate for external forces. For example, with increasing movement speed, the system shifts from a feedback position controller to a feedforward trajectory generator with superimposed dynamics compensation. Simulations of the model illustrate how it reproduces the effects of elastic loads on fast movements, endpoint errors in Coriolis fields, and several effects of muscle tendon vibration, including tonic and antagonist vibration reflexes, position and movement illusions, effects of obstructing the tonic vibration reflex, and reaching undershoots caused by antagonist vibration.
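A minimal sketch of the speed-dependent shift the abstract describes, blending a static position-feedback command with a feedforward trajectory command under a gating signal; the gating law and gain below are illustrative assumptions, not the paper's equations.

```python
def motor_command(target_pos, current_pos, planned_accel, speed, k_p=5.0):
    """Blend static position feedback with a feedforward trajectory command.

    The gate g rises with movement speed, so slow movements are dominated by
    position-error feedback and fast movements by the feedforward trajectory
    generator (to which dynamics compensation would be superimposed).
    """
    g = speed / (speed + 1.0)                    # gating signal in [0, 1)
    feedback = k_p * (target_pos - current_pos)  # static position controller
    feedforward = planned_accel                  # trajectory generator output
    return (1.0 - g) * feedback + g * feedforward
```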
Journal of Cognitive Neuroscience (1998) 10 (2): 199–215.
Published: 01 March 1998
Abstract
This article develops a neural model of how sharp disparity tuning can arise through experience-dependent development of cortical complex cells. This learning process clarifies how complex cells can binocularly match left and right eye image features with the same contrast polarity, yet also pool signals with opposite contrast polarities. Antagonistic rebounds between LGN ON and OFF cells and cortical simple cells sensitive to opposite contrast polarities enable anticorrelated simple cells to learn to activate a shared set of complex cells. Feedback from binocularly tuned cortical cells to monocular LGN cells is proposed to carry out a matching process that dynamically stabilizes the learning process. This feedback represents a type of matching process that is elaborated at higher visual processing areas into a volitionally controllable type of attention. We show stable learning when both of these properties hold. Learning adjusts the initially coarsely tuned disparity preference to match the disparities present in the environment, and the tuning width decreases to yield high disparity selectivity, which enables the model to quickly detect image disparities. Learning is impaired in the absence of either antagonistic rebounds or corticogeniculate feedback. The model also helps to explain psychophysical and neurobiological data about adult 3-D vision.
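As a toy illustration of how a binocular complex cell can pool opposite contrast polarities from the two eyes at a preferred disparity (omitting the model's learning, antagonistic rebounds, and corticogeniculate feedback), assuming simple 2-D image patches and a hand-built kernel:

```python
import numpy as np

def complex_cell_response(left_patch, right_patch, kernel, disparity):
    """Polarity-pooling, disparity-shifted complex cell response (toy version).

    Half-wave rectified ON and OFF simple-cell responses from both eyes are
    pooled, with the right-eye input shifted by the preferred disparity.
    """
    def simple(patch, k):
        drive = float(np.sum(patch * k))
        return max(drive, 0.0), max(-drive, 0.0)   # ON and OFF polarities

    on_l, off_l = simple(left_patch, kernel)
    on_r, off_r = simple(np.roll(right_patch, disparity, axis=1), kernel)
    return on_l + off_l + on_r + off_r

# A vertical bar shifted by one pixel between the eyes drives the cell
# strongly when the preferred disparity matches that shift.
patchL = np.zeros((5, 5)); patchL[:, 2] = 1.0
patchR = np.zeros((5, 5)); patchR[:, 3] = 1.0
kern = np.zeros((5, 5)); kern[:, 2] = 1.0; kern[:, 3] = -1.0
print(complex_cell_response(patchL, patchR, kern, disparity=-1))
```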
Journal of Cognitive Neuroscience (1997) 9 (1): 117–132.
Published: 01 January 1997
Abstract
How does the brain group together different parts of an object into a coherent visual object representation? Different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a process that resynchronizes cortical activities corresponding to the same retinal object. A neural network model is presented that is able to rapidly resynchronize desynchronized neural activities. The model provides a link between perceptual and brain data. Model properties quantitatively simulate perceptual framing data, including psychophysical data about temporal order judgments and the reduction of threshold contrast as a function of stimulus length. Such a model has earlier been used to explain data about illusory contour formation, texture segregation, shape-from-shading, 3-D vision, and cortical receptive fields. The model hereby shows how many data may be understood as manifestations of a cortical grouping process that can rapidly resynchronize image parts that belong together in visual object representations. The model exhibits better synchronization in the presence of noise than without noise, a type of stochastic resonance, and synchronizes robustly when cells that represent different stimulus orientations compete. These properties arise when fast long-range cooperation and slow short-range competition interact via nonlinear feedback interactions with cells that obey shunting equations.
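The shunting equations mentioned in the abstract have the standard membrane form shown below; the sketch gives one Euler integration step with illustrative parameter values, not the full framing network with its cooperative and competitive kernels.

```python
def shunting_step(x, excitation, inhibition, dt=0.001, A=1.0, B=1.0, D=1.0):
    """One Euler step of a shunting (membrane) equation:

        dx/dt = -A*x + (B - x)*excitation - (D + x)*inhibition

    Activity decays at rate A, is driven toward the upper bound B by
    excitation and toward the lower bound -D by inhibition, so x remains in
    [-D, B] regardless of input size (automatic gain control). Parameter
    values here are illustrative.
    """
    dx = -A * x + (B - x) * excitation - (D + x) * inhibition
    return x + dt * dx
```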
Journal of Cognitive Neuroscience (1996) 8 (3): 257–277.
Published: 01 July 1996
Abstract
The concepts of declarative memory and procedural memory have been used to distinguish two basic types of learning. A neural network model suggests how such memory processes work together as recognition learning, reinforcement learning, and sensorimotor learning take place during adaptive behaviors. To coordinate these processes, the hippocampal formation and cerebellum each contain circuits that learn to adaptively time their outputs. Within the model, hippocampal timing helps to maintain attention on motivationally salient goal objects during variable task-related delays, and cerebellar timing controls the release of conditioned responses. This property is part of the model's description of how cognitive-emotional interactions focus attention on motivationally valued cues, and how this process breaks down due to hippocampal ablation. The model suggests that the hippocampal mechanisms that help to rapidly draw attention to salient cues could prematurely release motor commands were the release of these commands not adaptively timed by the cerebellum. The model hippocampal system modulates cortical recognition learning without actually encoding the representational information that the cortex encodes. These properties avoid the difficulties faced by several models that propose a direct hippocampal role in recognition learning. Learning within the model hippocampal system controls adaptive timing and spatial orientation. Model properties hereby clarify how hippocampal ablations cause amnesic symptoms and difficulties with tasks that combine task delays, novelty detection, and attention toward goal objects amid distractions. When these model recognition, reinforcement, sensorimotor, and timing processes work together, they suggest how the brain can accomplish conditioning of multiple sensory events to delayed rewards, as during serial compound conditioning.
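One common way to picture an adaptively timed circuit is a bank of slow processes spanning a spectrum of rates, whose learned combination peaks at the trained delay. The toy sketch below illustrates only that idea; the gamma-like form, rates, and weights are assumptions, not the paper's timing circuits.

```python
import numpy as np

def timed_response(t, weights, rates, power=4):
    """Toy adaptive-timing readout.

    Each process (r * t)**power * exp(-r * t) peaks at t = power / r, so a
    learned weighting of the bank can bridge a particular delay.
    """
    t = np.asarray(t, dtype=float)
    basis = np.stack([(r * t) ** power * np.exp(-r * t) for r in rates])
    basis /= basis.max(axis=1, keepdims=True)   # normalize each peak to 1
    return weights @ basis

rates = np.linspace(0.5, 8.0, 8)                # spectrum of reaction rates
w = np.zeros(8); w[2] = 1.0                     # weight on one delay only
t = np.linspace(0.01, 10.0, 500)
y = timed_response(t, w, rates)                 # peaks near t = 4 / rates[2]
```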
Journal of Cognitive Neuroscience (1994) 6 (4): 341–358.
Published: 01 July 1994
Abstract
A neural model is described of how the brain may autonomously learn a body-centered representation of a three-dimensional (3-D) target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation, otherwise known as a parcellated distributed representation, of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of nonfoveated target position to learn a visuomotor representation of both foveated and nonfoveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors which act as error signals that drive the learning process, as well as control the on-line merging of multimodal information.
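The sum-and-difference coding of angular and vergence coordinates can be illustrated with a two-line computation on the two eyes' azimuths; the signs and scaling below are assumptions, not the paper's opponent-processing stages.

```python
def head_centered_coordinates(left_azimuth_deg, right_azimuth_deg):
    """Cyclopean angle from the mean of the two eyes' azimuths, vergence
    from their difference (larger for nearer targets)."""
    angle = 0.5 * (left_azimuth_deg + right_azimuth_deg)  # head-centered direction
    vergence = left_azimuth_deg - right_azimuth_deg       # depth-related signal
    return angle, vergence

# A foveated target 10 deg to the right and fairly near:
print(head_centered_coordinates(12.0, 8.0))   # (10.0, 4.0)
```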
Journal of Cognitive Neuroscience (1993) 5 (4): 408–435.
Published: 01 October 1993
Abstract
This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
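A difference-vector reaching step in the spirit of the abstract can be sketched as follows; here a Jacobian pseudoinverse stands in for the DIRECT model's learned, configuration-dependent spatial-to-motor transform, and the arm dimensions are arbitrary assumptions.

```python
import numpy as np

def difference_vector_step(target_xyz, effector_xyz, jacobian, gain=0.1):
    """One reaching increment from a spatial difference vector.

    The spatial direction vector is target minus current end-effector
    position; the pseudoinverse maps it to a joint-rotation (motor direction)
    vector, resolving redundant degrees of freedom with the minimum-norm
    solution.
    """
    spatial_dv = np.asarray(target_xyz) - np.asarray(effector_xyz)
    motor_dv = np.linalg.pinv(jacobian) @ spatial_dv   # joint rotation direction
    return gain * motor_dv                             # small joint increment

# Example with a redundant arm: 3-D task space, 4 joints.
J = np.random.default_rng(1).standard_normal((3, 4))
dq = difference_vector_step([0.3, 0.2, 0.5], [0.0, 0.0, 0.4], J)
```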