John Rinzel
Neural Computation (2021) 33 (10): 2603–2645.
Published: 16 September 2021
Abstract
Recurrent neural networks (RNNs) have been widely used to model the sequential neural dynamics (“neural sequences”) of cortical circuits in cognitive and motor tasks. Incorporating biological constraints such as Dale's principle helps elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to a variety of input signals and interpolated time-warped inputs when representing the sequence. Interestingly, a learned sequence could repeat periodically when the RNN evolved beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, with its growing or decaying modes, together with the RNN's nonlinearity, was sufficient to generate a limit cycle attractor. We further examined the stability of the dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in excitatory-inhibitory RNNs.
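
The architecture described in this abstract can be made concrete with a short sketch. The Python snippet below is an illustration, not the authors' code: it builds a rate-based excitatory-inhibitory RNN whose connectivity obeys Dale's principle. The network size, E/I ratio, time constant, weight statistics, and trigger input are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_exc, n_inh = 80, 20              # assumed 4:1 E/I ratio
n = n_exc + n_inh
tau, dt = 0.02, 0.001              # time constant and Euler step (s)

# Nonnegative weight magnitudes; inhibitory columns are strengthened to
# balance the E/I ratio, and a diagonal sign matrix enforces Dale's
# principle (each unit's outgoing weights share one sign).
W_abs = rng.gamma(2.0, 0.05, size=(n, n)) / np.sqrt(n)
W_abs[:, n_exc:] *= n_exc / n_inh
D = np.diag([1.0] * n_exc + [-1.0] * n_inh)
W = W_abs @ D                      # each column is all-excitatory or all-inhibitory

def step(x, u):
    """One Euler step of tau * dx/dt = -x + W @ phi(x) + u, with phi = ReLU."""
    r = np.maximum(x, 0.0)         # firing rates via rectification
    return x + dt / tau * (-x + W @ r + u)

x = np.zeros(n)
for t in range(1000):              # simulate 1 s with a brief trigger input
    u = np.zeros(n)
    if t < 50:
        u[:n_exc] = 1.0            # transient cue that launches the dynamics
    x = step(x, u)
```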
Neural Computation (2013) 25 (1): 1–45.
Published: 01 January 2013
Abstract
In visual and auditory scenes, we are able to identify shared features among sensory objects and group them according to their similarity. This grouping is fast and preattentive, and it is thought of as an elementary form of categorization by which objects sharing similar features are clustered in an abstract perceptual space. It is unclear what neuronal mechanisms underlie this fast categorization. Here we propose a neuromechanistic model of fast feature categorization based on the framework of continuous attractor networks. The mechanism for category formation does not rely on learning and is based on biologically plausible assumptions: the existence of populations of neurons tuned to feature values, feature-specific interactions, and subthreshold evoked responses to the presentation of single objects. When the network is presented with a sequence of stimuli characterized by some feature, it sums the evoked responses and provides a running estimate of the distribution of features in the input stream. If the distribution of features is structured into different components or peaks (i.e., is multimodal), recurrent excitation amplifies the response of activated neurons, and categories are singled out as emerging localized patterns of elevated neuronal activity (bumps), each centered at the centroid of its cluster. The emergence of bump states through sequential, subthreshold activation, and their dependence on input statistics, is a novel application of attractor networks. We show that the extraction and representation of multiple categories are facilitated by the rich attractor structure of the network, which can sustain multiple stable activity patterns over a robust range of connectivity parameters compatible with cortical physiology.
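
A minimal sketch of the proposed mechanism follows, assuming a one-dimensional (ring) feature axis, threshold-linear rates, and illustrative kernel widths and gains; none of these values come from the paper. Subthreshold tuned inputs accumulate across the stimulus stream, and recurrent excitation can amplify clustered input into bumps.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
theta = np.linspace(-np.pi, np.pi, n, endpoint=False)  # preferred feature values

# Local excitation with broad inhibition over the circular feature axis.
d = np.abs(np.angle(np.exp(1j * (theta[:, None] - theta[None, :]))))
W = 2.0 * np.exp(-d**2 / 0.1) - 0.5

tau, dt = 0.05, 0.005
x = np.zeros(n)

# A bimodal stimulus stream: two feature clusters the network should extract.
stimuli = np.concatenate([rng.normal(-1.0, 0.2, 50), rng.normal(1.2, 0.2, 50)])
rng.shuffle(stimuli)

for s in stimuli:
    u = 0.3 * np.exp(-(theta - s) ** 2 / 0.05)  # subthreshold tuned input
    for _ in range(20):                          # brief presentation
        r = np.maximum(x - 0.5, 0.0)             # threshold-linear rates
        x += dt / tau * (-x + W @ r / n + u)

# With these assumed parameters, activity after the stream can show
# localized bumps near the two cluster centroids.
```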
Neural Computation (1992) 4 (4): 534–545.
Published: 01 July 1992
Abstract
When postsynaptic conductance varies slowly compared with the spike-generation process, a straightforward averaging scheme can be used to reduce the system's complexity. Our model consists of a Hodgkin-Huxley-like membrane description for each cell; synaptic activation is described by first-order kinetics, with slow rates, in which the equilibrium activation is a sigmoidal function of the presynaptic voltage. Our work concentrates on a two-cell network, and it applies qualitatively to the activity patterns, including bistable behavior, recently observed in simple in vitro circuits with slow synapses (Kleinfeld et al. 1990). The fact that our averaged system is derived from a realistic biophysical model has important consequences. In particular, it can preserve certain hysteresis behavior near threshold that is not represented in a simple ad hoc sigmoidal input-output network. This behavior enables a coupled pair of cells, one excitatory and one inhibitory, to generate an alternating burst rhythm even though neither cell has fatiguing properties.
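
The averaging argument rests on the first-order synaptic kinetics described above. The following sketch, with assumed parameter values rather than the paper's, shows that structure in isolation: a slow gate s relaxes toward a sigmoidal equilibrium s_inf(V_pre), so on the slow timescale s tracks an averaged version of the presynaptic drive.

```python
import numpy as np

def s_inf(v, v_half=-20.0, k=2.0):
    """Sigmoidal equilibrium synaptic activation as a function of V_pre (mV)."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

def synapse_step(s, v_pre, dt, tau_s=200.0):
    """One Euler step of tau_s * ds/dt = s_inf(v_pre) - s (times in ms)."""
    return s + dt * (s_inf(v_pre) - s) / tau_s

# Example: a square wave stands in for presynaptic voltage; because tau_s
# is slow, the gate s tracks a smoothed (averaged) version of the drive.
dt, T = 0.1, 2000.0
t = np.arange(0.0, T, dt)
v_pre = np.where((t // 250.0) % 2 == 0, 0.0, -60.0)
s = np.zeros_like(t)
for i in range(1, len(t)):
    s[i] = synapse_step(s[i - 1], v_pre[i - 1], dt)
```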
Neural Computation (1992) 4 (1): 84–97.
Published: 01 January 1992
Abstract
We study pacemaker rhythms generated by two nonoscillatory model cells that are coupled by inhibitory synapses. A minimal ionic model that exhibits postinhibitory rebound (PIR) is presented. When the postsynaptic conductance depends instantaneously on the presynaptic potential, the classical alternating rhythm is obtained. Using phase-plane analysis, we identify two underlying mechanisms, “release” and “escape,” for the out-of-phase oscillation. When the postsynaptic conductance is not instantaneous but decays slowly, the two cells can oscillate synchronously with no phase difference. In each case, different stable activity patterns can coexist over a substantial parameter range.
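
As a rough illustration of the setup, and not the published minimal ionic model: two cells carry a T-type-like rebound current that supports postinhibitory rebound and inhibit each other through a synapse whose activation is an instantaneous sigmoid of presynaptic voltage. All parameter values below are assumptions.

```python
import numpy as np

def sig(v, vh, k):
    """Sigmoid: increasing in v for k > 0, decreasing for k < 0."""
    return 1.0 / (1.0 + np.exp((vh - v) / k))

C, gL, vL = 1.0, 0.1, -60.0        # membrane capacitance and leak
gT, vT = 0.3, 120.0                # rebound ("T-type"-like) current
g_syn, v_syn = 0.3, -80.0          # mutual inhibitory synapse

def deriv(v, h, s_pre):
    i_L = gL * (v - vL)
    i_T = gT * sig(v, -50.0, 2.0) * h * (v - vT)   # activates on rebound
    i_syn = g_syn * s_pre * (v - v_syn)
    dv = -(i_L + i_T + i_syn) / C
    dh = (sig(v, -70.0, -3.0) - h) / 100.0         # h deinactivates when hyperpolarized
    return dv, dh

dt = 0.05                          # ms
v = np.array([-55.0, -65.0])       # asymmetric initial conditions
h = np.array([0.1, 0.4])
for _ in range(100000):            # 5 s of simulated time
    s = sig(v, -45.0, 2.0)         # instantaneous synaptic activation
    dv0, dh0 = deriv(v[0], h[0], s[1])
    dv1, dh1 = deriv(v[1], h[1], s[0])
    v += dt * np.array([dv0, dv1])
    h += dt * np.array([dh0, dh1])
# With these assumed parameters the pair can settle into out-of-phase
# alternation; whether it does so by "release" or "escape" depends on where
# the synaptic threshold sits relative to the cells' dynamics.
```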