H. Sompolinsky
1-4 of 4 journal articles
Journal Articles
Publisher: Journals Gateway
Neural Computation (1998) 10 (6): 1321–1371.
Published: 15 August 1998
Abstract
The nature and origin of the temporal irregularity in the electrical activity of cortical neurons in vivo are not well understood. We consider the hypothesis that this irregularity is due to a balance of excitatory and inhibitory currents into the cortical cells. We study a network model with excitatory and inhibitory populations of simple binary units. The internal feedback is mediated by relatively large synaptic strengths, so that the magnitude of the total excitatory and inhibitory feedback is much larger than the neuronal threshold. The connectivity is random and sparse. The mean number of connections per unit is large, though small compared to the total number of cells in the network. The network also receives a large, temporally regular input from external sources. We present an analytical solution of the mean-field theory of this model, which is exact in the limit of large network size. This theory reveals a new cooperative stationary state of large networks, which we term a balanced state. In this state, a balance between the excitatory and inhibitory inputs emerges dynamically for a wide range of parameters, resulting in a net input whose temporal fluctuations are of the same order as its mean. The internal synaptic inputs act as a strong negative feedback, which linearizes the population responses to the external drive despite the strong nonlinearity of the individual cells. This feedback also greatly stabilizes the system's state and enables it to track a time-dependent input on time scales much shorter than the time constant of a single cell. The spatiotemporal statistics of the balanced state are calculated. It is shown that the autocorrelations decay on a short time scale, yielding approximately Poissonian temporal statistics. The activity levels of single cells are broadly distributed, and their distribution exhibits a skewed shape with a long power-law tail.
The chaotic nature of the balanced state is revealed by showing that the evolution of the microscopic state of the network is extremely sensitive to small deviations in its initial conditions. The balanced state generated by the sparse, strong connections is an asynchronous chaotic state. It is accompanied by weak spatial cross-correlations, the strength of which vanishes in the limit of large network size. This is in contrast to the synchronized chaotic states exhibited by more conventional network models with dense connectivity and weak synapses.
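The ingredients of the model in this abstract (binary units, two populations, sparse random connectivity with mean K connections per unit, synaptic strengths of order 1/√K, and an O(√K) external drive) can be put into a minimal simulation. The sketch below is only illustrative: all population sizes, coupling values, and drive levels are arbitrary choices, not the paper's parameters, and no claim is made that they sit in the balanced regime derived in the mean-field theory.

```python
import numpy as np

# Illustrative sketch of a sparse two-population binary network with
# strong (1/sqrt(K)) synapses and a large external drive. All numbers
# are assumptions for demonstration, not the paper's parameters.
rng = np.random.default_rng(0)

N_E, N_I = 800, 200      # excitatory / inhibitory population sizes (K << N)
K = 50                   # mean number of connections per unit
theta = 1.0              # firing threshold

def sparse_weights(n_post, n_pre, J):
    # Each connection exists with probability K / n_pre and has strength
    # J / sqrt(K), so each unit receives ~K inputs whose total excitation
    # and inhibition are each large compared to the threshold.
    return (J / np.sqrt(K)) * (rng.random((n_post, n_pre)) < K / n_pre)

W_EE = sparse_weights(N_E, N_E,  1.0)
W_EI = sparse_weights(N_E, N_I, -2.0)
W_IE = sparse_weights(N_I, N_E,  1.0)
W_II = sparse_weights(N_I, N_I, -1.8)

# Strong, temporally regular external drive, O(sqrt(K)) like the feedback.
ext_E = 1.5 * np.sqrt(K) * 0.15
ext_I = 1.0 * np.sqrt(K) * 0.15

s_E = (rng.random(N_E) < 0.1).astype(float)
s_I = (rng.random(N_I) < 0.1).astype(float)

# Asynchronous updates: each step, one randomly chosen unit recomputes its
# state by thresholding its total (recurrent + external) input.
for _ in range(20 * (N_E + N_I)):
    if rng.random() < N_E / (N_E + N_I):
        i = rng.integers(N_E)
        s_E[i] = float(ext_E + W_EE[i] @ s_E + W_EI[i] @ s_I > theta)
    else:
        i = rng.integers(N_I)
        s_I[i] = float(ext_I + W_IE[i] @ s_E + W_II[i] @ s_I > theta)

m_E, m_I = s_E.mean(), s_I.mean()
print(f"mean activity: E={m_E:.2f}, I={m_I:.2f}")
```

With these couplings the inhibitory feedback dominates, so the population rates settle at intermediate values rather than saturating, which is the qualitative signature of the balance the abstract describes.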
Neural Computation (1996) 8 (2): 270–299.
Published: 15 February 1996
Abstract
We study neural network models of discriminating between stimuli with two similar angles, using the two-alternative forced choice (2AFC) paradigm. Two network architectures are investigated: a two-layer perceptron network and a gating network. In the two-layer network all hidden units contribute to the decision at all angles, while in the other architecture the gating units select, for each stimulus, the appropriate hidden units that will dominate the decision. We find that both architectures can perform the task reasonably well for all angles. Perceptual learning has been modeled by training the networks to perform the task, using unsupervised Hebb learning algorithms with pairs of stimuli at fixed angles θ and δθ. Perceptual transfer is studied by measuring the performance of the network on stimuli with θ′ ≠ θ. The two-layer perceptron shows a partial transfer for angles that are within a distance a from θ, where a is the angular width of the input tuning curves. The change in performance due to learning is positive for angles close to θ, but for |θ − θ′| ≈ a it is negative, i.e., its performance after training is worse than before. In contrast, negative transfer can be avoided in the gating network by limiting the effects of learning to hidden units that are optimized for angles that are close to the trained angle.
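The trial structure in this abstract (tuning-curve inputs of width a, training at one angle, probing transfer at another) can be sketched as follows. For brevity the sketch substitutes a supervised perceptron rule for the paper's unsupervised Hebb learning, and the tuning width, noise level, angles, and trial counts are all illustrative assumptions.

```python
import numpy as np

# Sketch of a 2AFC angle-discrimination task with tuning-curve inputs.
# Uses a supervised perceptron readout in place of the paper's
# unsupervised Hebb rule; all parameter values are illustrative.
rng = np.random.default_rng(1)

n_units = 60
pref = np.arange(n_units) * 180.0 / n_units   # preferred angles (deg)
a = 20.0                                      # tuning-curve width (deg)

def encode(theta):
    d = np.abs(pref - theta)
    d = np.minimum(d, 180.0 - d)              # circular distance on [0, 180)
    return np.exp(-d**2 / (2 * a**2))

def trial(theta, dtheta, noise=0.05):
    # Present theta and theta + dtheta in random order; x is the noisy
    # difference of the two population responses, y the correct answer
    # (+1 if the first stimulus was the larger angle).
    if rng.random() < 0.5:
        t1, t2, y = theta + dtheta, theta, 1.0
    else:
        t1, t2, y = theta, theta + dtheta, -1.0
    return encode(t1) - encode(t2) + noise * rng.normal(size=n_units), y

def accuracy(w, theta, dtheta, n=500):
    hits = 0
    for _ in range(n):
        x, y = trial(theta, dtheta)
        hits += (np.sign(w @ x) == y)
    return hits / n

# Train the readout on pairs at one fixed angle, then probe transfer at
# an angle much farther than `a` from the trained one.
w = np.zeros(n_units)
theta_train, dtheta = 90.0, 4.0
for _ in range(1000):
    x, y = trial(theta_train, dtheta)
    if y * (w @ x) <= 0:
        w += y * x                            # perceptron update on errors

acc_train = accuracy(w, theta_train, dtheta)
acc_far = accuracy(w, 150.0, dtheta)
print(f"trained angle 90°: {acc_train:.2f}, probe angle 150°: {acc_far:.2f}")
```

Because the learned weights only have support on units tuned near the trained angle, performance at a probe angle far outside the tuning width falls back toward chance, which is the transfer question the abstract investigates.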
Neural Computation (1994) 6 (4): 642–657.
Published: 01 July 1994
Abstract
We propose a model of coupled oscillators with noise that performs segmentation of stimuli using a set of stored images, each consisting of objects and a background. The oscillators' amplitudes encode the spatial and featural distribution of the external stimulus. The coherence of their phases signifies their belonging to the same object. In the learning stage, the couplings between phases are modified in a Hebb-like manner. By mean-field analysis and simulations, we show that an external stimulus whose local features resemble those of one or several of the stored objects generates a selective phase coherence that represents the stored pattern of segmentation.
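The mechanism in this abstract, Hebb-like phase couplings derived from a stored segmentation producing selective phase coherence, can be reduced to a small noisy phase-oscillator simulation. The sketch below stores a single two-object segmentation and checks that each object's oscillators phase-lock; the sizes, coupling strengths, and noise level are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy version of segmentation by phase coherence: couplings are set
# Hebb-like from one stored segmentation (positive within an object,
# negative across objects), then noisy phase dynamics run freely.
# All numbers are illustrative assumptions.
rng = np.random.default_rng(2)

labels = np.repeat([0, 1], 20)          # stored segmentation: two objects
n = labels.size
same = labels[:, None] == labels[None, :]
J = np.where(same, 2.0, -0.5) / n       # Hebb-like phase couplings
np.fill_diagonal(J, 0.0)

phi = rng.uniform(0, 2 * np.pi, n)      # oscillator phases
dt, noise = 0.05, 0.1
for _ in range(2000):
    # Kuramoto-type dynamics: d(phi_i)/dt = sum_j J_ij sin(phi_j - phi_i)
    drive = (J * np.sin(phi[None, :] - phi[:, None])).sum(axis=1)
    phi += dt * drive + np.sqrt(dt) * noise * rng.normal(size=n)

def coherence(ph):
    # Magnitude of the mean phase vector: 1 = fully locked, ~0 = incoherent.
    return np.abs(np.exp(1j * ph).mean())

R0 = coherence(phi[labels == 0])
R1 = coherence(phi[labels == 1])
print(f"within-object coherence: {R0:.2f}, {R1:.2f}")
```

The positive within-object couplings pull each object's phases together while the negative cross couplings keep the two objects apart, so coherence of phases tags membership in the same stored object, as in the abstract.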
Neural Computation (1993) 5 (4): 550–569.
Published: 01 July 1993
Abstract
We study theoretically how an interaction between assemblies of neuronal oscillators can be modulated by the pattern of external stimuli. It is shown that spatial variations in the stimuli can control the magnitude and phase of the synchronization between the output of neurons with different receptive fields. This modulation emerges from cooperative dynamics in the network, without the need for specialized, activity-dependent synapses. Our results further suggest that the modulation of neuronal interactions by extended features of a stimulus may give rise to complex spatiotemporal fluctuations in the phases of neuronal oscillations.