Haim Sompolinsky
1–7 of 7 results
Journal Articles
Publisher: Journals Gateway
Neural Computation (2023) 35 (2): 105–155.
Published: 20 January 2023
Abstract
The binding operation is fundamental to many cognitive processes, such as cognitive map formation, relational reasoning, and language comprehension. In these processes, two different modalities, such as location and objects, events and their contextual cues, and words and their roles, need to be bound together, but little is known about the underlying neural mechanisms. Previous work has introduced a binding model based on quadratic functions of bound pairs, followed by vector summation of multiple pairs. Based on this framework, we address the following questions: Which classes of quadratic matrices are optimal for decoding relational structures? And what is the resultant accuracy? We introduce a new class of binding matrices based on a matrix representation of octonion algebra, an eight-dimensional extension of complex numbers. We show that these matrices enable a more accurate unbinding than previously known methods when a small number of pairs are present. Moreover, numerical optimization of a binding operator converges to this octonion binding. When there is a large number of bound pairs, however, a random quadratic binding performs as well as the octonion and previously proposed binding methods. This study thus provides new insight into potential neural mechanisms of binding operations in the brain.
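The bind–superpose–unbind scheme described above can be illustrated with its simplest special case: elementwise (Hadamard) binding of random ±1 vectors, which corresponds to diagonal quadratic binding matrices. This is a minimal sketch, not the paper's octonion construction; the dimension, codebook sizes, and pair indices are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_items = 256, 20

# Random +/-1 codebooks for the two modalities (e.g., roles and fillers).
roles = rng.choice([-1.0, 1.0], size=(n_items, d))
fillers = rng.choice([-1.0, 1.0], size=(n_items, d))

# Bind each pair elementwise, then superpose by vector summation.
pairs = [(0, 5), (1, 7), (2, 9)]
trace = np.sum([roles[i] * fillers[j] for i, j in pairs], axis=0)

# Unbind: for +/-1 vectors each role is its own inverse (r * r = 1),
# so roles[i] * trace ~= fillers[j] plus crosstalk from the other pairs.
for i, j in pairs:
    estimate = roles[i] * trace
    decoded = int(np.argmax(fillers @ estimate))  # nearest codebook entry
    assert decoded == j
```

With few bound pairs the crosstalk is small relative to the signal, so nearest-neighbor decoding recovers every filler; as the number of pairs grows, the crosstalk terms accumulate, which is the regime the abstract's accuracy comparison addresses.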
Neural Computation (2018) 30 (10): 2593–2615.
Published: 01 October 2018
Figures: 6
Abstract
We consider the problem of classifying data manifolds where each manifold represents invariances that are parameterized by continuous degrees of freedom. Conventional data augmentation methods rely on sampling large numbers of training examples from these manifolds. Instead, we propose an iterative algorithm, MCP, based on a cutting plane approach that efficiently solves a quadratic semi-infinite programming problem to find the maximum margin solution. We provide a proof of convergence as well as a polynomial bound on the number of iterations required for a desired tolerance in the objective function. The efficiency and performance of MCP are demonstrated in high-dimensional simulations and on image manifolds generated from the ImageNet data set. Our results indicate that MCP is able to rapidly learn good classifiers and shows superior generalization performance compared with conventional maximum margin methods that rely on data augmentation.
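The cutting-plane idea can be sketched on the simplest manifolds, line segments x(t) = c + t·u with t in [−1, 1]: train a classifier on a finite active set, add each manifold's worst-margin point, and repeat. This is a toy illustration under stated assumptions, with a plain subgradient hinge-loss solver standing in for the paper's quadratic semi-infinite program; the data, constants, and stopping rule are not the paper's MCP implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each manifold is a segment x(t) = c + t*u, t in [-1, 1], with one label.
def make_segments(center, label, n=5):
    cs = center + rng.normal(scale=0.3, size=(n, 2))
    us = rng.normal(scale=0.5, size=(n, 2))
    return [(c, u, label) for c, u in zip(cs, us)]

manifolds = make_segments(np.array([2.0, 2.0]), +1) + \
            make_segments(np.array([-2.0, -2.0]), -1)

def train_svm(points, labels, lam=0.01, steps=2000, lr=0.05):
    """Soft-margin linear classifier via subgradient descent on hinge loss."""
    w, b = np.zeros(2), 0.0
    X, y = np.array(points), np.array(labels)
    for _ in range(steps):
        viol = y * (X @ w + b) < 1.0
        gw = lam * w - (y[viol, None] * X[viol]).sum(0) / len(X)
        gb = -y[viol].sum() / len(X)
        w, b = w - lr * gw, b - lr * gb
    return w, b

# Cutting-plane loop: retrain, then add each manifold's worst-margin point.
points = [c for c, u, y in manifolds]
labels = [y for c, u, y in manifolds]
for _ in range(10):
    w, b = train_svm(points, labels)
    added = False
    for c, u, y in manifolds:
        # The margin is affine in t, so the worst point is an endpoint.
        t = -np.sign(y * (w @ u)) if w @ u != 0 else 1.0
        x = c + t * u
        if y * (w @ x + b) < 1.0 - 1e-6:
            points.append(x); labels.append(y); added = True
    if not added:
        break

# Every point on every segment is classified correctly.
for c, u, y in manifolds:
    for t in np.linspace(-1, 1, 11):
        assert y * (w @ (c + t * u) + b) > 0
```

The key efficiency point survives even in this toy: only a handful of worst-case points per manifold are ever added, instead of densely sampling each segment as data augmentation would.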
Neural Computation (2012) 24 (2): 289–331.
Published: 01 February 2012
Figures: 7
Abstract
In this study, we assume that the brain uses a general-purpose pattern generator to transform static commands into basic movement segments. We hypothesize that this pattern generator includes an oscillator whose complete cycle generates a single movement segment. In order to demonstrate this hypothesis, we construct an oscillator-based model of movement generation. The model includes an oscillator that generates harmonic outputs whose frequency and amplitudes can be modulated by external inputs. The harmonic outputs drive a number of integrators, each activating a single muscle. The model generates muscle activation patterns composed of rectilinear and harmonic terms. We show that rectilinear and fundamental harmonic terms account for known properties of natural movements, such as the invariant bell-shaped hand velocity profile during reaching. We implement these dynamics by a neural network model and characterize the tuning properties of the neural integrator cells, the neural oscillator cells, and the inputs to the system. Finally, we propose a method to test our hypothesis that a neural oscillator is a central component in the generation of voluntary movement.
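The claim that rectilinear plus fundamental harmonic terms account for the bell-shaped hand velocity profile can be checked directly: one oscillator cycle with velocity v(t) = A(1 − cos 2πt/T) vanishes at both endpoints and peaks at mid-movement. This is a minimal numerical sketch of that one property, with illustrative constants, not the paper's full oscillator-and-integrator network.

```python
import numpy as np

T, A = 1.0, 2.0                      # movement duration and amplitude scale
t = np.linspace(0.0, T, 1001)

# One oscillator cycle: rectilinear term plus fundamental harmonic.
# v(t) = A*(1 - cos(2*pi*t/T)) vanishes at both endpoints, giving the
# bell-shaped velocity profile of natural reaching movements.
v = A * (1.0 - np.cos(2.0 * np.pi * t / T))
x = np.cumsum(v) * (t[1] - t[0])     # integrator: position from velocity

assert abs(v[0]) < 1e-9 and abs(v[-1]) < 1e-9
assert np.argmax(v) == len(t) // 2   # peak velocity at mid-movement
assert abs(x[-1] - A * T) < 1e-2     # displacement = mean speed * duration
```

In the model the harmonic term is produced by the oscillator and the position by a downstream neural integrator; here `np.cumsum` plays the integrator's role.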
Neural Computation (2009) 21 (8): 2269–2308.
Published: 01 August 2009
Figures: 9
Abstract
We consider a threshold-crossing spiking process as a simple model for the activity within a population of neurons. Assuming that these neurons are driven by a common fluctuating input with gaussian statistics, we evaluate the cross-correlation of spike trains in pairs of model neurons with different thresholds. This correlation function tends to be asymmetric in time, indicating a preference for the neuron with the lower threshold to fire before the one with the higher threshold, even if their inputs are identical. The relationship between these results and spike statistics in other models of neural activity is explored. In particular, we compare our model with an integrate-and-fire model in which the membrane voltage resets following each spike. The qualitative properties of spike cross-correlations, emerging from the threshold-crossing model, are similar to those of bursting events in the integrate-and-fire model. This is particularly true for generalized integrate-and-fire models in which spikes tend to occur in bursts, as observed, for example, in retinal ganglion cells driven by a rapidly fluctuating visual stimulus. The threshold-crossing model thus provides a simple, analytically tractable description of event onsets in these neurons.
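The temporal asymmetry described above is easy to reproduce in simulation: drive two threshold units with the same smoothed gaussian signal and count cross-correlogram entries at positive versus negative lags. This is an illustrative sketch with arbitrary thresholds, filter constants, and lag window, not the paper's analytical treatment.

```python
import numpy as np

rng = np.random.default_rng(2)

# Common fluctuating input with gaussian statistics, low-pass filtered
# so excursions are slow relative to the sampling step.
raw = rng.normal(size=200_000)
kernel = np.exp(-np.arange(200) / 50.0)
s = np.convolve(raw, kernel, mode="same")
s /= s.std()

def upward_crossings(signal, theta):
    """Indices where the signal crosses threshold theta from below."""
    return np.flatnonzero((signal[:-1] < theta) & (signal[1:] >= theta))

low = upward_crossings(s, 0.5)    # spikes of the low-threshold neuron
high = upward_crossings(s, 1.0)   # spikes of the high-threshold neuron

# Cross-correlogram: a positive lag counts an event where the
# low-threshold neuron fired *before* the high-threshold neuron.
max_lag = 100
low_train = np.zeros(len(s))
low_train[low] = 1.0
lags = np.arange(-max_lag, max_lag + 1)
cc = np.array([sum(low_train[t - lag] for t in high
                   if 0 <= t - lag < len(s)) for lag in lags])

pos = cc[lags > 0].sum()   # low-threshold neuron fires first
neg = cc[lags < 0].sum()   # high-threshold neuron fires first
assert pos > neg           # asymmetry despite identical inputs
```

The asymmetry is structural: on any smooth upswing, the signal must pass the lower threshold before the higher one, so the lower-threshold unit systematically leads.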
Neural Computation (2006) 18 (8): 1951–1986.
Published: 01 August 2006
Abstract
In many cortical and subcortical areas, neurons are known to modulate their average firing rate in response to certain external stimulus features. It is widely believed that information about the stimulus features is coded by a weighted average of the neural responses. Recent theoretical studies have shown that the information capacity of such a coding scheme is very limited in the presence of the experimentally observed pairwise correlations. However, central to the analysis of these studies was the assumption of a homogeneous population of neurons. Experimental findings show a considerable measure of heterogeneity in the response properties of different neurons. In this study, we investigate the effect of neuronal heterogeneity on the information capacity of a correlated population of neurons. We show that the information capacity of a heterogeneous network is not limited by the correlated noise, but scales linearly with the number of cells in the population. This information cannot be extracted by the population vector readout, whose accuracy is greatly suppressed by the correlated noise. On the other hand, we show that an optimal linear readout that takes into account the neuronal heterogeneity can extract most of this information. We study analytically the nature of the dependence of the optimal linear readout weights on the neuronal diversity. We show that simple online learning can generate readout weights with the appropriate dependence on the neuronal diversity, thereby yielding efficient readout.
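The contrast between the population vector and the heterogeneity-aware optimal linear readout can be sketched in a toy model: heterogeneous gains, a shared (correlated) noise source, and weights w ∝ C⁻¹g for the optimal readout. The gaussian linear-encoding setup below is an illustrative assumption of this sketch, not the paper's network model.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 50, 5000
g = 1.0 + 0.5 * rng.normal(size=n)        # heterogeneous stimulus gains
ones = np.ones(n)                         # loading of the shared noise mode
sig_c, sig_p = 1.0, 0.1                   # correlated and private noise

s = 1.0                                   # true stimulus value
noise = sig_c * rng.normal(size=(trials, 1)) * ones + \
        sig_p * rng.normal(size=(trials, n))
R = s * g + noise                         # responses: trials x neurons

# Population-vector readout: weights simply follow the tuning gains.
s_pv = (R @ g) / (g @ g)

# Optimal linear readout: w = C^{-1} g, normalized to be unbiased.
C = sig_c**2 * np.outer(ones, ones) + sig_p**2 * np.eye(n)
w = np.linalg.solve(C, g)
s_opt = (R @ w) / (w @ g)

# The optimal readout exploits gain diversity to cancel the shared noise.
assert np.var(s_opt - s) < np.var(s_pv - s) / 10
```

Because the gain vector g is not parallel to the shared-noise mode, C⁻¹g projects the common fluctuation out; a homogeneous population (g ∝ ones) would have no such escape, which is the abstract's central point.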
Neural Computation (2004) 16 (6): 1105–1136.
Published: 01 June 2004
Abstract
Theoretical and experimental studies of distributed neuronal representations of sensory and behavioral variables usually assume that the tuning of the mean firing rates is the main source of information. However, recent theoretical studies have investigated the effect of cross-correlations in the trial-to-trial fluctuations of the neuronal responses on the accuracy of the representation. Assuming that only the first-order statistics of the neuronal responses are tuned to the stimulus, these studies have shown that in the presence of correlations, similar to those observed experimentally in cortical ensembles of neurons, the amount of information in the population is limited, yielding nonzero error levels even in the limit of infinitely large populations of neurons. In this letter, we study correlated neuronal populations whose higher-order statistics, and in particular response variances, are also modulated by the stimulus. We ask two questions: Does the correlated noise limit the accuracy of the neuronal representation of the stimulus? And how can a biological mechanism extract most of the information embedded in the higher-order statistics of the neuronal responses? Specifically, we address these questions in the context of a population of neurons coding an angular variable. We show that the information embedded in the variances grows linearly with the population size despite the presence of strong correlated noise. This information cannot be extracted by linear readout schemes, including the linear population vector. Instead, we propose a bilinear readout scheme that involves spatial decorrelation, quadratic nonlinearity, and population vector summation. We show that this nonlinear population vector scheme yields accurate estimates of stimulus parameters, with an efficiency that grows linearly with the population size. This code can be implemented using biologically plausible neurons.
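The quadratic-nonlinearity-plus-population-vector step can be illustrated on zero-mean responses whose variance, not mean, is tuned to an angle. In this sketch the responses are taken as already decorrelated, so the decorrelation stage of the proposed scheme is assumed rather than implemented; tuning shape and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 100, 20_000
theta = 1.2                                  # stimulus angle to decode
prefs = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Zero-mean responses whose *variance* is tuned to the stimulus.
# (Responses are assumed already spatially decorrelated/whitened.)
sigma2 = 1.0 + 0.8 * np.cos(theta - prefs)
R = rng.normal(size=(trials, n)) * np.sqrt(sigma2)

# A linear population vector on R averages to zero, since the means
# carry no tuning. Quadratic nonlinearity first, then population vector:
z = np.sum(np.mean(R**2, axis=0) * np.exp(1j * prefs))
theta_hat = np.angle(z)

assert abs(theta_hat - theta) < 0.05
```

Squaring converts the variance tuning into an effective mean tuning, after which ordinary population vector summation recovers the angle; this is the sense in which the readout is bilinear in the responses.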
Neural Computation (2003) 15 (8): 1809–1841.
Published: 01 August 2003
Abstract
Population rate models provide powerful tools for investigating the principles that underlie the cooperative function of large neuronal systems. However, biophysical interpretations of these models have been ambiguous. Hence, their applicability to real neuronal systems and their experimental validation have been severely limited. In this work, we show that conductance-based models of large cortical neuronal networks can be described by simplified rate models, provided that the network state does not possess a high degree of synchrony. We first derive a precise mapping between the parameters of the rate equations and those of the conductance-based network models for time-independent inputs. This mapping is based on the assumption that the effect of increasing the cell's input conductance on its f-I curve is mainly subtractive. This assumption is confirmed by a single-compartment Hodgkin-Huxley-type model with a transient potassium A-current. This approach is applied to the study of a network model of a hypercolumn in primary visual cortex. We also explore extensions of the rate model to the dynamic domain by studying the firing-rate response of our conductance-based neuron to time-dependent noisy inputs. We show that the dynamics of this response can be approximated by a time-dependent second-order differential equation. This phenomenological single-cell rate model is used to calculate the response of a conductance-based network to time-dependent inputs.
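The subtractive assumption at the heart of the mapping can be caricatured with a threshold-linear f-I curve in which raising the input conductance shifts the threshold but not the gain. This is a minimal sketch with invented constants, not the paper's Hodgkin-Huxley simulations or its second-order dynamic extension.

```python
import numpy as np

beta, I_th, k = 50.0, 0.3, 0.4       # gain, threshold, conductance shift

# Reduced f-I curve: raising the input conductance g shifts the
# threshold (subtractive effect) while leaving the gain unchanged.
def rate(I, g):
    return beta * np.maximum(I - I_th - k * g, 0.0)

# Steady state of the rate equation tau*dr/dt = -r + f(I, g),
# integrated with Euler steps for a constant input.
tau, dt = 0.010, 0.0005
r = 0.0
for _ in range(2000):
    r += dt / tau * (-r + rate(1.0, 0.5))
assert abs(r - rate(1.0, 0.5)) < 1e-3   # relaxes to the f-I fixed point

# Subtractive, not divisive: dr/dI is identical at both conductance
# levels once the unit is above threshold.
I = np.array([1.0, 1.1])
gain_low = np.diff(rate(I, 0.0))[0] / 0.1
gain_high = np.diff(rate(I, 0.5))[0] / 0.1
assert abs(gain_low - gain_high) < 1e-9
```

Under this assumption, synaptic conductances enter the rate equations only through an additive current term, which is what makes the rate-model parameters identifiable from the conductance-based model.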