Michael J. Berry II (1-3 of 3 results)
Journal Articles
Publisher: Journals Gateway
Neural Computation (2019) 31 (2): 270–311.
Published: 01 February 2019
Abstract
Within a given brain region, individual neurons exhibit a wide variety of feature selectivities. Here, we investigated the impact of this extensive functional diversity on the population neural code. Our approach was to build optimal decoders to discriminate among stimuli using the spiking output of a real, measured neural population and to compare their performance against that of a matched, homogeneous population with the same number of cells and spikes. Analyzing large populations of retinal ganglion cells, we found that the real, heterogeneous population can yield a discrimination error several orders of magnitude lower than that of the homogeneous population and consequently can encode much more visual information. This effect increases with population size and with graded degrees of heterogeneity. We complemented these results with an analysis of coding based on the Chernoff distance, as well as derivations of inequalities on coding in certain limits, from which we conclude that the beneficial effect of heterogeneity holds over a broad set of conditions. Together, our results indicate that functional diversity in neural populations can appreciably enhance their coding fidelity. A noteworthy outcome of our study is that this effect can be extremely strong and should be taken into account when investigating design principles for neural circuits.
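The Chernoff-distance analysis mentioned in this abstract can be illustrated numerically. The sketch below (not the paper's code; the smoothing constant is an assumption added so that distributions with disjoint support give a finite answer) computes the Chernoff distance between two discrete response distributions, a standard bound-related measure of how discriminable they are:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_distance(p, q, eps=1e-12):
    """Chernoff distance between two discrete distributions:
    C(p, q) = -min_{0<s<1} log sum_k p_k^s q_k^(1-s).
    eps is a small smoothing constant (an assumption, not from the paper)
    so that distributions with disjoint support stay finite."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()

    def log_chernoff_coeff(s):
        return np.log(np.sum(p**s * q**(1.0 - s)))

    res = minimize_scalar(log_chernoff_coeff,
                          bounds=(1e-6, 1.0 - 1e-6), method="bounded")
    return -res.fun

# Identical distributions are indiscriminable: distance is 0.
# Nearly disjoint distributions give a large distance.
uniform = np.array([0.5, 0.5])
print(chernoff_distance(uniform, uniform))
print(chernoff_distance([1.0, 0.0], [0.0, 1.0]))
```

The distance is zero exactly when the two distributions coincide and grows as they separate, which is what makes it a convenient scalar summary of population discriminability.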
Neural Computation (2013) 25 (7): 1870–1890.
Published: 01 July 2013
Abstract
Current dimensionality-reduction methods can identify relevant subspaces for neural computations but do not favor one basis over another within the relevant subspace. Finding the appropriate basis can simplify the description of the nonlinear computation with respect to the relevant variables, making it easier to elucidate the underlying neural computation and to form hypotheses about the neural circuitry giving rise to the observed responses. Part of the problem is that although some dimensionality-reduction methods can identify many of the relevant dimensions, it is usually difficult to map out or interpret the nonlinear transformation with respect to more than a few relevant dimensions simultaneously without simplifying assumptions. While recent approaches make it possible to create predictive models based on many relevant dimensions simultaneously, there remains the need to relate such predictive models to mechanistic descriptions of the underlying neural circuitry. Here we demonstrate that transforming to a basis within the relevant subspace in which the neural computation is best described by a given nonlinear function often makes the computation easier to interpret and to describe with a small number of parameters. We refer to the corresponding basis as the functional basis and illustrate the utility of this transformation in the context of logical OR and logical AND functions. We show that although dimensionality-reduction methods such as spike-triggered covariance are able to find a relevant subspace, they often produce dimensions that are difficult to interpret and do not correspond to a functional basis. The functional features can instead be found using a maximum likelihood approach. The results are illustrated using simulated neurons and recordings from retinal ganglion cells. The resulting features are uniquely defined and nonorthogonal, and they make it easier to relate computational and mechanistic models to each other.
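The point that spike-triggered covariance recovers the relevant subspace, while its eigenvectors need not align with the functional (filter) basis, can be seen in a short simulation. The sketch below is a minimal illustration, not the authors' code: the two filters, the 1.5 threshold, and the logical-OR spike rule are arbitrary choices, and the model neuron is noiseless:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated neuron: 2 relevant dimensions out of D, with a logical-OR
# nonlinearity (spike if the stimulus projection on EITHER filter
# exceeds a threshold). Filters and threshold are illustrative choices.
D, N, theta = 20, 50_000, 1.5
w1 = np.zeros(D); w1[0] = 1.0
w2 = np.zeros(D); w2[1] = 1.0
stim = rng.standard_normal((N, D))
spikes = ((stim @ w1 > theta) | (stim @ w2 > theta)).astype(float)

# Spike-triggered average and covariance of spike-eliciting stimuli.
sta = spikes @ stim / spikes.sum()
triggered = stim[spikes > 0] - sta
stc = triggered.T @ triggered / (spikes.sum() - 1)

# Dimensions whose eigenvalues shift away from the prior covariance
# (the identity, for white Gaussian stimuli) span the relevant subspace.
eigvals, eigvecs = np.linalg.eigh(stc - np.eye(D))
order = np.argsort(np.abs(eigvals))[::-1]
relevant = eigvecs[:, order[:2]]  # recovered 2D relevant subspace
```

The recovered 2D subspace closely matches the span of w1 and w2, but the individual eigenvectors are generally rotated mixtures of the two filters, which is exactly the interpretability problem the functional-basis approach is meant to address.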
Neural Computation (2001) 13 (4): 799–815.
Published: 01 April 2001
Abstract
Energy-efficient information transmission may be relevant to biological sensory signal processing as well as to low-power electronic devices. We explore its consequences in two different regimes. In an “immediate” regime, we argue that the information rate should be maximized subject to a power constraint, and in an “exploratory” regime, the transmission rate per power cost should be maximized. In the absence of noise, discrete inputs are optimally encoded into Boltzmann distributed output symbols. In the exploratory regime, the partition function of this distribution is numerically equal to 1. The structure of the optimal code is strongly affected by noise in the transmission channel. The Arimoto-Blahut algorithm, generalized for cost constraints, can be used to derive and interpret the distribution of symbols for optimal energy-efficient coding in the presence of noise. We outline the possibilities and problems in extending our results to information coding and transmission in neurobiological systems.
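The exploratory-regime result quoted in the abstract, namely Boltzmann-distributed output symbols whose partition function equals 1 in the noiseless case, can be checked numerically. In this sketch (a minimal illustration under the noiseless assumption; the symbol costs are arbitrary) the optimal symbol probabilities take the form p_k = exp(-lam * c_k), and solving sum_k exp(-lam * c_k) = 1 for lam yields both the distribution and the achieved information per unit cost (in nats):

```python
import numpy as np
from scipy.optimize import brentq

def exploratory_optimal(costs):
    """Noiseless exploratory-regime code: maximize entropy per unit cost.

    Optimal output distribution is Boltzmann, p_k = exp(-lam * c_k),
    with the partition function constrained to 1:
        sum_k exp(-lam * c_k) = 1.
    The root lam also equals the achieved nats per unit cost, since
    H(p) = -sum p_k log p_k = lam * sum p_k c_k = lam * E[cost].
    Requires strictly positive costs.
    """
    costs = np.asarray(costs, float)
    f = lambda lam: np.exp(-lam * costs).sum() - 1.0
    lam = brentq(f, 1e-9, 50.0)  # f > 0 at small lam, f -> -1 at large lam
    return np.exp(-lam * costs), lam

# Two equal-cost symbols: the code is uniform and achieves
# log(2) nats (1 bit) per unit cost.
p, lam = exploratory_optimal([1.0, 1.0])
print(p, lam)
```

With unequal costs the cheaper symbols are used more often, exactly the Boltzmann weighting the abstract describes; the noisy case would instead require the cost-constrained Arimoto-Blahut iteration, which is not sketched here.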