Ron Meir
Neural Computation (2020) 32 (4): 794–828.
Published: 01 April 2020
Abstract
Optimality principles have been useful in explaining many aspects of biological systems. In the context of neural encoding in sensory areas, optimality is naturally formulated in a Bayesian setting as neural tuning that minimizes mean decoding error. Many works optimize Fisher information, which approximates the minimum mean square error (MMSE) of the optimal decoder for long encoding times but may be misleading for short encoding times. We study MMSE-optimal neural encoding of a multivariate stimulus by uniform populations of spiking neurons, under firing rate constraints for each neuron as well as for the entire population. We show that the population-level constraint is essential for the formulation of a well-posed problem having finite optimal tuning widths, and that optimal tuning aligns with the principal components of the prior distribution. Numerical evaluation of the two-dimensional case shows that encoding only the dimension with higher variance is optimal for short encoding times. We also compare direct MMSE optimization to optimization of several proxies to MMSE: Fisher information, maximum likelihood estimation error, and the Bayesian Cramér-Rao bound. We find that optimization of these measures yields qualitatively misleading results regarding MMSE-optimal tuning and its dependence on encoding time and energy constraints.
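For orientation, the link between Fisher information and decoding error invoked above is the Cramér-Rao bound; the following is a minimal statement with notation assumed here, not taken from the paper:

```latex
% For any unbiased estimator \hat{x}(N) of a stimulus x from spikes N,
\mathbb{E}\!\left[\left(\hat{x}(N)-x\right)^{2}\,\middle|\,x\right]
\;\ge\; \frac{1}{I_F(x)},
\qquad
I_F(x) \;=\; \mathbb{E}\!\left[\left(\partial_x \log p(N\mid x)\right)^{2}\,\middle|\,x\right].
% As encoding time grows, I_F accumulates and the bound tightens, so
% maximizing Fisher information approximates minimizing the MMSE; for
% short encoding times the bound can be loose, which is the regime in
% which the abstract reports the proxies become misleading.
```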
Neural Computation (2018) 30 (8): 2056–2112.
Published: 01 August 2018
Abstract
Neural decoding may be formulated as dynamic state estimation (filtering) based on point-process observations, a generally intractable problem. Numerical sampling techniques are often practically useful for the decoding of real neural data. However, they are less useful as theoretical tools for modeling and understanding sensory neural systems, since they yield limited conceptual insight into optimal encoding and decoding strategies. We consider sensory neural populations characterized by a distribution over neuron parameters. We develop an analytically tractable Bayesian approximation to optimal filtering based on the observation of spiking activity; the approximation greatly facilitates the analysis of optimal encoding in situations deviating from common assumptions of uniform coding. Continuous distributions are used to approximate large populations with few parameters, resulting in a filter whose complexity does not grow with population size and allowing optimization of population parameters rather than individual tuning functions. Numerical comparison with particle filtering demonstrates the quality of the approximation. The analytic framework leads to insights that are difficult to obtain from numerical algorithms and is consistent with biological observations about the distribution of sensory cells' preferred stimuli.
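Since the paper benchmarks its analytic filter against particle filtering, a minimal sketch of that baseline may help. The model below (1-D Ornstein-Uhlenbeck dynamics, Gaussian tuning curves, Poisson spiking, and all parameter values) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model: 1-D Ornstein-Uhlenbeck state observed through a
# population of Poisson neurons with Gaussian tuning curves.
dt, tau, sigma = 1e-3, 0.1, 1.0       # time step, OU time constant, noise scale
centers = np.linspace(-3, 3, 20)      # preferred stimuli of the population
rate_max, width = 50.0, 0.5           # peak rate (Hz) and tuning width

def rates(x):
    """Poisson firing rates of all neurons given state x (vectorized)."""
    return rate_max * np.exp(-(x[..., None] - centers) ** 2 / (2 * width ** 2))

def particle_filter_step(particles, weights, spikes):
    """One step of a bootstrap particle filter with spiking observations."""
    # Propagate particles under the OU prior dynamics.
    particles = particles + dt * (-particles / tau) \
        + sigma * np.sqrt(dt) * rng.standard_normal(particles.shape)
    # Reweight by the Poisson likelihood of the observed spike counts.
    lam = rates(particles) * dt
    log_lik = (spikes * np.log(lam + 1e-12) - lam).sum(axis=-1)
    weights = weights * np.exp(log_lik - log_lik.max())
    weights /= weights.sum()
    # Resample when the effective sample size degenerates.
    if 1.0 / (weights ** 2).sum() < 0.5 * len(weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Usage: track a simulated latent state from its spikes.
x, particles = 0.0, rng.standard_normal(1000)
weights = np.full(1000, 1.0 / 1000)
for _ in range(100):
    x += dt * (-x / tau) + sigma * np.sqrt(dt) * rng.standard_normal()
    spikes = rng.poisson(rates(np.array(x)) * dt)
    particles, weights = particle_filter_step(particles, weights, spikes)
posterior_mean = (weights * particles).sum()
```

The per-step cost grows with the number of particles, which is the kind of sampling overhead the paper's continuous-population approximation is designed to avoid.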
Neural Computation (2009) 21 (5): 1277–1320.
Published: 01 May 2009
Abstract
A key requirement facing organisms acting in uncertain dynamic environments is the real-time estimation and prediction of environmental states, based on which effective actions can be selected. While it is becoming evident that organisms employ exact or approximate Bayesian statistical calculations for these purposes, it is far less clear how these putative computations are implemented by neural networks in a strictly dynamic setting. In this work, we make use of rigorous mathematical results from the theory of continuous time point process filtering and show how optimal real-time state estimation and prediction may be implemented in a general setting using simple recurrent neural networks. The framework is applicable to many situations of common interest, including noisy observations, non-Poisson spike trains (incorporating adaptation), multisensory integration, and state prediction. The optimal network properties are shown to relate to the statistical structure of the environment, and the benefits of adaptation are studied and explicitly demonstrated. Finally, we recover several existing results as appropriate limits of our general setting.
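The continuous-time point-process filtering theory referred to above has a compact standard (Snyder-type) form, stated here for orientation rather than as the paper's exact equations, for a Markov state x_t with generator \mathcal{L} observed through a spike train N_t with state-dependent intensity \lambda(x_t):

```latex
% Evolution of the posterior density p_t(x) given the spike train N_t:
dp_t(x) \;=\; \mathcal{L}^{*} p_t(x)\,dt
\;+\; p_{t^-}(x)\left(\frac{\lambda(x)}{\bar{\lambda}_{t^-}} - 1\right)
\left(dN_t - \bar{\lambda}_{t^-}\,dt\right),
\qquad
\bar{\lambda}_{t} \;=\; \int \lambda(x)\,p_t(x)\,dx .
% Between spikes the posterior drifts under the prior dynamics and is
% reshaped by the absence of spikes; at a spike it is multiplied by
% \lambda(x) and renormalized. Implementing this evolution is what the
% recurrent network construction in the paper accomplishes.
```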
Neural Computation (2009) 21 (4): 1100–1124.
Published: 01 April 2009
Abstract
Oscillations are a ubiquitous feature of many neural systems, spanning many orders of magnitude in frequency. One of the most prominent oscillatory patterns, with possible functional implications, is that occurring in the mammalian thalamocortical system during sleep. This system is characterized by relatively long delays (reaching up to 40 msec) and gives rise to low-frequency oscillatory waves. Motivated by these phenomena, we study networks of excitatory and inhibitory integrate-and-fire neurons within a Fokker-Planck delay partial differential equation formalism and establish explicit conditions for the emergence of oscillatory solutions, and for the amplitude and period of the ensuing oscillations, for relatively large values of the delays. A two-timescale analysis replaces the full partial differential equation, in this large-delay limit, with a discrete-time iterative map, leading to a relatively simple dynamic interpretation. This asymptotic result is shown numerically to hold, to a good approximation, over a wide range of parameter values, leading to an accurate characterization of the behavior in terms of the underlying physical parameters. Our results provide a simple mechanistic explanation for one type of slow oscillation based on delayed inhibition, which may play an important role in the slow spindle oscillations occurring during sleep. Moreover, they are consistent with experimental findings related to human motor behavior with visual feedback.
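A schematic of the formalism may be useful. The following delay Fokker-Planck equation for the membrane-potential density of an integrate-and-fire population, with recurrent drive through the delayed population rate, is an illustrative form rather than the paper's exact equations:

```latex
% P(v,t): density of membrane potentials; recurrent input depends on
% the population rate r evaluated at the delayed time t - d.
\frac{\partial P}{\partial t}
= -\frac{\partial}{\partial v}\!\left[\frac{-v + \mu\big(r(t-d)\big)}{\tau}\,P\right]
+ \frac{\sigma^{2}}{2}\,\frac{\partial^{2} P}{\partial v^{2}},
\qquad
r(t) = -\frac{\sigma^{2}}{2}\,
\frac{\partial P}{\partial v}\bigg|_{v = v_{\mathrm{th}}} .
% The diffusive flux through the threshold v_th defines r(t). For large
% delay d, a two-timescale analysis reduces the PDE to a discrete-time
% map r_{n+1} = F(r_n); instability of its fixed point produces the
% slow delayed-inhibition oscillations described in the abstract.
```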
Neural Computation (2007) 19 (8): 2245–2279.
Published: 01 August 2007
Abstract
Learning agents, whether natural or artificial, must update their internal parameters in order to improve their behavior over time. In reinforcement learning, this plasticity is influenced by an environmental signal, termed a reward, that directs the changes in appropriate directions. We apply a recently introduced policy learning algorithm from machine learning to networks of spiking neurons and derive a spike-time-dependent plasticity rule that ensures convergence to a local optimum of the expected average reward. The approach is applicable to a broad class of neuronal models, including the Hodgkin-Huxley model. We demonstrate the effectiveness of the derived rule in several toy problems. Finally, through statistical analysis, we show that the synaptic plasticity rule established is closely related to the widely used BCM rule, for which good biological evidence exists.
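A minimal sketch of the policy-gradient mechanism behind such reward-modulated plasticity rules, reduced to a single Bernoulli-spiking unit in discrete time; the task, architecture, and constants are illustrative assumptions, not the paper's spiking-network derivation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: one stochastic neuron spikes with probability sigmoid(w.x)
# per time bin; the environment pays reward 1 when the neuron spikes
# for pattern A and stays silent for pattern B.
def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

n_inputs, eta = 10, 0.1
w = np.zeros(n_inputs)
patterns = rng.standard_normal((2, n_inputs))
targets = np.array([1, 0])              # desired spike / no spike

for step in range(2000):
    k = rng.integers(2)
    x = patterns[k]
    p = sigmoid(w @ x)                  # spiking probability
    spike = rng.random() < p
    reward = 1.0 if spike == targets[k] else 0.0
    # REINFORCE eligibility: gradient of the log-probability of the action.
    eligibility = (spike - p) * x
    # Reward-modulated update: ascends the gradient of expected reward.
    w += eta * reward * eligibility
```

In this one-step episodic case, reward times the eligibility (spike − p)·x equals the gradient of the expected reward in expectation, which is the sense in which such rules converge to a local optimum of the average reward.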
Neural Computation (1998) 10 (8): 2159–2173.
Published: 15 November 1998
Abstract
We compute upper and lower bounds on the VC dimension and pseudodimension of feedforward neural networks composed of piecewise polynomial activation functions. We show that if the number of layers is fixed, then the VC dimension and pseudodimension grow as W log W, where W is the number of parameters in the network. This contrasts with the case where the number of layers is unbounded, in which the VC dimension and pseudodimension grow as W². We combine our results with recently established approximation error rates and determine error bounds for the problem of regression estimation by piecewise polynomial networks with unbounded weights.
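For reference, the two growth regimes stated above, written out; W denotes the number of parameters, and the asymptotic notation is assumed here rather than quoted from the listing:

```latex
% Fixed number of layers:
\mathrm{VCdim},\ \mathrm{Pdim} \;\sim\; W \log W ,
% unbounded number of layers:
\mathrm{VCdim},\ \mathrm{Pdim} \;\sim\; W^{2} .
% Via standard uniform convergence results, the fixed-depth rate gives
% an estimation error of order \sqrt{(W \log W)/n} for n samples, which
% combines with the approximation error rates mentioned above to yield
% the regression error bounds of the paper.
```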