Jeffrey M. Beck
On the Maximization of Information Flow Between Spiking Neurons
Journal Articles
Publisher: Journals Gateway
Neural Computation (2009) 21 (11): 2991–3009.
Published: 01 November 2009
Abstract
A feedforward spiking network implements a nonlinear transformation that maps a set of input spikes to a set of output spikes, transforming the joint probability distribution of incoming spikes into a joint distribution of output spikes. We present an algorithm for synaptic adaptation that aims to maximize the entropy of this output distribution, thereby creating a model of the joint distribution of the incoming point processes. The derived learning rule depends on the precise pre- and postsynaptic spike timings. When trained on correlated spike trains, the network learns to extract independent spike trains, uncovering the underlying statistical structure and producing a more efficient representation of the input.
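The abstract does not reproduce the spike-timing rule itself. As an illustrative analogue only, the classic Bell–Sejnowski natural-gradient infomax rule applied to rate-coded (binned) inputs shows the same principle at work: maximizing the entropy of a squashed output decorrelates linearly mixed signals. The mixing matrix, sigmoid nonlinearity, and learning rate below are assumptions, not the paper's rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_samples = 2, 2000

# Hypothetical setup: two independent (super-Gaussian) sources, linearly
# mixed to produce correlated inputs -- a rate-coded stand-in for the
# paper's correlated spike trains.
sources = rng.laplace(0.0, 1.0, (n_units, n_samples))
mixing = np.array([[1.0, 0.6],
                   [0.4, 1.0]])
x = mixing @ sources

# Natural-gradient infomax (Bell & Sejnowski, 1995): maximize the entropy
# of y = sigmoid(W x) via the update dW = lr * (I + (1 - 2y) u^T) W.
W = np.eye(n_units)
lr = 0.002
for _ in range(30):                       # epochs over the data
    for t in range(n_samples):
        u = W @ x[:, [t]]
        y = 1.0 / (1.0 + np.exp(-u))
        W += lr * (np.eye(n_units) + (1.0 - 2.0 * y) @ u.T) @ W

u = W @ x
corr_in = abs(np.corrcoef(x)[0, 1])       # inputs are strongly correlated
corr_out = abs(np.corrcoef(u)[0, 1])      # outputs are nearly independent
```

After training, the off-diagonal correlation of the outputs drops well below that of the mixed inputs, mirroring the abstract's claim that entropy maximization extracts independent streams.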
Exact Inferences in a Neural Implementation of a Hidden Markov Model
Journal Articles
Neural Computation (2007) 19 (5): 1344–1361.
Published: 01 May 2007
Abstract
From first principles, we derive a quadratic, nonlinear, first-order dynamical system capable of performing exact Bayes-Markov inference for a wide class of biologically plausible, stimulus-dependent patterns of activity while simultaneously providing an online estimate of model performance. This is accomplished by constructing a dynamical system whose solutions are proportional to the probability distribution over the stimulus space, with the constant of proportionality adjusted to provide a local estimate of the probability of the recent observations of stimulus-dependent activity given the model parameters. Next, we transform this exact equation to generate nonlinear equations for the exact evolution of the log likelihood and of log-likelihood ratios, and show that when the input has low amplitude, linear rate models for both the likelihood and the log-likelihood functions follow naturally from these equations. We use these four explicit representations of the probability distribution to argue that, in contrast to the arguments of previous work, the dynamical system for the exact evolution of the likelihood (as opposed to the log likelihood or log-likelihood ratios) not only can be mapped onto a biologically plausible network but is also more consistent with physiological observations.
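The paper's derivation is in continuous time; a minimal discrete-time analogue of the same exact recursion, where a normalized posterior is propagated through the Markov dynamics, reweighted by observation likelihoods, and the normalizer is accumulated as a running log evidence (the "online estimate of model performance"), might look like the following. The two-state transition and emission matrices are made up for illustration.

```python
import numpy as np

# Hypothetical two-state HMM: rows of T are p(next state | current state),
# rows of E are p(observation symbol | state).
T = np.array([[0.95, 0.05],
              [0.10, 0.90]])
E = np.array([[0.80, 0.20],
              [0.30, 0.70]])

def filter_step(p, obs):
    """One exact Bayes-Markov update on the normalized posterior p.

    Returns the new posterior and log p(obs | past observations) -- the
    local evidence term tracked as a measure of model performance.
    """
    pred = T.T @ p            # predict: propagate through the Markov chain
    post = pred * E[:, obs]   # correct: weight by observation likelihood
    z = post.sum()            # normalizer = p(obs | past observations)
    return post / z, np.log(z)

p = np.array([0.5, 0.5])      # uniform prior over the two states
log_evidence = 0.0
for obs in [0, 0, 0, 1, 1, 1, 0]:
    p, log_z = filter_step(p, obs)
    log_evidence += log_z
```

Keeping the posterior normalized while summing the log normalizers is what lets a single recursion report both the exact filtering distribution and the likelihood of the data under the model parameters.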