Bryan P. Tripp
Journal Articles
Publisher: Journals Gateway
Neural Computation (2015) 27 (6): 1186–1222.
Published: 01 June 2015
Abstract
Because different parts of the brain have rich interconnections, it is not possible to model small parts realistically in isolation. However, it is also impractical to simulate large neural systems in detail. This article outlines a new approach to multiscale modeling of neural systems that involves constructing efficient surrogate models of populations. Given a population of neuron models with correlated activity and with specific, nonrandom connections, a surrogate model is constructed to approximate the aggregate outputs of the population. The surrogate model requires less computation than the neural model, but it has a clear and specific relationship with the neural model. For example, approximate spike rasters for specific neurons can be derived from a simulation of the surrogate model. This article deals specifically with neural engineering framework (NEF) circuits of leaky-integrate-and-fire point neurons. Weighted sums of spikes are modeled by interpolating over latent variables in the population activity, and linear filters operate on gaussian random variables to approximate spike-related fluctuations. It is found that the surrogate models can often closely approximate network behavior with orders-of-magnitude reduction in computational demands, although there are certain systematic differences between the spiking and surrogate models. Since individual spikes are not modeled, some simulations can be performed with much longer step sizes (e.g., 20 ms). Possible extensions to non-NEF networks and to more complex neuron models are discussed.
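The core surrogate idea in this abstract can be illustrated with a minimal sketch: instead of simulating individual spikes, the decoded population output is modeled as the represented latent variable plus low-pass-filtered gaussian noise standing in for spike-related fluctuations, which is what permits the long (e.g., 20 ms) step sizes. The time constant, noise scale, and identity decoding below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.02      # 20 ms step, as mentioned in the abstract
T = 200        # number of simulation steps
tau = 0.05     # synaptic filter time constant (assumed value)
sigma = 0.1    # scale of spike-related fluctuations (assumed value)

# Latent variable the population is taken to represent (a slow sine here)
t = np.arange(T) * dt
x = np.sin(2 * np.pi * 0.5 * t)

# Surrogate output: the latent variable (identity decoding, for simplicity)
# plus gaussian noise passed through a one-pole low-pass filter, standing in
# for the spike-related fluctuations a full spiking simulation would produce.
noise = np.zeros(T)
a = np.exp(-dt / tau)  # discrete one-pole filter coefficient
for i in range(1, T):
    noise[i] = a * noise[i - 1] + (1 - a) * sigma * rng.standard_normal()

decoded = x + noise  # surrogate population output; no spikes were simulated
```

Because the loop advances one 20 ms step at a time over filtered noise rather than integrating every neuron's membrane potential, the cost is independent of population size, which is the source of the orders-of-magnitude savings the abstract describes.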
Neural Computation (2012) 24 (4): 867–894.
Published: 01 April 2012
Abstract
Response variability is often positively correlated in pairs of similarly tuned neurons in the visual cortex. Many authors have suggested that this correlated variability prevents postsynaptic neurons from averaging across large groups of inputs to obtain reliable stimulus estimates. However, a simple average of variability ignores nonlinearities in cortical signal integration. This study shows that feedforward divisive normalization of a neuron's inputs effectively decorrelates their variability. Furthermore, we show that optimal linear estimates of a stimulus parameter that are based on normalized inputs are more accurate than those based on nonnormalized inputs, due partly to reduced correlations, and that these estimates improve with increasing population size up to several thousand neurons. This suggests that neurons may possess a simple mechanism for substantially decorrelating noise in their inputs. Further work is needed to reconcile this conclusion with past evidence that correlated noise impairs visual perception.
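The decorrelation mechanism described in this abstract can be sketched numerically: a shared gain fluctuation induces positive noise correlations across a pool of inputs, and dividing each response by the summed pool activity (feedforward divisive normalization) largely removes that common factor. All parameter values below, the pool size, noise scales, and semisaturation constant k, are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 5000, 50

# Tuned mean responses, perturbed on each trial by a gain fluctuation shared
# across the whole pool (inducing positive correlations) plus private noise.
mean = 10 + 5 * rng.random(n_neurons)
shared = 1 + 0.2 * rng.standard_normal((n_trials, 1))    # common gain noise
private = 0.5 * rng.standard_normal((n_trials, n_neurons))
r = mean * shared + private                              # raw responses

# Feedforward divisive normalization: each response is divided by the summed
# pool activity plus a semisaturation constant k (assumed value).
k = 1.0
r_norm = r / (k + r.sum(axis=1, keepdims=True))

def mean_pairwise_corr(responses):
    """Average correlation over all distinct neuron pairs."""
    c = np.corrcoef(responses.T)
    iu = np.triu_indices_from(c, k=1)
    return float(c[iu].mean())

raw_corr = mean_pairwise_corr(r)       # large and positive: shared gain dominates
norm_corr = mean_pairwise_corr(r_norm)  # near zero: common factor divided out
```

Because the summed denominator is dominated by the same shared fluctuation that correlates the numerators, the division cancels it, leaving mostly the private noise, which is the sense in which normalization decorrelates the inputs.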
Neural Computation (2010) 22 (3): 621–659.
Published: 01 March 2010
Abstract
Temporal derivatives are computed by a wide variety of neural circuits, but the problem of performing this computation accurately has received little theoretical study. Here we systematically compare the performance of diverse networks that calculate derivatives using cell-intrinsic adaptation and synaptic depression dynamics, feedforward network dynamics, and recurrent network dynamics. Examples of each type of network are compared by quantifying the errors they introduce into the calculation and their rejection of high-frequency input noise. This comparison is based on both analytical methods and numerical simulations with spiking leaky-integrate-and-fire (LIF) neurons. Both adapting and feedforward-network circuits provide good performance for signals with frequency bands that are well matched to the time constants of postsynaptic current decay and adaptation, respectively. The synaptic depression circuit performs similarly to the adaptation circuit, although strictly speaking, precisely linear differentiation based on synaptic depression is not possible, because depression scales synaptic weights multiplicatively. Feedback circuits introduce greater errors than functionally equivalent feedforward circuits, but they have the useful property that their dynamics are determined by feedback strength. For this reason, these circuits are better suited for calculating the derivatives of signals that evolve on timescales outside the range of membrane dynamics and, possibly, for providing the wide range of timescales needed for precise fractional-order differentiation.