Rodney Douglas
1-7 of 7 results
Neural Computation (2013) 25 (9): 2303–2354.
Published: 01 September 2013
Abstract
Temporal spike codes play a crucial role in neural information processing. In particular, there is strong experimental evidence that interspike intervals (ISIs) are used for stimulus representation in neural systems. However, very few algorithmic principles exploit the benefits of such temporal codes for probabilistic inference of stimuli or decisions. Here, we describe and rigorously prove the functional properties of a spike-based processor that uses ISI distributions to perform probabilistic inference. The abstract processor architecture serves as a building block for more concrete, neural implementations of the belief-propagation (BP) algorithm in arbitrary graphical models (e.g., Bayesian networks and factor graphs). The distributed nature of graphical models matches well with the architectural and functional constraints imposed by biology. In our model, ISI distributions represent the BP messages exchanged between factor nodes, leading to the interpretation of a single spike as a random sample that follows such a distribution. We verify the abstract processor model by numerical simulation in full graphs, and demonstrate that it can be applied even in the presence of analog variables. As a particular example, we also show results of a concrete, neural implementation of the processor, although in principle our approach is more flexible and allows different neurobiological interpretations. Furthermore, electrophysiological data from area LIP during behavioral experiments are assessed in light of ISI coding, leading to concrete, testable quantitative predictions and a more accurate description of these data than previously existing models provide.
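A minimal sketch of the core coding idea, not the paper's processor: a BP message is represented as a distribution over ISIs, each spike's interval is a sample from it, and the message can be re-estimated from observed spikes. The message values, bin-to-state mapping, and spike counts below are made-up assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A BP "message": a discrete distribution over K states, with each state
# assigned one ISI value (assumed mapping; the paper's scheme is more general).
message = np.array([0.1, 0.4, 0.3, 0.2])
isi_bins = np.array([0.005, 0.010, 0.015, 0.020])  # seconds, one ISI per state

def encode_spikes(message, isi_bins, n_spikes, rng):
    """Emit a spike train whose ISIs are i.i.d. samples from the message."""
    states = rng.choice(len(message), size=n_spikes, p=message)
    return np.cumsum(isi_bins[states])  # spike times

def decode_message(spike_times, isi_bins):
    """Estimate the message by histogramming the observed ISIs."""
    isis = np.diff(spike_times, prepend=0.0)
    idx = np.argmin(np.abs(isis[:, None] - isi_bins[None, :]), axis=1)
    counts = np.bincount(idx, minlength=len(isi_bins))
    return counts / counts.sum()

spikes = encode_spikes(message, isi_bins, n_spikes=2000, rng=rng)
print("true message:    ", message)
print("decoded estimate:", decode_message(spikes, isi_bins).round(3))
```

With enough spikes the decoded histogram converges to the encoded message, which is the sense in which a spike train can carry a probability distribution.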
Neural Computation (2011) 23 (10): 2457–2497.
Published: 01 October 2011
Abstract
An increasing number of research groups are developing custom hybrid analog/digital very large scale integration (VLSI) chips and systems that implement hundreds to thousands of spiking neurons with biophysically realistic dynamics, with the intention of emulating brainlike real-world behavior in hardware and robotic systems rather than simply simulating their performance on general-purpose digital computers. Although the electronic engineering aspects of these emulation systems are proceeding well, progress toward the actual emulation of brainlike tasks is restricted by the lack of suitable high-level configuration methods of the kind that have already been developed over many decades for simulations on general-purpose computers. The key difficulty is that the dynamics of the CMOS electronic analogs are determined by transistor biases that do not map simply to the parameter types and values used in typical abstract mathematical models of neurons and their networks. Here we provide a general method for resolving this difficulty. We describe a parameter mapping technique that permits an automatic configuration of VLSI neural networks so that their electronic emulation conforms to a higher-level neuronal simulation. We show that the neurons configured by our method exhibit spike timing statistics and temporal dynamics that are the same as those observed in the software simulated neurons and, in particular, that the key parameters of recurrent VLSI neural networks (e.g., implementing soft winner-take-all) can be precisely tuned. The proposed method permits seamless integration of software simulations with hardware emulations and intertranslatability between the parameters of abstract neuronal models and their emulation counterparts. Most important, our method offers a route toward a high-level task configuration language for neuromorphic VLSI systems.
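A toy sketch of the calibration idea under stated assumptions, not the paper's actual procedure: sweep a hardware bias, fit a map from bias to measured firing rate, and invert it to pick the bias that reproduces a target software-model rate. The `measure_rate` function is a hypothetical stand-in for a real chip measurement, and the affine response is assumed for demonstration.

```python
import numpy as np

def measure_rate(bias_v):
    """Hypothetical stand-in for measuring a VLSI neuron's firing rate (Hz)
    at a given bias voltage; here a noisy affine response for demonstration."""
    rng = np.random.default_rng(42)
    return 80.0 * (bias_v - 0.3) + rng.normal(0.0, 0.5, np.shape(bias_v))

# Calibration: sweep biases, record rates, fit rate = a * bias + b.
biases = np.linspace(0.4, 0.9, 11)
rates = measure_rate(biases)
a, b = np.polyfit(biases, rates, deg=1)

def bias_for_rate(target_hz):
    """Invert the fitted map: choose the bias reproducing a model rate."""
    return (target_hz - b) / a

target = 25.0  # firing rate of the software-simulated neuron, in Hz
print(f"bias for {target} Hz: {bias_for_rate(target):.3f} V")
```

Real transistor biases are nonlinear in the model parameters, so the published technique is necessarily richer than this one-dimensional affine fit; the sketch only shows the measure-fit-invert structure of such a mapping.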
Neural Computation (2009) 21 (9): 2502–2523.
Published: 01 September 2009
Abstract
From a theoretical point of view, statistical inference is an attractive model of brain operation. However, it is unclear how to implement these inferential processes in neuronal networks. We offer a solution to this problem by showing in detailed simulations how the belief propagation algorithm on a factor graph can be embedded in a network of spiking neurons. We use pools of spiking neurons as the function nodes of the factor graph. Each pool gathers “messages” in the form of population activities from its input nodes and combines them through its network dynamics. Each of the various output messages to be transmitted over the edges of the graph is computed by a group of readout neurons that feed into their respective destination pools. We use this approach to implement two examples of factor graphs. The first example, drawn from coding theory, models the transmission of signals through an unreliable channel and demonstrates the principles and generality of our network approach. The second, more applied example is of a psychophysical mechanism in which visual cues are used to resolve hypotheses about the interpretation of an object's shape and illumination. These two examples, and also a statistical analysis, demonstrate good agreement between the performance of our networks and the direct numerical evaluation of belief propagation.
Includes: Supplementary data
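For reference, the numerical computation that the spiking pools approximate is the standard sum-product message update on a factor graph: a factor node multiplies its incoming variable-to-factor messages into the factor table and sums out all variables except the recipient. The factor table and messages below are made-up examples, not taken from the paper.

```python
import numpy as np

F = np.random.default_rng(1).random((2, 3, 2))   # factor over x1, x2, x3
m_x1 = np.array([0.7, 0.3])                      # message from x1 to f
m_x2 = np.array([0.2, 0.5, 0.3])                 # message from x2 to f

# mu_{f -> x3}(x3) = sum_{x1, x2} F(x1, x2, x3) * m_x1(x1) * m_x2(x2)
m_to_x3 = np.einsum('abc,a,b->c', F, m_x1, m_x2)
m_to_x3 /= m_to_x3.sum()                         # normalize
print("message to x3:", m_to_x3)
```

In the paper's construction, this product-and-marginalize step is carried out by the recurrent dynamics of a neuronal pool, with readout neurons transmitting the resulting message to the destination pool.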
Neural Computation (2009) 21 (9): 2437–2465.
Published: 01 September 2009
Abstract
The winner-take-all (WTA) computation in networks of recurrently connected neurons is an important decision element of many models of cortical processing. However, analytical studies of the WTA performance in recurrent networks have generally addressed rate-based models. Very few have addressed networks of spiking neurons, which are relevant for understanding the biological networks themselves and also for the development of neuromorphic electronic neurons that communicate by action-potential-like address-events. Here, we take steps in that direction by using a simplified Markov model of the spiking network to examine analytically the ability of a spike-based WTA network to discriminate the statistics of inputs ranging from stationary regular to nonstationary Poisson events. Our work extends previous theoretical results showing that a WTA recurrent network receiving regular spike inputs can select the correct winner within one interspike interval. We show first for the case of spike rate inputs that input discrimination and the effects of self-excitation and inhibition on this discrimination are consistent with results obtained from the standard rate-based WTA models. We also extend this discrimination analysis of spiking WTAs to nonstationary inputs with time-varying spike rates resembling statistics of real-world sensory stimuli. We conclude that spiking WTAs are consistent with their continuous counterparts for steady-state inputs, but they also exhibit high discrimination performance with nonstationary inputs.
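A simplified rate caricature of the mechanism under discussion, with assumed parameters and not the paper's Markov model: two units receive Poisson input spikes, and self-excitation plus shared inhibition amplify the difference in input rates until the more strongly driven unit suppresses the other.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, T, tau = 1e-3, 1.0, 0.05            # step (s), duration (s), time constant
rates = np.array([40.0, 60.0])          # input Poisson rates (Hz)
w_exc, w_inh = 0.9, 0.8                 # self-excitation, shared inhibition
v = np.zeros(2)                         # unit activities

for _ in range(int(T / dt)):
    spikes = (rng.random(2) < rates * dt).astype(float)  # Poisson inputs
    drive = spikes / dt                 # instantaneous input rate estimate
    dv = (-v + drive + w_exc * v - w_inh * v.sum()) / tau
    v = np.maximum(v + dv * dt, 0.0)    # rectify: activities stay nonnegative

print("winner:", int(np.argmax(v)), "activities:", v.round(1))
```

Because the effective recurrent gain exceeds the effective inhibitory coupling only for the self term, the steady state drives the losing unit to zero while the winner remains active, which is the hard-WTA regime the abstract analyzes for spiking inputs.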
Neural Computation (2006) 18 (12): 2994–3008.
Published: 01 December 2006
Abstract
Circuits composed of threshold gates (McCulloch-Pitts neurons, or perceptrons) are simplified models of neural circuits with the advantage that they are theoretically more tractable than their biological counterparts. However, when such threshold circuits are designed to perform a specific computational task, they usually differ in one important respect from computations in the brain: they require very high activity. On average every second threshold gate fires (sets a 1 as output) during a computation. By contrast, the activity of neurons in the brain is much sparser, with only about 1% of neurons firing. This mismatch between threshold and neuronal circuits is due to the particular complexity measures (circuit size and circuit depth) that have been minimized in previous threshold circuit constructions. In this letter, we investigate a new complexity measure for threshold circuits, energy complexity, whose minimization yields computations with sparse activity. We prove that all computations by threshold circuits of polynomial size with entropy O(log n) can be restructured so that their energy complexity is reduced to a level near the entropy of circuit states. This entropy of circuit states is a novel circuit complexity measure, which is of interest not only in the context of threshold circuits but for circuit complexity in general. As an example of how this measure can be applied, we show that any polynomial size threshold circuit with entropy O(log n) can be simulated by a polynomial size threshold circuit of depth 3. Our results demonstrate that the structure of circuits that result from a minimization of their energy complexity is quite different from the structure that results from a minimization of previously considered complexity measures, and potentially closer to the structure of neural circuits in the nervous system. In particular, different pathways are activated in these circuits for different classes of inputs. This letter shows that such circuits with sparse activity have a surprisingly large computational power.
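A small worked example of the central measure: energy complexity here means the number of gates that output 1 on a given input. The two-gate XOR circuit below is an assumed example, chosen only to show that energy varies across inputs, with different pathways active for different input classes.

```python
import numpy as np

def threshold_gate(w, theta, x):
    """McCulloch-Pitts gate: fires (outputs 1) iff w . x >= theta."""
    return int(np.dot(w, x) >= theta)

def circuit_energy(x1, x2):
    """Evaluate a tiny circuit; return (output, number of firing gates)."""
    g1 = threshold_gate([1, 1], 2, [x1, x2])          # AND of the inputs
    g2 = threshold_gate([1, 1, -2], 1, [x1, x2, g1])  # fires iff exactly one input is 1
    return g2, g1 + g2                                # g2 computes XOR

for x1 in (0, 1):
    for x2 in (0, 1):
        out, e = circuit_energy(x1, x2)
        print(f"x=({x1},{x2})  XOR={out}  energy={e}")
```

On input (0,0) no gate fires (energy 0), while every other input activates exactly one gate (energy 1), a miniature instance of the sparse-activity circuits the letter constructs.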
Neural Computation (1999) 11 (8): 1893–1913.
Published: 15 November 1999
Abstract
It is generally assumed that nerve cells optimize their performance to reflect the statistics of their input. Electronic circuit analogs of neurons require similar methods of self-optimization for stable and autonomous operation. Here we describe and demonstrate a biologically plausible adaptive algorithm that enables a neuron to adapt the current threshold and the slope (or gain) of its current-frequency relationship to match the mean (or dc offset) and variance (or dynamic range or contrast) of the time-varying somatic input current. The adaptation algorithm estimates the somatic current signal from the spike train by way of the intracellular somatic calcium concentration, thereby continuously adjusting the neuron's firing dynamics. This principle is shown to work in an analog VLSI-designed silicon neuron.
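A discrete-time caricature of the adaptation principle, with assumed parameters and the calcium-based estimate replaced by a direct running average of the input, not the silicon implementation: the threshold tracks the input's mean and the gain tracks the inverse of its spread, keeping the f-I curve centered on the signal.

```python
import numpy as np

rng = np.random.default_rng(3)
eta = 0.01                       # adaptation rate
theta, gain = 0.0, 1.0           # current threshold and f-I slope
mean_est, var_est = 0.0, 1.0     # running estimates of input statistics

for t in range(20000):
    i_soma = 2.0 + 0.5 * rng.normal()          # somatic input current (a.u.)
    mean_est += eta * (i_soma - mean_est)      # track the dc offset
    var_est += eta * ((i_soma - mean_est) ** 2 - var_est)  # track the contrast
    theta = mean_est                           # center threshold on the mean
    gain = 1.0 / np.sqrt(var_est)              # scale gain to the dynamic range
    rate = max(0.0, gain * (i_soma - theta))   # rectified-linear f-I curve

print(f"adapted threshold: {theta:.2f}  gain: {gain:.2f}")
```

After adaptation the threshold settles near the input mean (2.0 here) and the gain near the inverse standard deviation, so the neuron's operating range matches its input statistics regardless of offset and contrast.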
Neural Computation (1999) 11 (1): 67–74.
Published: 01 January 1999
Abstract
Constant current injection with superimposed periodic inhibition gives rise to phase locking as well as chaotic activity in rat neocortical neurons. Here we compare the behavior of a leaky integrate-and-fire neural model with that of a biophysically realistic model of the rat neuron to determine which membrane properties influence the response to such stimuli. We find that only the biophysical model with voltage-sensitive conductances can produce chaotic behavior.
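A minimal sketch of the simpler of the two models, with assumed parameters: a leaky integrate-and-fire neuron driven by constant current with brief periodic inhibitory pulses. Varying `inh_period` relative to the intrinsic firing period exhibits phase locking; the abstract's point is that this model, lacking voltage-sensitive conductances, does not become chaotic.

```python
import numpy as np

dt, T = 1e-4, 2.0                  # step (s), duration (s)
tau, v_th, v_reset = 0.02, 1.0, 0.0
i_dc, inh_amp, inh_period = 60.0, 40.0, 0.030  # drive, pulse size, period (s)

v, spikes = 0.0, []
for step in range(int(T / dt)):
    t = step * dt
    i = i_dc
    if (t % inh_period) < 0.002:   # brief periodic inhibitory pulse
        i -= inh_amp
    v += dt * (-v / tau + i)       # leaky integration of the input current
    if v >= v_th:                  # threshold crossing: spike and reset
        spikes.append(t)
        v = v_reset

isis = np.diff(spikes)
print(f"mean ISI: {isis.mean():.4f} s, ISI std: {isis.std():.5f} s")
```

A near-zero ISI standard deviation indicates locking to the inhibitory rhythm; reproducing the chaotic regime reported in the abstract requires the biophysical model's voltage-sensitive conductances, which this sketch deliberately omits.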