Markus Diesmann
1–10 of 10 Journal Articles, Publisher: Journals Gateway
Simplicity and Efficiency of Integrate-and-Fire Neuron Models
Neural Computation (2009) 21 (2): 353–359. Published: 01 February 2009.
Abstract
Lovelace and Cios (2008) recently proposed a very simple spiking neuron (VSSN) model for simulations of large neuronal networks as an efficient replacement for the integrate-and-fire neuron model. We argue that the VSSN model falls behind key advances in neuronal network modeling over the past 20 years, in particular, techniques that permit simulators to compute the state of the neuron without repeated summation over the history of input spikes and to integrate the subthreshold dynamics exactly. State-of-the-art solvers for networks of integrate-and-fire model neurons are substantially more efficient than the VSSN simulator and allow routine simulations of networks of some 10^5 neurons and 10^9 connections on moderate computer clusters.
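The exact-integration idea contrasted with the VSSN approach can be made concrete in a few lines. The following sketch (illustrative model and parameter values, not code from the article) shows that advancing a leaky integrate-and-fire membrane with the one-step propagator exp(-h/tau) yields the same state as explicitly summing over the entire input spike history, at constant cost per step:

import numpy as np

tau_m, h = 10.0, 0.1          # membrane time constant and step size (ms)
P = np.exp(-h / tau_m)        # exact one-step propagator of dV/dt = -V/tau_m

rng = np.random.default_rng(1)
steps = 1000
spikes = rng.random(steps) < 0.05        # Boolean input spike train
weights = np.where(spikes, 0.5, 0.0)     # PSP amplitude per spike (mV)

# History-based evaluation: O(number of past spikes) work per time step.
def v_history(n):
    t = n * h
    ts = np.flatnonzero(spikes[:n + 1]) * h
    return np.sum(0.5 * np.exp(-(t - ts) / tau_m))

# Propagator-based evaluation: O(1) work per time step.
v = 0.0
for n in range(steps):
    v = P * v + weights[n]

print(np.isclose(v, v_history(steps - 1)))   # True: identical membrane state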
A Spiking Neural Network Model of an Actor-Critic Learning Agent
Neural Computation (2009) 21 (2): 301–339. Published: 01 February 2009.
Abstract
The ability to adapt behavior to maximize reward as a result of interactions with the environment is crucial for the survival of any higher organism. In the framework of reinforcement learning, temporal-difference learning algorithms provide an effective strategy for such goal-directed adaptation, but it is unclear to what extent these algorithms are compatible with neural computation. In this article, we present a spiking neural network model that implements actor-critic temporal-difference learning by combining local plasticity rules with a global reward signal. The network is capable of solving a nontrivial gridworld task with sparse rewards. We derive a quantitative mapping of plasticity parameters and synaptic weights to the corresponding variables in the standard algorithmic formulation and demonstrate that the network learns at a speed similar to that of its discrete-time counterpart and attains the same equilibrium performance.
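For readers unfamiliar with the algorithmic side, the following is a minimal sketch of the discrete-time actor-critic temporal-difference learning that such a network is mapped onto; the grid size, learning rates, and reward placement are illustrative assumptions, not the article's values:

import numpy as np

rng = np.random.default_rng(0)
N, goal = 5, (4, 4)                       # 5x5 gridworld, reward in one corner
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
V = np.zeros((N, N))                      # critic: state values
pref = np.zeros((N, N, 4))                # actor: action preferences
alpha, beta, gamma = 0.1, 0.1, 0.95

for episode in range(300):
    s = (0, 0)
    for _ in range(500):                              # cap episode length
        p = np.exp(pref[s]); p /= p.sum()             # softmax policy (actor)
        a = rng.choice(4, p=p)
        s2 = (min(max(s[0] + moves[a][0], 0), N - 1),
              min(max(s[1] + moves[a][1], 0), N - 1))
        r = 1.0 if s2 == goal else 0.0                # sparse reward
        delta = r + gamma * V[s2] * (s2 != goal) - V[s]   # TD error
        V[s] += alpha * delta                         # critic update
        pref[s][a] += beta * delta                    # actor update
        s = s2
        if s == goal:
            break

print(np.round(V, 2))   # values grow toward the rewarded corner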
Dependence of Neuronal Correlations on Filter Characteristics and Marginal Spike Train Statistics
Neural Computation (2008) 20 (9): 2133–2184. Published: 01 September 2008.
Abstract
Correlated neural activity has been observed at various signal levels (e.g., spike count, membrane potential, local field potential, EEG, fMRI BOLD). Most of these signals can be considered as superpositions of spike trains filtered by components of the neural system (synapses, membranes) and the measurement process. It is largely unknown how the spike train correlation structure is altered by this filtering and what consequences this has for the dynamics of the system and for the interpretation of measured correlations. In this study, we focus on linearly filtered spike trains and particularly consider correlations caused by overlapping presynaptic neuron populations. We demonstrate that correlation functions and statistical second-order measures like the variance, the covariance, and the correlation coefficient generally exhibit a complex dependence on the filter properties and the statistics of the presynaptic spike trains. We point out that both contributions can play a significant role in modulating the interaction strength between neurons or neuron populations. In many applications, the coherence allows a filter-independent quantification of correlated activity. In different network models, we discuss the estimation of network connectivity from the high-frequency coherence of simultaneous intracellular recordings of pairs of neurons.
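The filter (in)dependence claims can be illustrated numerically. In the sketch below (rates, kernels, and parameters are illustrative assumptions), two spike trains sharing a common presynaptic component are passed through different exponential synaptic kernels: the correlation coefficient changes with the filter, while the estimated coherence stays the same up to estimation error:

import numpy as np
from scipy.signal import coherence, lfilter

rng = np.random.default_rng(0)
n, fs = 100_000, 1000.0
common = rng.random(n) < 0.01                       # shared presynaptic spikes
s1 = ((rng.random(n) < 0.02) | common).astype(float)
s2 = ((rng.random(n) < 0.02) | common).astype(float)

def kernel(tau):                                    # exponential PSP kernel
    t = np.arange(0.0, 10 * tau, 1 / fs)
    return np.exp(-t / tau)

x = lfilter(kernel(0.005), [1.0], s1)               # 5 ms filter on train 1
y = lfilter(kernel(0.050), [1.0], s2)               # 50 ms filter on train 2

print("corr coeff, raw trains:", round(np.corrcoef(s1, s2)[0, 1], 3))
print("corr coeff, filtered:  ", round(np.corrcoef(x, y)[0, 1], 3))

f, C_raw = coherence(s1, s2, fs=fs, nperseg=4096)
f, C_fil = coherence(x, y, fs=fs, nperseg=4096)
band = f < 100.0                                    # compare below 100 Hz
print("mean |coherence difference|:",
      round(float(np.abs(C_raw[band] - C_fil[band]).mean()), 3))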
Correlations and Population Dynamics in Cortical Networks
Neural Computation (2008) 20 (9): 2185–2226. Published: 01 September 2008.
Abstract
The function of cortical networks depends on the collective interplay between neurons and neuronal populations, which is reflected in the correlation of signals that can be recorded at different levels. To correctly interpret these observations it is important to understand the origin of neuronal correlations. Here we study how cells in large recurrent networks of excitatory and inhibitory neurons interact and how the associated correlations affect stationary states of idle network activity. We demonstrate that the structure of the connectivity matrix of such networks induces considerable correlations between synaptic currents as well as between subthreshold membrane potentials, provided Dale's principle is respected. If, in contrast, synaptic weights are randomly distributed, input correlations can vanish, even for densely connected networks. Although correlations are strongly attenuated when proceeding from membrane potentials to action potentials (spikes), the resulting weak correlations in the spike output can cause substantial fluctuations in the population activity, even in highly diluted networks. We show that simple mean-field models that take the structure of the coupling matrix into account can adequately describe the power spectra of the population activity. The consequences of Dale's principle on correlations and rate fluctuations are discussed in the light of recent experimental findings.
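A toy calculation illustrates the role of Dale's principle stated here. In the sketch below (all parameters assumed), two target cells receive input from the same pool of independent sources; if each source keeps one fixed sign, shared inputs correlate the summed currents, whereas drawing the sign independently per synapse makes the input correlation vanish on average:

import numpy as np

rng = np.random.default_rng(0)
n_src, T = 200, 20_000
spikes = (rng.random((T, n_src)) < 0.02).astype(float)   # independent sources

def sign_vector():
    # 80% excitatory (+1), 20% inhibitory (-4): mean synaptic sign is zero
    return np.where(rng.random(n_src) < 0.8, 1.0, -4.0)

sign_src = sign_vector()        # Dale: one fixed sign per source neuron

def input_correlation(dale):
    w1, w2 = rng.random(n_src), rng.random(n_src)   # dense random coupling
    if dale:
        w1, w2 = w1 * sign_src, w2 * sign_src            # same sign reused
    else:
        w1, w2 = w1 * sign_vector(), w2 * sign_vector()  # sign per synapse
    return np.corrcoef(spikes @ w1, spikes @ w2)[0, 1]

print("Dale respected:", round(input_correlation(True), 2))   # clearly positive
print("random signs:  ", round(input_correlation(False), 2))  # near zero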
Spike-Timing-Dependent Plasticity in Balanced Random Networks
Neural Computation (2007) 19 (6): 1437–1467. Published: 01 June 2007.
Abstract
The balanced random network model attracts considerable interest because it explains the irregular spiking activity at low rates and large membrane potential fluctuations exhibited by cortical neurons in vivo. In this article, we investigate to what extent this model is also compatible with the experimentally observed phenomenon of spike-timing-dependent plasticity (STDP). Confronted with the plethora of theoretical models for STDP available, we reexamine the experimental data. On this basis, we propose a novel STDP update rule, with a multiplicative dependence on the synaptic weight for depression, and a power law dependence for potentiation. We show that this rule, when implemented in large, balanced networks of realistic connectivity and sparseness, is compatible with the asynchronous irregular activity regime. The resultant equilibrium weight distribution is unimodal with fluctuating individual weight trajectories and does not exhibit development of structure. We investigate the robustness of our results with respect to the relative strength of depression. We introduce synchronous stimulation to a group of neurons and demonstrate that the decoupling of this group from the rest of the network is so severe that it cannot effectively control the spiking of other neurons, even those with the highest convergence from this group.
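The qualitative form of the proposed update rule can be sketched as follows (parameter values are illustrative assumptions, not the article's fitted values): depression scales multiplicatively with the synaptic weight, potentiation follows a power law, and under random pre/post spike timing an individual weight fluctuates around an equilibrium rather than drifting to a boundary:

import numpy as np

lam, alpha, mu = 0.1, 1.1, 0.4      # learning rate, depression bias, exponent
tau_plus = tau_minus = 20.0         # STDP time constants (ms)
w0 = 1.0                            # reference weight for the power law

def stdp_update(w, dt):
    """Weight change for a spike-time difference dt = t_post - t_pre."""
    if dt > 0:    # pre before post: potentiation, power law in w
        return lam * w0**(1 - mu) * w**mu * np.exp(-dt / tau_plus)
    else:         # post before pre: depression, multiplicative in w
        return -lam * alpha * w * np.exp(dt / tau_minus)

# Under symmetric random spike timing, potentiation and depression balance
# at the weight where w**mu = alpha * w, so the trajectory fluctuates
# around that equilibrium.
rng = np.random.default_rng(0)
w = 1.0
for _ in range(20_000):
    w = max(w + stdp_update(w, rng.normal(0.0, 15.0)), 1e-6)
print(round(w, 3))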
Exact Subthreshold Integration with Continuous Spike Times in Discrete-Time Neural Network Simulations
Neural Computation (2007) 19 (1): 47–79. Published: 01 January 2007.
Abstract
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
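The combination of exact grid-based propagation with continuous spike times can be sketched in a few lines (assumed model and parameters; the article's scheme is more general): the membrane is advanced exactly from one grid point to the next, and the threshold crossing inside a step is located by interpolation:

import numpy as np

tau_m, h, theta = 10.0, 1.0, 1.0        # time constant, step width, threshold
P = np.exp(-h / tau_m)                  # exact subthreshold propagator

I = 0.12                                # constant suprathreshold drive
v, t = 0.0, 0.0
while True:
    v_prev, v = v, P * v + (1 - P) * tau_m * I   # exact step with DC input
    t += h
    if v >= theta:                               # crossing lies inside the step
        frac = (theta - v_prev) / (v - v_prev)   # linear interpolation
        t_spike = t - h + frac * h               # continuous spike time
        break

print(f"grid spike time:   {t:.3f} ms")
print(f"interpolated time: {t_spike:.3f} ms")    # close to 10*ln(6) = 17.92 ms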
Programmable Logic Construction Kits for Hyper-Real-Time Neuronal Modeling
Neural Computation (2006) 18 (11): 2651–2679. Published: 01 November 2006.
Abstract
Programmable logic designs are presented that achieve exact integration of leaky integrate-and-fire soma and dynamical synapse neuronal models and incorporate spike-time dependent plasticity and axonal delays. Highly accurate numerical performance has been achieved by modifying simpler forward-Euler-based circuitry requiring minimal circuit allocation, which, as we show, behaves equivalently to exact integration. These designs have been implemented and simulated at the behavioral and physical device levels, demonstrating close agreement with both numerical and analytical results. By exploiting finely grained parallelism and single clock cycle numerical iteration, these designs achieve simulation speeds at least five orders of magnitude faster than the nervous system, termed here hyper-real-time operation, when deployed on commercially available field-programmable gate array (FPGA) devices. Taken together, our designs form a programmable logic construction kit of commonly used neuronal model elements that supports the building of large and complex architectures of spiking neuron networks for real-time neuromorphic implementation, neurophysiological interfacing, or efficient parameter space investigations.
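The numerical point behind modifying forward-Euler circuitry to behave equivalently to exact integration can be illustrated as follows (values assumed): the one-multiply leak update v -> c*v keeps its circuit structure, but choosing c = exp(-h/tau) instead of the Euler factor 1 - h/tau makes each step exact:

import numpy as np

tau, h, v0, T = 10.0, 1.0, 1.0, 50.0
steps = int(T / h)

c_euler = 1.0 - h / tau          # forward-Euler decay factor
c_exact = np.exp(-h / tau)       # exact one-step decay factor

v_e = v0 * c_euler ** steps      # repeated single-multiply updates
v_x = v0 * c_exact ** steps
v_true = v0 * np.exp(-T / tau)   # analytical solution of dv/dt = -v/tau

print(f"Euler error: {abs(v_e - v_true):.2e}")   # O(h) global error
print(f"exact error: {abs(v_x - v_true):.2e}")   # at machine precision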
Advancing the Boundaries of High-Connectivity Network Simulation with Distributed Computing
Neural Computation (2005) 17 (8): 1776–1801. Published: 01 August 2005.
Abstract
The availability of efficient and reliable simulation tools is one of the mission-critical technologies in the fast-moving field of computational neuroscience. Research indicates that higher brain functions emerge from large and complex cortical networks and their interactions. The large number of elements (neurons) combined with the high connectivity (synapses) of the biological network and the specific type of interactions impose severe constraints on the explorable system size that previously have been hard to overcome. Here we present a collection of new techniques combined into a coherent simulation tool that removes the fundamental obstacle in the computational study of biological neural networks: the enormous number of synaptic contacts per neuron. Distributing an individual simulation over multiple computers enables the investigation of networks orders of magnitude larger than previously possible. The software scales excellently on a wide range of tested hardware, so it can be used in an interactive and iterative fashion for the development of ideas, and results can be produced quickly even for very large networks. In contrast to earlier approaches, a wide class of neuron models and synaptic dynamics can be represented.
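The distribution idea can be caricatured in a few lines (a schematic sketch, not the authors' implementation): neurons are placed round-robin across machines and each machine stores only the incoming synapses of its own neurons, so the dominant memory cost, the synapses, is divided across the cluster:

import random

n_neurons, n_ranks, K = 12, 3, 4     # toy sizes; real networks: ~1e5 neurons

def rank_of(gid):                    # round-robin placement of neurons
    return gid % n_ranks

random.seed(0)
local_synapses = {r: {} for r in range(n_ranks)}   # "machines" as dicts
for post in range(n_neurons):
    sources = random.sample(range(n_neurons), K)   # K incoming connections
    local_synapses[rank_of(post)][post] = sources  # stored only on one rank

for r in range(n_ranks):
    n_syn = sum(len(s) for s in local_synapses[r].values())
    print(f"rank {r}: {n_syn} synapses for neurons "
          f"{sorted(local_synapses[r])}")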
Unitary Events in Multiple Single-Neuron Spiking Activity: I. Detection and Significance
Neural Computation (2002) 14 (1): 43–80. Published: 01 January 2002.
Abstract
It has been proposed that cortical neurons organize dynamically into functional groups (cell assemblies) by the temporal structure of their joint spiking activity. Here, we describe a novel method to detect conspicuous patterns of coincident joint spike activity among simultaneously recorded single neurons. The statistical significance of these unitary events of coincident joint spike activity is evaluated by the joint-surprise. The method is tested and calibrated on the basis of simulated, stationary spike trains of independently firing neurons, into which coincident joint spike events were inserted under controlled conditions. The sensitivity and specificity of the method are investigated for their dependence on physiological parameters (firing rate, coincidence precision, coincidence pattern complexity) and temporal resolution of the analysis. In the companion article in this issue, we describe an extension of the method, designed to deal with nonstationary firing rates.
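The significance test can be sketched under the simplifying assumption of independent, stationary Poisson spiking (counts and rates below are illustrative; the joint-surprise is written in its common log-odds form):

import numpy as np
from scipy.stats import poisson

T, dt = 100.0, 0.005          # duration (s) and coincidence bin width (s)
rate1, rate2 = 10.0, 12.0     # firing rates (Hz)
n_emp = 90                    # observed number of coincidences

n_bins = T / dt
p1, p2 = rate1 * dt, rate2 * dt
n_exp = n_bins * p1 * p2      # expected chance coincidences

# Joint-p-value: probability of at least n_emp coincidences by chance,
# approximating the coincidence count as Poisson with mean n_exp.
jp = poisson.sf(n_emp - 1, n_exp)
surprise = np.log10((1 - jp) / jp)   # joint-surprise

print(f"expected {n_exp:.1f}, observed {n_emp}, "
      f"joint-p {jp:.3g}, surprise {surprise:.2f}")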
Unitary Events in Multiple Single-Neuron Spiking Activity: II. Nonstationary Data
Neural Computation (2002) 14 (1): 81–119. Published: 01 January 2002.
Abstract
In order to detect members of a functional group (cell assembly) in simultaneously recorded neuronal spiking activity, we adopted the widely used operational definition that membership in a common assembly is expressed in near-simultaneous spike activity. Unitary event analysis, a statistical method to detect the significant occurrence of coincident spiking activity in stationary data, was recently developed (see the companion article in this issue). The technique for the detection of unitary events is based on the assumption that the underlying processes are stationary in time. This requirement, however, is usually not fulfilled in neuronal data. Here we describe a method that properly normalizes for changes of rate: the unitary events by moving window analysis (UEMWA). Analysis for unitary events is performed separately in overlapping time segments by sliding a window of constant width along the data. In each window, stationarity is assumed. Performance and sensitivity are demonstrated by use of simulated spike trains of independently firing neurons, into which coincident events are inserted. If cortical neurons organize dynamically into functional groups, the occurrence of near-simultaneous spike activity should be time varying and related to behavior and stimuli. UEMWA also accounts for these potentially interesting nonstationarities and allows locating them in time. The potential of the new method is illustrated by results from multiple single-unit recordings from frontal and motor cortical areas in awake, behaving monkeys.
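The moving-window idea can be sketched as follows (window size, rates, and data are illustrative assumptions): rates are re-estimated inside each window, so the expected coincidence count tracks the nonstationarity, and independent trains are not flagged as significant even when their common rate jumps:

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
dt, T = 0.005, 60.0                         # bin width (s), duration (s)
t = np.arange(0, T, dt)
rate = np.where(t < 30, 5.0, 25.0)          # nonstationary firing rate (Hz)
s1 = rng.random(t.size) < rate * dt         # two independent spike trains
s2 = rng.random(t.size) < rate * dt

win = int(5.0 / dt)                         # 5 s window, slid in 2.5 s steps
for start in range(0, t.size - win + 1, int(2.5 / dt)):
    a, b = s1[start:start + win], s2[start:start + win]
    n_emp = np.sum(a & b)                   # coincidences in this window
    n_exp = a.mean() * b.mean() * win       # expectation from local rates
    jp = poisson.sf(n_emp - 1, max(n_exp, 1e-12))
    print(f"t={t[start]:5.1f}s  n_emp={n_emp:3d}  "
          f"n_exp={n_exp:5.2f}  joint-p={jp:.2f}")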