Benjamin Schrauwen
Journal Articles
Neural Computation (2015) 27 (3): 725–747.
Published: 01 March 2015
Abstract
In the quest for alternatives to traditional complementary metal-oxide-semiconductor (CMOS) technology, it has been suggested that digital computing efficiency and power consumption can be improved by matching the precision to the application. Many applications do not need the high precision that is used today. In particular, large gains in area and power efficiency could be achieved by dedicated analog realizations of approximate computing engines. In this work, we explore the use of memristor networks for analog approximate computation, based on a machine learning framework called reservoir computing. Most experimental investigations of the dynamics of memristors focus on their nonvolatile behavior. Hence, the volatility that is present in the developed technologies is usually unwanted and is not included in simulation models. In reservoir computing, by contrast, volatility is not only desirable but necessary. Therefore, in this work, we propose two different ways to incorporate it into memristor simulation models: the first is an extension of Strukov’s model, and the second is an equivalent Wiener model approximation. We analyze and compare the dynamical properties of these models and discuss their implications for the memory and the nonlinear processing capacity of memristor networks. Our results indicate that device variability, which increasingly causes problems in traditional computer design, is an asset in the context of reservoir computing. We conclude that although both models could lead to useful memristor-based reservoir computing systems, their computational performance will differ. Therefore, experimental modeling research is required for the development of accurate volatile memristor models.
Includes: Supplementary data
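The volatile extension of Strukov's model described in this abstract can be given a rough shape in code. The sketch below is an illustration only, not the paper's model: the decay constant tau (which makes the state fade, i.e., volatility), all parameter values, and the simple Euler integration are assumptions.

```python
import numpy as np

# Minimal sketch of a volatile memristor, assuming Strukov's drift
# equation dw/dt = (mu_v * R_on / D) * i(t) plus a hypothetical decay
# term -w/tau that models volatility. Parameter values are arbitrary.

def simulate_volatile_memristor(i, dt=1e-4, D=10e-9, mu_v=1e-14,
                                R_on=100.0, R_off=16e3, tau=0.05):
    """Integrate the internal state w(t) for a driving current trace i."""
    w = 0.5 * D                                  # initial state, 0 <= w <= D
    v_out = np.empty_like(i, dtype=float)
    for k, i_k in enumerate(i):
        # Strukov drift term plus the assumed decay toward w = 0
        dw = (mu_v * R_on / D) * i_k - w / tau
        w = float(np.clip(w + dw * dt, 0.0, D))
        # Memristance interpolates between the on and off resistances
        M = R_on * (w / D) + R_off * (1.0 - w / D)
        v_out[k] = M * i_k                       # voltage response
    return v_out

# Example: response to a slow sinusoidal current drive
v = simulate_volatile_memristor(1e-3 * np.sin(np.linspace(0, 20, 2000)))
```

The decay term is what gives the device the fading memory that, per the abstract, reservoir computing requires.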
Journal Articles
Neural Computation (2014) 26 (6): 1055–1079.
Published: 01 June 2014
Abstract
In the field of neural network simulation techniques, the common conception is that spiking neural network simulators can be divided into two categories: time-step-based and event-driven methods. In this letter, we look at state-of-the-art simulation techniques in both categories and show that a clear distinction between the two methods is increasingly difficult to draw. In an attempt to improve the weak points of each simulation method, ideas from the alternative method are, sometimes unknowingly, incorporated into the simulation engine. Clearly, the ideal simulation method is a mix of both. We formulate the key properties of such an efficient and generally applicable hybrid approach.
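As a rough illustration of such a hybrid, the sketch below advances leaky integrate-and-fire neurons analytically between spikes (the event-driven idea) while keeping a fixed time grid for synchronization (the time-step idea). The LIF dynamics, parameters, and loop structure are illustrative assumptions, not the letter's proposal.

```python
import heapq
import math

# Hybrid simulation sketch: exact exponential decay between events,
# a fixed grid for global synchronization. All parameters assumed.

def hybrid_simulate(spike_inputs, n_neurons, t_end, dt=1e-3,
                    tau_m=20e-3, v_thresh=1.0, w=0.1):
    v = [0.0] * n_neurons                  # membrane potentials
    last_update = [0.0] * n_neurons        # per-neuron update times
    events = list(spike_inputs)            # (time, target_neuron) tuples
    heapq.heapify(events)
    out_spikes, t_grid = [], dt
    while t_grid <= t_end:
        # Event-driven part: handle every spike before the next grid point
        while events and events[0][0] <= t_grid:
            t_ev, j = heapq.heappop(events)
            v[j] *= math.exp(-(t_ev - last_update[j]) / tau_m)  # exact decay
            last_update[j] = t_ev
            v[j] += w                                           # synaptic jump
            if v[j] >= v_thresh:
                v[j] = 0.0
                out_spikes.append((t_ev, j))
        # Time-step part: bring all neurons up to the grid point
        for j in range(n_neurons):
            v[j] *= math.exp(-(t_grid - last_update[j]) / tau_m)
            last_update[j] = t_grid
        t_grid += dt
    return out_spikes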
Journal Articles
Neural Computation (2012) 24 (1): 104–133.
Published: 01 January 2012
Abstract
Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One can thus consider the reservoir as a spatiotemporal kernel for which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that can subsequently be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
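To make the idea concrete, here is a minimal sketch of one possible recursive kernel, assuming a linear reservoir in the infinite-size limit: the kernel at time t combines the current input similarity with a discounted copy of the kernel at t-1. The specific recursion and parameter names are illustrative assumptions; the letter derives its own family of kernels.

```python
import numpy as np

# Sketch of a recursive kernel: sequence similarity is accumulated
# sample by sample, with the past discounted by sigma_rec**2 (a
# fading-memory condition analogous to the echo state property).

def recursive_kernel(u, v, sigma_in=1.0, sigma_rec=0.9):
    """u, v: arrays of shape (T, d) holding two input time series."""
    k = 0.0
    for u_t, v_t in zip(u, v):
        k = sigma_in**2 * np.dot(u_t, v_t) + sigma_rec**2 * k
    return k
```

A Gram matrix built from such a kernel over a set of sequences can then be handed to any standard kernel machine, e.g. a support vector machine with a precomputed kernel, which is the "recursive SVM" construction the abstract refers to.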
Journal Articles
Neural Computation (2010) 22 (6): 1468–1472.
Published: 01 June 2010
Abstract
Recently, van Elburg and van Ooyen (2009) published a generalization of the event-based integration scheme, introduced by Carnevale and Hines, for an integrate-and-fire neuron model with exponentially decaying excitatory currents and double-exponential inhibitory synaptic currents. In that paper, it was shown that the constraints on the synaptic time constants imposed by the Newton-Raphson iteration scheme can be relaxed. In this note, we show that, according to the results published in D'Haene, Schrauwen, Van Campenhout, and Stroobandt (2009), a further generalization is possible that eliminates any constraint on the time constants. We also demonstrate that a wide range of linear neuron models, including models mimicking complex neuronal behavior, can in fact be efficiently simulated with this computation scheme. These results can change the way complex neuronal spiking behavior is modeled: instead of highly nonlinear neuron models with few state variables, it is possible to efficiently simulate linear models with a large number of state variables.
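A minimal sketch of the underlying idea, assuming the linear dynamics have been diagonalized so that each state variable decays with its own time constant: between events the state can be advanced in closed form over any interval, so the simulator only does work at spike times and no constraint ties the time constants together. The class and parameter names below are hypothetical, not from the cited papers.

```python
import math

# Event-based integration sketch for a linear neuron model whose
# (diagonalized) state variables each decay exponentially. The
# decoupled form is an illustrative simplification.

class LinearNeuron:
    def __init__(self, taus):
        self.tau = list(taus)            # one time constant per variable
        self.s = [0.0] * len(taus)       # state variables (currents, V, ...)
        self.t = 0.0                     # time of the last update

    def advance(self, t_new):
        """Exact closed-form decay from self.t to t_new (no small steps)."""
        dt = t_new - self.t
        self.s = [si * math.exp(-dt / ti) for si, ti in zip(self.s, self.tau)]
        self.t = t_new

    def receive_spike(self, t_spike, jumps):
        """Advance exactly to the spike time, then apply the state jumps."""
        self.advance(t_spike)
        self.s = [si + ji for si, ji in zip(self.s, jumps)]
```

Because `advance` is exact for arbitrary intervals and arbitrary time constants, adding more state variables only lengthens the lists, which is why linear models with many state variables remain cheap to simulate.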
Journal Articles
Neural Computation (2010) 22 (5): 1272–1311.
Published: 01 May 2010
Abstract
Reservoir computing (RC) systems are powerful models for online computations on input sequences. They consist of a memoryless readout neuron that is trained on top of a randomly connected recurrent neural network. RC systems are commonly used in two flavors: with analog or with binary (spiking) neurons in the recurrent circuits. Previous work indicated a fundamental difference in the behavior of these two implementations of the RC idea: the performance of an RC system built from binary neurons seems to depend strongly on the network connectivity structure, whereas in networks of analog neurons no such clear dependency has been observed. In this letter, we address this apparent dichotomy by investigating the influence of the network connectivity (parameterized by the neuron in-degree) on a family of network models that interpolates between analog and binary networks. Our analyses are based on a novel estimation of the Lyapunov exponent of the network dynamics with the help of branching process theory, on rank measures that estimate the kernel quality and generalization capability of recurrent networks, and on a novel mean field predictor for computational performance. These analyses reveal that the phase transition between ordered and chaotic network behavior of binary circuits qualitatively differs from the one in analog circuits, leading to differences in the integration of information over short and long timescales. This explains the decreased computational performance observed in densely connected binary circuits. The mean field predictor is also used to bound the memory function of recurrent circuits of binary neurons.
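For readers who want a feel for the order/chaos transition as a function of in-degree, the sketch below estimates the largest Lyapunov exponent of a tanh network with fixed in-degree K by the standard trajectory-perturbation method. Note that this is not the letter's branching-process estimator; the network setup and all parameters are illustrative assumptions.

```python
import numpy as np

# Standard perturbation-based Lyapunov exponent estimate for an analog
# (tanh) network with fixed in-degree K. Included only to illustrate
# the ordered/chaotic regimes discussed in the abstract.

def lyapunov_exponent(K=10, n=500, sigma_w=0.5, steps=2000, eps=1e-8,
                      seed=0):
    rng = np.random.default_rng(seed)
    # Sparse recurrent weights: each neuron receives exactly K inputs
    W = np.zeros((n, n))
    for i in range(n):
        idx = rng.choice(n, size=K, replace=False)
        W[i, idx] = rng.normal(0.0, sigma_w, size=K)
    x = rng.normal(0.0, 1.0, n)
    y = x + eps * rng.normal(0.0, 1.0, n)     # perturbed twin trajectory
    growth = 0.0
    for _ in range(steps):
        x, y = np.tanh(W @ x), np.tanh(W @ y)
        d = np.linalg.norm(y - x)
        growth += np.log(d / eps)
        y = x + (eps / d) * (y - x)           # renormalize the separation
    return growth / steps                     # > 0: chaotic, < 0: ordered

print(lyapunov_exponent(K=4), lyapunov_exponent(K=100))
```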
Journal Articles
Neural Computation (2009) 21 (4): 1068–1099.
Published: 01 April 2009
Abstract
The simulation of spiking neural networks (SNNs) is known to be a very time-consuming task. This limits the size of the SNNs that can be simulated in reasonable time or forces users to overly limit the complexity of the neuron models. This is one of the driving forces behind much of the recent research on event-driven simulation strategies. Although event-driven simulation allows precise and efficient simulation of certain spiking neuron models, it is not straightforward to generalize the technique to more complex neuron models, mostly because the firing times of these neuron models are computationally expensive to evaluate. Most solutions proposed in the literature concentrate on algorithms that can solve this problem efficiently. However, these solutions do not scale well when more state variables are involved in the neuron model, which is, for example, the case when multiple synaptic time constants are used for each neuron. In this letter, we show that an exact prediction of the firing time is not required to guarantee exact simulation results. Several techniques are presented that try to do the least possible amount of work to predict the firing times. We propose an elegant algorithm for the simulation of leaky integrate-and-fire (LIF) neurons with an arbitrary number of (unconstrained) synaptic time constants, which combines these algorithmic techniques efficiently, resulting in very high simulation speed. Moreover, our algorithm is largely independent of the complexity (i.e., the number of synaptic time constants) of the underlying neuron model.
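The "least possible amount of work" idea can be sketched as follows: before paying for an expensive firing-time search, test a cheap upper bound on the membrane potential, here assumed to take the form V(t) = sum_i c_i * exp(-t/tau_i). The particular bound and the bisection fallback below are illustrative assumptions, not the letter's algorithm.

```python
import math

# Lazy firing-time prediction sketch: a cheap bound check first,
# expensive root finding only when the bound says a spike is possible.

def may_fire(c, v_thresh):
    """Cheap test: each positive term c_i * exp(-t/tau_i) is maximal at
    t = 0 and negative terms only lower V, so the sum of the positive
    coefficients is an upper bound on V(t) for all t >= 0."""
    return sum(ci for ci in c if ci > 0.0) >= v_thresh

def firing_time(c, tau, v_thresh, t_max=1.0, tol=1e-9):
    """Only if the cheap test passes do we pay for root finding."""
    if not may_fire(c, v_thresh):
        return None
    V = lambda t: sum(ci * math.exp(-t / ti) for ci, ti in zip(c, tau))
    # Coarse scan, then bisection on the first bracketed upward crossing
    ts = [t_max * k / 256 for k in range(257)]
    for a, b in zip(ts, ts[1:]):
        if (V(a) - v_thresh) < 0.0 <= (V(b) - v_thresh):
            while b - a > tol:
                m = 0.5 * (a + b)
                a, b = (m, b) if V(m) < v_thresh else (a, m)
            return 0.5 * (a + b)
    return None                              # no spike before t_max
```

Because the bound check costs one pass over the coefficients regardless of how many synaptic time constants there are, this style of test is consistent with the abstract's claim that the algorithm's cost is largely independent of the number of state variables.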