André van Schaik
1-5 of 5
Journal Articles
The Leaky Integrate-and-Fire Neuron Is a Change-Point Detector for Compound Poisson Processes
Open Access. Publisher: Journals Gateway
Neural Computation (2025) 37 (5): 926–956.
Published: 17 April 2025
FIGURES (7)
Abstract
Animal nervous systems can detect changes in their environments within hundredths of a second. They do so by discerning abrupt shifts in sensory neural activity. Many neuroscience studies have employed change-point detection (CPD) algorithms to estimate such abrupt shifts in neural activity. But very few studies have suggested that spiking neurons themselves are online change-point detectors. We show that a leaky integrate-and-fire (LIF) neuron implements an online CPD algorithm for a compound Poisson process. We quantify the CPD performance of an LIF neuron across various regions of its parameter space. We show that CPD can be a recursive algorithm where the output of one algorithm can be the input to another. Then we show that a simple feedforward network of LIF neurons can quickly and reliably detect very small changes in input spiking rates. For example, our network detects a 5% change in input rates within 20 ms on average, and false-positive detections are extremely rare. In a rigorous statistical context, we interpret the salient features of the LIF neuron: its membrane potential, synaptic weight, time constant, resting potential, action potentials, and threshold. Our results potentially generalize beyond the LIF neuron model and its associated CPD problem. If spiking neurons perform change-point detection on their inputs, then the electrophysiological properties of their membranes must be related to the spiking statistics of their inputs. We demonstrate one example of this relationship for the LIF neuron and compound Poisson processes and suggest how to test this hypothesis more broadly. Maybe neurons are not noisy devices whose action potentials must be averaged over time or populations. Instead, neurons might implement sophisticated, optimal, and online statistical algorithms on their inputs.
Includes: Supplementary data
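A minimal simulation sketch of this abstract's central claim (Python; every parameter value below is an assumption for illustration, not taken from the article): a compound Poisson input whose rate steps up mid-trial drives an LIF neuron, and the neuron's first output spike serves as the online change-point detection.

    import numpy as np

    # Sketch only: an LIF neuron as an online change-point detector for a
    # compound Poisson input. All numbers are illustrative assumptions.
    rng = np.random.default_rng(0)

    dt = 1e-4                            # simulation step (s)
    t_change = 0.5                       # true change point (s)
    rate_pre, rate_post = 100.0, 300.0   # input spike rates (Hz)
    w = 0.5                              # synaptic weight: jump per input spike (mV)
    tau = 0.02                           # membrane time constant (s)
    v_rest, v_thresh = 0.0, 2.2          # resting and threshold potentials (mV)

    v = v_rest
    detection = None
    for i in range(int(1.0 / dt)):       # simulate 1 s
        t = i * dt
        rate = rate_pre if t < t_change else rate_post
        v += dt * (v_rest - v) / tau     # leak toward rest
        v += w * rng.poisson(rate * dt)  # compound Poisson drive
        if v >= v_thresh:                # output spike = change detected
            detection = t
            break

    print(f"change at {t_change} s, detected at {detection} s")

With these assumed values, the pre-change membrane potential fluctuates around w * rate_pre * tau = 1.0 mV, the post-change equilibrium is 3.0 mV, and the 2.2 mV threshold is typically crossed within roughly one membrane time constant of the change. Lowering the threshold detects faster at the cost of more false positives, the kind of parameter-space trade-off the article quantifies.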
Journal Articles
Electrical Signaling Beyond Neurons
Open Access. Publisher: Journals Gateway
Neural Computation (2024) 36 (10): 1939–2029.
Published: 17 September 2024
FIGURES (13)
Abstract
Neural action potentials (APs) are difficult to interpret as signal encoders and/or computational primitives. Their relationships with stimuli and behaviors are obscured by the staggering complexity of nervous systems themselves. We can reduce this complexity by observing that “simpler” neuron-less organisms also transduce stimuli into transient electrical pulses that affect their behaviors. Without a complicated nervous system, APs are often easier to understand as signal/response mechanisms. We review examples of nonneural stimulus transductions in domains of life largely neglected by theoretical neuroscience: bacteria, protozoans, plants, fungi, and neuron-less animals. We report properties of those electrical signals—for example, amplitudes, durations, ionic bases, refractory periods, and particularly their ecological purposes. We compare those properties with those of neurons to infer the tasks and selection pressures that neurons satisfy. Throughout the tree of life, nonneural stimulus transductions time behavioral responses to environmental changes. Nonneural organisms represent the presence or absence of a stimulus with the presence or absence of an electrical signal. Their transductions usually exhibit high sensitivity and specificity to a stimulus, but are often slow compared to neurons. Neurons appear to be sacrificing the specificity of their stimulus transductions for sensitivity and speed. We interpret cellular stimulus transductions as a cell’s assertion that it detected something important at that moment in time. In particular, we consider neural APs as fast but noisy detection assertions. We infer that a principal goal of nervous systems is to detect extremely weak signals from noisy sensory spikes under enormous time pressure. We discuss neural computation proposals that address this goal by casting neurons as devices that implement online, analog, probabilistic computations with their membrane potentials. Those proposals imply a measurable relationship between afferent neural spiking statistics and efferent neural membrane electrophysiology.
Journal Articles
Approximate, Computationally Efficient Online Learning in Bayesian Spiking Neurons
Publisher: Journals Gateway
Neural Computation (2014) 26 (3): 472–496.
Published: 01 March 2014
FIGURES (23)
Abstract
Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM), which is computationally slow and limits the potential for studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI so that one can take advantage of the energy-efficient spike coding of BSNs.
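The FL algorithm itself is not reproduced here. As a toy illustration of the online, constant-memory style of estimation it exemplifies (Python; all names and values are assumptions, not the article's code), the sketch below computes a recursive maximum-likelihood estimate of a Poisson input rate from one observation at a time.

    import numpy as np

    # Toy illustration (not the article's FL algorithm): a recursive,
    # online maximum-likelihood estimate of a Poisson input rate. Each
    # update uses only the current observation, so no spike history is
    # stored; that is the defining property of an online learner.
    rng = np.random.default_rng(1)

    dt = 1e-3         # time bin (s), assumed
    true_rate = 40.0  # hidden input rate (Hz), assumed
    rate_hat = 0.0    # online estimate, updated recursively

    for n in range(1, 50_001):
        x = rng.poisson(true_rate * dt)   # spike count in this bin
        # The 1/n step size makes rate_hat the running ML estimate
        # (total spike count divided by total elapsed time).
        rate_hat += (x / dt - rate_hat) / n

    print(f"true rate {true_rate} Hz, online estimate {rate_hat:.1f} Hz")

Because the 1/n step size makes rate_hat the running average of x / dt, the loop reproduces the batch maximum-likelihood estimate without storing any spike history.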
Journal Articles
Temporal Order Detection and Coding in Nervous Systems
Publisher: Journals Gateway
Neural Computation (2013) 25 (2): 510–531.
Published: 01 February 2013
FIGURES (9)
Abstract
This letter discusses temporal order coding and detection in nervous systems. Detection of temporal order in the external world is an adaptive function of nervous systems. In addition, coding based on the temporal order of signals can be used as an internal code. Such temporal order coding is a subset of temporal coding. We discuss two examples of processing the temporal order of external events: the auditory location detection system in birds and the visual direction detection system in flies. We then discuss how somatosensory stimulus intensities are translated into a temporal order code in the human peripheral nervous system. We next turn our attention to input order coding in the mammalian cortex. We review work demonstrating the capabilities of cortical neurons for detecting input order. We then discuss research both refuting and demonstrating the representation of stimulus features in the cortex by means of input order. After some general theoretical considerations on input order detection and coding, we conclude by discussing the existing and potential use of input order coding in neuromorphic engineering.
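As a toy sketch of two ingredients discussed above (Python; the functions and numbers are hypothetical, not models from the letter): stimulus intensity is translated into first-spike latency, and a downstream detector reads out which afferent fired first.

    import numpy as np

    # Hypothetical sketch of intensity-to-latency coding plus input-order
    # detection. Stronger stimuli evoke earlier spikes, so the order of
    # first spikes across two afferents encodes which stimulus was stronger.
    rng = np.random.default_rng(2)

    def first_spike_latency(intensity, jitter=0.5):
        # Latency (ms) decreases with intensity; Gaussian jitter assumed.
        return 10.0 / intensity + rng.normal(0.0, jitter)

    def order_detector(lat_a, lat_b):
        # Reports which afferent spiked first (coincidence-style order
        # detection, as in the sound-localization circuits cited above).
        return "A" if lat_a < lat_b else "B"

    lat_a = first_spike_latency(intensity=4.0)   # stronger stimulus at A
    lat_b = first_spike_latency(intensity=2.0)
    print(f"A at {lat_a:.2f} ms, B at {lat_b:.2f} ms -> first: {order_detector(lat_a, lat_b)}")

Because stronger stimuli spike earlier under this assumed coding, the spike order across the two afferents carries the intensity comparison, which is the essence of input order coding.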
Journal Articles
A First-Order Nonhomogeneous Markov Model for the Response of Spiking Neurons Stimulated by Small Phase-Continuous Signals
Publisher: Journals Gateway
Neural Computation (2009) 21 (6): 1554–1588.
Published: 01 June 2009
FIGURES (15)
Abstract
We present a first-order nonhomogeneous Markov model for the interspike-interval density of a continuously stimulated spiking neuron. The model allows the conditional interspike-interval density and the stationary interspike-interval density to be expressed as products of two separate functions, one of which describes only the neuron characteristics and the other of which describes only the signal characteristics. The approximation shows particularly clearly that signal autocorrelations and cross-correlations arise as natural features of the interspike-interval density, and that they are most pronounced for small signals and moderate noise. We show that this model simplifies the design of spiking neuron cross-correlation systems and describe a four-neuron mutual inhibition network that generates a cross-correlation output for two input signals.
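In symbols (notation assumed here for illustration, not taken from the article), the separability property described above can be written as

    p(t_k \mid t_{k-1}) \;\approx\; f_{\mathrm{neuron}}(t_k - t_{k-1})\, g_{\mathrm{signal}}(t_k),

where f_neuron depends only on the neuron's characteristics and g_signal only on the stimulus, so signal autocorrelations and cross-correlations enter the spike statistics only through the signal factor.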