Animal nervous systems can detect changes in their environments within hundredths of a second. They do so by discerning abrupt shifts in sensory neural activity. Many neuroscience studies have employed change-point detection (CPD) algorithms to estimate such abrupt shifts in neural activity. But very few studies have suggested that spiking neurons themselves are online change-point detectors. We show that a leaky integrate-and-fire (LIF) neuron implements an online CPD algorithm for a compound Poisson process. We quantify the CPD performance of an LIF neuron under various regions of its parameter space. We show that CPD can be a recursive algorithm where the output of one algorithm can be input to another. Then we show that a simple feedforward network of LIF neurons can quickly and reliably detect very small changes in input spiking rates. For example, our network detects a 5% change in input rates within 20 ms on average, and false-positive detections are extremely rare. In a rigorous statistical context, we interpret the salient features of the LIF neuron: its membrane potential, synaptic weight, time constant, resting potential, action potentials, and threshold. Our results potentially generalize beyond the LIF neuron model and its associated CPD problem. If spiking neurons perform change-point detection on their inputs, then the electrophysiological properties of their membranes must be related to the spiking statistics of their inputs. We demonstrate one example of this relationship for the LIF neuron and compound Poisson processes and suggest how to test this hypothesis more broadly. Maybe neurons are not noisy devices whose action potentials must be averaged over time or populations. Instead, neurons might implement sophisticated, optimal, and online statistical algorithms on their inputs.

Animals can respond to environmental changes remarkably quickly. Sometimes quick reactions are critical to their survival, as in predator-prey interactions (Monk & Paulin, 2014). These reactions can occur within tens of milliseconds (Sourakov, 2009; Prete & Cleal, 1996) and sometimes even quicker (Sourakov, 2011). For example, zebrafish larvae initiate escape responses to the hydrodynamic stimulus of an approaching predator’s bow wave in as little as 4 ms (Stewart et al., 2013, 2014; Liu & Fetcho, 1999). Countless analogous examples exist throughout the animal kingdom (Yamawaki, 2000; Aldridge et al., 1997; Randall, 1964).

Quick detection of environmental stimuli requires animals to detect changes in the spiking behavior of their sensory neurons as soon as possible (Herberholz & Marquart, 2012). Perhaps an animal's sensory neurons are spiking faster because they are being stimulated by some signal from an approaching predator (Santer et al., 2012; Monk & Paulin, 2014), so a reaction must be made immediately. For example, zebrafish larvae need to detect a sudden increase in the spiking rates of their lateral line neurons (Trapani & Nicolson, 2011) in order to react quickly to an approaching predator's bow wave. Presumably their downstream spiking neurons accomplish this task.

From a statistical perspective, detecting changes in sensory spiking behavior is a change-point detection (CPD) problem. CPD is a statistics discipline that detects abrupt changes in statistical properties of a time series (Aminikhanghahi & Cook, 2017), for example, its underlying distributions, mean, variance, and/or other moments (Polunchenko & Tartakovsky, 2012). CPD algorithms have a wide variety of applications in manufacturing (Wu et al., 2022), finance (Pepelyshev & Polunchenko, 2015), epidemiology (Dehning et al., 2020), climate change (Reeves et al., 2007), and neural spike train analysis (Zhang & Chen, 2021; Bélisle et al., 1998; Ritov et al., 2002; Mosqueiro et al., 2016). Some CPD algorithms exhibit a striking resemblance to spiking neurons.

Figure 1 (main plot, green trace) is a schematic of an online cumulative sum (CUSUM) CPD algorithm (Page, 1954). CUSUM CPD has been extensively studied (Bissell, 1969; Brook & Evans, 1972; Saghaei et al., 2009) and has a wide range of applications, from quality control (Biau et al., 2007; Huang et al., 2011) to performance assessment (Lim et al., 2002; Van Rij et al., 1995). It is minimax-optimal, that is, it minimizes the maximum expected detection delay for a fixed tolerance of false alarms (Moustakides et al., 2009; Tartakovsky et al., 2009). Figure 1 illustrates how the CUSUM CPD algorithm detects a sudden increase in the spiking rate of an input spike train from a sensory neuron (blue raster plot, top). It compares a test statistic (in this example, a likelihood ratio; green trace, Figure 1) to a fixed threshold θ (horizontal dotted trace). Whenever an input spike is observed, the likelihood ratio instantaneously jumps. While no input spikes are observed, the likelihood ratio decays. The algorithm prevents the likelihood ratio from falling too low (during long interspike intervals in the input spikes) with a reflecting barrier under it (RB, Figure 1, bottom). When the likelihood ratio crosses the threshold, the algorithm asserts that a change point has been detected. The threshold determines a speed-accuracy trade-off, where raising θ reduces false alarms but delays change-point detections, and vice versa. The likelihood ratio is then reset to the reflecting barrier, and the algorithm repeats.
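The recursion just described (decay between spikes, a jump at each spike, a reflecting barrier, and a threshold test) can be sketched in a few lines of Python. This is a minimal illustration rather than the article's implementation; the function name and the numbers in the usage line are ours.

```python
import math

def cusum_poisson(spike_times, lam1, lam2, theta):
    """Online CUSUM for detecting a Poisson rate increase lam1 -> lam2.

    Between spikes the likelihood ratio L decays at rate (lam2 - lam1);
    at each spike it is scaled by lam2 / lam1; a reflecting barrier keeps
    L >= 1.  A threshold crossing asserts a change point and resets L.
    """
    L, t_prev, detections = 1.0, 0.0, []
    for t in spike_times:
        # Decay during the interspike interval, reflected at the barrier.
        L = max(1.0, L * math.exp(-(lam2 - lam1) * (t - t_prev)))
        L *= lam2 / lam1  # instantaneous jump when a spike arrives
        if L > theta:     # change point asserted
            detections.append(t)
            L = 1.0       # reset to the reflecting barrier
        t_prev = t
    return detections

# Two spikes 100 ms apart are strong evidence for the higher rate:
print(cusum_poisson([0.1, 0.2, 1.5], lam1=0.5, lam2=3.0, theta=13.3))  # prints [0.2]
```

Note that threshold crossings can only occur at spike arrivals, because the likelihood ratio only decays in between; the algorithm is therefore naturally event driven.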

Figure 1:

Online CPD algorithms exhibit salient neural features. The top raster plot (blue) is an example input spike train to a neuron. The main plot (green trace) illustrates an online CUSUM CPD algorithm that detects a sudden increase in the rate of that input. It compares a likelihood ratio (green trace) to a threshold θ (horizontal dotted trace). The likelihood ratio instantaneously jumps when an input is observed and decays during waiting times between inputs. It is bounded from below by a reflecting barrier (RB). When the likelihood ratio crosses the threshold, a change point is asserted (Output raster plot, bottom). The algorithm then resets and repeats. θ controls the speed-accuracy trade-off for the algorithm. Many functional traits of the online CUSUM CPD algorithm map naturally to salient neural features. A neuron can represent a likelihood ratio of its inputs, or a one-to-one function of it, with its membrane potential. Its resting potential acts as a reflecting barrier, its synaptic weight determines the jump size of the likelihood ratio when input spikes arrive, and its leak current determines how quickly the likelihood ratio should decay between input spikes. Its output spikes (Output, bottom raster plot) are the assertion by a neuron that it has detected a change point in its inputs at that moment in time. Those output spikes can be used as inputs to another CPD algorithm (i.e., a downstream neuron) that implements CPD on them.


Online CUSUM CPD algorithms exhibit many salient features of neuron models such as the leaky integrate-and-fire (LIF) neuron (Ratnam et al., 2003). Its likelihood ratio resembles the LIF neuron's membrane potential. Its reflecting barrier resembles the LIF neuron's resting potential. CUSUM and the LIF neuron both have a fixed threshold. CUSUM's likelihood ratio and the LIF neuron's membrane potential are both reset to their respective lower barriers once their thresholds are crossed. Instantaneous increases in the likelihood ratio given input spikes resemble excitatory synaptic inputs to the LIF neuron. The decay of the likelihood ratio between input spikes resembles the leak current of the LIF neuron. The output of the CUSUM algorithm, change-point assertions, can be represented as an output spike train from the LIF neuron (output, Figure 1, bottom).

CPD algorithms have been used in neuroscience studies to detect abrupt changes in spike train statistics (Bélisle et al., 1998; Mosqueiro et al., 2016; Ritov et al., 2002; Pillow et al., 2011; Koepcke et al., 2016; Johnson et al., 2003; Malladi et al., 2013; Xu et al., 1999). But only a handful of studies have suggested that spiking neurons themselves are hardware implementations of a CPD algorithm (Kim et al., 2012; Monk & van Schaik, 2020). Yu (2006) drew parallels between the mathematical structure of a Bayes-optimal CPD algorithm and the dynamics of the LIF neuron. Denève (2008) considered a Markov transition model that resembles a CPD problem and showed a relationship between the log likelihood of that model and the membrane potential of the LIF neuron. A key result of Denève's work is the recursive nature of her neural computation model, where each layer of neurons processes inputs from its afferent layer with the same algorithm. Ratnam et al. (2003) explored similarities between the CUSUM algorithm and the LIF neuron and found that the LIF neuron approximates CUSUM in terms of mean detection delays. None of these studies demonstrated that the LIF neuron performs exact, online CUSUM CPD given input spikes with specific statistical properties.

This article shows that the LIF neuron implements minimax-optimal CUSUM CPD for a family of compound Poisson processes. We evaluate the CPD performance of the LIF neuron in various regions of its parameter space and interpret salient features of the LIF neuron in a rigorous statistical context. We demonstrate that CUSUM CPD algorithms can be recursive, that is, the output of an LIF neuron can be used as input to another LIF neuron that implements CPD on it. Then we simulate a feedforward LIF neural network whose layers recursively perform CUSUM CPD on their afferent layers. That network quickly detects tiny differences in sensory input spiking rates with very few false positives. This result demonstrates that simple networks of thresholded biological cells offer a principled solution to the pressing problem of quickly and accurately detecting small changes in sensory neural spiking statistics. More generally, our work suggests that spiking neurons are hardware implementations of an online CPD algorithm. If true, then the electrophysiological properties of a neuron’s membrane (e.g., its time constant) must be related to spiking statistics of its inputs. We show a simple example of this relationship for the time constant of the LIF neuron and its input spiking rates. Our work also suggests that a neuron’s action potentials are its assertions that it has detected some change in its inputs at that moment in time. Neurons might not be noisy devices whose action potentials must be averaged over time or populations (Faisal, 2010; Gerstner & Kistler, 2002; Monk et al., 2024). Instead they might be analog computational units that implement sophisticated and statistically optimal algorithms on their inputs in real time.

All figures can be reproduced by the Jupyter Notebook submitted as supplementary material.

2.1  The LIF Neuron as an Online CUSUM CPD Algorithm

2.1.1  CUSUM for Standard Poisson Processes

Consider an afferent sensory neuron firing Poisson-distributed spikes as shown in the top raster plot of Figure 2. At some point, the input spike rate changes from λ1 (blue spike train) to a higher value λ2 (red spike train). The goal of a CPD algorithm is to detect when the input spike rate changes (i.e., the change point, vertical dotted line, Figure 2).
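This setup is easy to simulate. Below is a small sketch (our own helper, with the figure's example rates); for simplicity, the one interspike interval that straddles the change point is drawn at the pre-change rate.

```python
import random

def poisson_train_with_change(lam1, lam2, t_change, t_end, seed=0):
    """Sample spike times whose Poisson rate switches from lam1 to lam2
    at t_change (the change point) by drawing exponential interspike
    intervals.  The interval straddling the change point is drawn at the
    pre-change rate, a simplification that is harmless for illustration."""
    rng = random.Random(seed)
    spikes, t = [], 0.0
    while t < t_end:
        t += rng.expovariate(lam1 if t < t_change else lam2)
        if t < t_end:
            spikes.append(t)
    return spikes

# Low-rate spikes before t = 10 s, high-rate spikes after it:
spikes = poisson_train_with_change(lam1=0.5, lam2=3.0, t_change=10.0, t_end=20.0)
```

With these rates, roughly 5 spikes are expected before the change point and roughly 30 after it, which is the asymmetry a CPD algorithm must pick up on.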

Figure 2:

Comparison of jump sizes of the likelihood ratio for simple and compound Poisson processes. Top raster plot: Example CPD problem on Poisson inputs. Blue spikes denote Poisson arrivals preceding the change point (at t=10 s, vertical dotted line) at a low rate, λ1=0.5 Hz. Red spikes denote Poisson arrivals after the change point at a higher rate, λ2=3 Hz. The goal is to detect the change in rates as we observe arrivals. Main plot: The CUSUM algorithm solution to this CPD problem. L(t) is shown for the simple (pink trace) and a particular compound (green trace) Poisson process given the top raster plot as input. The jump sizes of L(t) upon input spike arrivals are reported by the colored numbers. Notice that L(t) for this compound Poisson process has neurally plausible constant jump sizes (green numbers), while L(t) for the simple Poisson process does not (magenta numbers). Bottom raster plots: The change points generated by the CUSUM algorithm assuming a simple (magenta) or compound (green) Poisson process. Notice that CUSUM on the latter produces fewer outputs than on the former.

Let L(t) be the likelihood ratio of the input spikes under the assumption that their rate is λ1 or λ2, that is, whether or not the change point has occurred (Koepcke et al., 2016). For a standard Poisson process (SPP), L(t) is the ratio of two Poisson probability densities with rates λ1 and λ2:
$$L(t) = \left(\frac{\lambda_2}{\lambda_1}\right)^{z} e^{-(\lambda_2 - \lambda_1)t},$$
(2.1)
where z is the number of input spikes observed up to time t.
Figure 2 (main plot, pink trace) plots L(t) given the spikes in the top raster plot. When an afferent sensory spike arrives, L(t) is instantaneously scaled by the ratio λ2/λ1. In between arrivals, L(t) exponentially decays.

The CUSUM algorithm (pink trace, main plot of Figure 2) determines that a change point has occurred whenever L(t)>θ for some given threshold θ. These change points are visualized as output spikes in the lower pink raster plot. The likelihood L(t) is then reset to the lower bound, and the algorithm repeats.

We can analytically demonstrate similarities between CUSUM (for an SPP) and the LIF neuron. In the absence of input spikes, z does not change, and equation 2.1 is the solution to the first-order differential equation
$$\frac{dL(t)}{dt} = -(\lambda_2 - \lambda_1)\,L(t).$$
When an afferent sensory spike is observed at time $t_{z+1}$, z increases to z+1, and L instantaneously increases by a factor λ2/λ1:
$$L(t_{z+1}) = \frac{\lambda_2}{\lambda_1}\,L(t_{z+1}^-),$$
in càdlàg notation, with $L(t_{z+1}^-)$ denoting the value of L just before the spike at $t_{z+1}$ occurs. That jump can be implemented by driving the first-order differential equation by that factor when input spikes are observed:
$$\frac{dL(t)}{dt} = -(\lambda_2 - \lambda_1)\,L(t) + \left(\frac{\lambda_2}{\lambda_1} - 1\right) L(t^-) \sum_{i} \delta(t - t_i),$$
(2.2)
where δ is the Dirac delta function and $t_i$ is the time of the ith input spike. Writing the differential equation in this form allows us to see the similarity of equation 2.2 to the membrane potential V(t) of an LIF neuron (Burkitt, 2006):
$$\frac{dV(t)}{dt} = -\frac{V(t)}{\tau} + w \sum_{i} \delta(t - t_i),$$
(2.3)

where τ is the LIF neuron's time constant and w ∈ ℝ its synaptic weight.
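Because equation 2.3 can be integrated exactly between input spikes, the LIF neuron admits a compact event-driven simulation. The sketch below uses our own naming and illustrative parameter values.

```python
import math

def lif_output_spikes(input_times, tau, w, theta):
    """Event-driven LIF neuron (equation 2.3): V decays exponentially with
    time constant tau between input spikes, jumps by the synaptic weight w
    at each input, and emits an output spike (then resets to the resting
    potential 0) whenever V crosses the threshold theta."""
    V, t_prev, out = 0.0, 0.0, []
    for t in input_times:
        V *= math.exp(-(t - t_prev) / tau)  # leak between inputs
        V += w                              # excitatory synaptic jump
        if V > theta:
            out.append(t)                   # output spike
            V = 0.0                         # reset to resting potential
        t_prev = t
    return out

# With tau = 0.4 s, two inputs 100 ms apart push V over the threshold:
print(lif_output_spikes([0.1, 0.2, 1.5], tau=0.4, w=10.0, theta=13.3))  # prints [0.2]
```

As with the CUSUM recursion, threshold crossings can only occur at input arrivals, so no fixed-step numerical integration is needed.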

The likelihood ratio L(t) of an SPP does not quite exhibit the same dynamics as the membrane potential of an LIF neuron. For an SPP, the jump sizes of L(t) at input spike arrival times are proportional to the value of L(t) immediately preceding the arrivals (magenta numbers, Figure 2). So when the likelihood ratio L(t) is high at the time of input spike arrival, jump sizes are high, and vice versa (magenta trace and numbers, Figure 2). In an LIF neuron, jump sizes are a constant value w (see the right-hand sides of equations 2.2 and 2.3).

2.1.2  CUSUM for Compound Poisson Processes

Now we show that L(t) for a certain family of compound Poisson processes (CPP) exhibits the same dynamics as the membrane potential of an LIF neuron.

A CPP is a generalization of the standard Poisson process that allows non-unit jump sizes. Jump sizes can be given by a deterministic function or can even be realizations of independent random variables (Daley et al., 2003). In a neural context, we interpret jump sizes as “importance weights” attached to sensory spikes. Although sensory spikes are modeled here as Poisson distributed, it does not necessarily follow that all sensory spikes should be equally indicative that a change point has occurred. For example, if the likelihood ratio is high when a sensory spike arrives (i.e., if a change point is more likely to have occurred), then perhaps an animal should weight that arrival more heavily (and vice versa).

Consider again a CPD problem where the rate of a Poisson process changes from λ1 to λ2 at some unknown time. But now let c1(t) and c2(t) be jump size functions for the process pre- and post–change point, respectively. The form of L(t) for a CPP can be derived by constructing Janossy densities and relating them to the likelihood ratio (see definitions 7.1.II and 7.1.III in Daley et al., 2003):
(2.4)
where ti is the time of the ith spike. Let c1(ti) and c2(ti) have the form
(2.5)
An algorithmic implementation of L(t) is given by
The forms of L, c1, and c2 are well behaved even in the limit of no input spikes. The jump functions c1 and c2 are Markovian because they depend only on the value of L just before a spike arrives. When L crosses the threshold θ, it is reset infinitesimally close to 0 (i.e., to ε) and the algorithm continues. The differential equation that L(t) obeys for the CPP is now
$$\frac{dL(t)}{dt} = -(\lambda_2 - \lambda_1)\,L(t) + w \sum_{i} \delta(t - t_i).$$
(2.6)
If we define the LIF neuron's time constant to be
$$\tau = \frac{1}{\lambda_2 - \lambda_1},$$
(2.7)
then equations 2.6 and 2.3 are identical. Therefore, V(t) of the LIF neuron is proportional to L(t) given by equation 2.5.
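This identity can also be checked numerically: integrating the likelihood-ratio dynamics (decay rate λ2 − λ1, additive jumps of size w) and the LIF membrane equation with τ set by equation 2.7 over the same spike train gives the same trajectory. A minimal sketch with arbitrary example values; thresholds and the reflecting barrier are omitted.

```python
import math

def trajectories(spikes, lam1, lam2, w):
    """Integrate equation 2.6 (decay at rate lam2 - lam1, additive jumps of
    size w) and equation 2.3 (decay with time constant tau from equation 2.7)
    over the same input spikes; return (L, V) just after the final spike."""
    tau = 1.0 / (lam2 - lam1)  # equation 2.7
    L = V = 0.0
    t_prev = 0.0
    for t in spikes:
        dt = t - t_prev
        L = L * math.exp(-(lam2 - lam1) * dt) + w  # equation 2.6
        V = V * math.exp(-dt / tau) + w            # equation 2.3
        t_prev = t
    return L, V

L, V = trajectories([0.1, 0.5, 0.6], lam1=0.5, lam2=3.0, w=10.0)
assert abs(L - V) < 1e-12  # identical trajectories
```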

Conventionally, a neuron’s membrane time constant is considered an intrinsic electrophysiological property of the membrane (Burkitt, 2006). It is determined by the membrane’s resistance and capacitance. In a CPD framework, that time constant is a function of that neuron’s input spiking statistics (λ1 and λ2). For an LIF neuron to implement CPD as we describe here, the value of those two time constants must be equal (see equation 2.7). If they differ, then an LIF neuron will not achieve minimax-optimal CPD results on our CPP (Shiryaev, 1996; Moustakides et al., 2009). In principle, neurons could adapt their membrane time constant by altering their membrane leak resistance. Calcium influx from input spikes could regulate the expression of ion channels that determine leak resistance (Johnson et al., 1997; Jethanandani et al., 2023; Puri, 2020). For example, an increase in calcium levels might downregulate specific leak channels, thereby decreasing membrane resistance on fast timescales (Rubin et al., 2005; Inglebert et al., 2020). This dynamic adjustment would allow the neuron to “learn” a time constant that satisfies equation 2.7. Experimental measurements in vivo have shown that a neuron’s membrane time constant is inversely related to its input spiking rate (see Figure 5 in Koch et al., 1996) and section 6.2 in Burkitt (2006), even for electrically passive neurons. We are proposing a theoretical explanation for why this relationship exists.

Figure 3:

The LIF neuron achieves CPD performance similar to CUSUM without requiring an artificially imposed reflecting barrier. The top raster plot is analogous to Figure 2. Main plot: L(t) (see equation 2.6) plotted with a reflecting barrier at 1 (i.e., the CUSUM algorithm, red trace) or without it (i.e., V(t) of the LIF neuron, green trace). Analogous to Figure 2, threshold crossings (θ=13.3, horizontal dotted line) are assertions of a change point and the process resets and repeats. The inset plot is a magnified image of the dashed black box below it. Notice that the LIF neuron can decay below 1 while CUSUM cannot (due to the presence of the reflecting barrier). This discrepancy occasionally leads to differences in their threshold crossing times (yellow arrows 1–3). Bottom raster plot: Output change-point assertions, colored according to the main plot legend. Notice the occasional differences in the outputs of CUSUM CPD and the LIF neuron due to the presence or absence of the reflecting barrier.

Figure 4:

The LIF neuron and CUSUM exhibit very similar CPD performances when V(t) and L(t) rarely decay to low values. False alarm (left column) and detection delay (right column) waiting time histograms of the LIF neuron (green) and CUSUM algorithms (red), obtained from 10,000 independent simulations and plotted as probability densities. To facilitate fair comparison, we account for the presence or absence of the reflecting barrier by relating the CUSUM threshold θC and LIF neuron threshold θL via θC = θL+ 1. When the synaptic weight is high (first row), the CPD performances of the LIF neuron and CUSUM are practically identical. Given a small difference in input rates (second row), CPD performances are again almost identical. When the synaptic weight is low but the difference in input rates is larger (third row), we observe a significant disparity between the waiting time histograms. By altering θL (fourth row), that disparity in the third row is reduced. Collectively, these plots show that the CPD performances of the LIF neuron and CUSUM algorithm are virtually indistinguishable when V(t) and L(t) rarely decay to low values. Given low w and a larger difference in input rates, L(t) and V(t) decay more strongly to low values (see equation 2.7). In this region of parameter space, L(t) of CUSUM can lie at its lower bound of 1, but V(t) of the LIF neuron never quite reaches its lower bound of 0, so the LIF neuron has to traverse a shorter distance than CUSUM to reach the threshold. By slightly increasing θL (see the third and fourth rows), we partially offset that shorter distance and CPD performance disparity is reduced. All histograms are overlaid with exponential distributions (solid curves) scaled to their means (colored numbers and vertical dashed lines). Notice that the exponential curves are a reasonable fit for the waiting time distributions in some regions of parameter space.

Figure 5:

Quantifying the LIF neuron’s trade-off between false alarms and detection delays in different regions of parameter space. We computed false alarm (left column) and detection delay (right column) waiting time histograms of the LIF neuron with different parameter values. Each histogram presents results from 10,000 independent simulations. Top row: The threshold θ is varied while the input rates and synaptic weights are kept constant. Lower thresholds result in lower detection delays but more frequent false alarms and vice versa. Middle row: The synaptic weight is varied while input rates and thresholds are kept constant. Higher synaptic weights cause faster threshold crossings, resulting in lower detection delays but higher false alarms. Bottom row: Input rate pairs λ1 and λ2 are increased while the threshold and synaptic weight are kept constant. A higher difference between the input rates implies a faster decay of V(t) between input spike arrivals, which causes longer waiting times. But under a fixed threshold, we see that higher input rate pairs (e.g., λ1=4 Hz and λ2=12 Hz) exhibit lower waiting times than that of lower input rate pairs (like λ1=1 Hz and λ2=3 Hz). This observation implies that the effect of more frequent input spikes is greater than the higher decay between input spikes when all other parameters are held constant. The values of the parameters held constant in each row are reported in the left column. The values of varied parameters are reported in the legends in the right column.


The jump-size functions c1(t) and c2(t) have a direct interpretation. The form of c1(t) (see equation 2.5, left) implies that the importance of an input spike depends on the value of L(t), that is, on how likely it is that a change point has occurred. The more likely a change point has occurred, the more importance we attribute to new input spikes, and vice versa. The form of c2(t) (see equation 2.5, right) implies that input spikes arriving after the change point should have a higher importance attributed to them than those that arrive before. The LIF neuron's synaptic weight w specifies this discrepancy in importance before and after the change point. Of course, the LIF neuron itself does not know whether a change point occurred. But we are free to define pre- and post–change point CPPs whose jump size functions attribute importance to sensory spikes as described by equation 2.5. These particular forms of jump size functions lead to a constant jump height for L(t). The LIF neuron itself does not attribute importance to sensory spikes, but it evaluates evidence for a change point on underlying CPPs that do.

This analysis extends naturally to n independent and identical sensory neurons sending input spikes to one downstream LIF neuron. The sum of n independent and identical Poisson-distributed inputs, each with rate λ, is itself Poisson distributed with rate nλ. If we set
$$\tau = \frac{1}{n(\lambda_2 - \lambda_1)},$$
(2.8)
then V(t) of the LIF neuron is proportional to L(t) for CPPs defined by equation 2.5 given n independent and identical input sensory neurons.
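The superposition property underlying this extension is easy to verify by simulation. The sketch below (our own helper, with arbitrary example values) merges n independent unit-rate trains; the merged spike count behaves like that of a single Poisson train of rate nλ.

```python
import random

def merged_poisson_trains(n, lam, t_end, seed=0):
    """Superpose n independent Poisson spike trains of rate lam.  The merged
    train is itself Poisson with rate n * lam, so a downstream LIF neuron can
    treat its n identical afferents as a single input, with the time constant
    of equation 2.8: tau = 1 / (n * (lam2 - lam1))."""
    rng = random.Random(seed)
    merged = []
    for _ in range(n):
        t = 0.0
        while True:
            t += rng.expovariate(lam)  # exponential interspike intervals
            if t >= t_end:
                break
            merged.append(t)
    return sorted(merged)

# Ten 1 Hz trains over 100 s: about 10 * 1 * 100 = 1000 merged spikes.
spikes = merged_poisson_trains(n=10, lam=1.0, t_end=100.0)
```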

Figure 2 (main plot, green trace) plots equation 2.6 for the same inputs from the top raster plot. CUSUM exhibits the same neural characteristics shown earlier for the SPP. L(t) exponentially decays when no inputs arrive and instantaneously jumps at each arrival. Those jumps are now arithmetic (as opposed to geometric for the SPP) and are of a constant size w (green arrows and numbers; w=10 in this example). The bottom raster plots in Figure 2 show CUSUM's change-point assertions for the SPP (pink) and CPP (green). The LIF neuron represents those change-point assertions as output spikes.

2.2  The Reflecting Barrier

The sole difference between CUSUM CPD (on a CPP with jump sizes defined by equation 2.5) and the LIF neuron is the presence of a lower bound for L(t). CUSUM requires L(t) ≥ 1 (Page, 1954) and enforces this constraint with a reflecting barrier at 1. The LIF neuron does not have such a reflecting barrier. Instead, its membrane potential is naturally bounded below by the LIF neuron's resting potential of 0.

Figure 3 shows the effect of the lower barrier’s presence or absence on CPD performance. Similarly to Figure 2, the top raster plot of Figure 3 presents an example CPD problem on Poisson-distributed spikes from an afferent sensory neuron. The solid red trace in Figure 3 (main plot) is L(t) (i.e., equation 2.6) with a lower bound at L(t)=1. The green trace plots V(t) for an LIF neuron given the same Poisson-distributed input spikes. The inset provides a magnified view of the boxed region in the lower-left corner of the main plot. It shows that the LIF neuron can decay below V(t)=1, but CUSUM is bound by L(t)=1. The green and red traces jump to different values when an afferent spike arrives if they had different values at the arrival time. Therefore their threshold crossing times can be different (see Figure 3, bottom raster plots). One such threshold crossing discrepancy is highlighted by yellow arrows in Figure 3. Arrow 1 indicates that the LIF neuron had decayed below V(t)=1 when an input spike arrived. Arrow 2 shows how that decay sometimes results in the LIF neuron not crossing its threshold given subsequent input spikes, whereas CUSUM does cross because it started slightly higher at L(t)=1. Arrow 3 shows that the algorithms’ outputs can therefore differ.

Figure 4 shows where in parameter space the presence or absence of the reflecting barrier affects the discrepancy between the CPD performances of CUSUM and the LIF neuron. CPD performances are quantified by their false alarm and detection delay waiting time distributions. The false alarm distribution (see Figure 4, left column) is the distribution of waiting times between threshold crossings before the change point occurs, that is, given input spikes at rate λ1 and jump sizes c1 (equation 2.5, left). The detection delay distribution (see Figure 4, right column) is analogous to the distribution of waiting times given that the change point occurred, that is, given λ2 and c2 (see equation 2.5, right). Collectively, these two distributions determine the sensitivity, specificity, and speed of a CPD algorithm. The histograms for the LIF neuron (green bars) and CUSUM (red bars) in Figure 4 are plotted as probability densities from 10,000 independent simulations of false alarms and detection delays. Each simulation begins with L(t) or V(t) on their reflecting barriers (1 or 0, respectively) and runs until threshold is crossed. The green and red numbers and vertical dashed lines mark the means of each histogram. The green and red traces are exponential distributions with those means. To ensure a fair comparison of threshold crossing times, the threshold for CUSUM (θC) is one unit higher than that of the LIF neuron (θL) in the first, second, and third rows (their values are stated in the false alarm panels). So both algorithms have an equal effective distance between their reflecting barrier and their threshold.
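The waiting-time distributions described above can be estimated with a Monte Carlo sketch like the following. The parameter values here are illustrative, not the ones used in Figure 4, and `waiting_time` is a hypothetical helper that simulates a single run from the lower barrier to the first threshold crossing.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def waiting_time(rate, w, tau, theta, rng, t_max=1e4):
    """Time from rest (the lower barrier) to the first threshold crossing,
    driven by Poisson inputs at `rate`. The pre-change rate yields false
    alarm waiting times; the post-change rate yields detection delays."""
    x, t = 0.0, 0.0
    while t < t_max:
        dt = rng.exponential(1.0 / rate)  # next Poisson interarrival time
        t += dt
        x = x * math.exp(-dt / tau) + w   # decay between spikes, jump at spike
        if x >= theta:
            return t
    return t_max

lam1, lam2, w, theta = 2.0, 6.0, 2.0, 4.5   # illustrative parameters
tau = 1.0 / (lam2 - lam1)
fa = [waiting_time(lam1, w, tau, theta, rng) for _ in range(1000)]
dd = [waiting_time(lam2, w, tau, theta, rng) for _ in range(1000)]
mean_fa, mean_dd = float(np.mean(fa)), float(np.mean(dd))
# False alarms should be much rarer than detections: mean_fa >> mean_dd.
```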

The first two rows of Figure 4 show that the CUSUM algorithm and LIF neuron implement CPD indistinguishably when L(t) and V(t) spend little time near their reflecting barriers of 1 and 0, respectively. The first row of Figure 4 shows that the CPD performances of the LIF neuron and the CUSUM algorithm are almost identical when w is high. When w is high, L(t) and V(t) are more likely to occupy values above their reflecting barriers. So the presence or absence of the reflecting barrier does not appreciably affect threshold crossing times. The second row of Figure 4 shows that the CPD performances of the LIF neuron and the CUSUM algorithm are again almost identical when the input rates are approximately equal. When λ1 ≈ λ2, the rate of decay of L(t) and V(t) between input spikes is low (see equation 2.7). With slow decay between input spikes, L(t) and V(t) are again more likely to assume values above their reflecting barriers.

The third row of Figure 4 shows that CPD performances of the LIF neuron and CUSUM can differ in other regions of parameter space. When w is low and the rate of decay is high (i.e., λ2 ≫ λ1), L(t) and V(t) are more likely to decay to values near their reflecting barriers. L(t) of the CUSUM algorithm frequently occupies its lower bound of 1. But V(t) of the LIF neuron never quite reaches its lower bound of 0. So the LIF neuron maintains a shorter distance to its threshold compared to CUSUM and thus crosses its threshold more quickly. The third row of Figure 4 shows that the LIF neuron detects change points much faster than the CUSUM algorithm, but at the cost of more frequent false alarms. The fourth row of Figure 4 shows one way to reduce this disparity in CPD performance. We increased θL from 2.5 to 3, keeping all other parameters the same as they were in the third row. This increase reduced false alarms and increased detection delays for the LIF neuron. Its false alarm distribution becomes almost identical to that of CUSUM (fourth row, left panel). The detection delay distributions still exhibit a disparity (fourth row, right panel), albeit less than they did in the third row. The LIF neuron can detect a change point noticeably faster than CUSUM on average and at a similar false alarm rate (on our CPP, and in this region of parameter space).

2.3  The Performance of an LIF Neuron as a Change Point Detector

Figure 5 evaluates the CPD performance of the LIF neuron for the CPP (see equations 2.4 and 2.5) in different regions of parameter space. Again, we quantified CPD performance with false alarm and detection delay waiting time histograms obtained via simulation. Each histogram in Figure 5 was plotted as a probability density from 10,000 independent simulations, analogous to the histograms in Figure 4.

In the top row of Figure 5, we varied the threshold θ while keeping the input rates and the synaptic weight constant (λ1=2Hz, λ2=6Hz, w=2). θ increased from 4.5 to 5.5 in three equally spaced steps (blue, purple, and yellow bars, respectively). We see that increasing θ reduces false alarm frequency at the cost of higher detection delays, because V(t) needs more input spikes to reach the threshold. Random bursts of inputs are less likely to cause the LIF neuron to falsely assert a change point, that is, false alarm waiting times are more likely to be higher. But the LIF neuron also requires more time to accumulate enough evidence to spike after the change point, that is, detection delays are also more likely to be higher.

In Figure 5 (middle row), we varied the synaptic weight while keeping the input rates and threshold constant. w increased from 1.5 to 2.5 in three equally spaced steps (red, yellow, and green bars, respectively). We see that increasing the synaptic weight increases the false alarm frequency and lowers the detection delay. A higher synaptic weight allows V(t) to achieve threshold with fewer inputs. Notice that increasing the synaptic weight has an analogous effect on CPD performance as decreasing the threshold.

In the bottom row, we varied the input rates (dark red, gray, and dark green bars) while keeping the synaptic weight and threshold constant. Notice that as both input rates increase, the difference between them increases as well (lower-right panel’s legend, Figure 5). Increasing input rates should reduce the LIF neuron’s waiting time to threshold crossing both before and after the change point, all else being equal. However, increasing the difference between input rates should increase those waiting times because the time constant is smaller (see equation 2.7), so the membrane potential decays faster between input spike arrivals. The bottom row of Figure 5 shows that higher input rates reduce waiting times more than a higher difference between input rates increases them, at least for these parameter values. This observation holds for both false alarm and detection delay waiting time histograms. The bottom row of Figure 5 can also be interpreted as illustrating the effect of the LIF neuron receiving input spikes from different numbers of independent and identical input neurons. The sum of Poisson processes remains a Poisson process with a rate equal to the sum of the rates of the summands (Pishro-Nik, 2014). Say the red bars are false alarm or detection delay waiting times of an LIF neuron with one input neuron spiking at a rate of λ1=1 Hz or λ2=3 Hz. The gray and dark green bars then show how those waiting times change if we have two or four independent and identical input neurons feeding input spikes to one LIF neuron, respectively. The bottom row of Figure 5 shows that increasing the number of input neurons decreases detection delays at the cost of increasing false alarm frequency.

2.4  The LIF Neuron Amplifies the Difference of Its Input Rates

Figure 6 examines the effect of the threshold on the average LIF neuron false alarm (FA¯) and detection delay (DD¯) waiting times. It explores trends in these average waiting times across different percent differences between the input rates λ1 and λ2. Figure 6 shows that the LIF neuron can increase the percent difference of its output rates relative to its input rates. This relative gain would be computationally useful if the output spikes of that LIF neuron were used as input to another CPD algorithm (i.e., another LIF neuron downstream). That downstream LIF neuron could more quickly and reliably detect a change point because its input rates exhibit a larger percent difference than the inputs of the upstream LIF neuron did. Figure 6 shows that this gain in percent difference of input and output rates is maximized at a certain threshold value (θmax), depending on the specific values of the input rates.

Figure 6:

The LIF neuron increases the pre- and post-change-point percent difference of its output rates relative to its input rates. The percent difference between the mean false alarm (FA¯) and detection delay (DD¯) waiting times for an LIF neuron are plotted against a range of the values of the threshold θ on the x-axis. Blue, beige, and green classes represent a very small (2 Hz → 2.1 Hz, i.e., 5% difference), small (2 Hz → 2.4 Hz, i.e., 20% difference), and large (2 Hz → 6 Hz, i.e., 200% difference) input rate change point, respectively. Dark-tone, mid-tone, and light-tone traces in each color represent 1, 100, and 1000 independent and identical sensory neurons, each spiking at rates indicated by their class, feeding inputs to the LIF neuron. In each trace, the percent difference between FA¯ and DD¯ is maximized for a particular value of the threshold (θmax). Blue, beige, and green numbers in the tables report FA¯ and DD¯ at θmax, colored accordingly. In all traces, θmax and the percent difference between FA¯ and DD¯ at that threshold value increase as the number of input sensory neurons increases. The inset further explores this trend (top right). FA¯ and DD¯ are plotted against orders of magnitude of the number of input neurons (from 1 to 106 input neurons). We fixed the LIF neuron threshold (θ=31.29, θmax of the light-tone blue trace) and the input rates of each input neuron (2 Hz → 2.1 Hz). FA¯ (diamond markers, gray) and DD¯ (cross marks, red) are plotted, and the blue numbers report the percent difference between them. Observe the decline of FA¯ and DD¯ and the increase of the percent difference between them as the number of input neurons increases.

The percent difference between input rates is given by
Δin = 100 × (λ2 − λ1)/λ1.  (2.9)
We considered three classes of percent differences between the input rates before and after the change point (λ1 and λ2), represented by three colors. Green traces represent a 200% difference between input rates (2 Hz → 6 Hz). Beige traces represent a 20% difference between input rates (2 Hz → 2.4 Hz). Blue traces represent a 5% difference between input rates (2 Hz → 2.1 Hz). For each class, we varied the threshold θ in increments of 0.01 units (x-axis, Figure 6). For each value of θ, we ran 5000 independent simulations and stored false alarm and detection delay waiting times of the LIF neuron (see equations 2.4 and 2.5) given certain input rates (legend, Figure 6). We computed FA¯ and DD¯ from those simulation results. Then the reciprocals of FA¯ and DD¯ are the mean output rates of the LIF neuron. So the percent difference of the LIF neuron output rates is
Δout = 100 × (1/DD¯ − 1/FA¯)/(1/FA¯).  (2.10)
The y-axis of Figure 6 represents Δout on a logarithmic scale. We see that Δout>Δin for the LIF neuron over a range of threshold values. We refer to this increase from input to output percent difference as the LIF neuron’s “gain.”
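The gain computation amounts to the following small calculation. The mean waiting times below are hypothetical placeholders chosen for illustration, not simulation results from Figure 6; the formulas themselves follow equations 2.9 and 2.10.

```python
def pct_diff_in(lam1, lam2):
    """Percent difference between pre- and post-change input rates (eq. 2.9)."""
    return 100.0 * (lam2 - lam1) / lam1

def pct_diff_out(mean_fa, mean_dd):
    """Percent difference between the LIF neuron's output rates (eq. 2.10);
    the output rates are the reciprocals of the mean waiting times."""
    rate_pre, rate_post = 1.0 / mean_fa, 1.0 / mean_dd
    return 100.0 * (rate_post - rate_pre) / rate_pre

delta_in = pct_diff_in(2.0, 2.1)                     # 5% input difference
delta_out = pct_diff_out(mean_fa=10.0, mean_dd=2.0)  # hypothetical means: 400%
# delta_out > delta_in: the neuron amplified the percent difference.
```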

In each color class, we also studied the effect of increasing the number of independent and identical input neurons on Δout. In section 2.3, we saw that increasing the number of independent and identical input neurons is equivalent to increasing the input rates to the LIF neuron multiplicatively. For example, if λ1=2 Hz and λ2=6 Hz, then the input spikes from 100 independent and identical sensory neurons are equivalent to one sensory neuron spiking at 200 Hz or 600 Hz. Figure 6 reports the gain of the LIF neuron across θ given 1 (dark-tone traces), 100 (mid-tone traces), and 1000 (light-tone traces) independent and identical sensory neuron afferents. The legend above Figure 6 reports the input rates for each trace in the main plot.

In the green class (top left, Figure 6 main plot), observe that Δout increases from Δin=200% to a maximum value as θ is increased up to θmax. As the threshold increases to θmax, the frequency of false alarms reduces because V(t) requires more input spikes in a given time window to reach its threshold. A higher threshold also increases the detection delay, but less than it increases false alarm waiting times (because λ2>λ1). Collectively, Δout increases as the threshold increases to θmax. As θ is increased past θmax, we see a sharp decline in Δout. While false alarms continue to decrease as θ increases, detection delays also significantly rise. We see that Δout returns to 200% and eventually becomes negligible with respect to Δin (so the gain approaches zero). This trend is consistent across different numbers of input neurons (medium-tone and light-tone green traces). As the number of input neurons increases, the maximum possible gain also increases and is achieved at a higher value of θmax. With an increased number of input neurons, false alarms become more frequent and require higher thresholds to filter them out. The value of θmax for each trace is stated in the figure (colored dashed lines, bottom). The values of FA¯ and DD¯ at θmax for each trace are also stated alongside the traces, colored accordingly.

The beige class (middle left, Figure 6) demonstrates similar trends to the green class. One key difference between the green and beige classes is that θmax is larger in the beige class. Since the difference between input rates is smaller in the beige class, the time constant of the LIF neuron is larger (see equation 2.7). Therefore the decay of its membrane potential is slower, and the LIF neuron requires a higher threshold to filter out the same number of false alarms and achieve a higher gain, all else being equal. Another key difference between the classes is that the gains in the beige class are smaller than they were for the green class. CPD amplifies the percent difference more when the input rates already exhibit a high percentage difference. Raising the threshold reduces false alarms and increases detection delays, but these effects depend on the difference between input rates. When the difference between input rates is high (e.g., the green class), raising the threshold quickly filters out false alarms without significantly affecting detection delay. The gain of the LIF neuron increases quickly as we raise the threshold. But when the difference between input rates is lower (e.g., the beige class), raising the threshold filters false alarms and delays detection more evenly. The gain of the LIF neuron increases more slowly as we raise the threshold.

The blue class (bottom, Figure 6) maximizes Δout at even higher values of θmax. This result follows from the same reasoning as for the beige class, but with an even smaller difference between input rates. Notice that as Δin decreases (from the green to beige to blue classes), the maximum possible gain at θmax also decreases. For example, consider the medium-tone traces in each class that represent 100 independent and identical input neurons to the LIF neuron. The maximum gains of those three traces are about 7500% for the green class, about 200% for the beige class, and about 60% for the blue class. The smaller the percent difference in input signals, the less gain the LIF neuron can achieve for a fixed number of input neurons. But the dark-tone, mid-tone, and light-tone traces in each class suggest that we can amplify the gain by connecting more independent and identical input neuron afferents to the LIF neuron.

The inset in Figure 6 (top right) reports the effect of increasing the number of independent and identical input neurons, each with a 5% difference in input rates, on FA¯ and DD¯ of an LIF neuron. The x-axis represents the number of input neurons (from 1 to 106), and the y-axis represents the mean waiting times on a log-log scale (FA¯ in gray, DD¯ in red). θ was fixed at 31.29 (i.e., θmax of the light-tone blue trace). Δout is stated in blue above each pair of FA¯ and DD¯. Notice that the mean waiting times drastically reduce, and Δout increases with the number of input sensory neurons. The inset in Figure 6 suggests that the LIF neuron can amplify very small differences in input rates given enough input neurons. More input neurons expedite detection of a change point at the cost of causing more frequent false alarms. In principle, false alarms can be filtered out by passing the output of the LIF neuron into another one downstream that runs CPD on it. So it might be possible for a network of LIF neuron layers to implement quick CPD on very small changes in input rates with very few false alarms.

2.5  Feedforward LIF Neural Networks Can Achieve Quick, Reliable CPD on Small Shifts in Input Spiking Rates

Figure 7 shows that a feedforward LIF neural network can quickly detect tiny changes in Poisson rates with very rare false alarms. Each layer of the network increases Δout relative to Δin, as observed in section 2.4. By iterating this computation, Δout grows as we propagate down layers of the network. Δout of the deciding node (i.e., the bottom of the network in Figure 7) can be very high. Therefore this simple feedforward network can implement quick yet reliable CPD, even if Δin of the sensory input layer is tiny.

Figure 7:

A feedforward LIF neural network can quickly and reliably detect very small changes in Poisson input spiking rates given enough neurons in the network. Left: Schematic of a feedforward LIF neural network receiving inputs from a sensory layer (deep blue neurons, top). It comprises six hidden layers (red through gray neurons) and a deciding LIF neuron node (green neuron, bottom). Each neuron receives inputs from 10 neurons in the layer above it, with a synaptic weight of 1. The black numbers to the left state the number of neurons in each layer. Right: Neurons in the sensory layer spike with a Poisson rate of 2 Hz or 2.1 Hz before or after the change point, respectively (blue numbers, top row, right column). The plots show false alarm (FA, left column) and detection delay (DD, right column) waiting time histograms for each neuron in a given layer. Similar to Figure 4, each histogram is fitted with an exponential curve (solid green and red traces) scaled to the mean false alarm (FA¯, green numbers) or mean detection delay (DD¯, red numbers) waiting times. These distributions show that the outputs of each layer are approximately Poisson distributed. Therefore, those outputs can be input into another layer of LIF neurons that run CPD on them. The green numbers at the bottom report FA¯ and DD¯ for the deciding node. The network detects small changes in sensory rates within 20 ms, with false alarms occurring only once every 400,000 seconds (i.e., 4 days) on average.


The network illustrated in Figure 7 (left) comprises eight layers. The first layer (in blue) is a sensory input layer that generates Poisson inputs at 2 Hz before a change point and 2.1 Hz after it. Δin of each sensory neuron is very small. Following the sensory layer are six hidden layers of LIF neurons (indicated by the yellow bar at the left of Figure 7). The number of neurons in each layer is shown on the left (bold, black numbers). Every neuron, except those in the sensory layer, receives input from 10 neurons in the previous layer. Every input has a synaptic weight (w, in equation 2.5) of 1. A neuron in an upstream layer feeds spikes to only one neuron in the downstream layer. So the spiking outputs of all neurons in a layer are conditionally independent of each other. The thresholds θ for each layer are shown on the right. Alongside each layer, we show the false alarm and detection delay waiting time distributions of any individual neuron in that layer (right side of Figure 7, panels). Each distribution is fitted with an exponential curve (solid green and red traces) scaled to the mean of that distribution (FA¯ and DD¯). The input rates for the next layer are the reciprocals of those mean waiting times from the layer above.

The exponential curves fitted to the waiting time distributions (see Figure 7, right panels) show that the outputs of each LIF neuron in a layer are approximately Poisson distributed. Recall that in Figure 4 (second row), the exponential curve did not accurately fit the output waiting time distributions given small differences in input rates λ1 and λ2. The plots in Figure 7 (right side) show that the goodness of fit of the exponential distribution to waiting time distributions can be improved by increasing the number of afferents and the threshold of the LIF neuron. When the waiting times to output spikes of an LIF neuron are accurately approximated by an exponential distribution, CPD can be implemented recursively through layers in a network. The plots in Figure 7 show that as we propagate down layers in the network, FA¯s and DD¯s increase for each neuron in that layer. We then expedite detection by connecting each neuron in a layer to 10 afferents upstream. The final “deciding node” LIF neuron (green neuron and green numbers, bottom) detects a 5% change in input rates within approximately 20 ms on average, with false alarms occurring every 400,000 seconds, or approximately 4 days, on average. This LIF neuron network can swiftly, yet reliably, detect tiny changes in input rates.
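The layer-by-layer recursion can be sketched as follows. The thresholds, weight, rates, and simulation counts here are assumed for illustration (they are not the tuned values of the network in Figure 7), and `layer_output_rates` is a hypothetical helper: it pools afferents by summing their rates, simulates one neuron's threshold crossings, and passes the reciprocal mean waiting times to the next layer as its input rates.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def crossing_time(rate, w, tau, theta, rng, t_max=200.0):
    """First threshold crossing of an LIF neuron from rest, given Poisson
    inputs at `rate` (capped at t_max to keep the sketch bounded)."""
    x, t = 0.0, 0.0
    while t < t_max:
        dt = rng.exponential(1.0 / rate)
        t += dt
        x = x * math.exp(-dt / tau) + w
        if x >= theta:
            return t
    return t_max

def layer_output_rates(lam1, lam2, n_aff, theta, w=1.0, n_sim=300):
    """One layer of the recursion: each neuron pools n_aff afferents (input
    rates sum), and its output rates are the reciprocal mean waiting times."""
    r1, r2 = n_aff * lam1, n_aff * lam2
    tau = 1.0 / (r2 - r1)  # equation 2.7 applied to the pooled inputs
    mean_fa = np.mean([crossing_time(r1, w, tau, theta, rng) for _ in range(n_sim)])
    mean_dd = np.mean([crossing_time(r2, w, tau, theta, rng) for _ in range(n_sim)])
    return 1.0 / mean_fa, 1.0 / mean_dd  # (pre, post) rates for the next layer

lam1, lam2 = 2.0, 6.0               # sensory-layer rates (illustrative)
for _ in range(2):                  # two hidden layers of the recursion
    lam1, lam2 = layer_output_rates(lam1, lam2, n_aff=10, theta=3.0)
# After each layer, the post-change output rate still exceeds the pre-change
# rate, so the next layer can run CPD on these outputs.
```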

We established that the LIF neuron implements online CUSUM CPD for a specific family of compound Poisson processes. This result allows us to interpret salient neural features in a rigorous statistical framework, at least for the LIF neuron:

  • Afferent spikes are input for a CPD algorithm (see Figure 1, top raster plot) that detects changes in their statistical properties.

  • The voltage of the LIF neuron represents a likelihood ratio of those inputs (see Figure 1, main panel).

  • The resting potential of the LIF neuron is the reflecting barrier of the likelihood ratio.

  • Synaptic weights correspond to the change in jump heights of an underlying compound Poisson process before and after the change point.

  • The LIF neuron’s time constant determines the likelihood ratio’s rate of decay between input spikes.

  • Its threshold determines a speed-accuracy trade-off for change-point assertions.

  • Its output spikes (see Figure 1, bottom raster plot) are its assertions that it has detected a change point at that moment in time.

  • These output spikes can be input to another CPD algorithm, that is, CPD can be implemented recursively by successive layers of LIF neurons.

Many interpretations of neural action potentials have been proposed over the past half-century (Eggermont, 1998; Monk et al., 2024). There is no consensus on how nervous systems use them to represent the world and make decisions. The predominant interpretation of action potentials in computational neuroscience is that trains of them represent information about the environment via some cryptic code (Kumar et al., 2010; Knutsen & Ahissar, 2009; O’Keefe & Burgess, 2005). For example, rate coding suggests that the frequency of action potentials “encodes” stimulus information (Gautrais & Thorpe, 1998; McDonnell & Stocks, 2008). Interspike interval coding claims that the waiting times between action potentials “encode” stimulus information (Van Rullen & Thorpe, 2001; Johansson & Birznieks, 2004). Population coding argues that neurons are noisy, so they should only be studied at a systems or networks level (Saxena & Cunningham, 2019; Quian Quiroga & Panzeri, 2009). Phase coding asserts that the time of a neuron’s action potential relative to some background activity “encodes” stimulus information (Kayser et al., 2009). Our results suggest that each individual action potential is a neuron’s assertion that some spiking statistic(s) of its input neurons have changed at that moment in time. Statistics of a spike train (e.g., rate, phase, or waiting times) are important in the sense that animals need to detect when they change. But these statistics are not “encoding” anything (Brette, 2019). Instead, each action potential is the output of a neuron’s computation that it executed by comparing its membrane potential to a threshold.

Animals and their nervous systems can quickly and accurately detect small changes in sensory spiking behavior (Knudsen, 1981; Polis, 1979; Sourakov, 2009). For mathematical simplicity, we assumed that sensory spikes were Poisson distributed (Dayan & Abbott, 2005). For certain parameter values, the output spikes of an LIF neuron were also approximately Poisson distributed (see Figures 4 and 7). Therefore those output spikes can be inputs to another downstream LIF neuron, which implies that CPD algorithms can be implemented recursively. Successive layers of a feedforward network filter out false-positive change-point assertions from the layer above it, at the cost of increasing detection delay. But detection delay can be reduced by feeding inputs from many independent and identical afferent neurons to the same efferent neuron. Since the sum of independent and identical Poisson random variables is itself Poisson, that efferent neuron still implements CUSUM CPD on the sum of its inputs. As more independent afferent neurons are connected to the efferent neuron, the change in the sum of the input rates before and after the change point increases. All else being equal (e.g., the threshold), the false alarm rate and detection delays are reduced because a change point is easier to detect. Simple feedforward networks of thresholded biological cells are natural, principled solutions to the problem of quickly and reliably detecting changes in sensory spiking behavior.

CPD problems can be considered on more general input stochastic processes with a different likelihood ratio L(t). For example, L(t) for an SPP (see equation 2.1) includes an exponential decay term, which we mapped to the passive leak current of an LIF neuron. If the waiting time distribution for input spikes were inverse-Gaussian instead, then L(t) would decay differently between input spikes and would have a different update rule for when input spikes arrive. A different electrophysiological model of a neuron that performs online CUSUM CPD for this specific problem can thus be derived. This observation implies that CUSUM CPD can be a recursive algorithm even if the waiting time distributions of output and input spikes do not share the same form. Say our waiting time distributions for input spikes before and after a change point are the false alarm and detection delay waiting times of the inverse-Gaussian CPD neuron. Then we can construct a likelihood ratio and derive another neuron that receives the output of the inverse-Gaussian CPD neuron as input. We conjecture that nervous systems recursively implement CPD in this manner. If so, and if neural membrane potentials represent some likelihood ratio of their inputs, then a testable prediction immediately follows: the electrophysiological properties of an efferent neuron’s membrane must be related to the spiking statistics of its afferent inputs. We showed a simple example of this principle, specifically that the time constant of an LIF neuron is inversely related to the difference in its input rates before and after a change point (see equation 2.7). More generally, for a CPD problem that a given neuron needs to solve, we can predict its electrophysiological properties. Conversely, if we know the membrane electrophysiology of a neuron, then we can infer something about the CPD problem that it solves.

This relationship between afferent spiking statistics and efferent electrophysiology could be verified in at least two ways. First, the literature can be reviewed for examples of afferent-efferent neuron pairs whose spiking statistics and electrophysiological properties have been modeled in detail (Knudsen, 1981; Monk & van Schaik, 2020; Contreras et al., 2001; O'Keefe & Burgess, 2005; Villette et al., 2019; Broussard et al., 2014). Computational neuroscience has a long history of constructing both electrophysiological models of neurons (Koch, 1993; D'Angelo & Jirsa, 2022) and statistical models of their spiking behavior (Perkel et al., 1967; Gabbiani & Koch, 1998). We are unaware of theoretical explanations for why these models should be related for afferent-efferent neuron pairs connected by a synapse. For example, it is well known that barn owl auditory nerve fibers produce approximately Poisson-distributed spikes at rates up to roughly 100 Hz in silence and roughly 300 Hz given a pure-tone sound stimulus (Neubauer et al., 2009). It is also well known that one to five auditory nerve fibers feed spikes into nucleus magnocellularis efferent neurons (Carr & Boudreau, 1991; Oline et al., 2016) whose time constants range from 1 ms to 5 ms (Fukui & Ohmori, 2004; Gerstner et al., 1996; Fontaine & Brette, 2011). However, it has not been proposed that the time constants of nucleus magnocellularis neurons are related to the spiking rates and number of afferent auditory nerve fibers, that is, 1/(300 Hz - 100 Hz) = 5 ms and 1/(5 · (300 Hz - 100 Hz)) = 1 ms. Second, this relationship could be measured directly in slice preparations. Say we have a barrel cortex slice preparation from a mouse (Shlosberg et al., 2012; Buskila et al., 2013, 2014) and perform the experiment reported by Buskila and Amitai (2010). Namely, we identify a dendrite from layer 2/3 and insert a recording electrode into it, and separately drive the spiking of a neuron from layer 4 with a stimulating electrode. The stimulating electrode is then moved until the recording electrode observes driven postsynaptic potentials, establishing that the layer 4 and layer 2/3 neurons are synaptically connected. If we change the spiking statistics of the layer 4 neuron, then the electrophysiological properties of the layer 2/3 neuron's membrane should change (Buskila et al., 2013). For example, the spiking rate of the layer 4 neuron may be inversely related to the dendritic time constant of the layer 2/3 neuron, as equation 2.7 suggests.
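The back-of-the-envelope prediction above can be written as a one-line function. This is our own packaging of the inverse relationship in equation 2.7 (the function name and unit conventions are ours): the predicted membrane time constant is the reciprocal of the summed rate difference across afferents.

```python
def predicted_time_constant(rate_before_hz, rate_after_hz, n_afferents=1):
    """Predicted membrane time constant (seconds) from the inverse
    relationship of equation 2.7: tau = 1 / (n * (lam2 - lam1))."""
    return 1.0 / (n_afferents * (rate_after_hz - rate_before_hz))

# Barn owl auditory nerve fibers: ~100 Hz in silence, ~300 Hz for a tone.
tau_one = predicted_time_constant(100.0, 300.0, n_afferents=1)   # 0.005 s = 5 ms
tau_five = predicted_time_constant(100.0, 300.0, n_afferents=5)  # 0.001 s = 1 ms
```

These two values bracket the measured 1 ms to 5 ms range of nucleus magnocellularis time constants quoted above.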

We wonder whether any electrophysiological neuron model can implement a particular online CPD algorithm on its inputs. This article showed two examples of this principle: an LIF neuron with synaptic weights that are proportional to its membrane potential implements CPD on the SPP (see equations 2.1 and 2.2), and an LIF neuron with constant synaptic weights implements CPD on a specific family of CPPs (see equations 2.4 and 2.6). These examples show that some simple neuron models can solve some simple CPD problems. The CPD problems that we consider here are particularly restrictive. We assume that input spikes are strictly excitatory and Poisson-distributed at one of two possible input rates. More complicated neural interactions like lateral inhibition (Müller-Preuss & Ploog, 1981), self-excitation (Steriade et al., 1998; Rolls, 2021), or feedback loops (Kennedy & Bullier, 1985; Miller, 2003) cannot be modeled by excitatory SPPs. In early neural sensory pathways, our mathematical restrictions can still be reasonable approximations of neural spiking behavior and physiology. For example, barn owl auditory afferents fire approximately Poisson-distributed excitatory spikes (Neubauer et al., 2009), and the next neuron in that pathway is reasonably modeled as an LIF neuron (Huo, 2010). Farther down neural pathways, such as in cortical regions, these assumptions break down. Neurons in the cat's primary visual cortex (V1), for example, integrate their inputs over time to respond to specific features (Guitchounts et al., 2020; Freeman, 2021). They also receive feedback from higher cortical areas and lateral inputs from neighboring neurons (Pan et al., 2021; Ding et al., 2022), contributing to more complex computations and context-dependent processing (Sengupta et al., 2021; Ding et al., 2022). Our mathematical simplifications cannot account for such complexity. But it might be possible to relax many of our mathematical restrictions, if not all of them.

Extensions of the likelihood ratio for spatiotemporal and history-dependent point processes have been rigorously derived and studied (Reinhart, 2018; Dachian & Kutoyants, 2006). They maintain a similar form to the one presented here (see equation 2.4), as their construction derives from inhomogeneous Poisson measures (Diggle, 2013). This similarity in form suggests that it is possible to radically generalize the restrictive CPD problem that we studied here. The two measures compared by the likelihood ratio can represent anything from scalar values to exotic functions resembling neural receptive fields (Park & Pillow, 2011, 2013). For example, λ1(s,t) could represent background noise, while λ2(s,t) could represent specific spatiotemporal patterns of inputs to a dendrite. Notice that both measures are now functions of space and time. They represent the intensity of a generic point process with a known distribution (including, but not limited to, the Poisson distribution). The measures can also be self-exciting, where past activity influences future responses (e.g., a Hawkes process; Reinhart, 2018). Spatiotemporal, self-exciting intensities are very broad measures for point processes, and we can define a CUSUM algorithm that performs CPD between two such measures.
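As a concrete instance of such a self-exciting measure, the snippet below simulates a Hawkes process with an exponential kernel via Ogata's thinning algorithm. This is a standard construction from the point-process literature, not code from this article, and all parameter values are illustrative:

```python
import math
import random

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu + alpha * sum_i exp(-beta (t - t_i))
    over past events t_i < t: each past spike transiently raises the rate."""
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events if ti < t)

def simulate_hawkes(mu, alpha, beta, t_end, rng):
    """Ogata's thinning algorithm: propose candidate times from a dominating
    homogeneous rate, then accept each with probability lambda(t)/lambda_bar."""
    events, t = [], 0.0
    while True:
        # Intensity at t (including any jump at t) bounds lambda on the
        # interval up to the next event, because the kernel only decays.
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= t_end:
            return events
        if rng.random() * lam_bar <= hawkes_intensity(t, events, mu, alpha, beta):
            events.append(t)

rng = random.Random(0)  # fixed seed for reproducibility
events = simulate_hawkes(mu=2.0, alpha=0.8, beta=1.6, t_end=10.0, rng=rng)
```

The branching ratio alpha/beta = 0.5 < 1 keeps the process stable: each spike triggers, on average, half an extra spike. A CUSUM statistic between two such intensities would accumulate the corresponding log-likelihood-ratio increments exactly as in the Poisson case.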

The likelihood ratio for this broad family of point processes shares the same structure as our equation 2.1 despite these significant extensions (Dachian & Kutoyants, 2006; Reinhart, 2018). This observation suggests that we can derive a wider variety of neural models and electrophysiological features from a generalized version of our CPD framework. For example, inhibitory spikes emerge from our CPD framework by allowing the jump size function c2 to take negative values. Feature selectivity can emerge by assigning greater importance to input spikes that fit some particular spatiotemporal pattern. Neural feedback loops can be modeled as self-exciting point processes, as in cortex (Mei & Eisner, 2017; Garnier et al., 2015; Cools, 1987). Combining these extensions could map complex neural computations to CPD problems and vice versa. Generalizing our CPD framework could thus provide a rigorous probabilistic interpretation for a much broader range of neural electrophysiological models and networks, moving beyond simple stimulus detection tasks like the one shown in Figure 7.
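A related, simpler observation of ours (framed in terms of the per-spike weight rather than the article's jump-size function c2) also shows inhibition emerging from a likelihood-ratio framework: the per-spike increment log(λ2/λ1) is negative whenever the hypothesized change is a rate decrease, so each input spike then pushes the statistic down, like an inhibitory synapse.

```python
import math

def llr_weight(rate_before_hz, rate_after_hz):
    """Per-spike log-likelihood-ratio increment. It is negative when the
    hypothesized change is a rate decrease, i.e., each input spike is
    evidence *against* the change: an inhibitory-like synaptic weight."""
    return math.log(rate_after_hz / rate_before_hz)

w_exc = llr_weight(100.0, 300.0)  # positive: excitatory-like weight
w_inh = llr_weight(300.0, 100.0)  # negative: inhibitory-like weight
```

Detecting a rate increase and detecting the reverse decrease thus use weights of equal magnitude and opposite sign.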

This work was partially funded by the DFG within the Cluster of Excellence EXC 1077/1 – project 390895286. We also thank Michael G. Paulin and Angel Mario Castro Martinez for useful discussion.

Aldridge, J. W., Thompson, J., & Gilman, S. (1997). Unilateral striatal lesions in the cat disrupt well-learned motor plans in a go/no-go reaching task. Experimental Brain Research, 113, 379–393.
Aminikhanghahi, S., & Cook, D. J. (2017). A survey of methods for time series change point detection. Knowledge and Information Systems, 51(2), 339–367.
Bélisle, P., Joseph, L., MacGibbon, B., Wolfson, D. B., & Du Berger, R. (1998). Change-point analysis of neuron spike train data. Biometrics, 54(1), 113–123.
Biau, D. J., Resche-Rigon, M., Godiris-Petit, G., Nizard, R. S., & Porcher, R. (2007). Quality control of surgical and interventional procedures: A review of the CUSUM. BMJ Quality and Safety, 16(3), 203–207.
Bissell, A. F. (1969). CUSUM techniques for quality control. Journal of the Royal Statistical Society: Series C (Applied Statistics), 18(1), 1–25.
Brette, R. (2019). Is coding a relevant metaphor for the brain? Behavioral and Brain Sciences, 42, e215.
Brook, D., & Evans, D. (1972). An approach to the probability distribution of CUSUM run length. Biometrika, 59(3), 539–549.
Broussard, G. J., Liang, R., & Tian, L. (2014). Monitoring activity in neural circuits with genetically encoded indicators. Frontiers in Molecular Neuroscience, 7, 97.
Burkitt, A. N. (2006). A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological Cybernetics, 95, 1–19.
Buskila, Y., & Amitai, Y. (2010). Astrocytic iNOS-dependent enhancement of synaptic release in mouse neocortex. Journal of Neurophysiology, 103(3), 1322–1328.
Buskila, Y., Breen, P. P., Tapson, J., van Schaik, A., Barton, M., & Morley, J. W. (2014). Extending the viability of acute brain slices. Scientific Reports, 4(1), 5309.
Buskila, Y., Crowe, S. E., & Ellis-Davies, G. C. (2013). Synaptic deficits in layer 5 neurons precede overt structural decay in 5xFAD mice. Neuroscience, 254, 152–159.
Carr, C., & Boudreau, R. (1991). Central projections of auditory nerve fibers in the barn owl. Journal of Comparative Neurology, 314(2), 306–318.
Contreras, C. M., Rodríguez-Landa, J. F., Gutiérrez-García, A. G., & Bernal-Morales, B. (2001). The lowest effective dose of fluoxetine in the forced swim test significantly affects the firing rate of lateral septal nucleus neurones in the rat. Journal of Psychopharmacology, 15(4), 231–236.
Cools, A. (1987). The relevance of feedforward loops. Behavioral and Brain Sciences, 10(2), 210.
Dachian, S., & Kutoyants, Y. A. (2006). Hypotheses testing: Poisson versus self-exciting. Scandinavian Journal of Statistics, 33(2), 391–408.
Daley, D. J., & Vere-Jones, D. (2003). An introduction to the theory of point processes, Vol. 1: Elementary theory and methods. Springer.
D'Angelo, E., & Jirsa, V. (2022). The quest for multiscale brain modeling. Trends in Neurosciences, 45(10), 777–790.
Dayan, P., & Abbott, L. F. (2005). Theoretical neuroscience: Computational and mathematical modeling of neural systems. MIT Press.
Dehning, J., Zierenberg, J., Spitzner, F. P., Wibral, M., Neto, J. P., Wilczek, M., & Priesemann, V. (2020). Inferring change points in the spread of COVID-19 reveals the effectiveness of interventions. Science, 369(6500), eabb9789.
Denève, S. (2008). Bayesian spiking neurons I: Inference. Neural Computation, 20(1), 91–117.
Diggle, P. J. (2013). Statistical analysis of spatial and spatio-temporal point patterns. CRC Press.
Ding, J., Ye, Z., Xu, F., Hu, X., Yu, H., Zhang, S., … Lu, Z.-L. (2022). Effects of top-down influence suppression on behavioral and V1 neuronal contrast sensitivity functions in cats. iScience, 25(1).
Eggermont, J. J. (1998). Is there a neural code? Neuroscience and Biobehavioral Reviews, 22(2), 355–370.
Faisal, A. A. (2010). Stochastic simulation of neurons, axons and action potentials. In C. Laing & G. Lord (Eds.), Stochastic methods in neuroscience (pp. 297–343). Oxford University Press.
Fontaine, B., & Brette, R. (2011). Neural development of binaural tuning through Hebbian learning predicts frequency-dependent best delays. Journal of Neuroscience, 31(32), 11692–11696.
Freeman, A. W. (2021). A model for the origin of motion direction selectivity in visual cortex. Journal of Neuroscience, 41(1), 89–102.
Fukui, I., & Ohmori, H. (2004). Tonotopic gradients of membrane and synaptic properties for neurons of the chicken nucleus magnocellularis. Journal of Neuroscience, 24(34), 7514–7523.
Gabbiani, F., & Koch, C. (1998). Principles of spike train analysis. Methods in Neuronal Modeling, 12(4), 313–360.
Garnier, A., Vidal, A., Huneau, C., & Benali, H. (2015). A neural mass model with direct and indirect excitatory feedback loops: Identification of bifurcations and temporal dynamics. Neural Computation, 27(2), 329–364.
Gautrais, J., & Thorpe, S. (1998). Rate coding versus temporal order coding: A theoretical approach. Biosystems, 48(1–3), 57–65.
Gerstner, W., Kempter, R., Van Hemmen, J. L., & Wagner, H. (1996). A neuronal learning rule for sub-millisecond temporal coding. Nature, 383(6595), 76–78.
Gerstner, W., & Kistler, W. M. (2002). Spiking neuron models: Single neurons, populations, plasticity. Cambridge University Press.
Guitchounts, G., Masís, J., Wolff, S. B., & Cox, D. (2020). Encoding of 3D head orienting movements in the primary visual cortex. Neuron, 108(3), 512–525.
Herberholz, J., & Marquart, G. D. (2012). Decision making and behavioral choice during predator avoidance. Frontiers in Neuroscience, 6, 125.
Huang, Y., Li, H., Campbell, K. A., & Han, Z. (2011). Defending false data injection attack on smart grid network using adaptive CUSUM test. In Proceedings of the 2011 45th Annual Conference on Information Sciences and Systems (pp. 1–6).
Huo, J. (2010). Adaptive map alignment in the superior colliculus of the barn owl: A neuromorphic implementation. PhD diss., University of Edinburgh.
Inglebert, Y., Aljadeff, J., Brunel, N., & Debanne, D. (2020). Synaptic plasticity rules with physiological calcium levels. PNAS, 117(52), 33639–33648.
Jethanandani, H., Jha, B. K., & Ubale, M. (2023). The role of calcium dynamics with amyloid beta on neuron-astrocyte coupling. Mathematical Modelling and Numerical Simulation, 3(4), 376–390.
Johansson, R. S., & Birznieks, I. (2004). First spikes in ensembles of human tactile afferents code complex spatial fingertip events. Nature Neuroscience, 7(2), 170–177.
Johnson, C. M., Hill, C. S., Chawla, S., Treisman, R., & Bading, H. (1997). Calcium controls gene expression via three distinct pathways that can function independently of the Ras/mitogen-activated protein kinases (ERKs) signaling cascade. Journal of Neuroscience, 17(16), 6189–6202.
Johnson, T. D., Elashoff, R. M., & Harkema, S. J. (2003). A Bayesian change-point analysis of electromyographic data: Detecting muscle activation patterns and associated applications. Biostatistics, 4(1), 143–164.
Kayser, C., Montemurro, M. A., Logothetis, N. K., & Panzeri, S. (2009). Spike-phase coding boosts and stabilizes information carried by spatial and temporal spike patterns. Neuron, 61(4), 597–608.
Kennedy, H., & Bullier, J. (1985). A double-labeling investigation of the afferent connectivity to cortical areas V1 and V2 of the macaque monkey. Journal of Neuroscience, 5(10), 2815–2830.
Kim, H., Richmond, B. J., & Shinomoto, S. (2012). Neurons as ideal change-point detectors. Journal of Computational Neuroscience, 32, 137–146.
Knudsen, E. I. (1981). The hearing of the barn owl. Scientific American, 245(6), 113–125.
Knutsen, P. M., & Ahissar, E. (2009). Orthogonal coding of object location. Trends in Neurosciences, 32(2), 101–109.
Koch, C. (1993). Biophysics of computation: Toward the mechanisms underlying information processing in single neurons. In E. Schwartz (Ed.), Computational neuroscience (pp. 97–113). MIT Press.
Koch, C., Rapp, M., & Segev, I. (1996). A brief history of time (constants). Cerebral Cortex, 6(2), 93–101.
Koepcke, L., Ashida, G., & Kretzberg, J. (2016). Single and multiple change point detection in spike trains: Comparison of different CUSUM methods. Frontiers in Systems Neuroscience, 10, 51.
Kumar, A., Rotter, S., & Aertsen, A. (2010). Spiking activity propagation in neuronal networks: Reconciling different perspectives on neural coding. Nature Reviews Neuroscience, 11(9), 615–627.
Lim, T., Soraya, A., Ding, L., & Morad, Z. (2002). Assessing doctors' competence: Application of CUSUM technique in monitoring doctors' performance. International Journal for Quality in Health Care, 14(3), 251–258.
Liu, K. S., & Fetcho, J. R. (1999). Laser ablations reveal functional relationships of segmental hindbrain neurons in zebrafish. Neuron, 23(2), 325–335.
Malladi, R., Kalamangalam, G. P., & Aazhang, B. (2013). Online Bayesian change point detection algorithms for segmentation of epileptic activity. In Proceedings of the 2013 Asilomar Conference on Signals, Systems and Computers (pp. 1833–1837).
McDonnell, M. D., & Stocks, N. G. (2008). Maximally informative stimuli and tuning curves for sigmoidal rate-coding neurons and populations. Physical Review Letters, 101, 058103.
Mei, H., & Eisner, J. M. (2017). The neural Hawkes process: A neurally self-modulating multivariate point process. In I. Guyon, U. von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems, 30. Curran.
Miller, K. D. (2003). Understanding layer 4 of the cortical circuit: A model based on cat V1. Cerebral Cortex, 13(1), 73–82.
Monk, T., Dennler, N., Ralph, N., Rastogi, S., Afshar, S., Urbizagastegui, P., … Adamatzky, A. (2024). Electrical signaling beyond neurons. Neural Computation, 36(10), 1939–2029.
Monk, T., & Paulin, M. G. (2014). Predation and the origin of neurones. Brain, Behavior, and Evolution, 84(4), 246–261.
Monk, T., & van Schaik, A. (2020). The evolutionary inception of changepoint detection and its implications for neural computation.
Mosqueiro, T., Strube-Bloss, M., Tuma, R., Pinto, R., Smith, B. H., & Huerta, R. (2016). Non-parametric change point detection for spike trains. In Proceedings of the 2016 Annual Conference on Information Science and Systems (pp. 545–550).
Moustakides, G. V., Polunchenko, A. S., & Tartakovsky, A. G. (2009). Numerical comparison of CUSUM and Shiryaev–Roberts procedures for detecting changes in distributions. Communications in Statistics: Theory and Methods, 38(16–17), 3225–3239.
Müller-Preuss, P., & Ploog, D. (1981). Inhibition of auditory cortical neurons during phonation. Brain Research, 215(1–2), 61–76.
Neubauer, H., Köppl, C., & Heil, P. (2009). Spontaneous activity of auditory nerve fibers in the barn owl (Tyto alba), analyses of interspike interval distributions. Journal of Neurophysiology, 101(6), 3169–3191.
O'Keefe, J., & Burgess, N. (2005). Dual phase and rate coding in hippocampal place cells: Theoretical significance and relationship to entorhinal grid cells. Hippocampus, 15(7), 853–866.
Oline, S. N., Ashida, G., & Burger, R. M. (2016). Tonotopic optimization for temporal processing in the cochlear nucleus. Journal of Neuroscience, 36(32), 8500–8515.
Page, E. S. (1954). Continuous inspection schemes. Biometrika, 41(1/2), 100–115.
Pan, H., Zhang, S., Pan, D., Ye, Z., Yu, H., Ding, J., … Hua, T. (2021). Characterization of feedback neurons in the high-level visual cortical areas that project directly to the primary visual cortex in the cat. Frontiers in Neuroscience, 14, 616465.
Park, M., & Pillow, J. W. (2011). Receptive field inference with localized priors. PLOS Computational Biology, 7(10), e1002219.
Park, M., & Pillow, J. W. (2013). Bayesian inference for low rank spatiotemporal neural receptive fields. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, & K. Q. Weinberger (Eds.), Advances in neural information processing systems, 26. Curran.
Pepelyshev, A., & Polunchenko, A. S. (2015). Real-time financial surveillance via quickest change-point detection methods. arXiv:1509.01570.
Perkel, D. H., Gerstein, G. L., & Moore, G. P. (1967). Neuronal spike trains and stochastic point processes: I. The single spike train. Biophysical Journal, 7(4), 391–418.
Pillow, J. W., Ahmadian, Y., & Paninski, L. (2011). Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains. Neural Computation, 23(1), 1–45.
Pishro-Nik, H. (2014). Introduction to probability, statistics, and random processes. Kappa Research.
Polis, G. A. (1979). Prey and feeding phenology of the desert sand scorpion Paruroctonus mesaensis (Scorpionidae: Vaejovidae). Journal of Zoology, 188(3), 333–346.
Polunchenko, A. S., & Tartakovsky, A. G. (2012). State-of-the-art in sequential change-point detection. Methodology and Computing in Applied Probability, 14, 649–684.
Prete, F. R., & Cleal, K. S. (1996). The predatory strike of free ranging praying mantises, Sphodromantis lineola (Burmeister). I: Strikes in the mid-sagittal plane. Brain, Behavior, and Evolution, 48(4), 173–190.
Puri, B. K. (2020). Calcium signaling and gene expression. In M. Islam (Ed.), Calcium signaling (pp. 537–545).
Quian Quiroga, R., & Panzeri, S. (2009). Extracting information from neuronal populations: Information theory and decoding approaches. Nature Reviews Neuroscience, 10(3), 173–185.
Randall, W. L. (1964). The behavior of cats (Felis catus L.) with lesions in the caudal midbrain region. Behaviour, 23(1–2), 107–139.
Ratnam, R., Goense, J. B., & Nelson, M. E. (2003). Change-point detection in neuronal spike train activity. Neurocomputing, 52, 849–855.
Reeves, J., Chen, J., Wang, X. L., Lund, R., & Lu, Q. Q. (2007). A review and comparison of changepoint detection techniques for climate data. Journal of Applied Meteorology and Climatology, 46(6), 900–915.
Reinhart, A. (2018). A review of self-exciting spatio-temporal point processes and their applications. Statistical Science, 33(3), 299–318.
Ritov, Y., Raz, A., & Bergman, H. (2002). Detection of onset of neuronal activity by allowing for heterogeneity in the change points. Journal of Neuroscience Methods, 122(1), 25–42.
Rolls, E. T. (2021). Attractor cortical neurodynamics, schizophrenia, and depression. Translational Psychiatry, 11(1), 215.
Rubin, J. E., Gerkin, R. C., Bi, G.-Q., & Chow, C. C. (2005). Calcium time course as a signal for spike-timing-dependent plasticity. Journal of Neurophysiology, 93(5), 2600–2613.
Saghaei, A., Mehrjoo, M., & Amiri, A. (2009). A CUSUM-based method for monitoring simple linear profiles. International Journal of Advanced Manufacturing Technology, 45, 1252–1260.
Santer, R. D., Rind, F. C., & Simmons, P. J. (2012). Predator versus prey: Locust looming-detector neuron and behavioural responses to stimuli representing attacking bird predators. PLOS One, 7(11), e50146.
Saxena, S., & Cunningham, J. P. (2019). Towards the neural population doctrine. Current Opinion in Neurobiology, 55, 103–111.
Sengupta, M., Daliparthi, V., Roussel, Y., Bui, T. V., & Bagnall, M. W. (2021). Spinal V1 neurons inhibit motor targets locally and sensory targets distally. Current Biology, 31(17), 3820–3833.
Shiryaev, A. N. (1996). Minimax optimality of the method of cumulative sums (CUSUM) in the case of continuous time. Russian Mathematical Surveys, 51(4), 173–174.
Shlosberg, D., Buskila, Y., Abu-Ghanem, Y., & Amitai, Y. (2012). Spatiotemporal alterations of cortical network activity by selective loss of NOS-expressing interneurons. Frontiers in Neural Circuits, 6, 3.
Sourakov, A. (2009). Extraordinarily quick visual startle reflexes of skipper butterflies (Lepidoptera: Hesperiidae) are among the fastest recorded in the animal kingdom. Florida Entomologist, 92(4), 653–655.
Sourakov, A. (2011). Faster than a flash: The fastest visual startle reflex response is found in a long-legged fly, Condylostylus (Dolichopodidae). Florida Entomologist, 94(2), 367–369.
Steriade, M., Timofeev, I., Grenier, F., & Dürmüller, N. (1998). Role of thalamic and cortical neurons in augmenting responses and self-sustained activity: Dual intracellular recordings in vivo. Journal of Neuroscience, 18(16), 6425–6443.
Stewart, W. J., Cardenas, G. S., & McHenry, M. J. (2013). Zebrafish larvae evade predators by sensing water flow. Journal of Experimental Biology, 216(3), 388–398.
Stewart, W. J., Nair, A., Jiang, H., & McHenry, M. J. (2014). Prey fish escape by sensing the bow wave of a predator. Journal of Experimental Biology, 217(24), 4328–4336.
Tartakovsky, A. G., Polunchenko, A. S., & Moustakides, G. V. (2009). Design and comparison of Shiryaev–Roberts- and CUSUM-type change-point detection procedures. In Proceedings of the 2nd International Workshop in Sequential Methodologies, vol. 50.
Trapani, J. G., & Nicolson, T. (2011). Mechanism of spontaneous activity in afferent neurons of the zebrafish lateral-line organ. Journal of Neuroscience, 31(5), 1614–1623.
Van Rij, A., McDonald, J., Pettigrew, R., Putterill, M., Reddy, C., & Wright, J. (1995). CUSUM as an aid to early assessment of the surgical trainee. British Journal of Surgery, 82(11), 1500–1503.
Van Rullen, R., & Thorpe, S. J. (2001). Rate coding versus temporal order coding: What the retinal ganglion cells tell the visual cortex. Neural Computation, 13(6), 1255–1283.
Villette, V., Chavarha, M., Dimov, I. K., Bradley, J., Pradhan, L., Mathieu, B., … Lin, M. Z. (2019). Ultrafast two-photon imaging of a high-gain voltage indicator in awake behaving mice. Cell, 179(7), 1590–1608.
Wu, Z., Li, Y., & Hu, L. (2022). A synchronous multiple change-point detecting method for manufacturing process. Computers and Industrial Engineering, 169, 108114.
Xu, Z., Ivanusic, J., Bourke, D. W., Butler, E. G., & Horne, M. K. (1999). Automatic detection of bursts in spike trains recorded from the thalamus of a monkey performing wrist movements. Journal of Neuroscience Methods, 91(1–2), 123–133.
Yamawaki, Y. (2000). Effects of luminance, size, and angular velocity on the recognition of nonlocomotive prey models by the praying mantis. Journal of Ethology, 18, 85–90.
Yu, A. J. (2006). Optimal change-detection and spiking neurons. In B. Schölkopf, J. Platt, & T. Hoffman (Eds.), Advances in neural information processing systems, 19. MIT Press.
Zhang, Y., & Chen, H. (2021). Graph-based multiple change-point detection. arXiv:2110.01170.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode