Abstract
Animal nervous systems can detect changes in their environments within hundredths of a second. They do so by discerning abrupt shifts in sensory neural activity. Many neuroscience studies have employed change-point detection (CPD) algorithms to estimate such abrupt shifts in neural activity. But very few studies have suggested that spiking neurons themselves are online change-point detectors. We show that a leaky integrate-and-fire (LIF) neuron implements an online CPD algorithm for a compound Poisson process. We quantify the CPD performance of an LIF neuron under various regions of its parameter space. We show that CPD can be a recursive algorithm where the output of one algorithm can be input to another. Then we show that a simple feedforward network of LIF neurons can quickly and reliably detect very small changes in input spiking rates. For example, our network detects a 5% change in input rates within 20 ms on average, and false-positive detections are extremely rare. In a rigorous statistical context, we interpret the salient features of the LIF neuron: its membrane potential, synaptic weight, time constant, resting potential, action potentials, and threshold. Our results potentially generalize beyond the LIF neuron model and its associated CPD problem. If spiking neurons perform change-point detection on their inputs, then the electrophysiological properties of their membranes must be related to the spiking statistics of their inputs. We demonstrate one example of this relationship for the LIF neuron and compound Poisson processes and suggest how to test this hypothesis more broadly. Maybe neurons are not noisy devices whose action potentials must be averaged over time or populations. Instead, neurons might implement sophisticated, optimal, and online statistical algorithms on their inputs.
1 Introduction
Animals can respond to environmental changes remarkably quickly. Sometimes quick reactions are critical to their survival, as in predator-prey interactions (Monk & Paulin, 2014). These reactions can occur within tens of milliseconds (Sourakov, 2009; Prete & Cleal, 1996) and sometimes even quicker (Sourakov, 2011). For example, zebrafish larvae initiate escape responses to the hydrodynamic stimulus of an approaching predator’s bow wave in as little as 4 ms (Stewart et al., 2013, 2014; Liu & Fetcho, 1999). Countless analogous examples exist throughout the animal kingdom (Yamawaki, 2000; Aldridge et al., 1997; Randall, 1964).
Quick detection of environmental stimuli requires animals to detect changes in the spiking behavior of their sensory neurons as soon as possible (Herberholz & Marquart, 2012). Perhaps an animal's sensory neurons are spiking faster because they are being stimulated by some signal from an approaching predator (Santer et al., 2012; Monk & Paulin, 2014), so a reaction must be made immediately. For example, zebrafish larvae need to detect a sudden increase in the spiking rates of their lateral line neurons (Trapani & Nicolson, 2011) in order to quickly react to an approaching predator’s bow wave. Presumably their downstream spiking neurons accomplish this task.
From a statistical perspective, detecting changes in sensory spiking behavior is a change-point detection (CPD) problem. CPD is a statistics discipline that detects abrupt changes in statistical properties of a time series (Aminikhanghahi & Cook, 2017), for example, its underlying distributions, mean, variance, and/or other moments (Polunchenko & Tartakovsky, 2012). CPD algorithms have a wide variety of applications in manufacturing (Wu et al., 2022), finance (Pepelyshev & Polunchenko, 2015), epidemiology (Dehning et al., 2020), climate change (Reeves et al., 2007), and neural spike train analysis (Zhang & Chen, 2021; Bélisle et al., 1998; Ritov et al., 2002; Mosqueiro et al., 2016). Some CPD algorithms exhibit a striking resemblance to spiking neurons.
Figure 1 (main plot, green trace) is a schematic of an online cumulative sum (CUSUM) CPD algorithm (Page, 1954). CUSUM CPD has been extensively studied (Bissell, 1969; Brook & Evans, 1972; Saghaei et al., 2009) and has a wide range of applications from quality control (Biau et al., 2007; Huang et al., 2011) to performance assessment (Lim et al., 2002; Van Rij et al., 1995). It is minimax-optimal, that is, it minimizes the maximum expected detection delay for a fixed tolerance of false alarms (Moustakides et al., 2009; Tartakovsky et al., 2009). Figure 1 illustrates how the CUSUM CPD algorithm detects a sudden increase in the spiking rate of an input spike train from a sensory neuron (blue raster plot, top). It compares a test statistic, in this example a likelihood ratio (green trace, Figure 1), to a fixed threshold (horizontal dotted trace). Whenever an input spike is observed, the likelihood ratio instantaneously jumps. While no input spikes are observed, the likelihood ratio decays. The algorithm bounds the likelihood ratio from falling too low (during long interspike intervals in the input spikes) with a reflecting barrier under it (RB, Figure 1, bottom). When the likelihood ratio crosses the threshold, the algorithm asserts that a change point has been detected. The threshold determines a speed-accuracy trade-off, where raising it reduces false alarms but delays change-point detections, and vice versa. The likelihood ratio is then reset to the reflecting barrier, and the algorithm repeats.
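The steps just described can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation: the rates, threshold, and time step are hypothetical, and the algorithm is written in its log-likelihood-ratio form, in which the reflecting barrier sits at 0 (the log of 1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rates: lam0 before the change point, lam1 after it.
lam0, lam1 = 20.0, 60.0               # Hz
t_change, t_end, dt = 1.0, 2.0, 1e-4  # seconds

# Simulate an input spike train whose rate jumps at t_change.
t = np.arange(0.0, t_end, dt)
rate = np.where(t < t_change, lam0, lam1)
spikes = rng.random(t.size) < rate * dt

# Online CUSUM on the log-likelihood ratio S:
#   between spikes, S drifts down at rate (lam1 - lam0);
#   at each spike, S jumps up by log(lam1 / lam0);
#   a reflecting barrier keeps S >= 0.
threshold = 4.0                       # chosen for illustration only
S, detections = 0.0, []
for ti, spk in zip(t, spikes):
    S -= (lam1 - lam0) * dt           # decay during waiting times
    if spk:
        S += np.log(lam1 / lam0)      # instantaneous jump at an input spike
    S = max(S, 0.0)                   # reflecting barrier
    if S >= threshold:                # change point asserted
        detections.append(ti)
        S = 0.0                       # reset and repeat

print(f"first detection at t = {detections[0]:.3f} s" if detections else "no detections")
```

Raising `threshold` in this sketch makes false alarms rarer but delays detection, illustrating the speed-accuracy trade-off described above.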
Online CPD algorithms exhibit salient neural features. The top raster plot (blue) is an example input spike train to a neuron. The main plot (green trace) illustrates an online CUSUM CPD algorithm that detects a sudden increase in the rate of that input. It compares a likelihood ratio (green trace) to a threshold (horizontal dotted trace). The likelihood ratio instantaneously jumps when an input is observed and decays during waiting times between inputs. It is bounded from below by a reflecting barrier (RB). When the likelihood ratio crosses the threshold, a change point is asserted (Output raster plot, bottom). The algorithm then resets and repeats. The threshold controls the speed-accuracy trade-off for the algorithm. Many functional traits of the online CUSUM CPD algorithm map naturally to salient neural features. A neuron can represent a likelihood ratio of its inputs, or a one-to-one function of it, with its membrane potential. Its resting potential acts as a reflecting barrier, its synaptic weight determines the jump size of the likelihood ratio when input spikes arrive, and its leak current determines how quickly the likelihood ratio should decay between input spikes. Its output spikes (Output, bottom raster plot) are the assertion by a neuron that it has detected a change point in its inputs at that moment in time. Those output spikes can be used as inputs to another CPD algorithm (i.e., a downstream neuron) that implements CPD on them.
Online CUSUM CPD algorithms exhibit many salient features of neuron models such as the leaky integrate-and-fire (LIF) neuron (Ratnam et al., 2003). Its likelihood ratio resembles the LIF neuron’s membrane potential. Its reflecting barrier resembles the LIF neuron’s resting potential. CUSUM and the LIF neuron both have a fixed threshold. CUSUM’s likelihood ratio and the LIF neuron’s membrane potential are both reset to their respective lower barriers once their thresholds are crossed. Instantaneous increases in the likelihood ratio given input spikes resemble excitatory synaptic inputs to the LIF neuron. The decay of the likelihood ratio between input spikes resembles the leak current of the LIF neuron. The output of the CUSUM algorithm, change-point assertions, can be represented as an output spike train from the LIF neuron (Output, Figure 1, bottom).
CPD algorithms have been used in neuroscience studies to detect abrupt changes in spike train statistics (Bélisle et al., 1998; Mosqueiro et al., 2016; Ritov et al., 2002; Pillow et al., 2011; Koepcke et al., 2016; Johnson et al., 2003; Malladi et al., 2013; Xu et al., 1999). But only a handful of studies have suggested that spiking neurons themselves are hardware implementations of a CPD algorithm (Kim et al., 2012; Monk & van Schaik, 2020). Yu (2006) drew parallels between the mathematical structure of a Bayes-optimal CPD algorithm and the dynamics of the LIF neuron. Denève (2008) considered a Markov transition model that resembles a CPD problem and showed a relationship between the log likelihood of that model and the membrane potential of the LIF neuron. A key result of Denève’s work is the recursive nature of her neural computation model, where each layer of neurons processes inputs from its afferent layer with the same algorithm. Ratnam et al. (2003) explored similarities between the CUSUM algorithm and the LIF neuron and found that the LIF neuron approximates CUSUM in terms of mean detection delays. None of these studies demonstrated that the LIF neuron performs exact, online CUSUM CPD given input spikes with specific statistical properties.
This article shows that the LIF neuron implements minimax-optimal CUSUM CPD for a family of compound Poisson processes. We evaluate the CPD performance of the LIF neuron in various regions of its parameter space and interpret salient features of the LIF neuron in a rigorous statistical context. We demonstrate that CUSUM CPD algorithms can be recursive, that is, the output of an LIF neuron can be used as input to another LIF neuron that implements CPD on it. Then we simulate a feedforward LIF neural network whose layers recursively perform CUSUM CPD on their afferent layers. That network quickly detects tiny differences in sensory input spiking rates with very few false positives. This result demonstrates that simple networks of thresholded biological cells offer a principled solution to the pressing problem of quickly and accurately detecting small changes in sensory neural spiking statistics. More generally, our work suggests that spiking neurons are hardware implementations of an online CPD algorithm. If true, then the electrophysiological properties of a neuron’s membrane (e.g., its time constant) must be related to spiking statistics of its inputs. We show a simple example of this relationship for the time constant of the LIF neuron and its input spiking rates. Our work also suggests that a neuron’s action potentials are its assertions that it has detected some change in its inputs at that moment in time. Neurons might not be noisy devices whose action potentials must be averaged over time or populations (Faisal, 2010; Gerstner & Kistler, 2002; Monk et al., 2024). Instead they might be analog computational units that implement sophisticated and statistically optimal algorithms on their inputs in real time.
2 Results
All figures can be reproduced by the Jupyter Notebook submitted as supplementary material.
2.1 The LIF Neuron as an Online CUSUM CPD Algorithm
2.1.1 CUSUM for Standard Poisson Processes
Consider an afferent sensory neuron firing Poisson-distributed spikes as shown in the top raster plot of Figure 2. At some point, the input spike rate changes from a low value (blue spike train) to a higher value (red spike train). The goal of a CPD algorithm is to detect when the input spike rate changes (i.e., the change point, vertical dotted line, Figure 2).
Comparison of jump sizes of the likelihood ratio for simple and compound Poisson processes. Top raster plot: Example CPD problem on Poisson inputs. Blue spikes denote Poisson arrivals at a low rate, preceding the change point (vertical dotted line). Red spikes denote Poisson arrivals after the change point at a higher rate. The goal is to detect the change in rates as we observe arrivals. Main plot: The CUSUM algorithm’s solution to this CPD problem. The likelihood ratio is shown for the simple (pink trace) and a particular compound (green trace) Poisson process, given the top raster plot as input. The jump sizes of the likelihood ratio upon input spike arrivals are reported by the colored numbers. Notice that the likelihood ratio for this compound Poisson process has neurally plausible constant jump sizes (green numbers), while that of the simple Poisson process does not (magenta numbers). Bottom raster plots: The change points generated by the CUSUM algorithm assuming a simple (magenta) or compound (green) Poisson process. Notice that CUSUM on the latter produces fewer outputs than on the former.
The CUSUM algorithm (pink trace, main plot of Figure 2) determines that a change point has occurred whenever the likelihood ratio exceeds some given threshold. These change points are visualized as output spikes in the lower pink raster plot. The likelihood ratio is then reset to the lower bound, and the algorithm repeats.
where τ is the LIF neuron’s time constant and w > 0 its synaptic weight.
The likelihood ratio of an SPP does not quite exhibit the same dynamics as the membrane potential of an LIF neuron. For an SPP, the jump sizes of the likelihood ratio at input spike arrival times are proportional to its value immediately preceding the arrivals (magenta numbers, Figure 2). So when the likelihood ratio is high at the time of input spike arrival, jump sizes are high, and vice versa (magenta trace and numbers, Figure 2). In an LIF neuron, jump sizes are a constant value (see the right-hand sides of equations 2.2 and 2.3).
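The contrast can be made concrete with a toy calculation (the rates and synaptic weight here are hypothetical). For an SPP the likelihood ratio jumps geometrically, so the jump size scales with its current value; an LIF membrane potential always jumps by the fixed synaptic weight:

```python
lam0, lam1 = 20.0, 60.0   # hypothetical pre- and post-change rates (Hz)

# Simple Poisson process: an input spike multiplies the likelihood ratio
# by lam1/lam0, so the jump size is proportional to its value just
# before the spike.
for L_before in (1.0, 2.0, 4.0):
    jump = L_before * (lam1 / lam0 - 1.0)   # grows with L_before
    print(f"L = {L_before:.1f} -> jump of {jump:.1f}")

# LIF neuron: the membrane potential always jumps by the same synaptic
# weight w, regardless of its current value.
w = 1.0
for V_before in (0.0, 1.0, 2.0):
    print(f"V = {V_before:.1f} -> jump of {w:.1f}")
```

Only the second behavior, a constant jump size, matches the right-hand sides of equations 2.2 and 2.3.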
2.1.2 CUSUM for Compound Poisson Processes
Now we show that the likelihood ratio for a certain family of compound Poisson processes (CPPs) exhibits the same dynamics as the membrane potential of an LIF neuron.
A CPP is a generalization of standard Poisson processes to include non-unit jump sizes. Jump sizes can be a deterministic function or even realizations of independent random variables (Daley et al., 2003). In a neural context, we interpret jump sizes as "importance weights" attached to sensory spikes. Although sensory spikes are modeled here as Poisson distributed, it does not necessarily follow that all sensory spikes should be equally indicative that a change point has occurred. For example, if the likelihood ratio is high when a sensory spike arrives (i.e., if a change point is more likely to have occurred), then perhaps an animal should weigh that arrival more highly (and vice versa).
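A generic CPP can be simulated in a few lines. This sketch does not reproduce equation 2.5's specific jump-size functions (not shown in this excerpt); purely for illustration, it draws each spike's importance weight from an exponential distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# A compound Poisson process: Poisson-timed arrivals, each carrying its
# own jump size ("importance weight"). The weights here are i.i.d.
# exponential draws chosen only for illustration.
lam, t_end = 30.0, 1.0                        # arrival rate (Hz), duration (s)
n = rng.poisson(lam * t_end)                  # number of arrivals
times = np.sort(rng.uniform(0.0, t_end, n))   # arrival times
weights = rng.exponential(1.0, n)             # one importance weight per spike

# The CPP value after each arrival is the running sum of the weights.
X = np.concatenate([[0.0], np.cumsum(weights)])
print(f"{n} arrivals, final value {X[-1]:.2f}")
```

A standard Poisson process is recovered by setting every weight to 1; the family used in this article instead chooses the weights so that the resulting likelihood ratio jumps by a constant amount.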
Conventionally, a neuron’s membrane time constant is considered an intrinsic electrophysiological property of the membrane (Burkitt, 2006). It is determined by the membrane’s resistance and capacitance. In a CPD framework, that time constant is a function of that neuron’s input spiking statistics (its input rates before and after the change point). For an LIF neuron to implement CPD as we describe here, the electrophysiologically determined time constant and the statistically required one must be equal (see equation 2.7). If they differ, then an LIF neuron will not achieve minimax-optimal CPD results on our CPP (Shiryaev, 1996; Moustakides et al., 2009). In principle, neurons could adapt their membrane time constant by altering their membrane leak resistance. Calcium influx from input spikes could regulate the expression of ion channels that determine leak resistance (Johnson et al., 1997; Jethanandani et al., 2023; Puri, 2020). For example, an increase in calcium levels might downregulate specific leak channels, thereby decreasing membrane resistance on fast timescales (Rubin et al., 2005; Inglebert et al., 2020). This dynamic adjustment would allow the neuron to “learn” a time constant that satisfies equation 2.7. Experimental measurements in vivo have shown that a neuron’s membrane time constant is inversely related to its input spiking rate (see Figure 5 in Koch et al., 1996, and section 6.2 in Burkitt, 2006), even for electrically passive neurons. We are proposing a theoretical explanation for why this relationship exists.
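As a numerical illustration of this proposed relationship, suppose the required time constant equals 1/(λ₁ − λ₀), the decay time constant of the likelihood ratio between input spikes for a Poisson CPD problem. This specific form is our assumption for illustration (equation 2.7 is not reproduced in this excerpt), but it exhibits the inverse relationship between time constant and input rate described above:

```python
# Hypothetical input-rate pairs (Hz). The assumed relation
# tau = 1 / (lam1 - lam0) is the decay time constant of the likelihood
# ratio between input spikes; it is used here only for illustration.
for lam0, lam1 in [(10.0, 20.0), (20.0, 60.0), (50.0, 150.0)]:
    tau_ms = 1e3 / (lam1 - lam0)   # milliseconds
    print(f"lam0 = {lam0:.0f} Hz, lam1 = {lam1:.0f} Hz -> tau = {tau_ms:.0f} ms")
```

Under this assumption, neurons receiving faster or more strongly differing inputs would need shorter membrane time constants, consistent with the in vivo measurements cited above.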
The LIF neuron achieves CPD performance similar to CUSUM without requiring an artificially imposed reflecting barrier. The top raster plot is analogous to Figure 2. Main plot: The likelihood ratio (see equation 2.6) plotted with a reflecting barrier at 1 (i.e., the CUSUM algorithm, red trace) or without it (i.e., the membrane potential of the LIF neuron, green trace). Analogous to Figure 2, threshold crossings (horizontal dotted line) are assertions of a change point, and the process resets and repeats. The inset plot is a magnified image of the dashed black box below it. Notice that the LIF neuron can decay below 1 while CUSUM cannot (due to the presence of the reflecting barrier). This discrepancy occasionally leads to differences in their threshold crossing times (yellow arrows 1–3). Bottom raster plot: Output change-point assertions, colored according to the main plot legend. Notice the occasional differences in the outputs of CUSUM CPD and the LIF neuron due to the presence or absence of the reflecting barrier.
The LIF neuron and CUSUM exhibit very similar CPD performances when the likelihood ratio and membrane potential rarely decay to low values. False alarm (left column) and detection delay (right column) waiting time histograms of the LIF neuron (green) and CUSUM algorithm (red), obtained from 10,000 independent simulations and plotted as probability densities. To facilitate fair comparison, we account for the presence or absence of the reflecting barrier by setting the CUSUM threshold one unit higher than the LIF neuron threshold. When the synaptic weight is high (first row), the CPD performances of the LIF neuron and CUSUM are practically identical. Given a small difference in input rates (second row), CPD performances are again almost identical. When the synaptic weight is low but the difference in input rates is larger (third row), we observe a significant disparity between the waiting time histograms. By raising the LIF neuron’s threshold (fourth row), the disparity in the third row is reduced. Collectively, these plots show that the CPD performances of the LIF neuron and CUSUM algorithm are virtually indistinguishable when the likelihood ratio and membrane potential rarely decay to low values. Given a low synaptic weight and a larger difference in input rates, both quantities decay more strongly to low values (see equation 2.7). In this region of parameter space, the CUSUM likelihood ratio can lie at its lower bound of 1, but the LIF neuron’s membrane potential never quite reaches its lower bound of 0, so the LIF neuron has to traverse a shorter distance than CUSUM to reach the threshold. By slightly increasing the LIF neuron’s threshold (see the third and fourth rows), we partially offset that shorter distance, and the CPD performance disparity is reduced. All histograms are overlaid with exponential distributions (solid curves) scaled to their means (colored numbers and vertical dashed lines). Notice that the exponential curves are a reasonable fit for the waiting time distributions in some regions of parameter space.
Quantifying the LIF neuron’s trade-off between false alarms and detection delays in different regions of parameter space. We computed false alarm (left column) and detection delay (right column) waiting time histograms of the LIF neuron with different parameter values. Each histogram presents results from 10,000 independent simulations. Top row: The threshold is varied while the input rates and synaptic weights are kept constant. Lower thresholds result in lower detection delays but more frequent false alarms, and vice versa. Middle row: The synaptic weight is varied while input rates and thresholds are kept constant. Higher synaptic weights cause faster threshold crossings, resulting in lower detection delays but higher false alarms. Bottom row: Input rate pairs are increased while the threshold and synaptic weight are kept constant. A higher difference between the input rates implies a faster decay of the membrane potential between input spike arrivals, which causes longer waiting times. But under a fixed threshold, we see that higher input rate pairs exhibit lower waiting times than lower input rate pairs. This observation implies that the effect of more frequent input spikes outweighs the faster decay between input spikes when all other parameters are held constant. The values of the parameters held constant in each row are reported in the left column. The values of varied parameters are reported in the legends in the right column.
The pre- and post-change jump-size functions have a direct interpretation. The form of the pre-change jump-size function (see equation 2.5, left) implies that the importance of an input spike depends on the value of the likelihood ratio, that is, on how likely it is that a change point has occurred. The more likely a change point has occurred, the more importance we attribute to new input spikes, and vice versa. The form of the post-change jump-size function (see equation 2.5, right) implies that input spikes arriving after the change point should have a higher importance attributed to them than those that arrive before. The LIF neuron’s synaptic weight specifies this discrepancy in importance before and after the change point. Of course, the LIF neuron itself does not know whether a change point occurred. But we are free to define pre- and post-change-point CPPs whose jump-size functions attribute importance to sensory spikes as described by equation 2.5. These particular forms of jump-size functions lead to a constant jump height for the likelihood ratio. The LIF neuron itself does not attribute importance to sensory spikes, but it evaluates evidence for a change point on underlying CPPs that do.
Figure 2 (main plot, green trace) plots equation 2.6 for the same inputs from the top raster plot. CUSUM exhibits the same neural characteristics shown earlier by the SPP. The likelihood ratio decays exponentially when no inputs arrive and jumps instantaneously at each arrival. Those jumps are now arithmetic (as opposed to geometric for the SPP) and are of constant size (green numbers, Figure 2). The bottom raster plots in Figure 2 show CUSUM’s change-point assertions for the SPP (pink) and CPP (green). The LIF neuron represents those change-point assertions as output spikes.
2.2 The Reflecting Barrier
The sole difference between CUSUM CPD (on a CPP with jump sizes defined by equation 2.5) and the LIF neuron is the presence of a lower bound for the test statistic. CUSUM requires its likelihood ratio to remain at or above 1 (Page, 1954) and enforces this constraint with a reflecting barrier at 1. The LIF neuron does not have such a reflecting barrier. Instead, its membrane potential is naturally bounded below by the resting potential of 0.
Figure 3 shows the effect of the lower barrier’s presence or absence on CPD performance. Similarly to Figure 2, the top raster plot of Figure 3 presents an example CPD problem on Poisson-distributed spikes from an afferent sensory neuron. The solid red trace in Figure 3 (main plot) is the likelihood ratio (i.e., equation 2.6) with a lower bound at 1. The green trace plots the same quantity for an LIF neuron given the same Poisson-distributed input spikes. The inset provides a magnified view of the boxed region in the lower-left corner of the main plot. It shows that the LIF neuron’s trace can decay below 1, while CUSUM is bounded at 1. The green and red traces jump to different values when an afferent spike arrives if they had different values at the arrival time. Therefore their threshold crossing times can differ (see Figure 3, bottom raster plots). One such threshold crossing discrepancy is highlighted by yellow arrows in Figure 3. Arrow 1 indicates that the LIF neuron had decayed below 1 when an input spike arrived. Arrow 2 shows how that decay sometimes results in the LIF neuron not crossing its threshold given subsequent input spikes, while CUSUM did because it started slightly higher, at its reflecting barrier. Arrow 3 shows that the algorithms’ outputs can therefore differ.
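The barrier's effect can be reproduced in simulation. The following is a hedged sketch with hypothetical parameters: it tracks the same decision variable with a reflecting barrier at 1 (standing in for CUSUM) and without one (standing in for the LIF neuron), using constant arithmetic jumps as in the CPP above and a CUSUM threshold one unit higher so that both processes traverse the same effective distance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters for illustration only.
lam0, lam1, w = 20.0, 40.0, 1.0       # input rates (Hz) and jump size
dt, t_end, t_change = 1e-4, 2.0, 1.0  # seconds
theta_lif = 3.0
theta_cusum = theta_lif + 1.0         # one unit higher, for fair comparison

t = np.arange(0.0, t_end, dt)
rate = np.where(t < t_change, lam0, lam1)
spikes = rng.random(t.size) < rate * dt

L, V = 1.0, 0.0                       # CUSUM statistic and LIF potential
out_cusum, out_lif = [], []
decay = np.exp(-(lam1 - lam0) * dt)   # shared exponential decay per step
for ti, spk in zip(t, spikes):
    L = max(L * decay, 1.0)           # decay, reflected at the barrier
    V = V * decay                     # decay, no barrier imposed
    if spk:
        L += w                        # constant-size (arithmetic) jumps
        V += w
    if L >= theta_cusum:
        out_cusum.append(ti); L = 1.0
    if V >= theta_lif:
        out_lif.append(ti); V = 0.0

print(f"CUSUM asserted {len(out_cusum)} change point(s), LIF {len(out_lif)}")
```

Because the no-barrier trace can decay further between spikes, the two output trains occasionally diverge, exactly the discrepancy highlighted by the yellow arrows in Figure 3.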
Figure 4 shows where in parameter space the presence or absence of the reflecting barrier affects the discrepancy between the CPD performances of CUSUM and the LIF neuron. CPD performances are quantified by their false alarm and detection delay waiting time distributions. The false alarm distribution (see Figure 4, left column) is the distribution of waiting times between threshold crossings before the change point occurs, that is, given input spikes at the pre-change rate and the pre-change jump sizes (equation 2.5, left). The detection delay distribution (see Figure 4, right column) is the analogous distribution of waiting times given that the change point has occurred, that is, given the post-change rate and jump sizes (see equation 2.5, right). Collectively, these two distributions determine the sensitivity, specificity, and speed of a CPD algorithm. The histograms for the LIF neuron (green bars) and CUSUM (red bars) in Figure 4 are plotted as probability densities from 10,000 independent simulations of false alarms and detection delays. Each simulation begins with the likelihood ratio or membrane potential on its reflecting barrier (1 or 0, respectively) and runs until the threshold is crossed. The green and red numbers and vertical dashed lines mark the means of each histogram. The green and red traces are exponential distributions with those means. To ensure a fair comparison of threshold crossing times, the threshold for CUSUM is one unit higher than that of the LIF neuron in the first, second, and third rows (their values are stated in the false alarm panels). So both algorithms have an equal effective distance between their reflecting barrier and their threshold.
The first two rows of Figure 4 show that the CUSUM algorithm and the LIF neuron implement CPD indistinguishably when the likelihood ratio and membrane potential spend little time near their reflecting barriers of 1 and 0, respectively. The first row of Figure 4 shows that the CPD performances of the LIF neuron and the CUSUM algorithm are almost identical when the synaptic weight is high. When the synaptic weight is high, the likelihood ratio and membrane potential are more likely to occupy values above their reflecting barriers. So the presence or absence of the reflecting barrier does not appreciably affect threshold crossing times. The second row of Figure 4 shows that the CPD performances of the LIF neuron and the CUSUM algorithm are again almost identical when the input rates are approximately equal. When the pre- and post-change rates are similar, the rate of decay of the likelihood ratio and membrane potential between input spikes is low (see equation 2.7). With slow decay between input spikes, both quantities are again more likely to assume values above their reflecting barriers.
The third row of Figure 4 shows that the CPD performances of the LIF neuron and CUSUM can differ in other regions of parameter space. When the threshold is low and the rate of decay is high (i.e., the input rates differ substantially), the CUSUM statistic and the membrane potential are more likely to decay to values near their reflecting barriers. The CUSUM statistic frequently occupies its lower bound of 1, but the membrane potential of the LIF neuron never quite reaches its lower bound of 0. So the LIF neuron maintains a shorter distance to its threshold than CUSUM does and thus crosses its threshold more quickly. The third row of Figure 4 shows that the LIF neuron detects change points much faster than the CUSUM algorithm, but at the cost of more frequent false alarms. The fourth row of Figure 4 shows one way to reduce this disparity in CPD performance. We increased the LIF neuron's threshold from 2.5 to 3, keeping all other parameters the same as in the third row. This increase reduced false alarms and increased detection delays for the LIF neuron. Its false alarm distribution becomes almost identical to that of CUSUM (fourth row, left panel). The detection delay distributions still exhibit a disparity (fourth row, right panel), albeit less than in the third row. The LIF neuron can detect a change point noticeably faster than CUSUM on average and at a similar false alarm rate (on our CPP, and in this region of parameter space).
2.3 The Performance of an LIF Neuron as a Change-Point Detector
Figure 5 evaluates the CPD performance of the LIF neuron for the CPP (see equations 2.4 and 2.5) in different regions of parameter space. Again, we quantified CPD performance with false alarm and detection delay waiting time histograms obtained via simulation. Each histogram in Figure 5 was plotted as a probability density from 10,000 independent simulations, analogous to the histograms in Figure 4.
In the top row of Figure 5, we varied the threshold while keeping the input rates and the synaptic weight constant. The threshold increased from 4.5 to 5.5 in three equally spaced steps (blue, purple, and yellow bars, respectively). We see that increasing the threshold reduces false alarm frequency at the cost of higher detection delays, because more input spikes are needed to reach the threshold. Random bursts of inputs are less likely to cause the LIF neuron to falsely assert a change point, that is, false alarm waiting times are more likely to be higher. But the LIF neuron also requires more time to accumulate enough evidence to spike after the change point, that is, detection delays are also more likely to be higher.
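This speed-accuracy trade-off can be checked with a small parameter sweep. The sketch below uses the same illustrative assumptions as before (simple Poisson inputs at assumed rates of 2 Hz and 6 Hz, a log-space CUSUM-style statistic), not the exact model behind Figure 5:

```python
import numpy as np

rng = np.random.default_rng(0)

def waiting_time(input_rate, lam0, lam1, h):
    # Log-space CUSUM-style statistic: reflecting barrier at 0,
    # jump of log(lam1/lam0) per input spike, linear decay in between.
    t, s = 0.0, 0.0
    while s < h:
        dt = rng.exponential(1.0 / input_rate)
        s = max(s - (lam1 - lam0) * dt, 0.0) + np.log(lam1 / lam0)
        t += dt
    return t

lam0, lam1 = 2.0, 6.0              # assumed pre-/post-change input rates
fas, dds = [], []
for h in (4.5, 5.0, 5.5):          # threshold values as in Figure 5, top row
    fas.append(np.mean([waiting_time(lam0, lam0, lam1, h) for _ in range(800)]))
    dds.append(np.mean([waiting_time(lam1, lam0, lam1, h) for _ in range(800)]))
# Raising the threshold lengthens both waiting times: false alarms
# become rarer, but detection also becomes slower.
```

The false alarm waiting times grow much faster with the threshold than the detection delays do, which is why a moderate threshold increase buys reliability at a modest cost in speed.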
In Figure 5 (middle row), we varied the synaptic weight while keeping the input rates and threshold constant. The synaptic weight increased from 1.5 to 2.5 in three equally spaced steps (red, yellow, and green bars, respectively). We see that increasing the synaptic weight increases the false alarm frequency and lowers the detection delay. A higher synaptic weight allows the membrane potential to reach threshold with fewer inputs. Notice that increasing the synaptic weight has an effect on CPD performance analogous to that of decreasing the threshold.
In the bottom row, we varied the input rates (dark red, gray, and dark green bars) while keeping the synaptic weight and threshold constant. Notice that as both input rates increase, the difference between them increases as well (lower-right panel's legend, Figure 5). Increasing input rates should reduce the LIF neuron's waiting time to threshold crossing both before and after the change point, all else being equal. However, increasing the difference between input rates should increase those waiting times because the time constant is smaller (see equation 2.7), so the membrane potential decays faster between input spike arrivals. The bottom row of Figure 5 shows that higher input rates reduce waiting times more than a larger difference between input rates increases them, at least for these parameter values. This observation holds for both false alarm and detection delay waiting time histograms. The bottom row of Figure 5 can also be interpreted as illustrating the effect of the LIF neuron receiving input spikes from different numbers of independent and identical input neurons. The sum of independent Poisson processes is itself a Poisson process with a rate equal to the sum of the rates of the summands (Pishro-Nik, 2014). Say the dark red bars are the false alarm or detection delay waiting times of an LIF neuron with one input neuron spiking at the stated pre- or post-change rate. The gray and dark green bars then show how those waiting times change if two or four independent and identical input neurons, respectively, feed input spikes to one LIF neuron. The bottom row of Figure 5 shows that increasing the number of input neurons decreases detection delays at the cost of increasing false alarm frequency.
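The superposition property invoked here is easy to verify numerically. In this sketch the rates and duration are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Merge n independent Poisson spike trains of rate r each; the result
# should be a Poisson train of rate n * r (rates and duration arbitrary).
n, r, T = 4, 2.0, 5000.0
trains = [np.cumsum(rng.exponential(1.0 / r, size=int(3 * r * T)))
          for _ in range(n)]
merged = np.sort(np.concatenate(trains))
merged = merged[merged < T]

empirical_rate = merged.size / T    # should be close to n * r = 8 Hz
isi = np.diff(merged)
cv = isi.std() / isi.mean()         # ~1 for exponential waiting times
```

The merged train's rate matches the summed rate, and the coefficient of variation of its interspike intervals is close to 1, as expected for a Poisson process.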
2.4 The LIF Neuron Amplifies the Difference of Its Input Rates
Figure 6 examines the effect of the threshold on the average LIF neuron false alarm and detection delay waiting times. It explores trends in these average waiting times across different percent differences between the pre- and post-change input rates. Figure 6 shows that the LIF neuron can increase the percent difference of its output rates relative to its input rates. This relative gain would be computationally useful if the output spikes of that LIF neuron were used as input to another CPD algorithm (i.e., another LIF neuron downstream). That downstream LIF neuron could more quickly and reliably detect a change point because its input rates exhibit a larger percent difference than the inputs of the upstream LIF neuron did. Figure 6 shows that this gain in percent difference is maximized at a particular threshold value, which depends on the specific values of the input rates.
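This amplification can be illustrated with the same kind of sketch as before. The rates and threshold are illustrative, and the percent-difference convention below (relative to the smaller value) is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_passage(input_rate, lam0, lam1, h, n_sims=500):
    # Mean first-passage time of a log-space CUSUM-style detector
    # (jump log(lam1/lam0) per spike, linear decay, barrier at 0).
    total = 0.0
    for _ in range(n_sims):
        t, s = 0.0, 0.0
        while s < h:
            dt = rng.exponential(1.0 / input_rate)
            s = max(s - (lam1 - lam0) * dt, 0.0) + np.log(lam1 / lam0)
            t += dt
        total += t
    return total / n_sims

def pct_diff(a, b):
    return 100.0 * abs(a - b) / min(a, b)

lam0, lam1, h = 2.0, 6.0, 4.0           # a 200% input-rate change (assumed)
mean_fa = mean_passage(lam0, lam0, lam1, h)  # pre-change output rate: 1/mean_fa
mean_dd = mean_passage(lam1, lam0, lam1, h)  # post-change output rate: 1/mean_dd

input_gain = pct_diff(lam0, lam1)            # 200%
output_gain = pct_diff(1.0 / mean_fa, 1.0 / mean_dd)
# The detector's output rates differ by more than its input rates do.
```

Under these assumptions, the percent difference between the detector's pre- and post-change output rates exceeds the 200% difference of its inputs, which is the gain a downstream detector would benefit from.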
The LIF neuron increases the pre- and post-change-point percent difference of its output rates relative to its input rates. The percent difference between the mean false alarm and detection delay waiting times for an LIF neuron is plotted against a range of threshold values on the horizontal axis. Blue, beige, and green classes represent a very small (5%), small (20%), and large (200%) input rate change point, respectively. Dark-tone, mid-tone, and light-tone traces in each color represent 1, 100, and 1000 independent and identical sensory neurons, each spiking at the rates indicated by their class, feeding inputs to the LIF neuron. In each trace, the percent difference between the mean false alarm and detection delay waiting times is maximized for a particular value of the threshold. Blue, beige, and green numbers in the tables report those mean waiting times at the optimal threshold, colored accordingly. In all traces, the optimal threshold and the percent difference between the mean waiting times at that threshold increase as the number of input sensory neurons increases. The inset (top right) further explores this trend. The mean false alarm and detection delay waiting times are plotted against orders of magnitude of the number of input neurons. We fixed the LIF neuron threshold at the optimum of the light-tone blue trace and fixed the input rates of each input neuron. Mean false alarm (diamond markers, gray) and detection delay (cross marks, red) waiting times are plotted, and the blue numbers report the percent difference between them. Observe the decline of both mean waiting times and the increase of the percent difference between them as the number of input neurons increases.
In each color class, we also studied the effect of increasing the number of independent and identical input neurons on the gain. From section 2.3, we saw that increasing the number of independent and identical input neurons is equivalent to increasing the input rates to the LIF neuron multiplicatively. For example, if the input rates are 2 Hz before and 6 Hz after the change point, then the input spikes from 100 independent and identical sensory neurons are equivalent to one sensory neuron spiking at 200 Hz or 600 Hz. Figure 6 reports the gain of the LIF neuron across threshold values given 1 (dark-tone traces), 100 (mid-tone traces), and 1000 (light-tone traces) independent and identical sensory neuron afferents. The legend above Figure 6 reports the input rates for each trace in the main plot.
In the green class (top left, Figure 6 main plot), observe that the gain increases from zero to a maximum value as the threshold is increased up to its optimal value. As the threshold increases toward this optimum, the frequency of false alarms falls because the membrane potential requires more input spikes in a given time window to reach threshold. A higher threshold also increases the detection delay, but less than it increases false alarm waiting times. Collectively, the gain increases as the threshold increases to its optimum. As the threshold is increased past the optimum, we see a sharp decline in the gain. While false alarms continue to become rarer as the threshold increases, detection delays also rise significantly. The percent difference between the mean waiting times falls back toward zero, so the gain eventually becomes negligible. This trend is consistent across different numbers of input neurons (medium-tone and light-tone green traces). As the number of input neurons increases, the maximum possible gain also increases and is achieved at a higher threshold. With an increased number of input neurons, false alarms become more frequent and require higher thresholds to filter them out. The optimal threshold for each trace is stated in the figure (colored dashed lines, bottom). The mean false alarm and detection delay waiting times at the optimal threshold for each trace are also stated alongside the traces, colored accordingly.
The beige class (middle left, Figure 6) demonstrates trends similar to those of the green class. One key difference between the green and beige classes is that the optimal threshold is larger. Since the difference between input rates is smaller in the beige class, the time constant of the LIF neuron is larger (see equation 2.7). Therefore the decay of its membrane potential is slower, and the LIF neuron requires a higher threshold to filter out the same number of false alarms, all else being equal. Another key difference between the classes is that the gains in the beige class are smaller than those of the green class. CPD amplifies the percent difference more when the input rates already exhibit a high percent difference. Raising the threshold reduces false alarms and increases detection delays, but these effects depend on the difference between input rates. When the difference between input rates is high (e.g., the green class), raising the threshold quickly filters out false alarms without significantly affecting detection delay; the gain of the LIF neuron increases quickly as we raise the threshold. But when the difference between input rates is lower (e.g., the beige class), raising the threshold filters false alarms and delays detection more evenly; the gain of the LIF neuron increases more slowly as we raise the threshold.
The blue class (bottom, Figure 6) maximizes its gain at even higher threshold values. This result follows from the same reasoning as for the beige class, but with an even smaller difference between input rates. Notice that as the input rate difference decreases (from the green to beige to blue classes), the maximum possible gain at the optimal threshold also decreases. For example, consider the medium-tone traces in each class, which represent 100 independent and identical input neurons to the LIF neuron. The maximum gains of those three traces decrease from the green to the beige to the blue class. The smaller the percent difference in input signals, the less gain the LIF neuron can achieve for a fixed number of input neurons. But the dark-tone, mid-tone, and light-tone traces in each class suggest that we can amplify the gain by connecting more independent and identical input neuron afferents to the LIF neuron.
The inset in Figure 6 (top right) reports the effect of increasing the number of independent and identical input neurons, each with a 5% difference in input rates, on the mean false alarm and detection delay waiting times of an LIF neuron. The horizontal axis represents the number of input neurons, and the vertical axis represents the mean waiting times on a log-log scale (false alarms in gray, detection delays in red). The threshold was fixed at 31.29 (i.e., the optimal threshold of the light-tone blue trace). The gain is stated in blue above each pair of mean waiting times. Notice that the mean waiting times drastically decrease, and the gain increases, with the number of input sensory neurons. The inset in Figure 6 suggests that the LIF neuron can amplify very small differences in input rates given enough input neurons. More input neurons expedite detection of a change point at the cost of causing more frequent false alarms. In principle, false alarms can be filtered out by passing the output of the LIF neuron into another one downstream that runs CPD on it. So it might be possible for a network of LIF neuron layers to implement quick CPD on very small changes in input rates with very few false alarms.
2.5 Feedforward LIF Neural Networks Can Achieve Quick, Reliable CPD on Small Shifts in Input Spiking Rates
Figure 7 shows that a feedforward LIF neural network can quickly detect tiny changes in Poisson rates with very rare false alarms. Each layer of the network increases the percent difference of its output rates relative to its input rates, as observed in section 2.4. By iterating this computation, the percent difference grows as we propagate down the layers of the network. At the deciding node (i.e., the bottom of the network in Figure 7), the percent difference can be very high. Therefore this simple feedforward network can implement quick yet reliable CPD, even if the percent difference at the sensory input layer is tiny.
A feedforward LIF neural network can quickly and reliably detect very small changes in Poisson input spiking rates given enough neurons in the network. Left: Schematic of a feedforward LIF neural network receiving inputs from a sensory layer (deep blue neurons, top). It comprises six hidden layers (red through gray neurons) and a deciding LIF neuron node (green neuron, bottom). Each neuron receives inputs from 10 neurons in the layer above it, with a synaptic weight of 1. The black numbers to the left state the number of neurons in each layer. Right: Neurons in the sensory layer spike with a Poisson rate of 2 Hz or 2.1 Hz before or after the change point, respectively (blue numbers, top row, right column). The plots show false alarm (FA, left column) and detection delay (DD, right column) waiting time histograms for each neuron in a given layer. As in Figure 4, each histogram is fitted with an exponential curve (solid green and red traces) scaled to the mean false alarm (green numbers) or mean detection delay (red numbers) waiting time. These distributions show that the outputs of each layer are approximately Poisson distributed; those outputs can therefore be input into another layer of LIF neurons that runs CPD on them. The green numbers at the bottom report the mean false alarm and detection delay waiting times for the deciding node. The network detects small changes in sensory rates within 20 ms, with false alarms occurring only once every 400,000 seconds (about 4.6 days) on average.
The network illustrated in Figure 7 (left) comprises eight layers. The first layer (in blue) is a sensory input layer that generates Poisson inputs at 2 Hz before a change point and 2.1 Hz after it. The percent difference in rates of each sensory neuron is thus very small (5%). Following the sensory layer are six hidden layers of LIF neurons (indicated by the yellow bar at the left of Figure 7). The number of neurons in each layer is shown on the left (bold, black numbers). Every neuron, except those in the sensory layer, receives input from 10 neurons in the previous layer. Every input has a synaptic weight (see equation 2.5) of 1. A neuron in an upstream layer feeds spikes to only one neuron in the downstream layer, so the spiking outputs of all neurons in a layer are conditionally independent of one another. The thresholds for each layer are shown on the right. Alongside each layer, we show the false alarm and detection delay waiting time distributions of any individual neuron in that layer (right side of Figure 7). Each distribution is fitted with an exponential curve (solid green and red traces) scaled to the mean of that distribution. The input rates for the next layer are the reciprocals of the mean waiting times of the layer above it.
The exponential curves fitted to the waiting time distributions (see Figure 7, right panels) show that the outputs of each LIF neuron in a layer are approximately Poisson distributed. Recall that in Figure 4 (second row), the exponential curve did not accurately fit the output waiting time distributions given small differences in the input rates. The plots in Figure 7 (right side) show that the goodness of fit of the exponential distribution to the waiting time distributions can be improved by increasing the number of afferents and the threshold of the LIF neuron. When the waiting times to output spikes of an LIF neuron are accurately approximated by an exponential distribution, CPD can be implemented recursively through the layers of a network. The plots in Figure 7 show that as we propagate down the layers of the network, the mean false alarm and detection delay waiting times increase for each neuron in that layer. We then expedite detection by connecting each neuron in a layer to 10 afferents upstream. The final "deciding node" LIF neuron (green neuron and green numbers, bottom) detects a 5% change in input rates within approximately 20 ms on average, with false alarms occurring every 400,000 seconds (about 4.6 days) on average. This LIF neuron network can swiftly yet reliably detect tiny changes in input rates.
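The recursion can be sketched in miniature with two layers. All numbers here are assumptions chosen to keep the simulation fast: a large (200%) rate change at the sensory layer rather than the 5% change of Figure 7, 10 afferents per neuron, and arbitrary thresholds. The key step is approximating each layer's output as Poisson at the reciprocal mean waiting times:

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_passage(input_rate, lam0, lam1, h, n_sims=200):
    # Mean first-passage time of a log-space CUSUM-style detector
    # (jump log(lam1/lam0) per spike, linear decay, barrier at 0).
    total = 0.0
    for _ in range(n_sims):
        t, s = 0.0, 0.0
        while s < h:
            dt = rng.exponential(1.0 / input_rate)
            s = max(s - (lam1 - lam0) * dt, 0.0) + np.log(lam1 / lam0)
            t += dt
        total += t
    return total / n_sims

# Layer 1: each detector sees Poisson input at 2 Hz (pre) / 6 Hz (post).
lam0, lam1, h1 = 2.0, 6.0, 3.0
fa1 = mean_passage(lam0, lam0, lam1, h1)
dd1 = mean_passage(lam1, lam0, lam1, h1)

# Recursion step: a layer-2 detector receives the merged outputs of 10
# independent layer-1 detectors, approximated as Poisson processes at
# the reciprocal mean waiting times of layer 1.
mu0, mu1, h2 = 10.0 / fa1, 10.0 / dd1, 7.0
fa2 = mean_passage(mu0, mu0, mu1, h2)
dd2 = mean_passage(mu1, mu0, mu1, h2)
# Downstream false alarms become much rarer while detection stays fast.
```

Even in this tiny two-layer sketch, the downstream detector's mean false alarm waiting time is far longer than that of a single layer-1 detector, illustrating how successive layers filter out false change-point assertions.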
3 Discussion
We established that the LIF neuron implements online CUSUM CPD for a specific family of compound Poisson processes. This result allows us to interpret salient neural features in a rigorous statistical framework, at least for the LIF neuron:
Afferent spikes are input for a CPD algorithm (see Figure 1, top raster plot) that detects changes in their statistical properties.
The voltage of the LIF neuron represents a likelihood ratio of those inputs (see Figure 1, main panel).
The resting potential of the LIF neuron is the reflecting barrier of the likelihood ratio.
Synaptic weights correspond to the change in jump heights of an underlying compound Poisson process before and after the change point.
The LIF neuron’s time constant determines the likelihood ratio’s rate of decay between input spikes.
Its threshold determines a speed-accuracy trade-off for change-point assertions.
Its output spikes (see Figure 1, bottom raster plot) are its assertions that it has detected a change point at that moment in time.
These output spikes can be input to another CPD algorithm, that is, CPD can be implemented recursively by successive layers of LIF neurons.
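For simple Poisson inputs, the correspondence in this list can be written compactly. The following is a standard-form sketch with illustrative symbols (λ₀ and λ₁ for the pre- and post-change rates, N_t for the input spike count), not a reproduction of the document's equations 2.1 through 2.7:

```latex
% Likelihood ratio of a Poisson spike train with rate \lambda_1 (change)
% versus \lambda_0 (no change), observed over [0, t] with N_t spikes:
\Lambda_t = \left(\frac{\lambda_1}{\lambda_0}\right)^{N_t}
            e^{-(\lambda_1 - \lambda_0)\, t}.
% In log space, each input spike adds \ln(\lambda_1/\lambda_0) (the
% synaptic weight), and the statistic decays at rate \lambda_1 - \lambda_0
% between spikes, giving an effective time constant
\tau = \frac{1}{\lambda_1 - \lambda_0},
% consistent with equation 2.7. The CUSUM reflection of \Lambda_t at 1
% corresponds to the resting potential (a log-space barrier at 0), and a
% threshold crossing corresponds to an output action potential.
```

Under this sketch, every bullet above maps to one term of the recursion: inputs drive the jumps, the leak implements the decay, and the threshold sets the speed-accuracy trade-off.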
Many interpretations of neural action potentials have been proposed over the past half-century (Eggermont, 1998; Monk et al., 2024). There is no consensus on how nervous systems use them to represent the world and make decisions. The predominant interpretation of action potentials in computational neuroscience is that trains of them represent information about the environment via some cryptic code (Kumar et al., 2010; Knutsen & Ahissar, 2009; O’Keefe & Burgess, 2005). For example, rate coding suggests that the frequency of action potentials “encodes” stimulus information (Gautrais & Thorpe, 1998; McDonnell & Stocks, 2008). Interspike interval coding claims that the waiting times between action potentials “encode” stimulus information (Van Rullen & Thorpe, 2001; Johansson & Birznieks, 2004). Population coding argues that neurons are noisy, so they should only be studied at a systems or networks level (Saxena & Cunningham, 2019; Quian Quiroga & Panzeri, 2009). Phase coding asserts that the time of a neuron’s action potential relative to some background activity “encodes” stimulus information (Kayser et al., 2009). Our results suggest that each individual action potential is a neuron’s assertion that some spiking statistic(s) of its input neurons have changed at that moment in time. Statistics of a spike train (e.g., rate, phase, or waiting times) are important in the sense that animals need to detect when they change. But these statistics are not “encoding” anything (Brette, 2019). Instead, each action potential is the output of a neuron’s computation that it executed by comparing its membrane potential to a threshold.
Animals and their nervous systems can quickly and accurately detect small changes in sensory spiking behavior (Knudsen, 1981; Polis, 1979; Sourakov, 2009). For mathematical simplicity, we assumed that sensory spikes were Poisson distributed (Dayan & Abbott, 2005). For certain parameter values, the output spikes of an LIF neuron were also approximately Poisson distributed (see Figures 4 and 7). Therefore those output spikes can be inputs to another downstream LIF neuron, which implies that CPD algorithms can be implemented recursively. Successive layers of a feedforward network filter out false-positive change-point assertions from the layer above them, at the cost of increasing detection delay. But detection delay can be reduced by feeding inputs from many independent and identical afferent neurons to the same efferent neuron. Since the sum of independent and identical Poisson random variables is itself Poisson, that efferent neuron still implements CUSUM CPD on the sum of its inputs. As more independent afferent neurons are connected to the efferent neuron, the change in the sum of the input rates before and after the change point increases. All else being equal (e.g., the threshold), the false alarm rate and detection delays are reduced because a change point is easier to detect. Simple feedforward networks of thresholded biological cells are natural, principled solutions to the problem of quickly and reliably detecting changes in sensory spiking behavior.
CPD problems can be considered for more general input stochastic processes with a different likelihood ratio. For example, the likelihood ratio for an SPP (see equation 2.1) includes an exponential decay term, which we mapped to the passive leak current of an LIF neuron. If the waiting time distribution for input spikes were inverse-gaussian instead, then the likelihood ratio would decay differently between input spikes and would have a different update rule when input spikes arrive. A different electrophysiological model of a neuron that performs online CUSUM CPD for this specific problem could thus be derived. This observation implies that CUSUM CPD can be a recursive algorithm even if the waiting time distributions of output and input spikes do not share the same form. Say our waiting time distributions for input spikes before and after a change point are the false alarm and detection delay waiting times of the inverse-gaussian CPD neuron. Then we can construct a likelihood ratio and derive another neuron that receives the output of the inverse-gaussian CPD neuron as input. We conjecture that nervous systems recursively implement CPD in this manner. If so, and if neural membrane potentials represent some likelihood ratio of their inputs, then a testable prediction immediately follows: the electrophysiological properties of an efferent neuron's membrane must be related to the spiking statistics of its afferent inputs. We showed a simple example of this principle, specifically that the time constant of an LIF neuron is inversely related to the difference in its input rates before and after a change point (see equation 2.7). More generally, for a CPD problem that a given neuron needs to solve, we can predict its electrophysiological properties. Conversely, if we know the membrane electrophysiology of a neuron, then we can infer something about the CPD problem that it solves.
This relationship between afferent spiking statistics and efferent electrophysiology could be verified in at least two ways. First, the literature can be reviewed for examples of afferent-efferent neuron pairs whose spiking statistics and electrophysiological properties have been modeled in detail (Knudsen, 1981; Monk & van Schaik, 2020; Contreras et al., 2001; O'Keefe & Burgess, 2005; Villette et al., 2019; Broussard et al., 2014). Computational neuroscience has a long history of constructing both neural electrophysiological models (Koch, 1993; D'Angelo & Jirsa, 2022) and statistical models of their spiking behavior (Perkel et al., 1967; Gabbiani & Koch, 1998). We are unaware of theoretical explanations for why these models should be related for afferent-efferent neuron pairs connected by a synapse. For example, it is well known that barn owl auditory nerve fibers produce (approximately) Poisson-distributed spikes at rates up to approximately 100 Hz in silence and approximately 300 Hz given a pure-tone sound stimulus (Neubauer et al., 2009). It is also well known that one to five auditory nerve fibers feed spikes into nucleus magnocellularis efferent neurons (Carr & Boudreau, 1991; Oline et al., 2016) whose time constants range from 1 ms to 5 ms (Fukui & Ohmori, 2004; Gerstner et al., 1996; Fontaine & Brette, 2011). However, it has not been proposed that the time constants of nucleus magnocellularis neurons are related to the spiking rates and number of afferent auditory nerve fibers, that is, 1/(300 Hz - 100 Hz) = 5 ms and 1/(5(300 Hz - 100 Hz)) = 1 ms. Second, this relationship could be measured directly in slice preparations. Say we have a barrel cortex slice preparation from a mouse (Shlosberg et al., 2012; Buskila et al., 2013, 2014) and perform the experiment reported by Buskila and Amitai (2010).
Namely, we identify a dendrite from layer 2/3 and insert a recording electrode into it, and separately drive the spiking of a neuron from layer 4 with a stimulating electrode. That stimulating electrode is then moved until the recording electrode observes the postsynaptic potentials being driven. The neurons in layers 4 and 2/3 must then be synaptically connected. If we change the spiking statistics of the layer 4 neuron, then the electrophysiological properties of the layer 2/3 neuron’s membrane should change (Buskila et al., 2013). For example, the spiking rate of the layer 4 neuron may be inversely related to the dendritic time constant of the layer 2/3 neuron, as equation 2.7 suggests.
We wonder whether any electrophysiological neuron model can implement a particular online CPD algorithm on its inputs. This article showed two examples of this principle: an LIF neuron with synaptic weights that are proportional to its membrane potential implements CPD on the SPP (see equations 2.1 and 2.2), and an LIF neuron with constant synaptic weights implements CPD on a specific family of CPPs (see equations 2.4 and 2.6). These examples show that some simple neuron models can solve some simple CPD problems. The CPD problems that we consider here are particularly restrictive. We assume that input spikes are strictly excitatory and Poisson-distributed at one of two possible input rates. More complicated neural inputs like lateral inhibition (Mu & Ploog, 1981), self-excitation (Steriade et al., 1998; Rolls, 2021), or feedback loops (Kennedy & Bullier, 1985; Miller, 2003) cannot be modeled by excitatory SPPs. In early neural sensory pathways, our mathematical restrictions can still be reasonable approximations of neural spiking behavior and physiology. For example, barn owl auditory afferents fire approximately Poisson-distributed excitatory spikes (Neubauer et al., 2009), and the next neuron in that pathway is reasonably modeled as an LIF neuron (Huo, 2010). Farther down neural pathways, for example, in cortical regions, these assumptions break down. For example, neurons in the cat's primary visual cortex (V1) integrate inputs over temporal patterns to respond to specific features (Guitchounts et al., 2020; Freeman, 2021). They also receive feedback from higher cortical areas and lateral inputs from neighboring neurons (Pan et al., 2021; Ding et al., 2022), contributing to more complex computations and context-dependent processing (Sengupta et al., 2021; Ding et al., 2022). Our mathematical simplifications cannot account for such complexity. But it might be possible to relax many of our mathematical restrictions, if not all of them.
Extensions of the likelihood ratio to spatiotemporal and history-dependent point processes have been rigorously derived and studied (Reinhart, 2018; Dachian & Kutoyants, 2006). They maintain a form similar to the one presented here (see equation 2.4) because their construction derives from inhomogeneous Poisson measures (Diggle, 2013). This similarity in form suggests that it is possible to radically generalize the restrictive CPD problem that we studied here. The two measures compared by the likelihood ratio can represent anything from scalar values to exotic functions resembling neural receptive fields (Park & Pillow, 2011, 2013). For example, one measure could represent background noise, while the other could represent specific spatiotemporal patterns of inputs to a dendrite. Notice that both measures are now functions of space and time. They represent the intensity of a generic point process with a known distribution (including, but not limited to, the Poisson distribution). The measures can also be self-exciting, where past activity influences future responses (e.g., a Hawkes process; Reinhart, 2018). Spatiotemporal, self-exciting intensities are very broad measures for point processes, and we can define a CUSUM algorithm to run CPD on two such measures.
The likelihood ratio for this broad family of point processes shares the same structure as our equation 2.1 despite these significant extensions (Dachian & Kutoyants, 2006; Reinhart, 2018). This observation suggests that we can derive a wider variety of neural models and electrophysiological features from a generalized version of our CPD framework. For example, inhibitory spikes emerge from our CPD framework by allowing the jump size function to take negative values. Feature selectivity can emerge by assigning greater importance to input spikes that fit some particular spatiotemporal pattern. Neural feedback loops can be modeled as self-exciting point processes, as in cortex (Mei & Eisner, 2017; Garnier et al., 2015; Cools, 1987). Combining these extensions could map complex neural computations to CPD problems and vice versa. Generalizing our CPD framework could thus provide a rigorous probabilistic interpretation for a much broader range of neural electrophysiological models and networks, moving beyond simple stimulus detection tasks like the one shown in Figure 7.
Acknowledgments
This work was partially funded by the DFG within the Cluster of Excellence EXC 1077/1 – project 390895286. We also thank Michael G. Paulin and Angel Mario Castro Martinez for useful discussion.