## Abstract

Integrate-and-fire (IF) neurons are time encoding machines that convert the amplitude of an analog signal into a nonuniform, strictly increasing sequence of spike times. Under certain conditions, the encoded signals can be reconstructed from the nonuniform spike time sequences using a time decoding machine. Time encoding and time decoding methods have been studied using the nonuniform sampling theory for band-limited spaces, as well as for generic shift-invariant spaces. This letter proposes a new framework for studying IF time encoding and decoding by reformulating the IF time encoding problem as a uniform sampling problem. This framework forms the basis for two new algorithms for reconstructing signals from spike time sequences. We demonstrate that the proposed reconstruction algorithms are faster, and thus better suited for real-time processing, than the standard reconstruction algorithm, while providing a similar level of accuracy.

## 1 Introduction

The integrate-and-fire (IF) neuron, one of the most common models for describing the behavior of spiking neurons (Lapicque, 1907; Tuckwell, 1988), is a type of time encoding machine (TEM)—an operator that maps a real-valued function *u*, belonging to a space *H*, to a strictly increasing sequence of real-valued spike times.

TEMs were originally introduced and studied by Lazar and Tóth (2003) in the context of band-limited signals. They proposed and evaluated practical algorithms for reconstructing band-limited signals from TEM sequences with arbitrary accuracy (Lazar, 2004; Lazar & Tóth, 2004a, 2004b; Lazar & Pnevmatikakis, 2008).

Using the nonuniform sampling framework developed by Aldroubi and Gröchenig (2001), Aldroubi and Feichtinger (1998), Feichtinger, Gröchenig, and Strohmer (1995), Gröchenig (1992, 1993), and Gröchenig and Schwab (2003), Gontier and Vetterli (2014) extended the results of Lazar and Tóth (2003) to a broader class of functions belonging to shift-invariant spaces (Unser, 2000).

A characteristic of time encoding is that the sampling times depend on function *u*. As a consequence, the input is reconstructed in a space spanned by a frame consisting of functions that depend on the sampling times (Lazar & Pnevmatikakis, 2008; Gontier & Vetterli, 2014). When reconstructing from several sequences, this frame needs to be recalculated for every sequence *i*. For large values of *N*, this process becomes computationally demanding. Fast algorithms, which involve approximating the input with a periodic function, have been proposed by Lazar, Simonyi, and Tóth (2005, 2008). However, these algorithms solve a new linear system for every sequence of sampling times and thus introduce a significant delay in computation for large values of *N*.

This letter introduces a new framework for studying time encoding and decoding by reformulating the nonuniform sampling operation performed by the IF-TEM, represented by an ideal IF model, as an equivalent uniform sampling operation applied to a transformed signal. This framework is particularly useful because it enables the application of the system modeling and analysis tools already available for uniformly sampled systems.

We exploit this framework to develop two new reconstruction algorithms that are significantly faster than the one proposed by Lazar and Pnevmatikakis (2008). Through numerical simulations, we show the advantage of our framework.

The letter is structured as follows. Section 2 gives an overview of time encoding and decoding on band-limited spaces and introduces the new theoretical framework for analyzing the IF-TEM, which shows how input *u* can be transformed uniquely into an auxiliary function whose uniform samples are equal to the IF-TEM output. Furthermore, we use this framework to develop a fast reconstruction algorithm for band-limited signals. Section 3 reviews frame theory for shift-invariant spaces and describes the iterative algorithm in Gontier and Vetterli (2014). We use this method to generalize the noniterative reconstruction algorithm in Lazar and Tóth (2004b) to shift-invariant spaces and develop a second algorithm to reconstruct input *u* via the auxiliary function. The reconstruction speeds of the two proposed algorithms are compared with that of the algorithm in Lazar and Pnevmatikakis (2008), through numerical simulations, in section 4. Conclusions are given in section 5.

## 2 Time Encoding and Decoding Using IF Neurons in Paley-Wiener Spaces

### 2.1 Direct Reconstruction from Nonuniform Samples

An IF-TEM with associated test functions generates a sampling sequence when presented with an input signal, as follows. The running integral of the biased input is accumulated until it reaches a fixed threshold, at which point a spike is emitted and the integrator is reset; this defines an operator mapping function *u* onto the real axis, parameterized by the threshold, integration constant, and bias of the neuron. It follows that this operator satisfies a relation between the input and consecutive spike times, equation 2.1, also known as the *t*-transform (Lazar & Pnevmatikakis, 2008). Without reducing the generality, a simplifying assumption on these parameters is made in what follows.
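As a concrete illustration of the *t*-transform, the following sketch simulates an ideal IF-TEM numerically. All names and parameter values (`delta` for the threshold, `kappa` for the integration constant, `b` for the bias, and the toy sinusoidal input) are our own illustrative choices, not the letter's notation:

```python
import numpy as np

def if_tem_encode(u, T, delta=0.05, kappa=1.0, b=2.0, dt=1e-4):
    """Encode u on [0, T] with an ideal IF neuron.

    A spike is emitted each time the running integral of
    (u(s) + b) / kappa since the last spike reaches delta; this is
    the t-transform: int_{t_k}^{t_{k+1}} u ds = kappa*delta - b*(t_{k+1}-t_k).
    """
    integ, spikes = 0.0, []
    for ti in np.arange(0.0, T, dt):
        integ += (u(ti) + b) / kappa * dt
        if integ >= delta:
            spikes.append(ti)
            integ -= delta          # ideal reset: keep the overshoot
    return np.array(spikes)

# Toy band-limited input with sup|u| < b, so the integral is increasing.
u = lambda t: np.sin(2 * np.pi * 5 * t)
tk = if_tem_encode(u, T=1.0)
```

With these values the mean of the biased input is 2, so roughly `2 / delta = 40` spikes are produced per second, and the spike sequence is strictly increasing by construction.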

Lazar and Tóth (2004b) proposed the following algorithm for reconstructing the input of an IF-TEM.

See the proof of corollary 2 in Lazar and Pnevmatikakis (2008).

From a computational point of view, the main disadvantage of this reconstruction approach is that a new set of frame functions, a new matrix, and its pseudoinverse have to be calculated for every sequence of sampling times. In section 2.2, we propose a new reconstruction framework that addresses this issue.

### 2.2 A Novel Framework of Encoding and Decoding for IF Neurons

For a generic input, the IF-TEM generates a nonuniform sequence of spike times. We show that an IF-TEM with fixed parameters is a uniform sampler, with uniformly spaced sampling points, of an auxiliary function uniquely determined from input *u*. The following theorem establishes a number of important properties of this function:
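To make the claim concrete, the following sketch (our own toy instance; `y` is taken as the running integral of the biased input divided by the integration constant, an assumption consistent with the ideal IF model) checks numerically that the spike times are uniform samples of the inverse of a strictly increasing auxiliary function, and that the *t*-transform holds on every inter-spike interval:

```python
import numpy as np

# delta: threshold, kappa: integration constant, b: bias; b > sup|u|
# keeps y strictly increasing.  All values are illustrative.
delta, kappa, b = 0.05, 1.0, 2.0
u = lambda t: np.sin(2 * np.pi * 5 * t)

t = np.linspace(0.0, 1.0, 200_001)
dt = t[1] - t[0]
y = np.cumsum((u(t) + b) / kappa) * dt      # auxiliary y(t), increasing

# Uniform samples of y^{-1}: spike times t_k = y^{-1}(k * delta),
# obtained here by inverting y with linear interpolation.
s = np.arange(delta, y[-1], delta)
tk = np.interp(s, y, t)

# Check the t-transform on each inter-spike interval:
#   int_{t_k}^{t_{k+1}} u ds = kappa * delta - b * (t_{k+1} - t_k)
lhs = np.array([np.sum(u(t[(t >= a) & (t < c)])) * dt
                for a, c in zip(tk[:-1], tk[1:])])
rhs = kappa * delta - b * np.diff(tk)
```

The equally spaced ordinates `k * delta` depend only on the IF-TEM parameters, which is the key point of the framework: the nonuniformity is entirely absorbed into the function being sampled.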

Let the sampling time sequence be generated by an ideal IF-TEM with associated test functions for a given input function, and let *y* denote the auxiliary function constructed from the input. It follows that *y* admits an inverse whose uniform samples coincide with the spike times, and whose norm can be bounded in terms of the input (the bound is stated in the standard norm of the underlying space).

The following corollary establishes the expression of *u* as a function of the inverse of *y*.

It follows from equation 2.4 by applying a change of variable.

We show that this function can be approximated with arbitrary precision by a function whose essential bandwidth is finite, and we define a bound for the approximation error.

See the appendix.

According to theorem 3, for any input function and for any IF-TEM with fixed parameters, the inverse of the auxiliary function satisfies the uniform sampling relation stated there. Given that this function can be approximated with arbitrary precision by a function with bounded essential bandwidth, it is reconstructed in the corresponding band-limited space.

See the proof of theorem 3 and corollary 4 from Lazar and Tóth (2004b).

According to lemma 5, increasing *M* reduces the approximation error, increases the bandwidth, and, as a consequence, requires a smaller sampling step for condition 2.9 to be satisfied. Through numerical simulations, we found a value of *M* above which the error introduced by approximation 2.8 is comparable to the common numerical errors.

The input *u* is then recovered by interpolating the reconstructed function at the spike points. For the particular case of piecewise linear interpolation, a closed-form expression follows.

Through numerical simulations, presented in section 4, we concluded that a higher interpolation degree does not significantly increase the accuracy of our method. We propose the following algorithm for reconstructing a function *u* over a finite interval, based on a finite number of time samples generated by an IF-TEM with fixed parameters. This algorithm has a significant speed improvement over the standard method in corollary 2.
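As a minimal illustration (our own notation and parameter values, assuming the ideal IF model where the auxiliary function is the running integral of the biased input), piecewise-linear interpolation of the inverse auxiliary function yields a piecewise-constant estimate of *u* on each inter-spike interval:

```python
import numpy as np

# Illustrative parameters: delta (threshold), kappa (integration
# constant), b (bias, with b > sup|u|).
delta, kappa, b = 0.01, 1.0, 2.0
u = lambda t: np.sin(2 * np.pi * 2 * t)

# Encode: spike times t_k = y^{-1}(k*delta) computed on a fine grid,
# where y(t) = (1/kappa) * int_0^t (u(s) + b) ds.
t = np.linspace(0.0, 1.0, 200_001)
y = np.cumsum((u(t) + b) / kappa) * (t[1] - t[0])
tk = np.interp(np.arange(delta, y[-1], delta), y, t)

# Decode: a piecewise-linear interpolant of y^{-1} makes y piecewise
# linear, so u(t) = kappa * y'(t) - b becomes piecewise constant:
#     u_hat = kappa * delta / (t_{k+1} - t_k) - b   on (t_k, t_{k+1}).
mid = 0.5 * (tk[:-1] + tk[1:])
u_hat = kappa * delta / np.diff(tk) - b
err = np.max(np.abs(u_hat - u(mid)))
```

With these values the piecewise-constant estimate already tracks the input closely; higher-order spline interpolation of the inverse refines it only marginally, consistent with the observation reported in section 4.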

**Algorithm 1.**

**Step 2.** Calculate the Moore-Penrose pseudoinverse of the matrix computed in step 1.

Algorithm 1 calculates the matrix, its pseudoinverse, and the values of the required functions off-line, in steps 1 and 2. Processing every subsequent set of spike times then involves only a few additions and multiplications to recover input *u* on the corresponding time interval (see steps 3 to 7 in algorithm 1).
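The computational structure can be sketched as follows (a hypothetical sinc-basis instance with illustrative sizes, not the letter's exact matrices): because the uniform grid depends only on the IF-TEM parameters, the Gram matrix and its Moore-Penrose pseudoinverse are computed once, and each new spike sequence costs only a matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(0)
delta, N = 0.05, 40
s = delta * np.arange(1, N + 1)   # uniform grid k*delta: input-independent

# Offline (steps 1-2): build a fixed basis matrix on the uniform grid
# (here a sinc Gram matrix, an illustrative choice) and precompute its
# Moore-Penrose pseudoinverse; rcond truncates the near-zero
# eigenvalues of the ill-conditioned sinc Gram matrix.
Omega = 0.8 * np.pi / delta
G = np.sinc(Omega * (s[:, None] - s[None, :]) / np.pi)
G_pinv = np.linalg.pinv(G, rcond=1e-6)

# Online (steps 3-7): each new spike sequence (the uniform samples of
# the inverse auxiliary function) needs only a matrix-vector product.
tk = np.sort(rng.uniform(0.0, 2.0, N))  # stand-in spike times
coeffs = G_pinv @ tk
```

The online cost is one O(N²) product per sequence, instead of the O(N³) pseudoinverse per sequence required when the frame depends on the spike times.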

## 3 Time Encoding and Decoding Using IF Neurons in Shift-Invariant Spaces

For a particular choice of the generator, *u* is band-limited, so the band-limited setting of section 2 is a special case of the shift-invariant setting. The transforms introduced above denote those of functions *f* and *g*, respectively.

### 3.1 Direct Reconstruction from Nonuniform Samples

Here we review frame theory for shift-invariant spaces and present the reconstruction algorithm proposed by Gontier and Vetterli (2014) for the IF-TEM. We use this method to generalize the noniterative reconstruction algorithm proposed by Lazar and Tóth (2004b) to shift-invariant spaces.
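As a concrete instance (our own minimal sketch, using the first-degree B-spline generator that section 4 also adopts), a function in a shift-invariant space is a linear combination of integer shifts of a single generator:

```python
import numpy as np

# First-degree (linear) B-spline generator: the triangle function.
beta = lambda t: np.maximum(1.0 - np.abs(t), 0.0)

def sis_eval(c, t):
    """Evaluate u(t) = sum_k c[k] * beta(t - k), a member of the
    shift-invariant space generated by beta with unit shift."""
    k = np.arange(len(c))
    return (c[None, :] * beta(t[:, None] - k[None, :])).sum(axis=1)

c = np.array([0.0, 1.0, 0.5, -0.3, 0.0])
t = np.linspace(0.0, 4.0, 401)
u_vals = sis_eval(c, t)
```

For the linear B-spline, the function interpolates its own coefficients at the integers (u(k) = c_k), which makes this generator a convenient test case for reconstruction experiments.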

For completeness, we state the following theorem.

See the proof of theorem 7.2.3 in Christensen (2003).

Let the space under consideration be an RKHS. Then the unique function that satisfies the reproducing property is called the reproducing kernel of the space.

The weak derivative of a function *f* is defined by duality against the class of smooth, compactly supported test functions. Restricting the generator to a Sobolev space is required for proving that the resulting space is an RKHS, as the next lemma will show. For a function in this space, the weak derivative is bounded in terms of the generator (Gontier & Vetterli, 2014). To ensure that this bound is finite and different from 0, we restrict the generator to satisfy conditions 3.2 and 3.3.

For completeness, we state the following lemma (Gontier & Vetterli, 2014):

Under the stated conditions, the shift-invariant space is an RKHS whose elements are continuous functions.

See the proof of lemma 3.2 in Gontier and Vetterli (2014).

Next we prove a lemma that forms the basis for a noniterative reconstruction algorithm for signals in shift-invariant spaces, which will be presented in section 3.2. For now, we will assume that is an arbitrary sequence of reals satisfying some density properties. The particular case of IF-TEM will be considered at the end of this section:

From lemma 12, using that the two sequences are strictly increasing, we prove that the latter sequence is also relatively separated, as follows:

Here *c* is an arbitrary sequence and the reconstruction operator is well defined, with the expression given above. To show that it is well defined, we argue as follows. Function *u* is continuous due to lemma 15, so, according to the mean value theorem for integrals, each inter-spike integral is attained at an intermediate point of the corresponding interval. Using the Cauchy-Schwarz and AM-GM inequalities, the required bound then follows.

The following theorem is proven in Gontier and Vetterli (2014), where the reconstruction involves a projection onto the input space. We add the requirement that the sampling sequence is relatively separated, which, given lemma 16, allows using the operator on its own for reconstruction:

See the proof of theorem 10 in Gontier and Vetterli (2014).

In practice, reconstruction is performed using a finite sequence of spike times. For each given sequence, we can calculate the spike density and choose an appropriate input space satisfying equation 3.6.

Theorem 17 can be used to prove the following corollary, which generalizes corollary 4 in Lazar and Tóth (2004b) to shift-invariant spaces:

It will be shown by induction, starting from theorem 17, that the iterates converge; the matrix recursion involves the identity matrix. From theorem 17, the base case holds, and we assume the induction hypothesis. From equation 3.7, the next iterate follows, and its last term can be calculated in closed form. From equations 3.8 and 3.9, the induction step follows. Finally, the limiting equality holds (Strohmer, 1993; Lazar & Tóth, 2004b).

The following corollary generalizes theorem 3 in Lazar and Pnevmatikakis (2008) to shift-invariant spaces:

Let the spike sequence be generated for input *u*. Then the sequence is sufficiently dense and relatively separated; thus, by applying corollary 19, we obtain the required result.

From a computational point of view, the main disadvantage of this reconstruction approach is that a new set of reproducing kernels, a new matrix, and its pseudoinverse have to be calculated for every sequence of sampling times.

### 3.2 Fast, Indirect Reconstruction from Uniform Samples

According to theorem 3, for any input function and for any IF-TEM with fixed parameters, the inverse of the auxiliary function satisfies the uniform sampling relation established in section 2.

According to the orthogonal projection theorem, the function to be reconstructed decomposes into its orthogonal projection onto the reconstruction space and an error component *e* in the orthogonal complement; therefore, the energy of *e* can be bounded. For particular cases, we showed that this bound can be made arbitrarily small (e.g., band-limited spaces, lemma 5). As a consequence, we reconstruct the function in the chosen shift-invariant space.

The proof follows immediately from corollary 19.

The input *u* is then recovered by interpolation at the spike points. For piecewise linear interpolation, the interpolation coefficients are calculated in closed form.

The reconstruction algorithm is summarized in algorithm 2.

**Algorithm 2.**

**Step 2.** Calculate the Moore-Penrose pseudoinverse of the matrix computed in step 1.

Algorithm 2 calculates its matrices off-line, in steps 1 and 2. Processing every subsequent set of spike times then involves only a few additions and multiplications to recover input *u* on the corresponding time interval (see algorithm steps 3–7).

## 4 Simulation Results

In this section we compare, through numerical simulations, the computation time of the method in Lazar and Pnevmatikakis (2008) with that of our methods in algorithms 1 and 2. In addition, we characterize the relationship between computation time and the length of the sampling time sequence. The generic algorithm 2 is implemented for a space generated by B-splines.

The normalized error above represents a fraction of the overall normalized reconstruction error, calculated in the 2-norm. Furthermore, we repeated the simulation several times, changing the linear interpolation to higher-order spline interpolation, and noticed that the plot of the normalized error was not significantly different.

The parameters chosen satisfy the reconstruction conditions invoked by the standard method (Lazar & Pnevmatikakis, 2008) and by algorithms 1 and 2, respectively.

Here *u* and its reconstruction denote the original and reconstructed signals, respectively. We simulated algorithm 2 for a range of values of the space parameter and selected the one for which the mean accuracy was the largest.

The computing times for the two proposed algorithms and for the method in Lazar and Pnevmatikakis (2008) were measured immediately before and after calling the respective reconstruction function. The results are displayed in Figure 2. The simulations were carried out in Matlab Version 7.5.0.342 on a 3.1 GHz Intel single-core PC workstation.

The choice of parameter *M* is an important step in the implementation of algorithm 1 because it determines the approximation error as well as the bandwidth of the reconstructed signal. To illustrate how it affects the algorithm, we used the same time sequences to reconstruct inputs for *M* varying from 1 to 5, using the same model for the IF-TEM. For every value of *M*, Figure 3 displays the corresponding bandwidth and the reconstruction accuracy in terms of mean and standard deviation. For increasing *M*, the accuracy plateaus, and, through repeated simulations, we conclude that a moderate choice of *M* can be used to achieve good results. Increasing the bandwidth further leads to an increase in numerical errors. This also applies to the algorithm in Lazar and Pnevmatikakis (2008) when reconstructing for bandwidths much higher than the input bandwidth.

In order to investigate the impact of increasing the number of spikes on the time taken to reconstruct the signal, we carried out numerical simulations in which we reconstructed the input signal from spike time sequences of several lengths using algorithm 1. There is no significant difference between the computation times of algorithms 1 and 2, since steps 3 to 7 of both algorithms involve calculations with matrices of the same sizes. The estimated distributions of the reconstruction times for algorithm 1, as well as for the Lazar-Pnevmatikakis algorithm, shown in Figure 4, demonstrate that the reconstruction time of the proposed algorithm is less sensitive to the length of the spike time sequences. Specifically, going from the shortest to the longest sequences, the computation time increases by a much smaller factor for the proposed algorithm than for the Lazar-Pnevmatikakis algorithm.
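The qualitative scaling can be sketched with a toy benchmark (illustrative sizes and synthetic data only; this reproduces the computational pattern, not the letter's measurements): recomputing a pseudoinverse for every spike sequence scales much worse than reusing one computed off-line.

```python
import time
import numpy as np

rng = np.random.default_rng(1)
N, n_seq = 200, 20
G = np.eye(N) + 0.1 * rng.standard_normal((N, N))  # stand-in system matrix
sequences = rng.standard_normal((n_seq, N))        # stand-in spike data

# Pattern of the standard method: a new pseudoinverse per sequence.
t0 = time.perf_counter()
for q in sequences:
    np.linalg.pinv(G) @ q
t_standard = time.perf_counter() - t0

# Pattern of algorithms 1 and 2: one off-line pseudoinverse, then a
# matrix-vector product per sequence.
G_pinv = np.linalg.pinv(G)
t0 = time.perf_counter()
for q in sequences:
    G_pinv @ q
t_proposed = time.perf_counter() - t0
```

The per-sequence cost drops from the O(N³) of an SVD-based pseudoinverse to an O(N²) product, which is the source of the reduced sensitivity to sequence length.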

## 5 Conclusion

This letter introduced a new framework for representing a nonuniformly distributed sampling time sequence, generated by an IF-TEM for a given function *u*, as a sequence obtained by uniformly sampling an auxiliary function, with a sampling period that depends only on the IF-TEM parameters. This framework is particularly useful because it enables applying the system modeling and analysis tools already available for discrete-time systems.

The new theoretical framework formed the basis for two new fast reconstruction algorithms applicable to functions belonging to band-limited spaces and to spaces generated by first-degree B-splines, respectively. We also proposed a general algorithm applicable to functions belonging to generic shift-invariant spaces. Using numerical simulations, we demonstrated that algorithms 1 and 2 are significantly faster than the one proposed by Lazar and Pnevmatikakis (2008). In addition, the rate of increase in computation time for the proposed algorithms, relative to the input size (number of spike times processed), is substantially smaller than the rate for the algorithm in Lazar and Pnevmatikakis (2008). We also demonstrated that algorithm 2 can accommodate signals belonging to shift-invariant subspaces and performs with marginally better accuracy than algorithm 1.

## Appendix: Additional Proofs

To prove lemma 5, an auxiliary lemma is required:

From lemma 24 and the definition of the approximation, the stated bound follows for the quantities introduced in the lemma.

For lemma 15, the reproducing kernel is given in Gontier and Vetterli (2014) in terms of the biorthogonal frame of the generating frame, which satisfies a biorthogonality condition involving the Kronecker delta function. As in Gontier and Vetterli (2014), using equation 3.1, it can be checked that the function satisfying these requirements admits a closed-form expression in the Fourier domain.

## Acknowledgments

D.F. gratefully acknowledges that this work was supported by a University Prize Scholarship awarded by the University of Sheffield. D.C. gratefully acknowledges that this work was supported by BBSRC under grant BB/H013849/1. We also gratefully acknowledge reviewers’ comments, which helped improve the quality of the letter.