Abstract

Integrate-and-fire (IF) neurons are time encoding machines that convert the amplitude of an analog signal into a nonuniform, strictly increasing sequence of spike times. Under certain conditions, the encoded signals can be reconstructed from the nonuniform spike time sequences using a time decoding machine. Time encoding and time decoding methods have been studied using the nonuniform sampling theory for band-limited spaces, as well as for generic shift-invariant spaces. This letter proposes a new framework for studying IF time encoding and decoding by reformulating the IF time encoding problem as a uniform sampling problem. This framework forms the basis for two new algorithms for reconstructing signals from spike time sequences. We demonstrate that the proposed reconstruction algorithms are faster than the standard reconstruction algorithm, and thus better suited for real-time processing, while providing a similar level of accuracy.

1  Introduction

The integrate-and-fire (IF) neuron, one of the most common models for describing the behavior of spiking neurons (Lapicque, 1907; Tuckwell, 1988), is a type of time encoding machine (TEM)—an operator that maps a real-valued function u, belonging to the space H, to a strictly increasing sequence of reals satisfying .

TEMs were originally introduced and studied by Lazar and Tóth (2003) in the context of band-limited signals. They proposed and evaluated practical algorithms for reconstructing band-limited signals from TEM sequences with arbitrary accuracy (Lazar, 2004; Lazar & Tóth, 2004a, 2004b; Lazar & Pnevmatikakis, 2008).

Using the nonuniform sampling framework developed by Aldroubi and Gröchenig (2001), Aldroubi and Feichtinger (1998), Feichtinger, Gröchenig, and Strohmer (1995), Gröchenig (1992, 1993), and Gröchenig and Schwab (2003), Gontier and Vetterli (2014) extended the results of Lazar and Tóth (2003) to a broader class of functions belonging to shift-invariant spaces (Unser, 2000).

A characteristic of time encoding is that the sampling times depend on function u. As a consequence, the input is reconstructed in a space spanned by a frame consisting of functions that depend on the sampling times (Lazar & Pnevmatikakis, 2008; Gontier & Vetterli, 2014). When reconstructing from several sequences , this frame needs to be recalculated for every sequence i. For large values of N, this process becomes computationally demanding. Fast algorithms, which involve approximating the input with a periodic function, have been proposed by Lazar, Simonyi, and Tóth (2005) and Lazar, Simonyi, and Tóth (2008). However, these algorithms solve a new linear system for every sequence of sampling times and thus introduce a significant delay in computation for large values of N.

This letter introduces a new framework for studying time encoding and decoding by reformulating the nonuniform sampling operation performed by the IF-TEM, represented by an ideal IF model, as an equivalent uniform sampling operation applied to a transformed signal. This framework is particularly useful because it enables the application of the system modeling and analysis tools already available for uniformly sampled systems.

We exploit this framework to develop two new reconstruction algorithms that are significantly faster than the one proposed by Lazar and Pnevmatikakis (2008). Through numerical simulations, we show the advantage of our framework.

The letter is structured as follows. Section 2 gives an overview of time encoding and decoding on band-limited spaces and introduces the new theoretical framework for analyzing the IF-TEM, which shows how input u can be transformed uniquely into function , whose uniform samples are equal to the IF-TEM output. Furthermore, we use this framework to develop a fast reconstruction algorithm for band-limited signals. Section 3 reviews frame theory for shift-invariant spaces and describes the iterative algorithm in Gontier and Vetterli (2014). We use this method to generalize the noniterative reconstruction algorithm in Lazar and Tóth (2004b) to shift-invariant spaces and develop a second algorithm to reconstruct input u via the auxiliary function . The reconstruction speeds of the two proposed algorithms are compared with the one in Lazar and Pnevmatikakis (2008), through numerical simulations, in section 4. Conclusions are given in section 5.

2  Time Encoding and Decoding Using IF Neurons in Paley-Wiener Spaces

2.1  Direct Reconstruction from Nonuniform Samples

Let , where is the Paley-Wiener space of bandwidth :
formula
In the following, we present a general definition of the IF-TEM (Gontier & Vetterli, 2014).
Definition 1.

An IF-TEM with test functions generates a sampling sequence , when presented with input , such that:

  1. The value of at a given time is , where .

  2. .

  3. , where is an operator mapping function u onto the real axis, .

For the ideal IF-TEM considered in this letter, the test functions are where , , and b are the threshold, integration constant, and bias, respectively. It follows that operator satisfies
formula
2.1
where . Equation 2.1 is also known as the t-transform (Lazar & Pnevmatikakis, 2008). Without loss of generality, we assume that
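
The t-transform lends itself directly to numerical implementation. The following Python sketch encodes a discrete-time input with an ideal IF-TEM, assuming that equation 2.1 takes the usual form in which the integral of u(s) + b over each interspike interval equals C times the threshold; the integral is accumulated with the trapezoid rule, as in the simulations of section 4. The function name and the handling of the integration residual are illustrative choices, not the implementation used later in this letter.

```python
import numpy as np

def if_tem_encode(u, dt, b, C, delta, t0=0.0):
    """Ideal IF-TEM encoder for a discrete-time input u sampled with step dt.

    A spike is emitted whenever the running integral of (u + b), scaled by 1/C,
    reaches the threshold delta, after which the integrator is reset (t-transform).
    """
    spike_times = []
    acc = 0.0
    for k in range(1, len(u)):
        # trapezoidal increment of (1/C) * integral of (u(s) + b) ds over one step
        acc += 0.5 * dt * ((u[k - 1] + b) + (u[k] + b)) / C
        if acc >= delta:
            spike_times.append(t0 + k * dt)  # spike located at the grid resolution dt
            acc -= delta                     # reset, keeping the discretization overshoot
    return np.array(spike_times)
```

With b chosen larger than the amplitude of u, the integrand remains positive and the resulting spike sequence is strictly increasing, as required by definition 1.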

Lazar and Tóth (2004b) proposed the following algorithm for reconstructing the input of an IF-TEM.

Corollary 1.
Let be the sequence of sampling times generated by an IF-TEM with parameters , for a given input function . If , u can be reconstructed from sequence as
formula
2.2
where denotes the transpose of matrix , stands for the Moore-Penrose pseudoinverse of matrix , and
Proof.

See the proof of corollary 2 in Lazar and Pnevmatikakis (2008).

From a computational point of view, the main disadvantage of this reconstruction approach is that a new set of functions , matrix , and its pseudoinverse have to be calculated for every sequence of sampling times. In section 2.2, we propose a new reconstruction framework that addresses this issue.
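
For reference, the sketch below implements one common form of this direct reconstruction, assuming the expressions of Lazar and Tóth (2004b): the input is expanded as a sum of sinc kernels centred at the interspike midpoints, the measurement vector has entries C*delta - b*(t_{l+1} - t_l), and the coefficients are obtained with the Moore-Penrose pseudoinverse of the matrix of kernel integrals. Because the corresponding formulas are not reproduced above, these expressions should be read as assumptions rather than as an exact transcription of corollary 1.

```python
import numpy as np
from scipy.special import sici

def reconstruct_direct(t, b, C, delta, Omega, t_grid):
    """Direct reconstruction from a spike train t, in the spirit of corollary 1.

    t      : strictly increasing spike times generated by the IF-TEM
    Omega  : bandwidth of the Paley-Wiener space
    t_grid : time points at which the reconstructed signal is evaluated
    """
    t = np.asarray(t, dtype=float)
    t_grid = np.asarray(t_grid, dtype=float)
    s = 0.5 * (t[:-1] + t[1:])            # sinc kernels centred at interspike midpoints
    q = C * delta - b * np.diff(t)        # t-transform measurements (assumed form)
    Si = lambda x: sici(x)[0]             # sine integral, integrates the kernel exactly
    # G[l, k] = integral over [t_l, t_{l+1}] of sin(Omega*(s - s_k)) / (pi*(s - s_k)) ds
    G = (Si(Omega * (t[1:, None] - s[None, :]))
         - Si(Omega * (t[:-1, None] - s[None, :]))) / np.pi
    c = np.linalg.pinv(G) @ q             # pseudoinverse recomputed for every spike train
    g = lambda x: np.sinc(Omega * x / np.pi) * Omega / np.pi   # g(x) = sin(Omega*x)/(pi*x)
    return g(t_grid[:, None] - s[None, :]) @ c
```

Note that G and its pseudoinverse depend on the spike times and must be recomputed for every spike train; this is exactly the cost that the framework of section 2.2 avoids.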

2.2  A Novel Framework of Encoding and Decoding for IF Neurons

For a generic input , the IF-TEM generates a nonuniform sequence . We show that an IF-TEM with parameters and b is a uniform sampler with sampling points , for an auxiliary function uniquely determined from input u. The following theorem establishes a number of important properties of function :

Theorem 1.

Let be the sampling time sequence generated by an ideal IF-TEM with associated test functions , , for a given input function . Let . It follows that y admits an inverse such that

where , and .

  1. and ,

where denotes the standard norm in .

Proof.
(1) From its definition, y is strictly increasing and continuous. Furthermore,
formula

It follows that y is a one-to-one and onto function. Thus it has an inverse .

From equation 2.1 and the definition of y, it follows that and . Consequently,
formula
2.3
The proof follows from the definition of and equations 2.3.
(2) By applying the inverse function theorem, it follows that . Furthermore,
formula
2.4
It follows that
formula
2.5
and
formula
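
The statement can be checked numerically. The short script below encodes a test signal directly, by integrating u + b until the threshold is reached, and then recovers the same spike times by inverting the warped time y on the uniform grid of step delta, taking y(t) = (1/C) times the integral of u(s) + b from 0 to t, a definition consistent with the t-transform; the exact expression used in theorem 1 is not reproduced above and is assumed here.

```python
import numpy as np

dt, b, C, delta = 1e-4, 2.0, 1.0, 0.05
t = np.arange(0.0, 10.0, dt)
u = np.sin(2 * np.pi * 0.7 * t) + 0.5 * np.cos(2 * np.pi * 0.3 * t)  # test input, |u| < b

# (a) Direct encoding: integrate (u + b)/C and fire whenever the threshold delta is reached.
spikes_direct, acc = [], 0.0
for k in range(1, len(t)):
    acc += 0.5 * dt * ((u[k - 1] + b) + (u[k] + b)) / C
    if acc >= delta:
        spikes_direct.append(t[k])
        acc -= delta

# (b) Warped-time view: build y and read off gamma(k * delta) by inverting y.
y = np.concatenate(([0.0], np.cumsum(0.5 * dt * ((u[:-1] + b) + (u[1:] + b))))) / C
k_max = len(spikes_direct)
spikes_warped = np.interp(delta * np.arange(1, k_max + 1), y, t)

# The two spike trains agree up to the resolution dt of the simulation grid.
print(np.max(np.abs(np.array(spikes_direct) - spikes_warped)) <= dt + 1e-12)
```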

The following corollary establishes the expression of u as a function of .

Corollary 2.
Let and let be the inverse of . Then the following holds:
formula
2.6
where
Proof.

It follows from equation 2.4 by applying the change of variable .

From equation 2.4, the function can be written as
formula
where .
In general, function is not band-limited, that is, . For a function , the composition is a time-warped function (Cochran & Clark, 1990), whose exact bandwidth formula has not yet been derived for the general case. The approximation used for this bandwidth is the essential maximum frequency or the essential bandwidth, which is defined as the bandwidth where most of the signal energy resides (Do, Marchand-Maillet, & Vetterli, 2012; Bergner, Moller, Weiskopf, & Muraki, 2006):
formula
2.7
where denotes the essential bandwidth of function .
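
Since equation 2.7 is not reproduced here, the sketch below uses one common operationalization of this notion: the essential bandwidth of a uniformly sampled signal is estimated as the smallest Omega (in rad/s) such that a chosen fraction of the spectral energy lies within [-Omega, Omega]. The energy fraction, the FFT-based estimate, and the example signals are illustrative choices.

```python
import numpy as np

def essential_bandwidth(x, dt, energy_fraction=0.999):
    """Smallest Omega (rad/s) containing the chosen fraction of the spectral energy of x."""
    X = np.fft.rfft(x)
    energy = np.abs(X) ** 2
    energy[1:] *= 2.0                        # account for the negative frequencies
    cumulative = np.cumsum(energy) / np.sum(energy)
    idx = np.searchsorted(cumulative, energy_fraction)
    freqs = np.fft.rfftfreq(len(x), d=dt)    # Hz
    return 2 * np.pi * freqs[min(idx, len(freqs) - 1)]

# Example: a time-warped sinusoid occupies a wider essential band than the original tone.
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
tone = np.sin(2 * np.pi * 1.0 * t)
warped = np.sin(2 * np.pi * 1.0 * (t + 0.3 * np.sin(2 * np.pi * 0.5 * t)))
print(essential_bandwidth(tone, dt), essential_bandwidth(warped, dt))
```

For the warped sinusoid, the estimate is noticeably larger than for the original tone, which illustrates the bandwidth expansion caused by time warping discussed above.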

We show that can be approximated with arbitrary precision by a function whose essential bandwidth is finite and define a bound for the approximation error.

Lemma 1.
For any , function can be decomposed as
formula
2.8
where
formula
Proof.

See the appendix.

According to theorem 1, for any function and for any IF-TEM with parameters and b, the function satisfies where , is the inverse of , and . Given that the function can be approximated with arbitrary precision by a function with essential bandwidth bounded by , the function is reconstructed in the space .

Corollary 3.
Let where and let , such that
formula
2.9
Then and can be reconstructed with arbitrary precision from using
formula
2.10
formula
2.11
where , and ,
Proof.

See the proof of theorem 3 and corollary 4 from Lazar and Tóth (2004b).

Remark 1.

According to lemma 1, increasing M reduces the approximation error, increases the bandwidth , and, as a consequence, requires a smaller for condition 2.9 to be satisfied. Through numerical simulations, we found that for , the error introduced by approximation 2.8 is comparable to the common numerical errors.

Functions and , calculated with equations 2.10 and 2.11, are sampled on the uniform grid , where is the sampling period for reconstruction and the values for are given by equation 2.6. The reconstructed signal is the interpolation of u at points . For the particular case of piecewise linear interpolation, it follows that, :
formula

Through numerical simulations, presented in section 4, we concluded that a higher interpolation degree does not significantly increase the accuracy of our method. We propose the following algorithm for reconstructing a function u over the interval , based on a finite number of time samples generated by an IF-TEM with parameters . This algorithm offers a significant speed improvement over the standard method in corollary 1.

Algorithm 1.

  1. Step 1. Calculate , where
    formula
  2. Step 2. Calculate , the Moore-Penrose pseudoinverse of

  3. Step 3. Calculate :
    formula
  4. Step 4. Calculate :
    formula
  5. Step 5. Calculate and where
    formula
  6. Step 6. Calculate
    formula
  7. Step 7. Calculate , the interpolation of u at points , and sample it at . For the particular case of piecewise linear interpolation, the expression of is
    formula
    for , where satisfies

Algorithm 1 calculates matrix and the values of functions and off-line, in steps 1 and 2. Processing each new set h of spike times then involves only a few additions and multiplications to recover input u on time interval (see steps 3 to 7 in algorithm 1), as sketched below.
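
The formulas in steps 1 to 6 are not reproduced above, so the sketch below illustrates only the structure that gives algorithm 1 its speed: under the warped-time definition assumed earlier, consecutive spike times are uniform samples of gamma spaced delta apart, u can be recovered pointwise through the inverse-function relation u(gamma(tau)) = C / gamma'(tau) - b, and the result is interpolated onto a uniform output grid. The central-difference estimate of gamma' used here is an illustrative stand-in for the band-limited reconstruction of steps 1 to 6, not algorithm 1 itself.

```python
import numpy as np

def decode_via_warped_time(spike_times, b, C, delta, t_out):
    """Sketch of the indirect (warped-time) decoding pipeline.

    Consecutive spike times are treated as uniform samples of gamma, spaced delta
    apart in warped time. gamma' is estimated with central differences (a stand-in
    for steps 1-6), u is recovered from u(gamma(tau)) = C / gamma'(tau) - b, and
    the result is interpolated linearly onto the output grid t_out (step 7).
    """
    t = np.asarray(spike_times, dtype=float)
    dgamma = (t[2:] - t[:-2]) / (2.0 * delta)   # gamma' at the interior uniform grid points
    u_at_spikes = C / dgamma - b                # u evaluated at the interior spike times
    return np.interp(t_out, t[1:-1], u_at_spikes)
```

No quantity in this online stage requires solving a spike-train-specific linear system; everything involving a pseudoinverse in algorithm 1 is tied to the fixed uniform grid and is computed once, off-line.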

3  Time Encoding and Decoding Using IF Neurons in Shift-Invariant Spaces

A shift-invariant space of order 2, generated by real function , is defined by
formula

If and , then u is band-limited and .
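
Because the defining formula is not reproduced above, the sketch below assumes one common parameterization of such a space, u(t) = sum_k c_k phi(t/T - k) with square-summable coefficients, and uses the centred degree-1 B-spline (the triangle function, which reappears in corollary 7 and in section 4) as the generator; with this choice, functions in the space are piecewise linear and take the value c_k at the knot kT.

```python
import numpy as np

def hat(t):
    """Centred B-spline of degree 1 (triangle function), one possible generator."""
    return np.maximum(0.0, 1.0 - np.abs(t))

def eval_shift_invariant(c, t, T=1.0, phi=hat):
    """Evaluate u(t) = sum_k c[k] * phi(t/T - k), an assumed parameterization of V(phi)."""
    c = np.asarray(c, dtype=float)
    t = np.asarray(t, dtype=float)
    k = np.arange(len(c))
    return phi(t[:, None] / T - k[None, :]) @ c

# A function in the space generated by degree-1 B-splines is piecewise linear
# and interpolates its coefficients at the integer knots (T = 1 here).
c = np.array([0.0, 1.0, -0.5, 2.0, 0.3])
print(eval_shift_invariant(c, np.linspace(0.0, 4.0, 9)))
```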

The space is a Hilbert space with inner product (Gontier & Vetterli, 2014)
formula
3.1
where , , and are the coefficients in of f and g, respectively.

3.1  Direct Reconstruction from Nonuniform Samples

Here we review frame theory for shift-invariant spaces and present the reconstruction algorithm proposed by Gontier and Vetterli (2014) for the IF-TEM. We use this method to generalize the noniterative reconstruction algorithm proposed by Lazar and Tóth (2004b) to shift-invariant spaces.

Definition 2.
A sequence is a frame with bounds for Hilbert space if
formula
Definition 3.
A sequence is a Bessel sequence with bound B for Hilbert space if
formula

For completeness, we state the following theorem.

Theorem 2.
Let and . Then is a frame sequence with bounds if and only if
formula
3.2
where . Moreover, is a Bessel sequence with bound if and only if
formula
3.3
Proof.

See the proof of theorem 7.2.3 in Christensen (2003).

Definition 4.
A sequence is -dense if
formula
Definition 5.
A sequence is relatively separated if there exists such that
formula
Definition 6.
A reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions f on domain D such that the linear operator
formula
is bounded.
Definition 7.

Let be an RKHS. Then the unique function that satisfies is called the reproducing kernel of .

We restrict function to space , a Sobolev space defined by
formula
where is a weak derivative of f, which satisfies
formula
where is the class of smooth functions on with compact support. Restricting to a Sobolev space is required for proving that is an RKHS, as the next lemma will show. Let . Then its weak derivative is bounded by (Gontier & Vetterli, 2014)
formula
3.4
where . To ensure that this bound is finite and different from 0, we restrict to satisfy conditions 3.2 and 3.3 for , namely, , where
formula
3.5
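
As an illustration of these conditions, the following numerical check evaluates the periodized spectrum sum appearing in theorem 2 for the degree-1 B-spline generator used later in corollary 7, assuming the Fourier convention in which the centred triangle function has transform sinc^2(omega/2). The sum stays between the positive constants 1/3 and 1, matching the closed form (2 + cos omega)/3, so both the frame and the Bessel conditions hold for this generator; the convention and the closed form are assumptions, since the corresponding formulas are not reproduced above.

```python
import numpy as np

def sinc(x):
    """sin(x)/x, written via numpy's normalized sinc."""
    return np.sinc(x / np.pi)

w = np.linspace(-np.pi, np.pi, 2001)
k = np.arange(-200, 201)
# Periodized spectrum sum_k |phi_hat(w + 2*pi*k)|^2 for phi_hat(w) = sinc^2(w/2).
Phi = np.sum(sinc((w[:, None] + 2 * np.pi * k[None, :]) / 2.0) ** 4, axis=1)

print(Phi.min(), Phi.max())                          # approximately 1/3 and 1
print(np.allclose(Phi, (2.0 + np.cos(w)) / 3.0, atol=1e-6))
```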

For completeness, we state the following lemma (Gontier & Vetterli, 2014):

Lemma 2.

Let . Then is an RKHS and , where denotes the class of continuous functions on .

Proof.

See the proof of lemma 3.2 in Gontier and Vetterli (2014).

Next we prove a lemma that forms the basis for a noniterative reconstruction algorithm for signals in shift-invariant spaces, which will be presented in section 3.2. For now, we will assume that is an arbitrary sequence of reals satisfying some density properties. The particular case of IF-TEM will be considered at the end of this section:

Lemma 3.
Let and let be a -dense, relatively separated, strictly increasing sequence, and
formula
where K is the reproducing kernel of , , and . It follows that is a well-defined, bounded operator.
Proof.
From definition 5, using that and are strictly increasing, we prove that the latter sequence is also relatively separated, as follows:
formula
According to Aldroubi and Gröchenig (2001), this implies that is a Bessel sequence for . Christensen (2003) proved that the latter statement is equivalent to
formula
where c is an arbitrary sequence and is a well-defined operator with expression . To show that is well defined, we will prove that . Function u is continuous due to lemma 2, and according to the mean value theorem for integrals, it follows that for all , there exist such that
formula
Using the Cauchy-Schwarz and AM-GM inequalities, it follows that
formula
It follows that (see equation 3.4)
formula

The following theorem is proven in Gontier and Vetterli (2014), where the reconstruction involves a projection of on . We add the requirement that is relatively separated, which, given lemma 3, allows using the operator on its own for reconstruction:

Theorem 3.
Let . Then any -dense, relatively separated sequence can be used to reconstruct u with arbitrary precision from if
formula
3.6
where . The reconstruction is performed with
formula
3.7
where is the identity operator. The functions satisfy
formula

Proof.

See the proof of theorem 10 in Gontier and Vetterli (2014).

Remark 2.

In practice, reconstruction is performed using a finite sequence of spike times. For each given sequence, we can calculate the spike density , and choose an appropriate input space such that satisfies equation 3.6.

Theorem 3 can be used to prove the following corollary, which generalizes corollary 4 in Lazar and Tóth (2004b) to shift-invariant spaces:

Corollary 4.
Let and be a -dense, relatively separated sequence. Provided that , u can be reconstructed from sequence as
formula
where , denotes the transpose of vector , stands for the Moore-Penrose pseudoinverse of matrix and
Proof.
According to theorem 3, . It will be shown by induction that
formula
where and represents the identity matrix. From theorem 3, . We assume that holds. From equation 3.7, it follows that
formula
3.8
The last term can be calculated with
formula
3.9
From equations 3.8 and 3.9, it follows that
formula
Finally, the equality holds (Strohmer, 1993; Lazar & Tóth, 2004b):
formula

The following corollary generalizes theorem 3 in Lazar and Pnevmatikakis (2008) to shift-invariant spaces:

Corollary 5.
Let be the sequence of sampling times generated by an IF-TEM, for a given function . Then u can be reconstructed from if
formula
Proof.
Sequence satisfies . The mean value theorem for integrals is used in equation 2.1, together with the continuity of u. Then the following holds:
formula
and
formula
Then is -dense and relatively separated; thus, by applying corollary 4, we obtain the required result.

From a computational point of view, the main disadvantage of this reconstruction approach is that a new set of reproducing kernels , matrix , and its pseudoinverse have to be calculated for every sequence of sampling times.

3.2  Fast, Indirect Reconstruction from Uniform Samples

According to theorem 1, for any function and for any IF-TEM with parameters and b, the function satisfies , where , is the inverse of , and

For , according to the orthogonal projection theorem, , such that and , where is the orthogonal complement of space . Then , and therefore the energy of e can be bounded. For particular cases, we showed that this bound can be made arbitrarily small (e.g., for band-limited spaces; see lemma 1). As a consequence, we reconstruct the function in the space .

Corollary 6.
Let , such that
formula
where Then and can be reconstructed with arbitrary precision from using
formula
3.10
formula
3.11
where is the reproducing kernel of and ,
Proof.

The proof follows immediately from corollary 4.

The values of are calculated using equation 2.6, where . To reconstruct the signal, we interpolate u at points . For piecewise linear interpolation, we calculate , where
formula
for

The reconstruction algorithm is summarized in algorithm 2.

Algorithm 2.

  1. Step 1. Calculate , where
    formula
  2. Step 2. Calculate , the Moore-Penrose pseudoinverse of

  3. Step 3. Calculate :
    formula
  4. Step 4. Calculate :
    formula
  5. Step 5. Calculate and where
    formula
  6. Step 6. Calculate
    formula
  7. Step 7. Calculate , the interpolation of u at points , and sample it at . For the particular case of piecewise linear interpolation,
    formula
    for , where satisfies

Algorithm 2 calculates matrices off-line, in steps 1 and 2. Processing each new set h of spike times then involves only a few additions and multiplications to recover input u on time interval (see algorithm steps 3–7).

Here we particularize algorithm 2 for , , where represents the B-spline of degree 1,
formula
where is the characteristic function of interval

Corollary 7.
The following is satisfied:
formula
Moreover, and the reproducing kernel of has the expression
formula
where represent the Fourier coefficients of
Proof.

See the appendix.

Corollary 8.

Let and . Then can be reconstructed from sequence , provided that

Proof.

See the appendix.

4  Simulation Results

In this section we compare, through numerical simulations, the computation time for the method in Lazar and Pnevmatikakis (2008) with our methods in algorithms 1 and 2. In addition, we characterize the relationship between computation time and the length of the sampling time sequence . The generic algorithm 2 is implemented for a space generated by B-splines.

We generated a set of band-limited signals,
formula
4.1
where , , and are random coefficients drawn from the standard uniform distribution on . The signals were uniformly sampled with sampling period . Every input was normalized such that
The signals were encoded by an IF-TEM with parameters , which was used to generate sampling time sequences using the following IF-TEM implementation for a discrete-time input:
formula
where is computed using the trapezoid rule and l is the solution of
formula
For algorithm 1, we applied lemma 1 with such that , where
formula
In step 7 of algorithm 1, we implemented a piecewise linear interpolation scheme. To quantify the error introduced by this step on its own, we computed the error between and , the piecewise linear interpolation of at values . These values were calculated with step 5 of algorithm 1. The normalized error between and , displayed in Figure 1, has the expression
formula
Figure 1: Normalized error introduced by the interpolation in step 7 of algorithm 1.

The normalized error above represents of the overall reconstruction normalized error, calculated in the 2-norm. Furthermore, we repeated the simulation several times, changing the linear interpolation of to higher-order spline interpolation, and noticed that the plot of the normalized error was not significantly different.

The parameters chosen satisfy the reconstruction conditions , , and , which are required by the standard method (Lazar & Pnevmatikakis, 2008) and by algorithms 1 and 2, respectively.

We performed reconstruction using algorithm 1 and the standard algorithm proposed by Lazar and Pnevmatikakis (2008) and measured the accuracy by computing the signal-to-error ratio (SER) defined by
formula
where u, denote the original and reconstructed signals, respectively. We simulated algorithm 2 for a range of values for parameter and selected the one for which the mean was the largest: mean .
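
The SER used throughout this section can be computed with a few lines. The sketch below assumes the usual definition, 10 log10 of the ratio between the energy of the original signal and the energy of the reconstruction error, since the displayed formula is not reproduced here.

```python
import numpy as np

def ser_db(u, u_hat):
    """Signal-to-error ratio in dB: 10*log10(||u||^2 / ||u - u_hat||^2)."""
    u = np.asarray(u, dtype=float)
    u_hat = np.asarray(u_hat, dtype=float)
    return 10.0 * np.log10(np.sum(u ** 2) / np.sum((u - u_hat) ** 2))
```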

The computation times for the two proposed algorithms and for the method in Lazar and Pnevmatikakis (2008) were measured as the time elapsed between the instants immediately before and after calling the respective reconstruction function. The results are displayed in Figure 2. The simulations were carried out in Matlab Version 7.5.0.342 on a 3.1 GHz Intel Single Core PC workstation.

Figure 2: Comparison between algorithm 1 and the Lazar-Pnevmatikakis algorithm. Probability density functions corresponding to (A) reconstruction time and (B) reconstruction accuracy, estimated based on 100 input sequences.

The choice for parameter M is an important step in the implementation of algorithm 1 because it determines the approximation error for , as well as the bandwidth of the reconstructed signal . To illustrate how it affects the algorithm, we used the same time sequences to reconstruct inputs for M varying from 1 to 5, using the same model for the IF-TEM. For every value of M, Figure 3 displays the corresponding bandwidth of and the reconstruction of in terms of mean and standard deviation. For increasing M, the plateaus, and, through repeated simulations, we conclude that the choice , corresponding in this case to an error of relative to , can be used to achieve good results. Increasing the bandwidth further leads to an increase in numerical errors. This also applies to the algorithm in Lazar and Pnevmatikakis (2008), when reconstructing for bandwidths much higher than the input bandwidth.

Figure 3: Reconstruction SER of algorithm 1 for different values of .

In order to investigate the impact of increasing the number of spikes on the time taken to reconstruct the signal, we carried out numerical simulations in which we reconstructed the input signal from spike time sequences with lengths , respectively, using algorithm 1. There is no significant difference in the computation times of algorithms 1 and 2, since steps 3 to 7 of both algorithms involve calculations with matrices that have the same sizes. The estimated distributions of the reconstruction times for algorithm 1, as well as for the Lazar-Pnevmatikakis algorithm, shown in Figure 4, demonstrate that for the proposed algorithm the reconstruction time is less sensitive to the length of the spike time sequences. Specifically, the computation time corresponding to spikes increases times for the proposed algorithm and times for the Lazar-Pnevmatikakis algorithm, compared to the time taken to process spikes.

Figure 4: Computation time as a function of the number of spikes used in reconstruction.

To simulate the two proposed algorithms for non-band-limited stimuli, we generated functions,
formula
where represent the B-splines of degree 1, and are random coefficients drawn from the standard uniform distribution on . The signals were uniformly sampled with sampling period . Every input was normalized such that . An IF-TEM with parameters was used to generate sampling time sequences . To find the optimal and for the two algorithms, we decoded the time sequences for different values of and , respectively, and found that the optimal reconstruction parameters are and . The probability density functions for these parameters are shown in Figure 5.
Figure 5: Comparison of the computation time (left) and reconstruction accuracy (right) between algorithms 1 and 2.

5  Conclusion

This letter introduced a new framework for representing a nonuniformly distributed sampling time sequence generated by an IF-TEM for a given function u as a sequence obtained by sampling an auxiliary function uniformly, with a sampling period that depends only on the IF-TEM parameters. This framework is particularly useful because it enables the application of the system modeling and analysis tools already available for discrete-time systems.

The new theoretical framework formed the basis for two new fast reconstruction algorithms applicable to functions belonging to band-limited spaces and to spaces generated by first-degree B-splines, respectively. We have also proposed a general algorithm that is applicable to functions belonging to generic shift-invariant spaces. Using numerical simulations, we demonstrated that algorithms 1 and 2 are significantly faster than the one proposed by Lazar and Pnevmatikakis (2008). In addition, the rate of increase in computation time for the proposed algorithms, relative to the input size (number of spike times processed), is times smaller than the rate for the algorithm in Lazar and Pnevmatikakis (2008). We also demonstrated that algorithm 2 can accommodate signals that belong to shift-invariant subspaces and achieves marginally better accuracy than algorithm 1.

Appendix:  Additional Proofs

To prove lemma 1, an auxiliary lemma is required:

Lemma 4.
Let , where , and . Then such that
formula
Proof.
By using the Taylor series expansion of p around 0, it follows that
formula
A.1
where
formula
Let and . Using , it follows that
formula
A.2
From the convolution theorem for Fourier transform, by induction, it follows that
formula
A.3
where denotes the convolution operator. Using Titchmarsh’s convolution theorem in equation A.3, it follows that (Titchmarsh, 1926), and from equation A.2, the following holds:
formula
Proof of Lemma 1.
Using lemma 4 and the definition of , it follows that for any and such that
formula
where and . Let and .
From equation 2.4, it can be derived that , since . Then the following hold:
formula
A.4
formula
A.5
formula
A.6
Using the boundedness of , it follows that
formula
A.7
Inequality A.7 is then used in equation A.6 to complete the proof.
Proof of Corollary 7.
The Fourier coefficients of have the expression
formula
A.8
It follows that and . The expression of is then derived by taking the square root of the Fourier series calculated with coefficients .
To show that , we calculate one of its weak derivatives:
formula
Then the following holds:
formula
Using the same technique as in equation A.8, it follows that , where are the Fourier coefficients of . The values of these coefficients are . It follows that
formula
Therefore, according to equation 3.5, with and , and thus is an RKHS, due to lemma 2. The reproducing kernel is given by Gontier and Vetterli (2014),
formula
where represents the biorthogonal frame of frame , which satisfies , where is the Kronecker delta function. As in Gontier and Vetterli (2014), using equation 3.1, it can be checked that the function satisfying these requirements is given, in the Fourier domain, by
formula
We calculate as follows:
formula
where are the Fourier coefficients of . The final expression of the kernel is calculated using that .
Proof of Corollary 8.
According to theorem 3, this result holds true for a -dense, relatively separated sequence if , where
formula

Acknowledgments

D.F. gratefully acknowledges that this work was supported by a University Prize Scholarship awarded by the University of Sheffield. D.C. gratefully acknowledges that this work was supported by BBSRC under grant BB/H013849/1. We also gratefully acknowledge reviewers’ comments, which helped improve the quality of the letter.

References

Aldroubi, A., & Feichtinger, H. (1998). Exact iterative reconstruction algorithm for multivariate irregularly sampled functions in spline-like spaces: The L^p-theory. Proceedings of the American Mathematical Society, 126(9), 2677-2686.
Aldroubi, A., & Gröchenig, K. (2001). Nonuniform sampling and reconstruction in shift-invariant spaces. SIAM Rev., 43(4), 585-620.
Bergner, S., Moller, T., Weiskopf, D., & Muraki, D. J. (2006). A spectral analysis of function composition and its implications for sampling in direct volume visualization. IEEE Transactions on Visualization and Computer Graphics, 12, 1353-1360.
Christensen, O. (2003). An introduction to frames and Riesz bases. Boston: Birkhäuser.
Cochran, D., & Clark, J. J. (1990). On the sampling and reconstruction of time-warped bandlimited signals. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 1990 (pp. 321-326). Piscataway, NJ: IEEE.
Do, M. N., Marchand-Maillet, D., & Vetterli, M. (2012). On the bandwidth of the plenoptic function. IEEE Transactions on Image Processing, 21, 708-717.
Feichtinger, H. G., Gröchenig, K., & Strohmer, T. (1995). Efficient numerical methods in non-uniform sampling theory. Numerische Mathematik, 69, 423-440.
Gontier, D., & Vetterli, M. (2014). Sampling based on timing: Time encoding machines on shift-invariant subspaces. Applied and Computational Harmonic Analysis, 36(1), 63-78.
Gröchenig, K. (1992). Reconstruction algorithms in irregular sampling. Math. Comp., 59, 181-194.
Gröchenig, K. (1993). A discrete theory of irregular sampling. Linear Algebra Appl., 193, 129-150.
Gröchenig, K., & Schwab, H. (2003). Fast local reconstruction methods for nonuniform sampling in shift invariant spaces. SIAM J. Matrix Anal. Appl., 24, 899-913.
Lapicque, L. (1907). Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. J. Physiol. Pathol. Gen., 9, 620-635.
Lazar, A. A. (2004). Time encoding with an integrate-and-fire neuron with a refractory period. Neurocomputing, 58-60, 53-58.
Lazar, A. A., & Pnevmatikakis, E. A. (2008). Faithful representation of stimuli with a population of integrate-and-fire neurons. Neural Computation, 20(11), 2715-2744.
Lazar, A. A., Simonyi, E. K., & Tóth, L. T. (2005). Fast recovery algorithms for time encoded bandlimited signals. In Proceedings of the IEEE Int. Conf. Acoustics, Speech, and Signal Processing (Vol. 4, pp. 237-240). Piscataway, NJ: IEEE.
Lazar, A. A., Simonyi, E. K., & Tóth, L. T. (2008). An overcomplete stitching algorithm for time decoding machines. IEEE Transactions on Circuits and Systems I: Regular Papers, 55(9), 2619-2630.
Lazar, A. A., & Tóth, L. T. (2003). Time encoding and perfect recovery of bandlimited signals. In Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (Vol. 6, pp. 709-712). Piscataway, NJ: IEEE.
Lazar, A. A., & Tóth, L. T. (2004a). Sensitivity analysis of time encoded bandlimited signals. In Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (Vol. 2, pp. 709-712). Piscataway, NJ: IEEE.
Lazar, A. A., & Tóth, L. T. (2004b). Perfect recovery and sensitivity analysis of time encoded bandlimited signals. IEEE Transactions on Circuits and Systems I: Regular Papers, 51(10), 2060-2073.
Strohmer, T. (1993). Irregular sampling, frames and pseudoinverse. Master's thesis, University of Vienna.
Titchmarsh, E. C. (1926). The zeros of certain integral functions. Proc. London Math. Soc., 283-302.
Tuckwell, H. C. (1988). Introduction to theoretical neurobiology. Cambridge: Cambridge University Press.
Unser, M. (2000). Sampling—50 Years after Shannon. Proceedings of the IEEE, 88, 569-587.