The winner-take-all (WTA) computation in networks of recurrently connected neurons is an important decision element of many models of cortical processing. However, analytical studies of the WTA performance in recurrent networks have generally addressed rate-based models. Very few have addressed networks of spiking neurons, which are relevant both for understanding the biological networks themselves and for the development of neuromorphic electronic neurons that communicate by action-potential-like address-events. Here, we make steps in that direction by using a simplified Markov model of the spiking network to examine analytically the ability of a spike-based WTA network to discriminate the statistics of inputs ranging from stationary regular to nonstationary Poisson events. Our work extends previous theoretical results showing that a WTA recurrent network receiving regular spike inputs can select the correct winner within one interspike interval. We show first for the case of spike rate inputs that input discrimination and the effects of self-excitation and inhibition on this discrimination are consistent with results obtained from the standard rate-based WTA models. We also extend this discrimination analysis of spiking WTAs to nonstationary inputs with time-varying spike rates resembling statistics of real-world sensory stimuli. We conclude that spiking WTAs are consistent with their continuous counterparts for steady-state inputs, but they also exhibit high discrimination performance with nonstationary inputs.
The winner-take-all (WTA) computation is an intrinsic property of recurrent networks, which abound in cortex. Several studies have discussed the computational power a WTA network offers (Riesenhuber & Poggio, 1999; Yuille & Geiger, 1998; Maass, 1999; Maass, 2000) and its role in cortical processing models, for example, a hierarchical model of vision in cortex (Riesenhuber & Poggio, 1999) or a model of selective attention and recognition processes (Itti, Koch, & Niebur, 1998).
This computation is thought to be an intrinsic decision component of the cortical microcircuit. Decision processes in the brain are not localized in one specific region, but evolve in a distributed manner when different brain regions cooperate to reach a consistent interpretation. The winner-take-all circuit with both cooperative and competitive properties is a main building block that contributes to this distributed decision process (Amari & Arbib, 1982; Douglas, Koch, Mahowald, Martin, & Suarez, 1995).
Because of these properties, WTA networks have been of great interest to researchers. Yuille and Grzywacz (1989) and Ermentrout (1992) are classical references to theoretical analyses. In these early models, the neurons are nonspiking, that is, they receive an analog input and have an analog output. The analog WTA computation can be efficiently implemented in very-large-scale integrated (VLSI) transistor circuits. With initial circuits described in Lazzaro, Ryckebusch, Mahowald, and Mead (1989), a whole series of analog models (e.g., Kaski & Kohonen, 1994; Barnden & Srinivas, 1993; Hahnloser, Sarpeshkar, Mahowald, Douglas, & Seung, 2000) and implementations has been developed (He & Sanchez-Sinencio, 1993; Starzyk & Fang, 1993; Serrano-Gotarredona & Linares-Barranco, 1995; Kincaid, Cohen, & Fang, 1996; Indiveri, 1997; Indiveri, 2001; Moller, Maris, Tomes, & Mojaev, 1998; Hahnloser, Sarpeshkar, Mahowald, Douglas, & Seung, 2000; Liu, 2000; Liu, 2002).
In the past decade, spiking neuron models and their electronic counterparts have gained increasing interest. Spike-based networks capture the asynchronous and time-continuous computation inherent in biological nervous systems. Neuron models with analog inputs and analog outputs can be converted into models with spiking output if a thresholding operation is introduced to the neuron. Coultrip, Granger, and Lynch (1992) is an early theoretical analysis, with further descriptions in Jin and Seung (2002) and Yu, Giese, and Poggio (2002) and VLSI implementations in Chicca, Indiveri, and Douglas (2004), Abrahamsen, Häfliger, and Lande (2004), and Oster, Wang, Douglas, and Liu (2008).
The next theoretical consideration is models with both spiking input and spiking output. Previous theoretical studies focused on population models (e.g., Lumer, 2000), where the population firing represents a graded analog value. Indiveri, Horiuchi, Niebur, and Douglas (2001) and Chicca, Lichtsteiner, Delbruck, Indiveri, and Douglas (2006) are VLSI implementations that use the firing rate as an analog input and output encoding. A special case is presented in Carota, Indiveri, and Dante (2004) and Bartolozzi and Indiveri (2004), in which the core of the winner-take-all is analog but the signals are converted to spike rates for communication with the outside world. Other theoretical studies consider alternate neuron models, such as oscillatory neurons (Wang, 1999) or differentiating units (Jain & Wysotzki, 2004).
No analysis until now has considered the effect of single spikes and spike timing on the winner-take-all computation with spiking inputs and outputs. Gautrais and Thorpe (1998) start their analysis from a similar point of view, that is, how the network decides which of two input spike rates is higher, but they do not consider sampling of this estimation in the output spikes (this analysis could be classified as spiking input and analog output).
The emergence of multichip spiking systems that incorporate the WTA as part of their decision process (Serrano-Gotarredona et al., 2005; Choi, Merolla, Arthur, Boahen, & Shi, 2005; Chicca, Lichtsteiner, Delbruck, Indiveri, & Douglas, 2006; Vogelstein, Mallik, Culurciello, Cauwenberghs, & Etienne-Cummings, 2007) highlights the necessity for a theoretical quantification of a WTA network based on different network parameters, especially if these systems (e.g., CAVIAR) are to be used in different applications. To address this need, we develop a framework for quantifying the performance of a spiking WTA network with spiking inputs based on the network parameters and input statistics. We start with the definition of the hard WTA architecture in section 2 and treat the network as an event-based classifier. The network has to decide which neuron, the “winner,” receives the strongest input after a certain time interval, and it indicates its decision with an output spike. In a spiking system, the term strongest input has to be defined: How is the input signal encoded in the spikes? What are the statistical properties of the spike trains? We consider the following cases: stationary inputs of regular frequencies in section 3, stationary inputs of Poisson distribution, effects of self-excitation and inhibition on the WTA performance in section 4, and a model of nonstationary Poisson inputs in section 5 for two cases: where the strongest input switches between two neurons and where the input activity travels across the neurons of the network. The latter resembles the statistics of real-world sensory stimuli. This formalism has been applied during the programming of the WTA module in the CAVIAR multichip asynchronous vision system (Oster, Wang, Douglas, & Liu, 2008; Serrano-Gotarredona et al., 2005).
2. Winner-Take-All Network Connectivity
We assume a network of integrate-and-fire neurons that receives input spikes through excitatory or inhibitory connections. The WTA operation is implemented by having these neurons compete against one another through inhibition. In biological networks, excitation and inhibition are specific to the neuron type. Excitatory neurons make only excitatory connections to other neurons, and inhibitory neurons make only inhibitory connections. This specificity is obeyed in a typical WTA network where inhibition between excitatory neurons is mediated by populations of inhibitory interneurons (see Figure 1a).
To adjust the amount of inhibition between the neurons (and thereby the strength of the competition), both types of connections could be modified: the excitatory connections from excitatory neurons to interneurons and the inhibitory connections from interneurons to excitatory neurons. In our model, we assume the forward connections between the excitatory and the inhibitory neurons to be strong, so that each spike of an excitatory neuron triggers a spike in the global inhibitory neurons. The amount of inhibition between the excitatory neurons is adjusted by tuning the connections from the global inhibitory neurons to the array neurons. This configuration allows the fastest spreading of inhibition through the network.
With this configuration, we can simplify the network by replacing the global inhibitory interneurons with full inhibitory connectivity between the excitatory neurons (see Figure 1b). This simplification is used only for the analysis. In an electronic implementation, the configuration with the global interneurons is more suitable.
While a cortical neuron can receive possibly correlated inputs from up to 10,000 synapses, we assume that the external inputs to the neurons are uncorrelated and that all the inputs can be summed together into one spiking input. This assumption is valid especially for Poisson-distributed input, since summing spike trains with Poisson statistics results again in a spike train with Poisson statistics.
In addition to the excitation from the external inputs and inhibition from other neurons, each neuron has a self-excitatory synapse that facilitates its selection as the winner in the next cycle once it has been chosen as the winner. In this analysis, we do not consider external inhibitory input spikes even though this could easily be done by assigning a sign to every input spike. Since the statistical properties stay the same, we disregard the external inhibitory input for clarity.
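The update rule implied by this connectivity can be summarized in a few lines. This is a sketch under the simplified full inhibitory connectivity of Figure 1b; the weight values are illustrative placeholders, not values taken from the text.

```python
# Nonleaky integrate-and-fire WTA with full inhibitory connectivity
# (the simplification of Figure 1b). Each external input spike adds VE;
# a neuron crossing threshold Vth emits an output spike, is reset to
# Vself through its self-excitatory synapse, and subtracts VI from every
# other neuron. All numeric values are illustrative placeholders.
VTH, VE, VI, VSELF = 1.0, 0.3, 1.0, 0.4

class WTANetwork:
    def __init__(self, n_neurons):
        self.v = [0.0] * n_neurons          # membrane potentials

    def input_spike(self, neuron):
        """Deliver one external excitatory spike; return the index of the
        neuron that produced an output spike, or None."""
        self.v[neuron] += VE
        if self.v[neuron] < VTH:
            return None
        # output spike: self-excitation for the winner, inhibition for the rest
        self.v = [VSELF if i == neuron else max(0.0, u - VI)
                  for i, u in enumerate(self.v)]
        return neuron
```

With VE = 0.3 as chosen here, a neuron needs n = 4 input spikes to reach threshold from rest; the winner then restarts from Vself while all other neurons are discharged.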
2.1. Neuron Model.
3. Stationary Inputs of Regular Rates
We first discuss how we set up the network connectivity for the case of the hard WTA mode where the neurons receive external input spike trains of regular frequency. In this mode, a winning neuron is selected after it has received a predetermined number of input spikes, n, needed to generate an output spike. This neuron is the one whose external input spike train has the smallest interspike interval. We now consider how to set up the constraints on VE and VI so that even though other neurons beside neuron k might first produce transient output spikes because of initial conditions, these neurons will not fire again once neuron k spikes.
We assume that all neurons i ∈ 1,…, N, receive an external input spike train of constant frequency ri, and neuron k receives an external input with the highest frequency (rk>ri; ∀ i ≠ k). We now describe the conditions for fulfilling the constraints for this hard WTA mode (see Figure 2):
- As soon as neuron k spikes once, no other neuron i ≠ k should be able to spike because of the inhibitory spike from neuron k. Another neuron can receive up to n spikes even if its input spike frequency is lower than that of neuron k because the neuron is reset to Vself after a spike, as illustrated in Figure 2. Hence, the inhibitory weight should be such that
If a neuron j other than neuron k spikes in the beginning, there will be some time in the future when neuron k spikes and becomes the winning neuron. From then on, conditions 1 and 2 hold, so a neuron j ≠ k can generate a few transient spikes, but neuron k wins. To see this, let us assume that the external inputs to neurons j and k spike with almost the same frequency (but rk>rj). For the interspike intervals Δi = 1/ri, this means Δj>Δk. Since the spike trains are not synchronized, the corresponding input spikes to both neurons k and j have a changing temporal offset ϕ. At every output spike of neuron j, the input spike temporal offset decreases by nk(Δj − Δk) until ϕ < nk(Δj − Δk). When this happens, neuron k receives (nk + 1) input spikes before neuron j spikes again and crosses threshold:
Case 3 happens only under certain initial conditions, for example, when Vk ≪ Vj or when neuron j initially receives an external input spike frequency that is higher than that to neuron k. A leaky integrate-and-fire model will ensure that all membrane potentials are discharged (Vi = 0) at the onset of a stimulus. The network will then select the winning neuron after receiving a predetermined number of input spikes, and this winner will have the first output spike. Even if the conditions above are fulfilled, the neuron with the smaller input could stay as the winner for a long time before the neuron with the larger input takes over, as the switching dynamics depends on the initial conditions of the network.
If the rate is regular, the information about the strongest input is already contained in one interspike interval. If Vth/2 < VE < Vth and Vth/2 < Vself < Vth are chosen, the network performs the selection in one interspike interval. We call this an optimal decision in the sense that the network will always choose the winning neuron as the one receiving the highest input frequency assuming perfect network homogeneity.
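A small event-based sketch illustrates this regime: two neurons receive regular spike trains, VE and Vself are chosen between Vth/2 and Vth as prescribed above (so two input spikes reach threshold), and inhibition is full (VI = Vth). The specific rates are illustrative.

```python
# Event-based sketch of the hard WTA with regular input rates:
# Vth/2 < VE < Vth and Vth/2 < Vself < Vth, VI = Vth.
# Each element of the returned list is (time, winning neuron).
VTH, VE, VSELF = 1.0, 0.6, 0.7

def run(rates, t_end=10.0):
    # merge the regular input spike trains into one sorted event list
    events = sorted((k / r, i) for i, r in enumerate(rates)
                    for k in range(1, int(t_end * r) + 1))
    v = [0.0] * len(rates)
    winners = []
    for t, i in events:
        v[i] += VE
        if v[i] >= VTH:
            winners.append((t, i))
            # winner reset to Vself, all others fully inhibited (VI = Vth)
            v = [VSELF if j == i else 0.0 for j in range(len(rates))]
    return winners

wins = run([10.0, 12.0])   # neuron 1 receives the higher rate
```

In this sketch, neuron 1 (the higher-rate input) produces the first output spike on its second input spike and every output spike thereafter; neuron 0 never accumulates enough charge between inhibitory resets.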
A WTA network can exhibit multistability or hysteresis; that is, depending on initial conditions, a neuron with a lower input frequency can remain the winner. By choosing Vself>VE, a neuron k that is initially the winner will remain the winner even if another neuron j receives the stronger input. The increase in input frequency that is needed for neuron j to take over as the winner will depend on Vself.
WTA models that exploit firing rate thresholds are normally dependent on the number of neurons in the network. The WTA computation that we describe here exploits the spike timing; hence, the mechanism of competition is independent of the number of neurons. The network can be scaled to any size as long as the inhibitory neuron can still completely inhibit the excitatory neurons with one output spike.
4. Stationary Poisson Rates
For stationary Poisson-distributed input spikes, we first consider a network of two neurons, labeled 0 and 1, with the connectivity shown in Figure 3. Because of the Poisson statistics of the inputs, the probability of selecting the correct winner depends on the ratio of their input Poisson rates ν and the number of input spikes, n, needed for the neuron to reach threshold.
In general, the probability that the first spike of the network comes from the neuron receiving the higher input rate increases as the neurons integrate over more spikes. Equation 4.1 cannot be solved easily in closed form, but it can be integrated numerically.
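A Monte Carlo sketch of this race offers a cross-check of the numerically integrated quantity (equation 4.1 is not reproduced here). With both neurons initially discharged and strong inhibition, the first output spike of the network comes from whichever neuron collects n Poisson input spikes first, and the time of a neuron's n-th input spike is a sum of n exponential interspike intervals. The rates below are illustrative.

```python
# Monte Carlo sketch of the two-neuron Poisson race: the first output
# spike comes from whichever neuron reaches n input spikes first.
import random

def p_first_spike_correct(v0, v1, n, trials=20000, seed=1):
    """Probability that neuron 0 (rate v0 > v1) fires first."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        t0 = sum(rng.expovariate(v0) for _ in range(n))  # neuron 0's n-th spike
        t1 = sum(rng.expovariate(v1) for _ in range(n))  # neuron 1's n-th spike
        wins += t0 < t1
    return wins / trials

p4  = p_first_spike_correct(120.0, 80.0, 4)    # integrate over few spikes
p16 = p_first_spike_correct(120.0, 80.0, 16)   # integrate over more spikes
```

Consistent with the statement above, the probability of a correct first spike grows as the neurons integrate over more input spikes.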
In the analysis, we assume that the neurons were initially discharged in the derivation of P0outI. This probability is not necessarily the same as the general case of P0out, which describes the probability that any output spike of the network comes from neuron 0. However, in the case of strong inhibition (VI = Vth) and no self-excitation (Vself = 0), both neurons are also completely discharged after any output spike so P0out = P0outI. The network has no memory of its past input, and integration will start again with the initial conditions.
4.1. Filter Model.
An alternate way of viewing this WTA process is to consider the network as a filter that increases the percentage of spikes encoding for neuron 0. Figure 4 details this interpretation as a filter model in signal theory. Interestingly, P0out is independent of the total input rate, ν. To show this, we replace ν0 → P0inν (rate of input spikes to neuron 0) and ν1 → (1 − P0in)ν (rate of input spikes to neuron 1) with ν = ν0 + ν1 in equation 4.1. We substitute νt → t′ and ν dt → dt′. The integration limits (0; ∞) do not change:
We quantify the effect of the WTA on the input and output probabilities for different values of n in Figure 5. The optimal classification would result in a step function: for any P0in>0.5, the output probability P0out is 1. The results show that the output performance improves as the neurons integrate over more input spikes.
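The rate-invariance claim can also be checked numerically. This sketch compares P0out for two input pairs with the same P0in = 0.6 but total rates differing by a factor of 100, under the strong-inhibition, no-self-excitation reading where every output spike restarts a fresh race; the time of the n-th input spike at rate v is sampled as Gamma(n, 1/v). The value of n is illustrative.

```python
# Numerical check of the rate-invariance claim: P0out depends on
# P0in = v0 / (v0 + v1) but not on the total input rate v.
import random

def p0_out(v0, v1, n, trials=20000, seed=3):
    rng = random.Random(seed)
    # time of the n-th Poisson event at rate v is Gamma(n, 1/v) distributed
    wins = sum(rng.gammavariate(n, 1.0 / v0) < rng.gammavariate(n, 1.0 / v1)
               for _ in range(trials))
    return wins / trials

low  = p0_out(6.0, 4.0, 8)       # total rate v = 10,   P0in = 0.6
high = p0_out(600.0, 400.0, 8)   # total rate v = 1000, P0in = 0.6
```

Scaling both rates by a common factor scales both n-th-spike times identically, so the race outcome, and hence P0out, is unchanged.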
4.2. General Case.
4.3. Effect of Self-Excitation.
The Markov model allows us to quantify the effect of the self-excitation on the WTA performance. Here, we assume VI = Vth, so p is independent of the membrane potential at the time of the inhibition (see equation 4.8). Strong self-excitation improves the probability that an output spike of the network is emitted from the neuron with the higher firing rate (see Figure 7). Interestingly, the output rate for a network that integrates over many input spikes and has strong self-excitation is similar to that of a network that integrates over fewer input spikes and has no self-excitation.
4.4. Effect of Inhibition.
We now look at the performance in the case where the inhibition is weak, that is, VI < Vth. From equation 4.8, the neuron that did not spike needs p spikes to reach threshold after being inhibited, while the neuron that last spiked needs m spikes to reach threshold. If p < m, a neuron that did not spike before will have a greater probability of spiking next. Starting from the same initial state for both neurons, the neuron that receives the strongest input is the most probable one to spike. After this output spike, the network will then select a neuron that did not spike before, and the probability of a correct decision of the network will decrease correspondingly. Hence, lowering the strength of the inhibition will lead to a decrease in performance.
Our description does not make any predictions about the membrane voltage V−E1 of the non-winning neuron before the output spike, except 0 < V−E1 < Vth. Let us assume that the membrane was sitting close to threshold before the neuron receives inhibition VI. After inhibition, V+E1 = Vth − VI. The non-winning neuron will then reach threshold with p ≈ VI/VE spikes. With this assumption, weakening the inhibition leads to a drastic decrease in performance (see Figure 8a). But depending on the history of the spike input to the non-winning neuron, the membrane potential will be significantly lower than Vth before the inhibition, that is, less inhibition is needed to achieve the same effect. We can address this in simulation (see Figure 8b), showing that the network performance does not decrease that rapidly. For m = 10, inhibition can be decreased to about 70% of Vth before the network shows a significant decrease in performance.
We conclude that weakening the strong inhibition always leads to a decrease in performance. For weak inhibition, the functional description underestimates the performance of the network, while simulation results show that inhibition can be decreased to about 70% of Vth before the network shows a significant decrease in performance.
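A small simulation along these lines can be sketched as follows (parameters are illustrative, not those of Figure 8). Because the neurons are nonleaky, only the order of input spikes matters, so each input spike can be assigned to neuron 0 with probability v0/(v0 + v1); on each output spike the winner is fully discharged and the loser is inhibited by VI ≤ Vth.

```python
# Sketch of the weak-inhibition experiment: two neurons, Poisson inputs,
# inhibition vi subtracted from the non-spiking neuron on each output
# spike. The fraction of output spikes from neuron 0 (the stronger input)
# measures the WTA performance.
import random

def fraction_correct(v0, v1, n, vi_frac, n_out=4000, seed=5):
    rng = random.Random(seed)
    vth, ve = float(n), 1.0          # n input spikes reach threshold
    vi = vi_frac * vth               # inhibition strength
    pot = [0.0, 0.0]
    p0 = v0 / (v0 + v1)              # probability a spike goes to neuron 0
    correct = total = 0
    while total < n_out:
        i = 0 if rng.random() < p0 else 1
        pot[i] += ve
        if pot[i] >= vth:            # output spike from neuron i
            total += 1
            correct += (i == 0)
            pot[i] = 0.0                              # winner discharged
            pot[1 - i] = max(0.0, pot[1 - i] - vi)    # loser inhibited
    return correct / total

strong = fraction_correct(60.0, 40.0, 10, vi_frac=1.0)   # VI = Vth
weak   = fraction_correct(60.0, 40.0, 10, vi_frac=0.3)   # VI = 0.3 Vth
```

As stated above, weakening the inhibition lowers the fraction of output spikes emitted by the neuron with the stronger input, although the performance remains above chance.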
4.5. Generalization to N Neurons.
The probability of an output spike of the network originating from a neuron k is then given by a vector P. In the equilibrium state, P* is the first eigenvector of matrix P.
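The equilibrium distribution can be computed by power iteration on the transition matrix. In this sketch, a hypothetical 3-neuron transition matrix (rows: previous winner; columns: next winner) stands in for the one derived in the text; the numbers are placeholders chosen only so that neuron 0 receives the strongest input.

```python
# Power iteration to find the equilibrium distribution P*, the
# eigenvector of the Markov transition matrix with eigenvalue 1.
def equilibrium(P, iters=200):
    n = len(P)
    p = [1.0 / n] * n                  # start from the uniform distribution
    for _ in range(iters):
        # one step of the chain: p <- p P
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    return p

# hypothetical transition probabilities (each row sums to 1)
P = [[0.6, 0.3, 0.1],
     [0.4, 0.4, 0.2],
     [0.3, 0.3, 0.4]]
p_star = equilibrium(P)
```

At convergence, p_star is invariant under a further step of the chain, and the neuron with the strongest input carries the largest probability of emitting the next output spike.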
5. Time-Varying Firing Rates
We extend our analysis of stationary inputs to inputs with time-varying firing rates in two computational tasks that involve the WTA. In the first task, we evaluate the discrimination performance of a neuron to rate changes in Poisson-distributed inputs (see section 5.1); in particular, we look at the effect of hysteresis from self-excitation and the subsequent impact of self-excitation on the switching time of the neuron to the input rate change. In the second task, we look at the ability of the network to accurately reconstruct the position of a traveling object that generates an input wave across the neurons of the network (see section 5.2). In this case, each neuron sees a time-varying Poisson input rate due to the movement of the object. For such inputs, each neuron will receive only a finite number of input spikes for a finite duration of time. As a result, the number n of input spikes that a neuron can integrate to make a decision is constrained by the dynamics of the input to the network.
5.1. Switching of Stronger Input.
We first discuss the case where the input with the higher rate switches between the two neurons in a network with self-excitation. Using again a two-neuron network as in Figure 6, we assume that neuron 1 initially receives the higher rate input, that is, ν1>ν0. At time t = 0, neuron 0 now receives the higher rate input, ν0>ν1. We also assume that the duration of the inputs is long enough before the switching so that the network has a stable output, that is, the output spikes of the network come from neuron 1.
5.1.1. Classification Performance.
To quantify the classification performance of the WTA network, we consider the first spike of neuron 0 as the criterion for a correct detection of the changed inputs. This decision is described by two variables:
PTP(t)—the probability that neuron 0 spikes in time t after the inputs are switched, that is, the probability that the network indicates a correct decision (true positive)
PFP(t)—the probability that neuron 0 spikes in time t if the rates did not switch, that is, the probability that the network indicates a switch even though the rates did not change (false positive).
Figure 9 shows the true and false-positive rates similar to the receiver operating characteristic curve (ROC). The ROC curves quantify the dependence of the classifier performance on a parameter. Varying this parameter determines the true and false-positive probabilities of the classifier. For our classifier, this parameter is time, and it corresponds to the different values of PTP and PFP. The area under the ROC curve quantifies the classifier performance. We follow the engineering definition of the discrimination performance, which is the area under the ROC curve measured between the curve and chance level (dashed line).
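A simplified sketch of this ROC construction follows, under the hedged assumption that pre-switch charge and inhibition can be ignored, so that neuron 0's first spike occurs at its n-th input spike after t = 0: a sum of n exponential intervals at the new high rate (true positive) or at the unchanged low rate (false positive). Since both PTP(t) and PFP(t) are then first-spike-time distribution functions parameterized by time, the area under the ROC equals P(T_TP < T_FP), and subtracting the chance level 0.5 gives the discrimination measure. The rates below are illustrative.

```python
# Monte Carlo sketch of the discrimination measure: area under the ROC
# built from the true- and false-positive first-spike times, minus the
# chance level 0.5.
import random

def discrimination(v_hi, v_lo, n, trials=20000, seed=11):
    rng = random.Random(seed)
    auc = sum(sum(rng.expovariate(v_hi) for _ in range(n)) <
              sum(rng.expovariate(v_lo) for _ in range(n))
              for _ in range(trials)) / trials
    return auc - 0.5

d4  = discrimination(120.0, 80.0, 4)    # small n: fast but less reliable
d16 = discrimination(120.0, 80.0, 16)   # large n: slower but more reliable
```

As in the stationary case, integrating over more input spikes improves the discrimination, at the cost of a longer detection time.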
The dependence of the discrimination performance and average switching time as a function of n and m for the two-neuron network is shown in Figure 10. In general, the discrimination performance increases if n increases or if self-excitation is increased for a fixed value of n. However, the switching time also increases in both cases. Interestingly, for the same average switching time, a network with a larger n and no self-excitation has a better discrimination performance than a network with a smaller n and self-excitation.
5.2. Traveling Input.
Real-time systems that interact with the world usually have to process nonstationary input, for example, retina spikes from an object moving in space or auditory spikes from music and speech. Here, we show the analysis of a situation where a network has to determine with high accuracy the position of a ball traveling at a constant speed. We model the input by a gaussian-shaped wave traveling across the neurons. This simplifying description of the input is a good description of the output spikes from one of the feature extraction stages of a multichip asynchronous vision system (Oster, Douglas, & Liu, 2007). It can also be a general description of the outputs of temporal feature filters.
We analyze how we can set n, the number of input spikes that a neuron needs to reach threshold, so that we obtain optimal discriminability performance of the WTA network. In this scenario, this discriminability measure corresponds to a neuron firing only once and exactly at the time when the maximum of the gaussian wave moves over that neuron. Since the input scenario corresponds to the overlap of gaussian inputs in space and time, the discrimination analysis is related to the discriminability measure used in psychophysics.
We can compare the temporal distance d to the discriminability measure used in psychophysics: it measures the distance between the centers of two gaussian distributions that are normalized to σ = 1. The overlap between the distributions reflects the difficulty in separating them. In our example, d is the distance between two neurons that receive spike input consecutively. Nevertheless, d is also a measure of the difficulty in classifying the ball position, like the discriminability measure in psychophysics.
The performance of the WTA is best if the output is as sparse as possible. Since we require that each neuron makes, on average, one spike when the object passes over it, the average integration time of the WTA decision is then the time needed for the object to move from one neuron to the next, d. It is natural to center this interval on the neuron, for neuron 0 at t = 0. In this time, the neuron receives a higher firing than all its neighbors (see Figure 11). Integration of inputs to the WTA starts then at the intersection of two neighboring input firing rates at t = −d/2. We use the variable T to indicate the integration time (T = t + d/2). To start the analysis, we assume that all neurons are reset at this time.
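The input statistics just described can be sketched numerically: a gaussian rate profile moving at constant speed gives neuron j the time-varying Poisson rate r_j(t) = r_max · exp(−(j·d − s·t)² / (2σ²)), and the expected spike count of each neuron in the integration window [−d/2, d/2] around neuron 0's pass shows why the centered neuron should win. All parameters below are illustrative, not values from the text.

```python
# Expected input spike counts for a gaussian wave traveling across the
# array. Neuron j sits at position j * D; the object passes neuron 0 at
# t = 0 and moves at constant SPEED. Parameters are illustrative.
import math

R_MAX, SIGMA, SPEED, D = 200.0, 1.0, 1.0, 0.5   # D: neuron spacing

def expected_count(j, steps=1000):
    # midpoint-rule integration of r_j(t) over t in [-D/2, D/2]
    dt = D / steps
    return sum(R_MAX * math.exp(-((j * D - SPEED * t) ** 2)
                                / (2 * SIGMA ** 2)) * dt
               for t in (-D / 2 + (k + 0.5) * dt for k in range(steps)))

counts = [expected_count(j) for j in (-2, -1, 0, 1, 2)]
```

The counts are symmetric about the centered neuron and peak there, so choosing n near the centered neuron's expected count within the window makes it the most likely to reach threshold first.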
Deviation of the stimulus position reconstructed from the network output from the ideal case results from two types of errors: jitter in the timing of the output spikes and spikes from neurons that are not aligned to the stimulus position. We summarize the effect of both by defining the total position error e as the area between the stimulus position reconstructed from the network output and the ideal case. We normalize this area by the number of neurons, so that e denotes the average deviation of the reconstructed stimulus position per neuron.
Spikes from neurons that are not aligned to the stimulus position contribute to the classification error eclass. We assume that if the network decides on an incorrect stimulus position, this position will be kept until the next spike of the network at time d later. We define the error as the area between the correct position 0 and j, weighted by the probability Pjout that neuron j makes the first output spike of the network:
Results of the position error obtained by evaluation of the functional description therefore cannot be directly compared to simulation data. Nevertheless, the data fit qualitatively (see Figure 12).
We present an analytical study for quantifying the discrimination performance of a spike-based WTA network based on different network parameters and spiking input statistics. Our work is in contrast to previous analyses of WTA behavior that considered only analog or spike-rate coded inputs. Rate coding schemes are useful for stationary inputs but fail to describe the dynamic input signals that occur in real-time systems for time-varying stimuli. In particular, these studies do not consider the transition region between single spike and spike rate coding in a WTA network.
We assume a nonleaky integrate-and-fire neuron model in our analysis in part because we could formulate closed-form solutions for the input statistics considered and also in part because the VLSI implementation follows this model. The effect of adding leakage would be rather small, at least if we added linear leakage. Adding leakage with an exponential dependency of the membrane voltage (the same effect as conductance-based synapses) would lead to a more complex neuron behavior exhibiting random walk, as discussed before. Unfortunately, an analysis with closed-form equations and inequalities would not be feasible in this case, but the network would have to be simulated.
We have also not considered the influence of noise on the membrane because of the following two reasons. First, we consider only excitatory external input, and, second, our neuron model does not include conductance-based synapses. Therefore, the membrane potential does not have a stable intermediate voltage as in a random walk case where the neurons can quickly reach threshold with any additional input, including noise input. Since the considered inhibition in our network is strong, the other neurons besides the winner are discharged to V = 0. The condition that all neurons (except the winner) have a membrane potential of zero will therefore happen frequently—after every output spike of the network. This behavior is specific to the chosen model of neuron and synapses, which was a prerequisite to a closed formulation of the winner-take-all behavior.
Our analysis is centered on spike numbers and spike times, thus making this analysis extendable for any size network. This analysis is especially relevant for the development of neuromorphic electronic spiking networks used in engineering applications. For example, we used this theoretical work to predict the outputs of our hardware VLSI implementation of the WTA network in an asynchronous multichip spiking system called CAVIAR (Serrano-Gotarredona et al., 2005). The CAVIAR system consists of a temporal contrast retina (Lichtsteiner, Posch, & Delbruck, 2008), a bank of spike-based convolution filters (Serrano-Gotarredona, Serrano-Gotarredona, Acosta-Jimenez, & Linares-Barranco, 2006), a winner-take-all network (Oster et al., 2008), and a learning module (Häfliger, 2007) that classifies trajectories of moving objects.
The WTA module in CAVIAR computes the best feature represented in the image based on the spike outputs of the convolution filters and also computes the locations of this best feature. When the system sees a moving object, the input to the WTA network is represented by a spike wave of gaussian shape that travels across the neurons. We show that the output of the higher stages of CAVIAR can be well approximated by Poisson statistics (Oster et al., 2007), although the retina and convolution chips are completely deterministic. This is the same input that we used for our analysis of WTA behavior in section 5.2, and we can compare the predicted performance of our theoretical model with the performance of the implemented model in CAVIAR. The chip's performance follows the theoretical prediction for inputs of constant currents, regular spike rates, and Poisson spike trains (Oster et al., 2008). The achieved performance, that is, the detection of the position of a moving object by the WTA module, is close to optimal in the case of nonstationary Poisson statistics (Oster et al., 2007).
Fast detection tasks that can be performed by a system like CAVIAR with a finite number of neurons (≈32,000) do not allow time for a system to encode inputs as spike rates. Single spikes should be considered instead. Our analytical approach can be used to evaluate systems whose outputs range from single spike codes to spike rate coding by quantifying the performance of the classifier dependent on the number of input spikes the network needs to reach a decision, that is, to elicit an output spike. It is also applicable to systems that do not make any assumptions about the specific coding scheme, for example, in the visual information processing system, SpikeNET, in which signals are encoded in a rank-order code (Van Rullen & Thorpe, 2001, 2002).
Spike-based networks that capture the asynchronous and time-continuous computation inherent in biological nervous systems can provide a powerful alternate technology to today's digital processors. Applying the principles of biological processing to engineering applications requires a thorough understanding of the underlying computation: the processing architecture, the range of network and circuit parameters, and the resulting performance.
We made steps in that direction by using a simplified Markov model of the spiking network to examine analytically the ability of a spike-based WTA network to discriminate the statistics of inputs ranging from stationary regular to nonstationary Poisson events. Our work extends previous theoretical results showing that a WTA recurrent network receiving regular spike inputs can select the correct winner within one interspike interval. We showed that for the case of Poisson spike inputs, the discrimination performance of the network (i.e., the probability of the network making a correct decision) increases as self-excitation is increased, but as expected, the self-excitation leads to hysteresis, which means that the switching time for the network to detect a change in input rates increases. We find that weak inhibition primarily decreases the network performance.
We also extended this discrimination analysis of spiking WTAs to nonstationary inputs with time-varying spike rates resembling statistics of real-world sensory stimuli. We used this analysis to predict the performance of a WTA chip in a large-scale, multichip, asynchronous vision system. We conclude that spiking WTAs also exhibit high discrimination performance with nonstationary inputs.
This work was supported in part by EU grants IST-2001-34124 and FP6-2005-015803. We acknowledge Sebastian Seung for discussions on the winner-take-all mechanism. We also acknowledge Prashanth D'Souza and YingXue Wang for a careful reading of the draft and the anonymous referees for their helpful comments.