Abstract

In this letter, we propose a novel neuro-inspired low-resolution online unsupervised learning rule to train the reservoir or liquid of liquid state machines. The liquid is a large, sparsely interconnected recurrent network of spiking neurons. The proposed learning rule is inspired by structural plasticity and trains the liquid by forming and eliminating synaptic connections. Hence, the learning involves rewiring of the reservoir connections, similar to the structural plasticity observed in biological neural networks. The network connections can be stored as a connection matrix and updated in memory by using address event representation (AER) protocols, which are generally employed in neuromorphic systems. On investigating the pairwise separation property, we find that trained liquids provide 1.36 ± 0.18 times more interclass separation while retaining similar intraclass separation as compared to random liquids. Moreover, analysis of the linear separation property reveals that trained liquids are 2.05 ± 0.27 times better than random liquids. Furthermore, we show that our liquids are able to retain the generalization ability and generality of random liquids. A memory analysis shows that trained liquids have 83.67 ± 5.79 ms longer fading memory than random liquids, which have shown 92.8 ± 5.03 ms fading memory for a particular type of spike train inputs. We also throw some light on the dynamics of the evolution of recurrent connections within the liquid. Moreover, compared to separation-driven synaptic modification (SDSM), a recently proposed algorithm for iteratively refining reservoirs, our learning rule provides 9.30%, 15.21%, and 12.52% more liquid separation and 2.8%, 9.1%, and 7.9% better classification accuracies for 4-, 8-, and 12-class pattern recognition tasks, respectively.

1  Introduction

In neural networks, “plasticity of synapses” refers to their connection rearrangements and changes in strengths over time. Over the past decade, a plethora of learning rules have been proposed that are capable of training networks of spiking neurons through various forms of synaptic plasticity (Gardner, Sporea, & Grüning, 2015; Kuhlmann, Hauser-Raspe, Manton, Grayden, Tapson, & van Schaik, 2014; Sporea & Grüning, 2013; Florian, 2013; Ponulak & Kasiński, 2010; Gutig & Sompolinsky, 2006; Brader, Senn, & Fusi, 2007; Arthur & Boahen, 2006; Gerstner & Kistler, 2002; Moore, 2002; Poirazi & Mel, 2001). A large majority of this work has explored weight plasticity, where a neural network is trained by modifying (strengthening or weakening) the synaptic strengths. We identify another unexplored form of plasticity mechanism, termed structural plasticity, that trains a neural network through formation and elimination of synapses. Since structural plasticity involves changing the network connections over time, it does not need the provision to keep high-resolution weights. Hence, it is inherently a low-resolution learning rule. Such a type of low-resolution learning rule is motivated by the following biological observations:

  1. Biological experiments have shown that the strength of synaptic transmission at cortical synapses can experience considerable fluctuations “up” and “down” representing facilitation and depression, respectively, or both, when excited with short synaptic stimulation, and these dynamics are distinctive of a particular type of synapse (Thomson, Deuchars, & West, 1993; Markram & Tsodyks, 1996; Varela, Sen, Gibson, Fost, Abbott, & Nelson, 1997; Hempel, Hartman, Wang, Turrigiano, & Nelson, 2000). This kind of short-time dynamics is contrary to the traditional connectionist models assuming high-resolution synaptic weight values and conveys that synapses may have only a few states.

  2. Experimental research on long-term potentiation (LTP) in the hippocampus region of the brain has revealed that excitatory synapses may exist in only a small number of long-term stable states, where the continuous grading of synaptic efficacies observed in common measures of LTP may exist only in the average over a huge population of low-resolution synapses with randomly staggered thresholds for learning (Petersen, Malenka, Nicoll, & Hopfield, 1998).

Since the learning happens through formation and elimination of synapses, structural plasticity has been used to train networks of neurons with active dendrites and binary synapses (Poirazi & Mel, 2001; Hussain, Liu, & Basu, 2015; Roy, Basu, & Hussain, 2013; Roy, Banerjee, & Basu, 2014; Roy, San, Hussain, Wei, & Basu, 2016). This work demonstrated that networks constructed of neurons with nonlinear dendrites and binary synapses and trained through structural plasticity rules can obtain superior performance for supervised and unsupervised pattern recognition tasks.

However, until now, structural plasticity has been employed as a learning rule only in the context of neurons with nonlinear dendrites and binary synapses. We identify that it is a generic rule and can be tailored to train any neural network, thereby allowing it to evolve at low resolution. We intend to venture into this domain, and as a first step we have chosen to modify the sparse recurrent connections of a liquid or reservoir of a liquid state machine (LSM) (Maass, Natschläger, & Markram, 2002) constructed of standard leaky integrate-and-fire (LIF) neurons through structural plasticity and study its effects. In our algorithm, structural plasticity happening on longer timescales is guided by a fitness function updated on shorter timescales by a rule inspired by spike-timing-dependent plasticity (STDP). In contrast to the recently proposed algorithms that aim to enhance LSM performance by evolving its liquid (Xue, Hou, & Li, 2013; Hourdakis & Trahanias, 2013; Notley & Gruning, 2012; Obst & Riedmiller, 2012), our structural plasticity-based training mechanism provides the following advantages:

  1. For hardware implementations, the choice of connectivity can be easily implemented exploiting address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Unlike traditional algorithms that modify and store real-valued synaptic weights, the proposed algorithm achieves online learning in real-time scenarios only through modification of the connection table stored in memory, as sketched in the example following this list.

  2. Due to the presence of positive feedback connections in the liquid, training it through weight plasticity might lead to an average increase in synaptic weight and eventually take it to the unstable region. We conjecture that since our connection-based learning rule keeps the average synaptic weight of the liquid constant throughout learning, it reduces the chance of driving the liquid into instability.
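As a minimal illustration of point 1, the liquid's wiring can be kept as a connection table of (presynaptic, postsynaptic) address pairs, so that learning reduces to deleting one entry and inserting another. The sketch below is our own simplified example; the class name ConnectionTable and its methods are illustrative and do not correspond to any particular AER software or hardware interface.

```python
class ConnectionTable:
    """Toy connection table: the liquid's wiring as a set of
    (presynaptic address, postsynaptic address) pairs kept in memory."""

    def __init__(self, connections):
        self.table = set(connections)

    def rewire(self, post, old_pre, new_pre):
        """Structural plasticity step: replace the synapse old_pre -> post
        with new_pre -> post. No real-valued weights are stored or changed."""
        self.table.discard((old_pre, post))
        self.table.add((new_pre, post))

    def presynaptic_of(self, post):
        return sorted(pre for (pre, p) in self.table if p == post)

# Example: neuron 47 drops its input from neuron 64 and gains one from neuron 52.
liquid = ConnectionTable([(15, 47), (33, 47), (64, 47), (79, 47)])
liquid.rewire(post=47, old_pre=64, new_pre=52)
print(liquid.presynaptic_of(47))  # [15, 33, 52, 79]
```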

The rest of the letter is organized as follows. In the following section, we discuss the LSM framework and throw some light on previous work on LSM and structural plasticity. In section 3, we propose the structural plasticity-based unsupervised learning rule we have developed to train a liquid or reservoir composed of spiking neurons. In section 4, we share and discuss the results obtained from the experiments performed to evaluate the performance of our liquid. We conclude by discussing the implications of our work and future directions in the last section. The appendix lists the specifications of the liquid architecture and values of parameters used in this letter.

2  Background and Theory

In this section, we briefly describe the LSM framework and look at some work done over the past few years that aspired to improve the liquid or reservoir of LSM. Next we briefly review a few supervised and unsupervised structural plasticity learning rules.

2.1  Theory of Liquid State Machine

LSM is a reservoir computing method developed from the viewpoint of computational neuroscience by Maass, Natschläger, and Markram (2002). It supports real-time computations by employing a high-dimensional heterogeneous dynamical system that is continuously perturbed by time-varying inputs. The basic structure of LSM is shown in Figure 1. It comprises three parts: an input layer, a reservoir or liquid, and a memoryless readout circuit. The liquid is a recurrent interconnection of a large number of biologically realistic spiking neurons and synapses. The readout is implemented by a pool of neurons with no lateral interconnections. The spiking neurons of the liquid are connected to the neurons of the readout. The liquid does not create any output; instead, it transforms the lower-dimensional input stream into a higher-dimensional internal state. These internal states act as an input to the memoryless readout circuit, which is responsible for producing the final output of LSM.

Figure 1:

The LSM framework. LSM consists of three stages. The first stage, the input layer, is followed by a pool of recurrent spiking neurons whose synaptic connections are generated randomly and are usually not trained. The third stage is a simple linear classifier that is selected and trained in a task-specific manner.


Following Maass, Natschläger, and Markram (2002), if $u(\cdot)$ is the input to the reservoir, then the liquid neuron circuit can be represented mathematically as a liquid filter $L^{M}$, which maps the input function $u(\cdot)$ to the internal states $x^{M}(t)$ as
$$x^{M}(t) = \left(L^{M}u\right)(t). \qquad (2.1)$$
The next part of LSM—the readout circuit—takes these liquid states as input and transforms them at every time instant $t$ into the output $y(t)$ given by
$$y(t) = f^{M}\!\left(x^{M}(t)\right). \qquad (2.2)$$

The liquid circuit is general and does not depend on the problem at hand, whereas the readout is selected and trained in a task-specific manner. Moreover, multiple readouts can be used in parallel for extracting different features from the internal states produced by the liquid. (For more details on the theory and applications of LSM, refer to Maass et al., 2002.)
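As a simplified illustration of equations 2.1 and 2.2, the sketch below filters a liquid's spike trains with an exponential kernel to obtain internal states $x(t)$ and applies a memoryless linear readout at every time step. The time constant, random spike data, and readout weights are placeholders chosen for the example, not values used in our simulations.

```python
import numpy as np

def liquid_states(spikes, dt=1e-3, tau=0.03):
    """Schematic version of eq. 2.1: low-pass filter each liquid neuron's
    spike train with an exponential kernel to obtain internal states x(t).
    spikes: (n_neurons, n_steps) array of 0/1 spike indicators."""
    n, T = spikes.shape
    x = np.zeros((n, T))
    decay = np.exp(-dt / tau)
    for t in range(1, T):
        x[:, t] = decay * x[:, t - 1] + spikes[:, t]
    return x

def readout(x, W):
    """Schematic version of eq. 2.2: a memoryless readout y(t) = f(x(t)),
    here a linear map applied independently at every time step."""
    return W @ x

rng = np.random.default_rng(0)
spikes = (rng.random((135, 500)) < 0.02).astype(float)  # toy liquid activity
x = liquid_states(spikes)
W = 0.1 * rng.normal(size=(3, 135))                     # placeholder readout weights
y = readout(x, W)
print(x.shape, y.shape)  # (135, 500) (3, 500)
```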

2.2  Previous Research on Improvement of Liquid

Xue et al. (2013) proposed a novel spike-timing-dependent plasticity (STDP)–based learning rule for generating a self-organized liquid. They showed that LSM with the STDP learning rule provides better performance than LSM with a randomly generated liquid. Ju, Xu, Chong, and Vandongen (2013) studied in detail the effect of the distribution of synaptic weights and synaptic connectivity in the liquid on LSM performance. In addition, they proposed a genetic algorithm–based rule for the evolution of the liquid from a minimum structure to an optimized kernel having an optimal number of synapses and high classification accuracy. Hourdakis and Trahanias (2013) used Fisher's discriminant ratio as a measure of the separation property of the liquid. Subsequently, they used this measure in an evolutionary framework to generate liquids with suitable parameters, such that the performance of readouts is optimized. Sillin et al. (2013) implemented a reservoir by using atomic switch networks (ASN), which exploit nonlinear dynamics without needing to control or train the connections in the reservoir. They showed a method for optimizing the physical device parameters to maximize efficiency for a given task. Schliebs, Fiasché, and Kasabov (2012) proposed an algorithm to dynamically vary the firing threshold of the liquid neurons in the presence of neural spike activity such that LSM is able to achieve both a high sensitivity of the liquid to weak inputs and an enhanced resistance to overstimulation for strong stimuli. Wojcik (2012) simulated the LSM by forming the liquid with Hodgkin-Huxley neurons instead of the LIF neurons used in Maass et al. (2002). They provided a detailed analysis of the influence of cell electrical parameters on the separation property of this Hodgkin-Huxley liquid. Notley and Gruning (2012) proposed a learning rule that updates the synaptic weights in the liquid by using a triphasic STDP rule. Frid, Hazan, and Manevitz (2012) proposed a modified version of LSM that can successfully approximate real-valued continuous functions. Instead of providing spike trains as input to the liquid, they directly provided the liquid with continuous inputs. Moreover, they also used neurons with a firing history–dependent sliding threshold to form the liquid. Hazan and Manevitz (2012) showed that LSM in its normal form is less robust to noise in data, but when certain biologically plausible topological constraints are imposed, the robustness can be increased. Schliebs, Mohemmed, and Kasabov (2011) presented a liquid formed with stochastic spiking neurons and termed their framework pLSM. They showed that due to the probabilistic nature of the proposed liquid, in some cases pLSM is able to provide better performance than traditional LSM. Rhéaume, Grenier, and Bossé (2011) proposed novel techniques for generating liquid states to improve classification accuracy. First, they presented a state generation technique that combines the membrane potential and firing rates of liquid neurons. Second, they suggested representing the liquid states in the frequency domain for short-time signals of membrane potentials. Third, they showed that combining different liquid states leads to better readout performance. Kello and Mayberry (2010) presented a self-tuning algorithm that provides stable liquid firing activity in the presence of a wide range of inputs; it adjusts the postsynaptic weights of the liquid neurons in such a way that the spiking dynamics remains between subcritical and supercritical. Norton and Ventura (2010) proposed a learning rule, separation-driven synaptic modification (SDSM), for training the liquid, which is able to construct a suitable liquid in fewer generations than random search.

2.3  Previous Research Involving Structural Plasticity

Poirazi and Mel (2001) showed that supervised classifiers employing neurons with active dendrites (those having lumped nonlinearities and binary synapses) can be trained through structural plasticity to recognize high-dimensional Boolean inputs. Inspired by Poirazi and Mel (2001), Hussain et al. (2015) proposed a supervised classifier composed of neurons with nonlinear dendrites and binary synapses suitable for hardware implementations. Roy et al. (2013, 2014) upgraded this classifier to accommodate spike train inputs and proposed a spike-based structural plasticity rule that was amenable to neuromorphic implementations. Roy et al. (2015) proposed another supervised spike-based structural plasticity rule, inspired by the Tempotron (Gutig & Sompolinsky, 2006), for training a threshold-adapting neuron with active dendrites and binary synapses. Moreover, an unsupervised spike-based structural plasticity learning rule was proposed in Roy and Basu (in press) for training a winner-take-all architecture composed of neurons with active dendrites and binary synapses.

Until now, the work related to structural plasticity that we have showcased in this section has been confined to networks employing neurons with nonlinear dendrites and binary synapses. In recent years, structural plasticity has been used to train generic neural networks as well. For example, George, Mayr, Indiveri, and Vassanelli (2015) proposed an STDP learning rule interlaced with structural plasticity to train feedforward neural networks. George, Diehl, Cook, Mayr, and Indiveri (2015) modified this work and applied it to a highly recurrent spiking neural network. In these two works, the synaptic weights are modified by STDP, and when a critical period is reached, the synapses are pruned through structural plasticity. Unlike these works, which implement an interplay between STDP and structural plasticity, in our proposed learning rule the structural plasticity or connection modifications happen on longer timescales (at the end of patterns), guided by the fitness function or correlation coefficient, which is updated by an STDP-inspired rule on shorter timescales (at each pre- and postsynaptic spike). In the following section, we propose our connection-based unsupervised learning rule.

3  Unsupervised Structural Plasticity in Liquid

The liquid is a three-dimensional recurrent architecture of spiking neurons that are sparsely connected with synapses of random weights. We used leaky integrate-and-fire (LIF) neurons and biologically realistic dynamic synapses to construct our liquids. The specifications of the architecture and the parameter values are listed in the appendix, and unless otherwise mentioned, we use these values in our experiments. In the proposed algorithm, learning happens through the formation and elimination of synapses instead of the traditional way of updating real-valued weights associated with them. Hence, to guide the unsupervised learning, we define a correlation coefficient–based fitness value $c_{ij}$ for the synapse (if present) connecting the $j$th excitatory neuron to the $i$th excitatory neuron, as a substitute for its weight. Since the liquid neurons are sparsely connected, $c_{ij}$ values are defined only for the neuron pairs that have synaptic connections. Note that the proposed learning rule modifies only the connections between excitatory neurons. The operation of the network and the learning process comprise the following steps whenever a pattern is presented (a minimal code sketch of these steps follows the list):

  • $c_{ij}$ is initialized to zero for all excitatory neuron pairs having a synaptic connection from the $j$th to the $i$th neuron. Here, $n_{E}$ is the number of excitatory liquid neurons, and $1 \le i \le n_{E}$ and $1 \le j \le n_{E}$. Note that the total number of neurons in the liquid, $n$, comprises both excitatory and inhibitory neurons.

  • The value of $c_{ij}$ is depressed at presynaptic spike arrivals and potentiated at postsynaptic spikes according to the following rule:

    1. Depression. If a postsynaptic spike of the $j$th excitatory neuron, fired at time $t_{j}^{f}$, appears after the synaptic delay $d_{ij}$ as a presynaptic spike to the $i$th excitatory neuron at time $t_{j}^{f}+d_{ij}$, then the value of $c_{ij}$ is reduced by a quantity $\Delta c_{ij}^{-}$ given by
      $$\Delta c_{ij}^{-} = \bar{s}_{i}\!\left(t_{j}^{f}+d_{ij}\right),$$
      where $s_{i}$ and $\bar{s}_{i}$ are the output and the postsynaptic trace of the $i$th excitatory spiking neuron. In this work, we have chosen an exponential form of $\bar{s}_{i}$, obtained by convolving $s_{i}$ with the kernel $K(t)=e^{-t/\tau_{slow}}-e^{-t/\tau_{fast}}$ (the kernel time constants are listed in the appendix).
    2. Potentiation. If the $i$th excitatory neuron fires a postsynaptic spike at time $t_{i}^{f}$, then $c_{ij}$ for each synapse connected to it is increased by $\Delta c_{ij}^{+}$ given by
      $$\Delta c_{ij}^{+} = \bar{s}_{j}\!\left(t_{i}^{f}\right),$$
      where $d_{ij}$ and $\bar{s}_{j}$ are its delay and the presynaptic trace of the (delayed) spiking input it receives.
    A pictorial explanation of this update rule for $c_{ij}$ is shown in Figure 2.

  • After the network has been integrated over the current pattern, the synaptic connections of the excitatory neurons that have produced at least one spike are modified.

  • If we consider that a subset of the $n_{E}$ excitatory neurons has produced a postsynaptic spike for the current pattern, then the synaptic connections of each such neuron are updated by tagging the afferent synapse having the lowest value of the correlation coefficient, out of all the synapses connected to it, for possible replacement.

  • To aid the unsupervised learning process, a randomly chosen set of the excitatory neurons is forced to make connections to the neuron through silent synapses having the same physiological parameters as the tagged synapse. We refer to these synapses as silent since they do not contribute to the computation of the neuron’s membrane voltage and therefore do not alter the liquid output when the same pattern is reapplied. The value of the correlation coefficient is calculated for these silent synapses, and the tagged synapse is replaced with the silent synapse having the maximum correlation coefficient. That is, the presynaptic neuron connected to the tagged synapse is swapped with the presynaptic neuron connected to the best silent synapse.

  • All the $c_{ij}$ values are reset to zero, and the previous steps are repeated for every pre- and postsynaptic spike of the subsequent patterns. The proposed learning rule is illustrated by demonstrating the connection modification of a single neuron in Figure 3.
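A minimal sketch of the end-of-pattern connection update described above is given below. It assumes that the fitness values have already been accumulated during the pattern by the trace-based rule (including, for simplicity, values for candidate silent synapses), and it represents the wiring as a dictionary from each postsynaptic neuron to its list of presynaptic excitatory neurons; these data structures and names are illustrative simplifications rather than the exact implementation used for the reported experiments.

```python
import random

def rewire_after_pattern(conn, fitness, spiked, n_exc, n_silent=25, rng=random):
    """One end-of-pattern structural plasticity step (simplified sketch).

    conn:    dict mapping a postsynaptic excitatory neuron to the list of its
             presynaptic excitatory neurons
    fitness: dict mapping (post, pre) to the accumulated correlation
             coefficient c_ij (assumed to include values for candidate
             silent synapses as well)
    spiked:  set of excitatory neurons that fired during the current pattern
    """
    for post in spiked:
        if not conn.get(post):
            continue
        # Tag the afferent synapse with the lowest fitness for replacement.
        worst_pre = min(conn[post], key=lambda pre: fitness.get((post, pre), 0.0))
        # Randomly chosen excitatory neurons offered through "silent" synapses.
        candidates = [j for j in range(n_exc) if j != post and j not in conn[post]]
        silent_set = rng.sample(candidates, min(n_silent, len(candidates)))
        # Pick the silent synapse with the highest fitness ...
        best_pre = max(silent_set, key=lambda pre: fitness.get((post, pre), 0.0))
        # ... and swap: only the wiring changes, never the weights.
        conn[post][conn[post].index(worst_pre)] = best_pre
    # Reset all fitness values before the next pattern is presented.
    fitness.clear()

# Toy usage with made-up fitness values (neuron indices echo Figure 3):
conn = {47: [15, 33, 64, 79]}
fit = {(47, 15): 0.9, (47, 33): 0.7, (47, 64): 0.1, (47, 79): 0.5, (47, 52): 1.2}
rewire_after_pattern(conn, fit, spiked={47}, n_exc=108)
print(conn[47])  # the lowest-fitness input (from neuron 64) has been swapped out
```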

Figure 2:

An example of the update rule for the fitness value ($c_{ij}$) associated with the synapse connecting the $j$th excitatory neuron to the $i$th excitatory neuron. When a postsynaptic spike is emitted by excitatory neuron $i$ at $t_{i}^{f}$, the value of $c_{ij}$ increases by $\Delta c_{ij}^{+}$. When excitatory neuron $j$ emits a postsynaptic spike at $t_{j}^{f}$, it reaches neuron $i$ at $t_{j}^{f}+d_{ij}$ due to the presence of synaptic delay. The arrival of this presynaptic spike at $t_{j}^{f}+d_{ij}$ reduces $c_{ij}$ by $\Delta c_{ij}^{-}$, as shown in the figure.


Figure 3:

The proposed learning rule is explained by showing an example of synaptic connection modification. After the presentation of a pattern, a set of excitatory neurons produces output spikes. Although connection modification takes place for all the neurons in this set, the 47th excitatory neuron is randomly chosen for this example, and all the stages of its connection modification are shown in panels a to e. The 15th, 33rd, 64th, and 79th excitatory neurons are connected to the 47th excitatory neuron, as shown in panel a. Above each connection or synapse, we mention the fitness value $c_{ij}$ associated with that synapse. Below each synapse, we mention the set of physiological parameters associated with it, which includes the weight ($w$), delay ($d$), synaptic time constant ($\tau_{s}$), and the $U$, $D$, and $F$ parameters. Note that the $U$, $D$, and $F$ parameters are applicable only for dynamic synapses. The $c_{ij}$ values of all the connections are checked at the end of the pattern, and the one having the minimum value is identified. In this example, the connection from the 64th excitatory neuron is identified as the minimum connection and tagged for replacement (panel b). In the next step (panel c), a randomly chosen set of excitatory neurons is forced to make connections to the 47th neuron through silent synapses. Note that the physiological parameters for these silent synapses are set to the same values as those of the synapse connecting the 64th excitatory neuron to the 47th excitatory neuron. The next step is to identify the silent synapse having the maximum value of $c_{ij}$, which is the connection from the 52nd excitatory neuron. Subsequently, our learning rule swaps these connections to form the updated morphology, as shown in panels d and e.


4  Experiments and Results

In this section, we describe the experiments performed to evaluate the performance of our algorithm and discuss the results. We employ various metrics to explore different properties of our liquid and compare it with the traditionally used randomly generated liquid. Moreover, we compare its performance with another algorithm that performs iterative refining of liquids.

4.1  Separation Capability

In this section, we throw some light on the computational power of our liquid trained by structural plasticity by evaluating its separation capability on spike train inputs. As a first test, we choose the pairwise separation property considered in Maass et al. (2002) as a measure of its kernel quality. First, we take a spike train pair $u$ and $v$ of fixed duration and give it as input to numerous randomly generated liquids having different initial conditions in each trial. In this experiment, while the inputs remain the same, the liquids are different for different trials. The resulting internal states of the liquid, $x_{u}(t)$ and $x_{v}(t)$, are noted for $u$ and $v$, respectively. We calculate the average Euclidean norm between these two internal states and plot it in Figures 4 and 5 against time for various fixed values of the distance $d(u,v)$ between the two injected spike trains $u$ and $v$. To compute the distance between $u$ and $v$, we use the same method proposed in Maass et al. (2002). Moreover, we show the pairwise separation obtained at each trial at the output of the liquid, which is given by the following formula,
$$\mathrm{Sep}(u,v) = \frac{1}{K}\sum_{k=1}^{K}\left\| x_{u}(k\,\Delta t)-x_{v}(k\,\Delta t)\right\|_{2}, \qquad (4.1)$$
where the liquid output is sampled at times $t = k\,\Delta t$ for $k = 1,\ldots,K$.
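Equation 4.1 amounts to averaging the Euclidean distance between the two liquid state trajectories over the sampling instants. A minimal sketch, assuming the internal states are already available as (neurons × time steps) arrays and using random placeholder data:

```python
import numpy as np

def state_distance(x_u, x_v):
    """Euclidean distance between two liquid state trajectories at each
    sampled time step. x_u, x_v: (n_neurons, n_steps) arrays."""
    return np.linalg.norm(x_u - x_v, axis=0)

def pairwise_separation(x_u, x_v):
    """Pairwise separation as in eq. 4.1: the state distance averaged
    over the sampling instants of the liquid output."""
    return state_distance(x_u, x_v).mean()

rng = np.random.default_rng(1)
x_u = rng.random((135, 500))  # placeholder liquid states for input u
x_v = rng.random((135, 500))  # placeholder liquid states for input v
print(pairwise_separation(x_u, x_v))
```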
Figure 4:

Same input, different liquid. This figure compares the pairwise separation property of randomly generated liquids and the same liquids when trained through the proposed unsupervised structural plasticity–based learning rule. The blue lines indicate the separation capability of randomly generated liquids, and the red lines are obtained by evolving the same liquids by the proposed algorithm. For this simulation, the connections are updated for 15 iterations in each trial. In each iteration, one connection update happens for each spike train of the pair; hence the liquid encounters 30 connection modifications during training. (a) The noise level of both the random and the trained liquid is similar. Hence, during training, our learning rule does not induce extra noise. (b) The liquid trained by structural plasticity is able to obtain more separation than random liquids for $d(u,v)=0.1$ for all the trials.


Figure 5:

Same input, different liquid. The experiment performed to generate Figure 4 has been repeated for input spike train pairs of distance 0.2 (a) and 0.4 (b). Similar to Figure 4b, the trained liquids demonstrate more separation for all trials.


In Figures 4 and 5, the blue curves correspond to the traditional randomly generated liquid with no evolution, as proposed in Maass et al. (2002), and the red curves correspond to the scenario when the same randomly generated liquid (for each trial) is trained through structural plasticity. Hence, the blue curves indicate the initial condition for generation of the red curves. In Figure 4a, the curves for $d(u,v)=0$ are generated by applying the same spike train to two randomly chosen initial conditions of the liquid. Hence, these curves show the noise level of the liquid. Figures 4b, 5a, and 5b correspond to $d(u,v)=0.1$, $0.2$, and $0.4$, respectively. The parameters of the liquids are kept as shown in Table 1 in the appendix. Figure 4a depicts that the separation of the liquid states is similar for both methods. This demonstrates that the noise level is the same for both liquids. Figures 4b, 5a, and 5b show that the proposed algorithm is able to produce liquid states with more separation than the traditional liquid with no evolution. In other words, morphologically evolved liquids are better at capturing and amplifying the separation between input spike trains as compared to randomly generated liquids.

In the next experiment, we use the same liquid for all trials but vary the input spike pairs. Five hundred different spike train pairs $u_{i}$ and $v_{i}$ (where $i$ varies from 1 to 500) of a fixed distance are generated and given as input to the same liquid separately. The internal state distance for all the trials is averaged and plotted in Figure 6 against time for both the random and trained liquids. The input spike trains typically tend to produce peaks and troughs in the state distance plot. They appear at different regions for different input spike trains. However, these peaks and troughs average out in Figure 6 since we take the mean of state distances obtained from 500 trials with different inputs. This figure clearly depicts that a liquid trained through structural plasticity consistently shows higher separation than a random liquid for different inputs.

Figure 6:

Same liquid, different input. The same liquid is excited by 500 different input spike trains, and the separation between the internal state distances is recorded. The resulting state distance, averaged over 500 trials, is plotted for both the randomly generated liquid and the same liquid when trained through structural plasticity. It is clear that the trained liquid always produces more separation than the random liquid at its output.


Moreover, to evaluate the separability across a wide range of input distances, we generate numerous spike train pairs having progressively increasing distances and note the internal state distances of both the random and trained liquids. While the state distances at a fixed time instant for both the random and trained liquids are plotted in Figure 7a, their ratio is shown in Figure 7b. Figures 7a and 7b suggest that for inputs with smaller distances, which might correspond to the intraclass separation during classification tasks, the separability provided by both liquids is close. When the distance between inputs increases, which might correspond to interclass separation for classification tasks, the separation provided by trained liquids is greater than that of random liquids. According to Figure 7b, the ratio of the separation provided by our liquid trained by structural plasticity to that of the random liquid increases and finally saturates at approximately 1.36. Hence, our trained liquid provides 1.36 ± 0.18 times more interclass separation than a random liquid while maintaining a similar intraclass separation. The increased separation achieved by our morphological learning rule provides the subsequent linear classifier stage with an easier recognition problem.

Figure 7:

(a) The internal state distances (averaged over 200 trials) of randomly generated liquids and the same liquids when trained through structural plasticity are plotted against the distances between the input spike trains. While at lower input distances (intraclass) the separation achieved by both is similar, the trained liquid provides more separation at higher input distances (interclass). (b) The ratio (averaged over 200 trials) of the state distance obtained by trained and random liquids gradually increases with input distance and saturates at approximately 1.36 ± 0.18.


The pairwise separation property is a good measure of the extent to which the details of the input streams are captured by the liquid’s internal state. However, in most real-world problems, we require the liquid to produce a desired internal state not only for two, but for a fairly large number of significantly different input spike trains. Although we could test whether a liquid can separate each pair of such inputs, we still would not know whether a subsequent linear classifier would be able to generate given target outputs for these inputs. Hence, a stronger measure of kernel quality is required. Maass, Legenstein, and Bertschinger (2005) addressed this issue and proposed a rank-based measure termed the linear separation property as a more suitable metric for evaluating a liquid’s kernel quality or computational power. Although for completeness we discuss here how this quantitative measure is calculated for given spike trains, we invite the reader to look into Maass et al. (2005) for its detailed proof.

The method for evaluating the linear separation property of a liquid $C$ for $m$ different spike trains is shown in Figure 8. First, these $m$ spike trains are injected into the liquid, and the internal states are noted (see Figures 8a to 8c). Next, an $n \times m$ matrix $M$ is formed (see Figure 8d) for all the inputs $u_{1},\ldots,u_{m}$, whose columns are the liquid states resulting at the end of the preceding input spike train of duration $T$. Maass et al. (2005) suggested that the rank $r$ of matrix $M$ reflects the linear separation property and can be considered a measure of the kernel quality or computational power of the liquid $C$, since $r$ is the number of degrees of freedom that a subsequent linear classifier has in assigning target outputs to these inputs $u_{1},\ldots,u_{m}$. Hence, a liquid with more computational power or better kernel quality has a higher value of $r$. The proposed unsupervised learning rule modifies the liquid connections after the end of each pattern. Hence, in real-time simulations, where patterns are presented continuously, the connections within our liquid are modified through structural plasticity in an online fashion. To show that the online update generates liquids with progressively better kernel quality, we plot in Figure 9 the value of the mean rank $\bar{r}$ against the number of spike trains ($m$) applied to the circuit. $\bar{r}$ is calculated by taking an average of the ranks obtained from each trial. In a real-world implementation, the liquid connections will get updated without any intervention, and hence calculating the rank is not required. However, to probe the performance of our learning rule, we create the matrix $M$ after each input is presented, based on all the input spike trains at our disposal, and calculate its rank $r$. The curves presented in Figure 9 are generated by applying 100 different spike trains to liquids having random initial states for each trial. Each point in the curves is averaged over 200 trials. Figure 9 clearly shows that the average rank increases as more inputs are presented, until it reaches saturation. This suggests that our proposed unsupervised structural plasticity–based learning rule is capable of generating liquids with more computational power as compared to the traditional method of randomly generating the liquid connections. Note that the first point in the curve reflects the kernel quality or computational power of the traditional randomly generated liquid. The proposed algorithm is able to generate liquids whose rank is 2.05 ± 0.27 times higher than that of random liquids.
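The rank measure described above can be computed directly from the end-of-stream liquid states, as in the sketch below; the state vectors here are random placeholders, and the rank tolerance is left at NumPy's default.

```python
import numpy as np

def kernel_rank(end_states):
    """Linear separation measure of Maass et al. (2005): the rank of the
    n x m matrix M whose columns are the liquid states read out at the end
    of the m input spike trains. end_states: list of m length-n vectors."""
    M = np.column_stack(end_states)
    return np.linalg.matrix_rank(M)

# Toy example: 20 end-of-stream states from a 135-neuron liquid.
rng = np.random.default_rng(2)
states = [rng.random(135) for _ in range(20)]
print(kernel_rank(states))  # at most 20; higher means more degrees of freedom
```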

Figure 8:

The process of creating the matrix $M$, the rank of which is used to assess the quality of a liquid. (a) $m$ different input spike trains $u_{1},\ldots,u_{m}$ of duration $T$. (b) The input spike trains are injected into the liquid having $n$ neurons to obtain the postsynaptic spikes from all the neurons. $s_{u_{i}}$ denotes the firing profile of the liquid when the $i$th input spike train is presented. (c) $s_{u_{i}}$ is passed through an exponential kernel to generate the liquid’s internal states $x_{u_{i}}(t)$. (d) The internal states at the end of the spike trains, $x_{u_{i}}(T)$, are read for each input spike train to create the $n \times m$ matrix $M$.


Figure 9:

The rank of matrices $M$ and $M_{gen}$, averaged over 200 trials and denoted by $\bar{r}$ and $\bar{r}_{gen}$, respectively, plotted against the number of input spike trains presented in succession. At each point of the curves, the rank is calculated based on all the spike trains. Hence, the value of $\bar{r}$ and $\bar{r}_{gen}$ at the first point corresponds to a randomly generated liquid. Our learning rule trains this randomly generated liquid through structural plasticity as the input spike trains are gradually presented. The curve shows that $\bar{r}$ gradually increases until it reaches saturation. Moreover, this figure depicts that by applying an unsupervised low-resolution training mechanism, we are able to generate liquids with 2.05 ± 0.27 times more computational power or better kernel quality as compared to traditional randomly generated liquids. Furthermore, this figure throws some light on the generalization performance of liquids trained through our proposed structural plasticity–based learning rule. The matrix $M_{gen}$ is formed from liquids produced at each stage of training corresponding to the evolution of $\bar{r}$. Its rank (averaged over 200 trials), denoted by $\bar{r}_{gen}$, is plotted against the number of different input spike trains presented successively to the liquid. The almost flat line indicates that our learning rule can retain the generalization ability of random liquids. At a particular point on the x-axis, $\bar{r}$ and $\bar{r}_{gen}$ are calculated based on the same liquid, thereby providing insight into both its separation and generalization properties.


4.2  Generalization Capability

Until now, we have looked into the separation capability of a liquid to assess its computational performance. However, this is only one piece of the puzzle, another being its capability to generalize a learned computational function to new inputs—inputs it has not seen during the training phase. It is interesting to note that Maass et al. (2005) suggested using the same rank measure used in section 4.1 to measure a liquid’s generalization capability. However, in this case, the inputs to the liquid are entirely different from the ones used in section 4.1. For a particular trial, a single spike train is considered, and noisy variations of it are created to form the input set. In other words, this set contains many jittered versions of the same input signal. Similar to section 4.1, an $n \times m$ matrix $M_{gen}$ is formed by injecting these spike trains into a liquid and noting the liquid states resulting at the end of the preceding input spike train of duration $T$. However, unlike section 4.1, a lower value of the rank of the matrix $M_{gen}$ corresponds to better generalization performance. To assess our liquid’s generalization capability, we provide these input spike trains to the liquids produced at each stage of learning corresponding to the evolution of the $\bar{r}$ curve in Figure 9 and note the average rank $\bar{r}_{gen}$ of matrix $M_{gen}$. $\bar{r}_{gen}$ versus the number of inputs presented to the liquid is shown in Figure 9. The almost flat curve in Figure 9 suggests that our morphologically trained liquids are capable of retaining the generalization performance shown by random liquids. This revelation, combined with the insight from section 4.1, suggests that our trained liquids are capable of amplifying the interclass separation while retaining the intraclass distances for classification problems.
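The generalization measure reuses the same rank computation; only the inputs change, being many jittered copies of a single template instead of unrelated spike trains. A small sketch of generating such copies is shown below, with the template spike times and jitter magnitude chosen as illustrative placeholders.

```python
import numpy as np

def jittered_copies(template, n_copies, jitter_std=0.005, rng=None):
    """Create noisy variations of one spike train (spike times in seconds)
    by adding zero-mean gaussian jitter to every spike independently."""
    rng = rng or np.random.default_rng(3)
    return [np.sort(np.clip(template + rng.normal(0.0, jitter_std, size=template.shape),
                            0.0, None))
            for _ in range(n_copies)]

template = np.array([0.010, 0.055, 0.120, 0.300, 0.410])  # toy spike times
copies = jittered_copies(template, n_copies=50)
# Injecting these copies into the liquid and applying kernel_rank() from the
# earlier sketch gives the generalization rank; here, lower is better.
print(len(copies), np.round(copies[0], 3))
```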

4.3  Generality

By performing the previous experiments, we got a fair idea about the separation and generalization properties of our trained liquids. Next, we look into their generality, that is, whether the trained liquid is still general enough to separate inputs that it has not seen before and that are not related (e.g., noisy, jittered versions) to the training inputs in any way. We consider the initial random liquid before training and the final trained liquid after training on the 100 spike train inputs of Figure 9 from each trial and separately inject both of them with 500 different spike train pairs $u_{i}$ and $v_{i}$ having a fixed distance between them. We note the average state distances of both liquids for these 500 spike train pairs for the current trial. This experiment is repeated for each trial of Figure 9, and the mean state distances are shown in Figure 10. It is clear from the two closely placed curves of Figure 10 that the trained and random liquids show similar generality. We conclude that even if a liquid is trained on a set of inputs, it is still fairly general to previously unseen and significantly different inputs.

Figure 10:

The effect of training on the generality of liquids. The blue and the red curves (averaged over 200 trials) correspond to the separation of random and trained liquids, respectively, when they are injected with 500 previously unseen and significantly different (with respect to the training set) input spike train pairs of a fixed distance. The comparable curves obtained by both the random and trained liquids suggest that our structural plasticity rule does not decrease a liquid’s generality.


4.4  Fading Memory

Until now, we have discussed the separation property, generalization capability, and generality of liquids. Another component that defines a liquid is its fading memory. Since a liquid is a recurrent interconnection of spiking neurons, the effect of an input applied to it may be felt at its output even after it is gone. A liquid with a superior fading memory is able to remember a given input activity longer. To study the effect of our structural plasticity rule on a liquid’s fading memory, we performed the following experiment. First, an input spike train of fixed duration was generated having a burst of spikes at a random time. Next, this spike train was injected into a randomly generated liquid, and the postsynaptic spikes of its neurons were recorded. Subsequently, this liquid was trained through our learning rule and the same spike train was reapplied. This experiment was repeated $N$ times for different spike trains and liquids, and the time of the last spike at the liquid output was noted for all these trials to compute the following measure:
$$\Delta t_{mem} = \frac{1}{N}\sum_{i=1}^{N}\left(t_{i}^{tr}-t_{i}^{rnd}\right), \qquad (4.2)$$
where $t_{i}^{rnd}$ and $t_{i}^{tr}$ are the times of the last spike at the output of the random and trained liquids for the $i$th trial. The computed metric serves as a parameter to assess the amount of additional fading memory provided by our training mechanism.
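The measure in equation 4.2 simply averages, over trials, how much later the trained liquid's last output spike occurs compared to the random liquid's. A minimal sketch, using as input the single-trial last-spike times quoted in the next paragraph:

```python
import numpy as np

def extra_fading_memory(last_spike_random, last_spike_trained):
    """Mean extra time (eq. 4.2), across trials, by which the trained
    liquid's last output spike outlives the random liquid's.
    Inputs: arrays of last-spike times in seconds, one entry per trial."""
    gains = np.asarray(last_spike_trained) - np.asarray(last_spike_random)
    return gains.mean()

# Single-trial numbers from the example of Figure 11 (seconds):
gain = extra_fading_memory([0.6128], [0.6929])
print(round(gain * 1000, 1), "ms")  # 80.1 ms
```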

In Figure 11, we show the outcome of a single trial. Figure 11a shows that an input spike train with a burst of spikes around 500 ms is given as input to a random liquid, and its corresponding output is recorded. We train this random liquid through the proposed algorithm and show in Figure 11b its output when the same spike train is reapplied. It can be seen from Figure 11 that the last spike of the random liquid occurs at 0.6128 sec and that of the trained liquid at 0.6929 sec; that is, the last spike provided by the trained liquid occurs 80.1 ms after the one provided by the random liquid. Combining the results from all the trials, we obtain that trained liquids provide 83.67 ± 5.79 ms longer fading memory than random liquids, which have a 92.8 ± 5.03 ms fading memory. This experiment suggests that the fading memory of a liquid can be increased by applying structural plasticity.

Figure 11:

(a) An input spike train with a burst of spikes around 500 ms is presented to a randomly generated liquid. Its output is shown, and the portion of its output where spikes are present is magnified. (b) The random liquid is taken and trained by our structural plasticity rule. Subsequently, it is injected with the same input spike train, and its output (along with the magnified version) is shown. Comparing the rightmost figures of panels a and b, it is clear that our learning rule endows the liquid with a longer fading memory.


4.5  Liquid Connectivity

In the previous sections, we have analyzed the output of our morphologically trained liquids in various ways and for different inputs. Here we delve into the liquid itself and analyze the effect of our learning rule on the recurrent connectivity. Figure 12 shows a representative example of the conducted experiments, depicting the number of postsynaptic connections each neuron in the liquid has before (see Figure 12a) and after (see Figure 12b) training on a set of distinct spike trains. The triangles in Figure 12 identify the neurons that have the input spike train as a presynaptic input. While the postsynaptic connections are distributed uniformly across the neurons of the random liquid, as shown in Figure 12a, we see in Figure 12b that some neurons have more postsynaptic connections than others. Since the number of connections is the same for both Figures 12a and 12b, the postsynaptic connections of some neurons increase during learning while those of others decrease.

Figure 12:

(a) The number of postsynaptic connections versus the neuron index is shown before training, that is, for the random liquid. The connections are uniformly distributed across the neurons of the liquid. (b) The number of postsynaptic connections versus the neuron index is shown after the training is complete. The connections get rearranged during learning in such a way that after training, some neurons have more postsynaptic connections than others. Note that the total number of connections is the same for both panels since it remains constant throughout the learning procedure. Our learning rule does not create any new connections; instead, it reorganizes them. The down-pointing triangles denote the neurons that have the input line as a presynaptic connection. The figure reveals that most of the neurons that gained postsynaptic connections have the input line as a presynaptic connection.


A close inspection of Figure 12b reveals that most of the neurons that have more postsynaptic connections than the others also have the input line as a presynaptic connection. This essentially means that the neurons to which the input gets randomly distributed are more likely to be selected as a replacement during the connection swapping procedure (see section 3) of our learning rule.
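The analysis behind Figure 12 amounts to counting each neuron's postsynaptic connections (its out-degree) before and after training. A small sketch, assuming the wiring is available as a list of (presynaptic, postsynaptic) pairs; the toy wiring below mirrors the swap from the example of Figure 3:

```python
from collections import Counter

def postsynaptic_counts(connections, n_neurons):
    """Number of postsynaptic connections (out-degree) of every neuron.
    connections: iterable of (presynaptic, postsynaptic) pairs."""
    counts = Counter(pre for pre, _ in connections)
    return [counts.get(i, 0) for i in range(n_neurons)]

# Toy wiring before and after one swap: the total connection count stays
# constant; only its distribution across presynaptic neurons changes.
before = [(15, 47), (33, 47), (64, 47), (79, 47)]
after = [(15, 47), (33, 47), (52, 47), (79, 47)]
print(postsynaptic_counts(before, 108)[64], postsynaptic_counts(after, 108)[64])  # 1 0
print(postsynaptic_counts(before, 108)[52], postsynaptic_counts(after, 108)[52])  # 0 1
```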

4.6  Comparison with Other Works

Since its inception in Maass et al. (2002), researchers have looked into improving the reservoir of liquid state machines; we provided a brief survey of this in section 2.2. Of the algorithms showcased in section 2.2, we compare the proposed learning rule with the work, termed SDSM, presented in Norton and Ventura (2010), since it is architecturally similar to our algorithm and, like ours, iteratively updates a randomly generated liquid. Norton and Ventura derived a metric based on the separation property of the liquid and used it to update the synaptic parameters. We consider the pattern recognition task they described and compare the performance of the proposed algorithm with the results obtained by SDSM.

In this task, a data set with a variable number of classes of spiking patterns is considered. Each pattern has eight input dimensions, and patterns of each class are generated by creating jittered versions of a random template. The random template for each class is generated by placing individual spikes with a random distance between one another. This distance is drawn from the absolute value of a normal distribution with a mean of 10 ms and a standard deviation of 20 ms. The amount of jitter added to each spike is randomly drawn from a normal distribution with zero mean and a 5 ms standard deviation. Similar to Norton and Ventura (2010), we consider 4-, 8-, and 12-class versions of this problem. The number of training and testing patterns per class has been kept at 400 and 100, respectively.
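The pattern generation procedure described above can be sketched as follows for one class; the pattern duration cutoff and random seeds are our own placeholders, while the inter-spike distance distribution (|N(10 ms, 20 ms)|) and jitter distribution (N(0, 5 ms)) follow the description in the text.

```python
import numpy as np

def class_template(n_inputs=8, duration=0.5, rng=None):
    """One random class template: on each input line, successive spikes are
    separated by distances drawn from |N(10 ms, 20 ms)|."""
    rng = rng or np.random.default_rng(4)
    template = []
    for _ in range(n_inputs):
        times, t = [], 0.0
        while True:
            t += abs(rng.normal(0.010, 0.020))
            if t > duration:
                break
            times.append(t)
        template.append(np.array(times))
    return template

def jittered_pattern(template, jitter_std=0.005, rng=None):
    """A training/testing pattern: every template spike jittered by N(0, 5 ms)."""
    rng = rng or np.random.default_rng(5)
    return [np.sort(np.clip(line + rng.normal(0.0, jitter_std, size=line.shape), 0.0, None))
            for line in template]

tmpl = class_template()
pattern = jittered_pattern(tmpl)
print(len(pattern), [len(line) for line in pattern])  # 8 input lines
```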

For fair comparison, the readout and the experimental setup are similar to that of Norton and Ventura (2010). The readout is implemented by a single layer of perceptrons where each perceptron is trained to identify a particular class of patterns. The results are averaged over 50 trials, and in each trial the liquid is evolved for 500 iterations. The liquids are constructed with LIF neurons and static synapses, and the parameters have been set to the values listed in Norton and Ventura (2010).

The liquid separation and classification accuracy for the testing patterns, averaged over 50 trials, are plotted in Figures 13a and 13b, respectively. It is evident from Figure 13 that LSMs with a liquid evolved through structural plasticity obtain superior performance as compared to LSMs trained through SDSM and LSMs with a random liquid. Quantitatively, our algorithm provides a 9.30%, 15.21%, and 12.52% increase in liquid separability and a 2.8%, 9.1%, and 7.9% increase in classification accuracy for the 4-, 8-, and 12-class recognition tasks, respectively, as compared to SDSM. Since the proposed learning rule creates liquids with higher separation (see Figure 13a), the readout is able to provide better classification accuracies (see Figure 13b).

Figure 13:

(a) Mean separation and (b) mean accuracy are shown for random liquids and for the same random liquid when trained separately through SDSM and the proposed structural plasticity–based learning rule. The results depict that our algorithm outperforms the SDSM learning rule.


5  Conclusion

We have proposed an unsupervised learning rule that trains the liquid or reservoir of LSM by rearranging the synaptic connections. The proposed learning rule does not modify synaptic weights and hence keeps the average synaptic weight of the liquid constant throughout learning. Since it involves only modification and storage of the connection matrix during learning, it can be easily implemented by AER protocols. An analysis of the pairwise separation property reveals that liquids trained through the proposed learning rule provide 1.36 ± 0.18 times more interclass separation while maintaining a similar intraclass separation as compared to the traditional random liquid. Next, we looked into the linear separation property. From the experiments we performed, it is clear that our trained liquids are 2.05 ± 0.27 times better than random liquids. Moreover, experiments performed to test the generalization property and generality of liquids formed by our learning algorithm reveal that they are capable of inheriting the performance provided by random liquids. Furthermore, we have shown that our trained liquids have 83.67 ± 5.79 ms longer fading memory than random liquids, which provide 92.8 ± 5.03 ms of fading memory for a particular type of spike train inputs. These results suggest that our learning rule is capable of eliminating the weaknesses of a random liquid while retaining its strengths. We have also analyzed the evolution of the internal connections of the liquid during training. Furthermore, we have shown that compared to a recently proposed method of liquid evolution termed SDSM, we provide 9.30%, 15.21%, and 12.52% more liquid separation and 2.8%, 9.1%, and 7.9% better classification accuracies for 4-, 8-, and 12-class classification, respectively, on a task described in section 4.6.

The plans for our future work include developing a framework that combines the proposed liquid with the readout proposed in Roy, Banerjee, and Basu (2014), which is composed of neurons with nonlinear dendrites and binary synapses and trained through structural plasticity. Banerjee, Bhaduri, Roy, Kar, and Basu (2015) have proposed a hardware implementation of this readout; the next step is to implement the proposed liquid in hardware. Subsequently, we will combine them to form a complete structural plasticity–based LSM system and deploy it for real-time applications. Moreover, having achieved success in applying structural plasticity rules to train a generic spiking neural recurrent architecture, we will move forward to develop spike-based morphological learning rules for multilayer feedforward spiking neural networks. We will employ these networks to classify individuated finger and wrist movements of monkeys (Aggarwal et al., 2008) and to recognize spoken words (Verstraeten, Schrauwen, Stroobandt, & Van Campenhout, 2005).

Appendix:  Liquid Specification and Parameter Values

Table 1 lists specifications of the liquid architecture and values of the parameters used in this letter. Unless otherwise mentioned, these are the values used for the experiments.

Table 1: Liquid Specification and Parameter Values.

Liquid specification
Number of neurons: 135
Percentage of excitatory neurons: 80%
Percentage of inhibitory neurons: 20%
Structure: single column
Excitatory-excitatory connection probability: 0.3
Excitatory-inhibitory connection probability: 0.2
Inhibitory-excitatory connection probability: 0.4
Inhibitory-inhibitory connection probability: 0.1

Leaky integrate-and-fire (LIF) neuron parameters
Membrane time constant: 30 ms
Input resistance: 1 MΩ
Absolute refractory period of excitatory neurons: 3 ms
Absolute refractory period of inhibitory neurons: 2 ms
Threshold voltage: 15 mV
Reset voltage: 13.5 mV
Constant nonspecific background current: 13.5 nA

Dynamic synapse parameters
Excitatory-excitatory U: 0.5
Excitatory-excitatory D: 1.1 s
Excitatory-excitatory F: 0.05 s
Excitatory-inhibitory U: 0.05
Excitatory-inhibitory D: 0.125 s
Excitatory-inhibitory F: 1.2 s
Inhibitory-excitatory U: 0.25
Inhibitory-excitatory D: 0.7 s
Inhibitory-excitatory F: 0.02 s
Inhibitory-inhibitory U: 0.32
Inhibitory-inhibitory D: 0.144 s
Inhibitory-inhibitory F: 0.06 s
Standard deviation of U, D, and F: 50% of the respective mean
Time constant of excitatory synapse: 3 ms
Time constant of inhibitory synapse: 6 ms
Excitatory-excitatory transmission delay: 1.5 ms
Transmission delay of other connections: 0.8 ms

Structural plasticity parameters
Number of silent synapses: 25
Slow time constant of kernel: 3 ms
Fast time constant of kernel: very small positive value
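The sketch below shows how one might instantiate a random liquid with the connectivity statistics and LIF constants of Table 1. It assumes the connection probabilities are applied uniformly (distance independent) and omits the dynamic synapse model (U, D, F) and the structural plasticity machinery, so it is a starting point rather than a full reproduction of the liquid used in this letter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Liquid size and excitatory/inhibitory split (Table 1).
N = 135
n_exc = int(0.8 * N)                  # 80% excitatory, 20% inhibitory
is_exc = np.arange(N) < n_exc

# Recurrent connection probabilities (Table 1): E->E, E->I, I->E, I->I.
p = {(True, True): 0.3, (True, False): 0.2,
     (False, True): 0.4, (False, False): 0.1}

# Sample the recurrent connection matrix; conn[i, j] = 1 means neuron i drives neuron j.
conn = np.zeros((N, N), dtype=np.uint8)
for i in range(N):
    for j in range(N):
        if i != j and rng.random() < p[bool(is_exc[i]), bool(is_exc[j])]:
            conn[i, j] = 1

# LIF neuron constants (Table 1), in SI units.
tau_m = 30e-3                         # membrane time constant: 30 ms
R_in = 1e6                            # input resistance: 1 MOhm
v_thresh = 15e-3                      # threshold voltage: 15 mV
v_reset = 13.5e-3                     # reset voltage: 13.5 mV
t_ref_exc, t_ref_inh = 3e-3, 2e-3     # refractory periods
i_bg = 13.5e-9                        # constant nonspecific background current: 13.5 nA
```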

Acknowledgments

We acknowledge financial support from MOE through grant ARC 8/13.

References

Aggarwal, V., Acharya, S., Tenore, F., Shin, H., Cummings, R. E., Schieber, M., & Thakor, N. (2008). Asynchronous decoding of dexterous finger movements using M1 neurons. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 16(1), 3–14.

Arthur, J. V., & Boahen, K. (2006). Learning in silicon: Timing is everything. In Y. Weiss, B. Schölkopf, & J. C. Platt (Eds.), Advances in neural information processing systems, 17. Cambridge, MA: MIT Press.

Banerjee, A., Bhaduri, A., Roy, S., Kar, S., & Basu, A. (2015). A current-mode spiking neural classifier with lumped dendritic nonlinearity. In Proceedings of the IEEE International Symposium on Circuits and Systems. Piscataway, NJ: IEEE.

Brader, J., Senn, W., & Fusi, S. (2007). Learning real-world stimuli in a neural network with spike-driven synaptic dynamics. Neural Computation, 19(11), 2881–2912.

Florian, R. V. (2013). The chronotron: A neuron that learns to fire temporally precise spike patterns. PLoS ONE, 7(8), e40233.

Frid, A., Hazan, H., & Manevitz, L. (2012). Temporal pattern recognition via temporal networks of temporal neurons. In Proceedings of the 27th Convention of Electrical and Electronics Engineers in Israel (pp. 1–4). Piscataway, NJ: IEEE.

Gardner, B., Sporea, I., & Grüning, A. (2015). Learning spatiotemporally encoded pattern transformations in structured spiking neural networks. Neural Computation, 27, 2548–2586.

George, R., Diehl, P., Cook, M., Mayr, C., & Indiveri, G. (2015). Modeling the interplay between structural plasticity and spike-timing-dependent plasticity. BMC Neuroscience, 16(Suppl. 1), P107.

George, R., Mayr, C., Indiveri, G., & Vassanelli, S. (2015). Event-based softcore processor in a biohybrid setup applied to structural plasticity. In Proceedings of the 2015 International Conference on Event-Based Control, Communication, and Signal Processing (pp. 1–4). Piscataway, NJ: IEEE.

Gerstner, W., & Kistler, W. (2002). Spiking neuron models: An introduction. New York: Cambridge University Press.

Gutig, S., & Sompolinsky, H. (2006). The tempotron: A neuron that learns spike timing–based decisions. Nature Neuroscience, 9(1), 420–428.

Hazan, H., & Manevitz, L. M. (2012). Topological constraints and robustness in liquid state machines. Expert Systems with Applications, 39(2), 1597–1606.

Hempel, C. M., Hartman, K. H., Wang, X. J., Turrigiano, G. G., & Nelson, S. B. (2000). Multiple forms of short-term plasticity at excitatory synapses in rat medial prefrontal cortex. Journal of Neurophysiology, 83(5), 3031–3041.

Hourdakis, E., & Trahanias, P. (2013). Use of the separation property to derive liquid state machines with enhanced classification performance. Neurocomputing, 107, 40–48.

Hussain, S., Liu, S. C., & Basu, A. (2015). Hardware-amenable structural learning for spike-based pattern classification using a simple model of active dendrites. Neural Computation, 27(4), 845–897.

Ju, H., Xu, J. X., Chong, E., & Vandongen, A. M. J. (2013). Effects of synaptic connectivity on liquid state machine performance. Neural Networks, 38, 39–51.

Kello, C., & Mayberry, M. (2010). Critical branching neural computation. In Proceedings of the International Joint Conference on Neural Networks (pp. 1–7). Piscataway, NJ: IEEE.

Kuhlmann, L., Hauser-Raspe, M., Manton, J. H., Grayden, D. B., Tapson, J., & van Schaik, A. (2014). Approximate, computationally efficient online learning in Bayesian spiking neurons. Neural Computation, 26(3), 472–496.

Maass, W., Legenstein, R., Bertschinger, N., & Graz, T. U. (2005). Methods for estimating the computational power and generalization capability of neural microcircuits. In L. K. Saul, Y. Weiss, & L. Bottou (Eds.), Advances in neural information processing systems (pp. 865–872). Cambridge, MA: MIT Press.

Maass, W., Natschläger, T., & Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11), 2531–2560.

Markram, H., & Tsodyks, M. (1996). Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature, 382(6594), 807–810.

Moore, S. C. (2002). Back-propagation in spiking neural networks. Master's thesis, University of Bath.

Norton, D., & Ventura, D. (2010). Improving liquid state machines through iterative refinement of the reservoir. Neurocomputing, 73(16–18), 2893–2904.

Notley, S., & Gruning, A. (2012). Improved spike-timed mappings using a tri-phasic spike timing–dependent plasticity rule. In Proceedings of the International Joint Conference on Neural Networks (pp. 1–6). Piscataway, NJ: IEEE.

Obst, O., & Riedmiller, M. (2012). Taming the reservoir: Feedforward training for recurrent neural networks. In Proceedings of the 2012 International Joint Conference on Neural Networks (pp. 1–7). Piscataway, NJ: IEEE.

Petersen, C. C. H., Malenka, R. C., Nicoll, R. A., & Hopfield, J. J. (1998). All-or-none potentiation at CA3-CA1 synapses. Proceedings of the National Academy of Sciences USA, 95(8), 4732–4737.

Poirazi, P., & Mel, B. W. (2001). Impact of active dendrites and structural plasticity on the memory capacity of neural tissue. Neuron, 29, 779–796.

Ponulak, F., & Kasiński, A. (2010). Supervised learning in spiking neural networks with ReSuMe: Sequence learning, classification, and spike shifting. Neural Computation, 22(2), 467–510.

Rhéaume, F., Grenier, D., & Bossé, L. (2011). Multistate combination approaches for liquid state machine in supervised spatiotemporal pattern classification. Neurocomputing, 74(17), 2842–2851.

Roy, S., Banerjee, A., & Basu, A. (2014). Liquid state machine with dendritically enhanced readout for low-power, neuromorphic VLSI implementations. IEEE Transactions on Biomedical Circuits and Systems, 8(5), 681–695.

Roy, S., & Basu, A. (in press). An online unsupervised structural plasticity algorithm for spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems.

Roy, S., Basu, A., & Hussain, S. (2013). Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines. In Proceedings of the IEEE Biomedical Circuits and Systems Conference (pp. 302–305). Piscataway, NJ: IEEE.

Roy, S., San, P. P., Hussain, S., Wei, L. W., & Basu, A. (2015). Learning spike time codes through morphological learning with binary synapses. IEEE Transactions on Neural Networks and Learning Systems, 27, 1572–1577.

Schliebs, S., Fiasché, M., & Kasabov, N. (2012). Constructing robust liquid state machines to process highly variable data streams. In Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning (pp. 604–611). New York: Springer.

Schliebs, S., Mohemmed, A., & Kasabov, N. (2011). Are probabilistic spiking neural networks suitable for reservoir computing? In Proceedings of the International Joint Conference on Neural Networks (pp. 3156–3163). Piscataway, NJ: IEEE.

Sillin, H. O., Aguilera, R., Shieh, H.-H., Avizienis, A. V., Aono, M., Stieg, A. Z., & Gimzewski, J. K. (2013). A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing. Nanotechnology, 24(38), 384004.

Sporea, I., & Grüning, A. (2013). Supervised learning in multilayer spiking neural networks. Neural Computation, 25(2), 473–509.

Thomson, A., Deuchars, J., & West, D. (1993). Single axon excitatory postsynaptic potentials in neocortical interneurons exhibit pronounced paired pulse facilitation. Neuroscience, 54(2), 347–360.

Varela, J., Sen, K., Gibson, J., Fost, J., Abbott, L., & Nelson, S. (1997). A quantitative description of short-term plasticity at excitatory synapses in layer 2/3 of rat primary visual cortex. Journal of Neuroscience, 17(20), 7926–7940.

Verstraeten, D., Schrauwen, B., Stroobandt, D., & Van Campenhout, J. (2005). Isolated word recognition with the liquid state machine: A case study. Information Processing Letters, 95(6), 521–528.

Wojcik, G. M. (2012). Electrical parameters influence on the dynamics of the Hodgkin-Huxley liquid state machine. Neurocomputing, 79, 68–74.

Xue, F., Hou, Z., & Li, X. (2013). Computational capability of liquid state machines with spike-timing-dependent plasticity. Neurocomputing, 122, 324–329.