Abstract

Periodic neuronal activity has been observed in various areas of the brain, from lower sensory to higher cortical levels. Specific frequency components contained in this periodic activity can be identified by a neuronal circuit that behaves as a bandpass filter with a given preferred frequency, or best modulation frequency (BMF). For BMFs typically ranging from 10 to 200 Hz, a plausible and minimal configuration consists of a single neuron with adjusted excitatory and inhibitory synaptic connections. The emergence, however, of such a neuronal circuitry is still unclear. In this letter, we demonstrate how spike-timing-dependent plasticity (STDP) can give rise to frequency-dependent learning, thus leading to an input selectivity that enables frequency identification. We use an in-depth mathematical analysis of the learning dynamics in a population of plastic inhibitory connections, which provide inhomogeneous postsynaptic responses depending on their dendritic location. We find that synaptic delays play a crucial role in organizing the weight specialization induced by STDP. Under suitable conditions on the synaptic delays and postsynaptic potentials (PSPs), the BMF of a neuron after learning can match the training frequency. In particular, proximal (distal) synapses with shorter (longer) dendritic delay and somatically measured PSP time constants respond better to higher (lower) frequencies. As a result, the neuron will respond maximally to any stimulating frequency (in a given range) with which it has been trained in an unsupervised manner. The model predicts that synapses responding to a given BMF form clusters on dendritic branches.

1.  Introduction

Periodic spiking neuronal activity is ubiquitous in the brain, from afferent sensory pathways (Krishna & Semple, 2000; Rees & Palmer, 1989; Langner & Schreiner, 1988; Rees & Møller, 1983, 1987) up to the cortex (Las, Stern, & Nelken, 2005; Fries, Reynolds, Rorie, & Desimone, 2001; Langner, 1992; Gray, König, Engel, & Singer, 1989). In the mammalian auditory brainstem, this activity reflects the periodicity of external stimuli and thus carries valuable information about the environment (Highley & Contreras, 2006; Joris, Schreiner, & Rees, 2004; Krishna & Semple, 2000; Rees & Palmer, 1989; Langner & Schreiner, 1988; Schreiner & Langner, 1988; Speck-Hergenröder & Barth, 1987; Rees & Møller, 1983, 1987). In contrast, oscillatory activity observed in higher-level brain areas, such as the cortex and hippocampus, does not originate directly from sensory inputs. The origin of these oscillations is not fully understood, but they are known to exhibit a broad range of frequencies, including the beta, gamma, and theta rhythms. These rhythms are hypothesized to play a role in the formation of mental objects (Shinn-Cunningham & Wang, 2008; Roelfsema, 2006; Gray et al., 1989) and attention (Bidet-Caulet et al., 2007; Knudsen, 2007; Roelfsema, 2006; Gonzalez Andino, Michel, Thut, Landis, & Grave de Peralta, 2005; Eggert & van Hemmen, 2002; Fries et al., 2001); the extent of their possible functions is an active topic of research (Banerjee & Ellender, 2009). At the scale of neuronal microcircuits, extracting information about the frequencies contained in the spiking activity may be an important part of spike-based computations.

Here we examine the tuning of a neuronal bandpass filter, that is, neurons that respond maximally when their stimulating inputs comprise a given periodicity. Recent findings have demonstrated the importance of inhibition during the critical period in the functional maturation of auditory pathways (Dorrn, Yuan, Barker, Schreiner, & Froemke, 2010). We analyze here a scheme where frequency selectivity arises from the activity-dependent plasticity of feedforward synaptic connections. Thus, we exclude the alternative approach where individual neurons themselves have frequency preference arising from specific membrane dynamics (Large & Crawford, 2002; Izhikevich, 2001). Rather, we consider a configuration with both excitatory and inhibitory incoming connections that allow frequency discrimination by selection of synaptic delays (Grothe, 1994) and time constants of the postsynaptic response (Bürck & van Hemmen, 2009; Nelson & Carney, 2004). Another alternative is frequency selection by coincidence detection using only excitatory synapses (Friedel, Bürck, & van Hemmen, 2007; Meddis & O'Mard, 2006; Licklider, 1951). Such a scheme has been used successfully with an ad hoc Hebbian learning rule (Grigorév & Bibikov, 2010). This will not be explored here: within the excitatory-inhibitory setup, we explain how synaptic plasticity can train a neuron to perform frequency selectivity in an unsupervised fashion.

To illustrate the general principles underlying frequency selectivity here, we focus on the auditory pathway in the midbrain, which exhibits a tonotopical organization that encodes the amplitude modulation of sound stimuli (Krebs, Lesica, & Grothe, 2008; Krishna & Semple, 2000). This means that the topological distribution of the network activity depends on the stimulating frequencies, thus creating a functional mapping of the input signals. Experimental observations in various animals have shown that a majority of best modulation frequencies (BMFs) lie in the range between 10 and 100 Hz (Krishna & Semple, 2000; Langner & Schreiner, 1988; Rees & Møller, 1983, 1987; Rees & Palmer, 1989). It turns out that this frequency range corresponds to the synaptic temporal properties found across many brain areas, that is, time courses and delays on the order of milliseconds. With such “fast” timescales, rate-based plasticity mechanisms (Sejnowski, 1977; Bienenstock, Cooper, & Munro, 1982) cannot be used to train our amplitude modulation recognition system as they cannot capture the corresponding fine spike-time information.

Instead, spike-timing-dependent plasticity (STDP) appears to be appropriate for generating neuronal specialization relying on such rapidly modulated signals (Gerstner, Kempter, van Hemmen, & Wagner, 1996). STDP is an implementation of Hebbian and anti-Hebbian learning at the synaptic level. It has been observed in various areas of the brain (Caporale & Dan, 2008; Dan & Poo, 2006), in particular, in the auditory brainstem (Tzounopoulos, Kim, Oertel, & Trussell, 2004; Tzounopoulos & Kraus, 2009; Bender & Trussell, 2011). The resulting learning dynamics can lead to the emergence of functional pathways depending on the fine temporal structure of input spike trains and has been the subject of numerous studies, mostly for excitatory synapses (Gilson, Burkitt, Grayden, Thomas, & van Hemmen, 2009; Gütig, Aharonov, Rotter, & Sompolinsky, 2003; Senn, 2002; Song, Miller, & Abbott, 2000; Kempter, Gerstner, & van Hemmen, 1999). Here, however, the focus is on STDP for inhibitory synapses (Woodin, Ganguly, & Poo, 2003; Haas, Nowotny, & Abarbanel, 2006). Although STDP has been used in a variety of setups with many different input signals (Roberts & Bell, 2000; Song & Abbott, 2001; Froemke & Dan, 2002; Cateau, Kitano, & Fukai, 2008; Lubenov & Siapas, 2008; Masquelier, Guyonneau, & Thorpe, 2008), its interplay with oscillatory neuronal input has just begun to be addressed (Scarpetta, Zhaoping, & Hertz, 2002; Pfister & Tass, 2010).

In the auditory system, previous studies have shown how STDP can organize excitatory synapses to generate a neuronal map that detects interaural time differences (ITDs; Leibold & van Hemmen, 2005; Leibold, Kempter, & van Hemmen, 2002; Gerstner et al., 1996). In such a setting, STDP modifies the synaptic strengths, which in turn affect the neuronal response to inputs, which then feeds back onto the learning. Among a population of synapses with various delays, STDP can select those corresponding to ITDs conveyed by the input spike trains. Dynamically, neurons that become selective are driven by increasingly stronger synapses, and this phenomenon self-reinforces. Consequently, the span of delays determines the range of ITDs that can be represented in the resulting synaptic structure.

In this letter, we take inspiration from these previous studies to obtain similar learning dynamics in the case of periodic input spike trains and plastic inhibitory synapses while excitatory connections remain fixed. We build our work on a previous study (Bürck & van Hemmen, 2009) that showed how a neuronal bandpass filter is tuned by the time constants and delays that determine the postsynaptic potentials (PSPs) in the case of feedforward excitatory and inhibitory connections that balance each other. In our model, the PSP time constants and delays (in addition to the synaptic weight) describe the variation of the soma potential induced by each presynaptic pulse. As in the case of ITDs, synaptic delays play a prominent role in determining the competition among synapses. We focus on the STDP-based selection process of synapses for a single neuron depending on the input spike trains that convey a given frequency through an oscillatory modulation of the spiking rate. As a first step toward obtaining an equivalent to a tonotopic map for BMFs, we examine the situation where a postsynaptic neuron responds maximally to the training frequency that is presented to it during a learning epoch, as compared to other frequencies. In other words, the training frequency should coincide with the BMF of the neuron after learning. However, the problem turns out to be more complex for frequency selectivity than for ITD selectivity. This is because the frequency response of neurons is not all-or-nothing as with coincidence detection for ITDs. We investigate conditions on the delays and PSP time constants that allow this unsupervised closed-loop learning scheme for any training frequency within a given range (here, from 20 to 100 Hz).

This letter has three parts. The following section presents a mathematical framework that describes the learning dynamics of feedforward inhibitory connections arising from pairwise STDP. Note that this framework can be straightforwardly transposed to study the converse situation of fixed inhibitory and plastic excitatory connections. In section 3, we derive conditions on the synaptic delays and PSP time constants under which a neuron becomes selective (i.e., it shows a BMF) to the frequency it is trained with. Finally, we present numerical simulations that support our theoretical predictions and compare them with experimental observations.

2.  Mathematical Framework for a Single Output Neuron

This section presents the theory required to examine the learning dynamics that leads to frequency selectivity. The analysis focuses on a single output neuron that receives oscillatory input spike trains through ME excitatory fixed synapses and MI plastic inhibitory synapses, as illustrated in Figure 1A. Plastic inhibitory synapses are modified using the STDP model that will be detailed in section 2.1. During the learning process, all input neurons are driven by an oscillation with a fixed frequency f. The resulting correlation structure and its influence on the learning dynamics will be analyzed in section 2.2. In section 2.3 we introduce a specific neuron model, which is needed to solve the learning dynamics in a closed form. To keep the analytical calculations simple, we examine the idealized case of a single excitatory synapse (ME=1) and an arbitrary number of inhibitory synapses with distinct properties, that is, distinct delays and PSP time constants. Later in the numerical simulations, these individual synapses are replaced by populations of synapses.

Figure 1:

(A) Schematic representation of a single output neuron (top circle) stimulated by MI plastic inhibitory (−) and ME fixed excitatory synapses (+). All input neurons (bottom open circles) are driven by the same oscillatory frequency represented by the bottom gray wavelet with frequency f. (B) Detail of a synapse from the input neuron k to the postsynaptic neuron “out.” The synapse is determined by the weight Jk, axonal delay dak, and dendritic delay ddk. (C) Two examples of symmetric learning window function for pairwise additive STDP. The expression in equation 2.1 corresponds to the dark solid curve and was used in the numerical simulations presented in the following figures. Similar results have been obtained using the gray dashed curve; the two symmetric sides are obtained by the difference of two decaying exponentials with a larger time constant for depression than potentiation.


2.1.  Model of Spike-Timing-Dependent Plasticity.

Throughout this letter, we use the pairwise additive STDP model (Kempter et al., 1999) that captures the crucial dependence on spike timing in synaptic plasticity observed in experimental studies (Markram, Lübke, Frotscher, & Sakmann, 1997; Bi & Poo, 1998) but does not account for other properties such as the weight dependence or interactions between more than two spikes (Sjöström, Turrigiano, & Nelson, 2001; Gütig et al., 2003; Appleby & Elliott, 2006; Morrison, Diesmann, & Gerstner, 2008). In this way, we focus on weight specialization in the process of selecting synapses and leave aside issues about the saturation of individual weights. Additive STDP tends to separate synaptic weights into a bimodal distribution, which appears suitable for modeling the development of neuronal circuitry during the critical period for audition or vision, after which depressed synapses would be eliminated. Homeostatic stability for the weight dynamics is achieved by using additional rate-based plasticity terms (Kempter et al., 1999).

Importantly, STDP occurs at the synaptic site in our model, which means that propagation delays are taken into account and affect the timing of pre- and postsynaptic spiking activity “felt” by the synapse (Senn, Schneider, & Ruf, 2002). For the plastic synapse k from input neuron k to the output neuron (subscript “out”), we denote the axonal delay by dak to account for the axonal transmission and neurotransmitter diffusion. Likewise, the dendritic delay ddk models the propagation of PSPs along the dendritic tree toward the soma, as illustrated in Figure 1B. We further assume that for each synapse, the backpropagation of action potentials toward the synapse, which plays a role in synaptic plasticity, also corresponds to ddk. This assumption is reasonable for a passive membrane and simplifies the notation, but it turns out not to be critical in the calculations. In this way, a presynaptic spike fired by input neuron k at time tk will affect the synapse k at time tk+dak; likewise, the effect of a spike fired by the postsynaptic output neuron at time tout on the synapse will start at time tout+ddk.

Following these conventions, the weight update is determined by two additive contributions, win for each presynaptic spike and the learning window W for each pair of pre- and postsynaptic spikes (Kempter et al., 1999; Gilson et al., 2009), and is given by
formula
2.1
The learning rate ensures a slow weight evolution as compared to other neuronal mechanisms. This is a fair assumption in the context of developmental plasticity. The use of the rate-based term win follows from using additive (weight-independent) STDP: we require the learning dynamics to have a fixed point in order to generate homeostasis; the reasons for this choice are elaborated in section 3.1. An alternative is to use weight-dependent STDP alone (win=0), but care must be taken in choosing the parameters such that the resulting dynamics involves sufficiently strong competition between synapses (Gütig et al., 2003; Gilson & Fukai, 2011).
We use a symmetric learning window function W with both potentiation and depression, as illustrated by the black solid curve in Figure 1C:
formula
2.2
Similar results were obtained using the gray dashed curve, for which potentiation and depression have a broader temporal range. A similar rule was used in a recent paper (Vogels, Sprekeler, Zenke, Clopath, & Gerstner, 2011).
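
To make these conventions concrete, the following Python sketch implements the pairwise additive update for one inhibitory synapse. It is a minimal illustration only: the exact forms of equations 2.1 and 2.2 are not reproduced above, so the symmetric window below follows the gray dashed variant of Figure 1C (a difference of two decaying exponentials in |u|), and the parameter names and values are assumptions; in particular, C_D (whose value did not survive in Table 1) and the learning rate ETA are our own choices, picked so that the window integral is positive.

```python
import numpy as np

# Assumed parameters in the spirit of Table 1 (C_D and ETA are our own choices).
C_P, TAU_P = 20.0, 10e-3    # potentiation amplitude and time constant (s)
C_D, TAU_D = 5.0, 30e-3     # depression amplitude and larger time constant (s)
ETA = 1e-4                  # small learning rate (slow weight evolution)
W_IN = -1.5                 # rate-based term per presynaptic spike; negative sign
                            # as required in section 3.1 (Table 1 lists 1.5)

def W(u):
    """Symmetric learning window evaluated at the spike-time difference u (s)."""
    return C_P * np.exp(-np.abs(u) / TAU_P) - C_D * np.exp(-np.abs(u) / TAU_D)

def weight_change(pre_times, post_times, d_axonal, d_dendritic):
    """Total weight change for one synapse over two lists of spike times (s).

    A presynaptic spike is 'felt' at the synapse at t_pre + d_axonal and a
    backpropagated postsynaptic spike at t_post + d_dendritic, as in the text.
    """
    dJ = ETA * W_IN * len(pre_times)            # rate-based term, one per pre spike
    for t_pre in pre_times:
        for t_post in post_times:
            u = (t_pre + d_axonal) - (t_post + d_dendritic)
            dJ += ETA * W(u)                    # pairwise spike-based term
    return dJ

# Example: random spike times, synapse with 4 ms axonal and 1 ms dendritic delay.
rng = np.random.default_rng(0)
pre = np.sort(rng.uniform(0.0, 1.0, 30))
post = np.sort(rng.uniform(0.0, 1.0, 20))
print(weight_change(pre, post, d_axonal=4e-3, d_dendritic=1e-3))
```
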
Because of the “adiabatic assumption” of a slow weight evolution, the first moment of the weight evolution can be approximated by a time average over a sufficiently long period T, yielding (Kempter et al., 1999; van Hemmen, 2001)
formula
2.3
The constant appearing in equation 2.3 is the integral value of the STDP learning window function W(u). The brackets denote the ensemble average over a stochastic process (Kempter et al., 1999; van Hemmen, 2001). In this framework, we consider spikes to be instantaneous events, that is, much faster than other neuronal mechanisms. The weight change in equation 2.3 is expressed in terms of the input and output firing rates, as well as the spike-time covariance Fk:
formula
2.4a
formula
2.4b
The spike-time covariance Fk(t, u) contains spiking information at a short timescale through u, which is affected by oscillatory spiking activity with sufficiently fast frequency, as we show below.

The above calculations require T to be much larger than the timescale of the neuronal activation mechanisms but much smaller than the timescale of the weight evolution set by the learning rate. This ensures that the time-averaged firing rates and the covariance Fk are well defined, independent of the choice of a particular neuron model. (Details can be found in Gilson et al., 2009.)

2.2.  Oscillatory Inputs and Spike-Time Correlation Structure.

Now we examine the configuration where the input spike train Sk(t) for input k is generated by an inhomogeneous Poisson process with the sinusoidal rate function
formula
2.5
where the frequency f is in the range of STDP (i.e., above 10 Hz). The half-amplitude of the modulation will be identical for all external inputs. Further assuming that the smoothing period T is also sufficiently large to cover many oscillation periods, the corresponding time-averaged firing rate for input k can be approximated by
formula
2.6
The same argument allows us to evaluate the time-averaged covariance Ckl(t, u) between two inputs k and l, which is defined in a similar fashion to Fk in equation 2.4b:
formula
2.7
Figure 2 illustrates a plot of Ckl(t, u) and shows that the oscillation frequency f is reflected in the dependence on the variable u.
Figure 2:

Plot of the spike-time correlogram (gray) as a function of the time difference u for two oscillatory inputs at f = 100 Hz, averaged over T = 200 s. The black dashed curve corresponds to the expression for C0(u) in equation 2.7.
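
A correlogram of this kind can be reproduced with the short sketch below. Since equation 2.5 is not reproduced above, the mean rate and the modulation half-amplitude (20 Hz each) are assumed values; the spike trains are generated by thinning a homogeneous Poisson process.

```python
import numpy as np

def oscillatory_poisson(f, rate_mean, rate_half_amp, duration, rng):
    """Inhomogeneous Poisson spike train with rate
    r(t) = rate_mean + rate_half_amp * cos(2*pi*f*t), generated by thinning."""
    r_max = rate_mean + rate_half_amp
    n = rng.poisson(r_max * duration)
    candidates = np.sort(rng.uniform(0.0, duration, n))
    rates = rate_mean + rate_half_amp * np.cos(2 * np.pi * f * candidates)
    keep = rng.uniform(0.0, r_max, n) < rates
    return candidates[keep]

def correlogram(spikes_k, spikes_l, max_lag, bin_width):
    """Histogram of pairwise spike-time differences (t_l - t_k) within +/- max_lag."""
    edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for t in spikes_k:
        diffs = spikes_l - t
        counts += np.histogram(diffs[np.abs(diffs) <= max_lag], bins=edges)[0]
    return edges, counts

rng = np.random.default_rng(1)
f, T = 100.0, 200.0        # oscillation frequency (Hz) and averaging period (s), as in Figure 2
s1 = oscillatory_poisson(f, 20.0, 20.0, T, rng)
s2 = oscillatory_poisson(f, 20.0, 20.0, T, rng)
edges, counts = correlogram(s1, s2, max_lag=0.05, bin_width=0.001)
# The counts oscillate in the lag u at the input frequency f, as in C_kl(t, u) above.
```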


When a neuron is stimulated by oscillatory inputs with a given frequency spectrum, the input-to-neuron covariance Fk in equation 2.4b is affected by the input-to-input covariances Ckl. In order to evaluate the synaptic filtering that transforms Ckl into Fk, we now choose a concrete neuron model.

2.3.  Poisson Neuron Model.

The Poisson neuron is a stochastic model of a spiking neuron (Kempter et al., 1999). It generates a noisy output spike train, which accounts for the variability observed in vivo. The influence of a spike from input l on the soma potential of the postsynaptic neuron is delayed by the sum of the axonal and dendritic delays, dal+ddl. The time course of the corresponding somatic PSP is described by a kernel function, which we scale by the synaptic weight Jl(t),
formula
where t0 is the time when the spike is fired by input l. For illustrative purposes, this analysis uses alpha functions, each defined by a single time constant, for the PSP kernels:
formula
2.8
The effects of all input spikes linearly add up over time to determine the rate function
formula
2.9
from which spikes are drawn using Poisson sampling. In equation 2.9, the PSP kernels are convolved with the corresponding input spike trains. Recall that the inhibitory weights Jk are positive: larger values mean stronger inhibition.
Provided the rate function remains positive at all times, which we assume, the output firing rate and the spike-time covariances Fk can be expressed in terms of the input rates and the input-input correlations Clk (Kempter et al., 1999; Gilson et al., 2009). The corresponding synaptic filtering is determined by the weights and the PSP properties:
formula
2.10a
formula
2.10b
The two expressions in equations 2.10a and 2.10b are good approximations when the weights vary very slowly compared to the neuronal activation dynamics; the dependence on time t then comes only from the slowly evolving weights.
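
The following sketch makes the Poisson neuron concrete in discrete time: each input spike train is convolved with a delayed alpha kernel, the contributions are summed (with inhibitory synapses counted negatively), and output spikes are drawn bin by bin. The time step, baseline rate, kernel normalization, and rectification of negative rates are our own modeling choices, not taken from the equations above.

```python
import numpy as np

DT = 1e-4  # simulation time step in seconds (assumed)

def alpha_kernel(tau, dt=DT, cutoff=8.0):
    """Unit-area alpha function (t / tau**2) * exp(-t / tau), sampled up to cutoff*tau."""
    t = np.arange(0.0, cutoff * tau, dt)
    return (t / tau**2) * np.exp(-t / tau)

def poisson_neuron(spike_trains, weights, signs, taus, delays, nu0, duration, rng):
    """Linear Poisson neuron: rate(t) = nu0 + sum of delayed, weighted PSPs.

    signs[k] is +1 for an excitatory synapse and -1 for an inhibitory one
    (inhibitory weights are positive, as in the text). Negative rates are
    rectified to zero, and spikes are drawn per bin with probability rate*DT.
    """
    n_bins = int(duration / DT)
    rate = np.full(n_bins, float(nu0))
    for spikes, J, sgn, tau, d in zip(spike_trains, weights, signs, taus, delays):
        drive = np.zeros(n_bins)
        idx = ((np.asarray(spikes) + d) / DT).astype(int)   # arrival bins, delay included
        np.add.at(drive, idx[idx < n_bins], 1.0)
        rate += sgn * J * np.convolve(drive, alpha_kernel(tau))[:n_bins]
    rate = np.maximum(rate, 0.0)
    out_spikes = np.nonzero(rng.random(n_bins) < rate * DT)[0] * DT
    return out_spikes, rate

# Example: one excitatory and one inhibitory input firing at random times.
rng = np.random.default_rng(0)
exc = np.sort(rng.uniform(0.0, 1.0, 50))
inh = np.sort(rng.uniform(0.0, 1.0, 50))
out, rate = poisson_neuron([exc, inh], weights=[0.1, 0.05], signs=[+1, -1],
                           taus=[0.9e-3, 3e-3], delays=[0.8e-3, 5e-3],
                           nu0=20.0, duration=1.0, rng=rng)
print(len(out), rate.mean())
```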

3.  Results

We now analyze the mathematical formulation of the learning dynamics from section 2 to arrive at a general understanding of the dynamical principles that drive the emergence of frequency selectivity in a single neuron, using a simplified configuration. More precisely, we examine conditions on the synaptic properties such that the BMF of the trained neuron corresponds to the training frequency, as detailed below. Then we present simulation results using a more realistic setting where a single neuron is tuned appropriately in order to verify the theoretical predictions.

Our scheme comprises three steps. First, learning must depend on the frequency of the training signal. The axonal and dendritic delays primarily determine how the synapses compete via STDP and, in particular, which ones are the “winners.” Second, the selected synapses give rise to frequency selectivity, that is, the trained neuron responds maximally to a specific frequency—its BMF. The BMF is determined by the PSP time constants and the delays of the winner synapses (Bürck & van Hemmen, 2009). Third, the BMF resulting from learning must be the same as the training frequency. Therefore, we need to match the distributions of PSP time constants and delays associated with the synapses. The following analysis describes the effect of these different PSP time constants and delays on the learning, which results in a complex relationship between the training frequency and the BMF.

3.1.  Mathematical Analysis of STDP Dynamics and Neuronal Frequency Response.

The mathematical analysis is constrained to the case of a single excitatory synapse with fixed strength JE (typically taken equal to 1 in numerical examples) and inhibitory synapses with plastic strengths Jk(t), considered to be homogeneous initially. The (time-averaged) spike-time correlations contain information about the input frequency in contrast to the time-averaged firing rates (compare equations 2.6 and 2.7). To capture this information, competition between synapses must rely on spike-based effects. For this purpose, we use a form of STDP that itself generates homeostasis: the stabilization of the output firing rate leads to the nullification of rate-based effects in the learning equation, which corresponds to a fixed point in equation 2.3 for all input synapses:
formula
3.1
Stability is ensured by the term win<0 and the dominating potentiation for STDP, namely, a positive integral of the learning window W. These conditions are the converse of those for plastic excitatory connections (Kempter et al., 1999; Gilson et al., 2009). From equation 3.1, we obtain the following equilibrium value for the neuronal firing rate:
formula
3.2
Because we use only a presynaptic rate-based contribution, the equilibrium firing rate is independent of the input firing rates. Note that the approximation in equation 3.2 neglects the overall effect of input spike-time correlations. Nevertheless, by adjusting the learning parameters, we can tune the neuronal output firing rate: in particular, a lower value for win leads to a higher mean inhibitory weight and thus a lower output firing rate. This property will be used to achieve an approximate balance between inhibition and excitation.
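
To make this reasoning explicit, here is a hedged sketch of the rate-based fixed point in the spirit of Kempter et al. (1999). The notation is ours (the symbols for the input rates, the output rate, and the integral of the learning window are assumptions), since the exact equations 2.3, 3.1, and 3.2 are not reproduced above.

```latex
% Sketch of the homeostatic fixed point; notation assumed, not the authors'.
% Rate-based part of the averaged learning equation:
\begin{align}
  \frac{d\langle J_k \rangle}{dt}
    \;\propto\; w^{\mathrm{in}}\,\nu_k \;+\; \tilde{W}\,\nu_k\,\nu_{\mathrm{out}}
    \;+\; \text{(spike-based term)},
  \qquad \tilde{W} \equiv \int W(u)\,\mathrm{d}u .
\end{align}
% Setting the rate-based part to zero for all synapses gives the equilibrium
% output rate, positive because w^in < 0 and the window integral is positive:
\begin{align}
  w^{\mathrm{in}} + \tilde{W}\,\nu_{\mathrm{out}} = 0
  \quad\Longrightarrow\quad
  \nu_{\mathrm{out}} \;\approx\; -\,\frac{w^{\mathrm{in}}}{\tilde{W}} \;>\; 0 ,
\end{align}
```

which is independent of the input rates, consistent with the statement above.
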
When homeostasis is satisfied, the synaptic competition is dominated by the spike effects. The latter can be evaluated through the convolution of the spike-time covariance in equation 2.10b and the STDP learning window shifted by the delays (see the third term on the right-hand side of equation 2.3). For synapse k, this “learning” integral is thus a function of the input frequency f and can be rewritten as
formula
3.3
where we have defined the “transfer” function G=Gt (it actually depends on the current distribution of the weights at time t) that describes the synaptic input-output filtering performed by all synapses,
formula
3.4
and the hat denotes the Fourier transform of any suitably integrable function h:
formula
3.5
From the right-hand side of equation 3.3, we see that the difference in the STDP effect between synapses is generated by the delay differences dak−ddk. This means that a distribution of these delay differences across the pool of synapses is required in order to obtain effective synaptic competition.
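
Equations 3.3 to 3.5 are not reproduced above, so the following time-domain sketch only illustrates the mechanism: for one synapse, a sinusoidal covariance is filtered by its alpha kernel and integrated against the window W shifted by the delay difference. The window parameters and the proportionality constants are assumptions (the same as in the earlier STDP sketch).

```python
import numpy as np

def W(u, c_p=20.0, tau_p=10e-3, c_d=5.0, tau_d=30e-3):
    """Assumed symmetric learning window (difference of exponentials in |u|)."""
    return c_p * np.exp(-np.abs(u) / tau_p) - c_d * np.exp(-np.abs(u) / tau_d)

def learning_integral_proxy(f, tau_k, delay_diff, u_max=0.2, dt=1e-4):
    """Proxy for FW_k at input frequency f: the oscillatory covariance
    cos(2*pi*f*u) is filtered by a unit-area alpha kernel with time constant
    tau_k and integrated against W shifted by delay_diff = d_a - d_d."""
    u = np.arange(-u_max, u_max, dt)
    kernel = np.where(u >= 0.0, (u / tau_k**2) * np.exp(-np.abs(u) / tau_k), 0.0)
    filtered = np.convolve(np.cos(2 * np.pi * f * u), kernel, mode="same") * dt
    return np.sum(W(u + delay_diff) * filtered) * dt

# Two synapses with Table 1-like parameters, compared at two training frequencies.
for f in (30.0, 70.0):
    proximal = learning_integral_proxy(f, tau_k=1e-3, delay_diff=4.8e-3 - 0.8e-3)
    distal = learning_integral_proxy(f, tau_k=9e-3, delay_diff=3.3e-3 - 3.8e-3)
    print(f, proximal, distal)
```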

Now we detail the procedure used to evaluate any arbitrary configuration of PSP time constants and delays. As we explained earlier, the evaluation consists of three sequential steps, which are performed for all training frequencies in a given range:

  1. We examine which (single) synapse wins the STDP competition.

  2. The winner synapse determines the frequency responsiveness of the neuron, which gives its BMF.

  3. We evaluate how close the BMF is to each training frequency in the considered range.

We fix a given training frequency and define the “best” synapse as the one whose index maximizes equation 3.3; it is thus expected to be maximally potentiated among all synapses. From this, corresponding to step 1, we evaluate the idealized filter for which the best synapse is potentiated to its maximal value (the upper bound) and all others are depressed to zero:
formula
3.6
This winner-take-all scenario estimates the frequency response of the neuron after training, step 2, via the Fourier transform of equation 3.6. Here we assume a perfect balance between inhibition and excitation, which has been shown to improve the frequency filtering of the neuron, in particular, by suppressing the response to very low frequencies (Bürck & van Hemmen, 2009). With this balance, the constant (zero-frequency) components of the excitatory and inhibitory contributions cancel in the expression above, so the frequency response is given by
formula
3.7
Finally, we can evaluate the frequency f that maximizes this response, namely, the BMF of the trained neuron. It is given by
formula
3.8
Step 3 consists of matching the training frequency with the BMF after training, such that the neuron becomes selective to the frequency that is presented, and this for all training frequencies in a given range. Intuitively, synapses with short delays and time constants should be selected when the neuron is trained with a high frequency, as they lead to bandpass filters for high frequencies. Conversely, lower frequencies should be associated with larger delays and time constants. A trick here consists of keeping the sum of delays dak+ddk roughly constant for all synapses, such that the expression in equation 3.7 differs only because of the distinct PSP time constants across the synapses k. In this way, the frequency response given in equation 3.8 mainly relies on the time constants of the PSP response, while the learning selection mostly depends on the synaptic delays. However, this is not the only solution to this optimization problem, for which many sets of parameters can be found that give a good approximate theoretical solution.
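
The three-step procedure can be prototyped as below. This is a sketch under assumed forms only: the closed-form Fourier transforms correspond to the symmetric window and unit-area alpha kernels assumed in the earlier sketches, and the balanced response is taken to be the magnitude of the difference between the excitatory kernel and the winner inhibitory kernel, a stand-in for equation 3.7, which is not reproduced here.

```python
import numpy as np

def window_ft(f, c_p=20.0, tau_p=10e-3, c_d=5.0, tau_d=30e-3):
    """Fourier transform of the assumed symmetric window (real and even)."""
    return (2.0 * c_p * tau_p / (1.0 + (2.0 * np.pi * f * tau_p) ** 2)
            - 2.0 * c_d * tau_d / (1.0 + (2.0 * np.pi * f * tau_d) ** 2))

def alpha_ft(f, tau, delay=0.0):
    """Fourier transform of a unit-area alpha kernel with an overall delay."""
    return np.exp(-2j * np.pi * f * delay) / (1.0 + 2j * np.pi * f * tau) ** 2

def learning_proxy(f, tau, d_ax, d_dend):
    """Closed-form proxy for the learning integral of one synapse at frequency f."""
    return window_ft(f) * np.real(alpha_ft(f, tau) * np.exp(-2j * np.pi * f * (d_ax - d_dend)))

def bmf_after_training(f_train, taus, d_ax, d_dend, tau_e=0.9e-3, d_e=0.8e-3):
    f_grid = np.arange(5.0, 150.0, 1.0)
    # Step 1: winner-take-all selection of the synapse maximizing the learning proxy.
    k_star = int(np.argmax([learning_proxy(f_train, t, a, d)
                            for t, a, d in zip(taus, d_ax, d_dend)]))
    # Step 2: balanced frequency response with only the winner inhibitory synapse.
    response = np.abs(alpha_ft(f_grid, tau_e, d_e)
                      - alpha_ft(f_grid, taus[k_star], d_ax[k_star] + d_dend[k_star]))
    # Step 3: the BMF is the frequency of maximal response; compare it to f_train.
    return k_star, f_grid[int(np.argmax(response))]

# Example pool spanning the ranges of Table 1 (nonuniform spacing, see section 3.4).
M = 50
x = (np.arange(M) / (M - 1)) ** 1.6
taus = 1e-3 + 8e-3 * x
d_dend = 0.8e-3 + 3.0e-3 * x
d_ax = 4.8e-3 - 1.5e-3 * x
for f_train in (30.0, 50.0, 70.0):
    print(f_train, bmf_after_training(f_train, taus, d_ax, d_dend))
```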

In contrast to the scheme detailed here, the problem of tuning parameters to detect ITDs mainly consists in performing step 1—matching the synaptic delays and input ITDs to obtain the desired learning outcome. Since the trained neuron responds or not depending on the relative timing of two inputs (from the two ears), this means that steps 2 and 3 are much simpler in the case of ITDs.

3.2.  Dendritic Location and PSP Properties.

In our model, the shape of the STDP learning window W(u) is identical for each synapse. However, propagation delays along the dendrite shift the STDP curve, as was observed experimentally (Letzkus, Kampa, & Stuart, 2006, Fig. 8). Here both synaptic delays shift W(u) according to their difference dak−ddk (see equation 2.1). This can strongly affect the weight specialization (Leibold et al., 2002; Senn et al., 2002; Lubenov & Siapas, 2008; Gilson, Burkitt, Grayden, Thomas, & van Hemmen, 2010), as described here by FWk in equation 3.3. In parallel, the time constants associated with the somatic PSP kernels, as well as the delays, also shape the frequency response of the output neuron (Bürck & van Hemmen, 2009). Specifically, we postulate that the time constants for inhibitory PSPs depend on the distance to the soma, as schematically represented in Figures 3A and 3B. Before going further with the frequency selectivity, we take a closer look at the biological relevance of this assumption.

Figure 3:

Schematic dependence of the somatic PSP properties on the dendritic location. (A) The soma (gray filled circle) has two dendrites (thick straight lines), which are stimulated by input axons through inhibitory synapses 1 and 2 (axonal terminations represented by the open circles) and an excitatory synapse (filled circle). (B) Time courses of the corresponding PSP responses felt at the soma, including the delays. (C) Evolution over time of the potential following the injection of an inhibitory current. Darker gray corresponds to a current injection more distant from the soma (1 and 2 are extreme cases as in panel A). The solid curves are the solutions of the cable equation, 3.9, and the dashed curves are fitting alpha functions including delay. See the text for details on the fitting procedure. (D) Plots of the time constants (dashed curve) and delays (solid curve) of the alpha functions used to fit the solid curves in panel C. The x-axis is the distance to the soma along the dendrite. The dashed-dotted curve indicates a correction of the time constant to better approximate the peak of the solid curve.


As an example to provide motivation, we consider a dendrite with diameter d=0.5 μm, membrane capacitance Cm=1 μF/cm2, and specific resistance of the membrane Rm=10 kΩ cm2 and of the intracellular medium Ri=200 Ω cm. In the cable equation theory with passive propagation, these rather typical values lead to a length constant of about 0.25 mm and a time constant of 10 ms. The somatic potential V(t) corresponding to a current injected at a distance x is described by Green's function,
formula
3.9
where the constant a<0 is proportional to the inhibitory synaptic current instantaneously injected at t = 0 (Koch & Segev, 1998). Here a is chosen such that each PSP has a normalized area under the curve V(t). As illustrated in Figure 3C, a more distal synaptic input results in a slower somatic PSP (solid curves in darker gray). A fit to delayed alpha functions (alpha functions shifted by a delay and set to zero before it) provides a fair approximation (dashed curves). This motivates our choice of using alpha functions in equation 2.8 to model the PSPs. The time constants (dashed curve) and dendritic delays ddk (solid curve) in Figure 3D correspond to the best fits in terms of area under the curves. For extreme cases such as inhibitory synapses 1 and 2, the alpha functions become less accurate in approximating the solutions of equation 3.9, especially the peaks. This led us to use a different fitting method based on the fitting delay obtained previously and the peak of the curve. This gives the dashed-dotted curve that has a broader range than the dashed one. In any case, the delay and PSP time constant monotonically increase with the distance to the soma (see Figure 3D).
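
A sketch of this fitting procedure is given below, using the standard Green's function of an infinite passive cable with the length and time constants computed above; the injection distances and the initial guesses for the fit are illustrative choices, not the authors' exact values.

```python
import numpy as np
from scipy.optimize import curve_fit

TAU_M = 10e-3     # membrane time constant (s), Rm * Cm from the values above
LAMBDA = 0.25e-3  # length constant (m), computed from the cable parameters above

def cable_green(t, x):
    """Somatic potential shape for a brief current injected at distance x on an
    infinite passive cable (standard Green's function, up to a constant factor)."""
    T = np.clip(t, 1e-9, None) / TAU_M
    g = np.exp(-(x / LAMBDA) ** 2 / (4.0 * T) - T) / np.sqrt(4.0 * np.pi * T)
    g[t <= 0.0] = 0.0
    return g

def delayed_alpha(t, amp, tau, delay):
    """Alpha function shifted by `delay`, zero before the delay; peak value = amp."""
    s = np.clip(t - delay, 0.0, None)
    return amp * (s / tau) * np.exp(1.0 - s / tau)

dt = 1e-5
t = np.arange(0.0, 60e-3, dt)
for x in (50e-6, 150e-6, 300e-6):            # illustrative distances from the soma
    v = cable_green(t, x)
    v /= v.sum() * dt                        # normalize the area under the curve
    p0 = [v.max(), t[np.argmax(v)], 1e-4]    # rough initial guess from the peak
    (amp, tau_fit, delay_fit), _ = curve_fit(delayed_alpha, t, v, p0=p0)
    print(f"x = {x * 1e6:.0f} um: tau ~ {tau_fit * 1e3:.2f} ms, delay ~ {delay_fit * 1e3:.2f} ms")
```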

3.3.  Tuning Synaptic Parameters to Obtain Frequency Selectivity.

Following the results in Figure 3, our model abstracts the physiological property that proximal synapses have shorter dendritic delays and PSP time constants than distal ones. This leaves a degree of freedom in tuning the axonal delays. Now we examine how to choose the latter together with the synaptic heterogeneity in order to obtain the desired learning dynamics. We focus on a range of training frequencies between 10 and 100 Hz, restricted to multiples of 5 Hz to simplify the analysis. For illustration purposes, we consider a pool of 100 synapses and tune the synaptic delays dak and ddk as well as the PSP time constants. Our choice of alpha functions for the PSP kernels in equation 2.8, which are described by a single time constant, also keeps the optimization as simple as possible. With such frequencies, appropriate parameters range between 0 and 10 ms, which is physiologically plausible, as shown earlier. We use a random exploration to obtain suitable sets of parameters, where the dendritic delays and PSP time constants vary jointly in a similar manner to Figure 3D.

Once the synaptic parameters are fixed, we can evaluate the effect of STDP via the learning integral FWk in equation 3.3. For the learning scheme to work efficiently, spike effects should be significant as compared to the rate effects in equation 2.3. This requires that the Fourier transform of W have significant power (as given by the absolute value) in the range of frequencies considered, compared to the integral value of the window, which corresponds to the value of the Fourier transform at f = 0, as illustrated in Figure 4. Likewise, the trained neuronal filter will be more selective if each possible expression for the frequency response in equation 3.7 has a clear maximum, and this for each possible potentiated synapse (without yet considering the relationship between the training frequency and the BMF). Two examples are plotted in Figure 4 (see the dashed gray lines).

Figure 4:

Dependence of the strength of learning and neuronal response on the input frequency f. The black curve represents the absolute value of the Fourier transform of the learning window in equation 2.2 (see the black curve in Figure 1). The gray dashed and dashed-dotted curves are the Fourier transforms of the neuronal response for the two extreme time constants, 1 and 9 ms, respectively, in Table 1.


Synapses compete with each other, and those with indices k corresponding to larger values of FWk will be maximally potentiated by STDP. We thus consider the curves of FWk for each training frequency and, in a winner-take-all fashion, retain only the “best” synapse that corresponds to the maximum. In Figure 5A, the corresponding curves are plotted for three training frequencies. In order to obtain effective frequency-dependent learning, curves for distinct training frequencies should have a clear maximum for different synapse indices k. Varying dendritic and axonal delays in opposing fashion (i.e., the former increasing with k while the latter decreases) leads to strong selectivity, as this implies large delay differences dak−ddk across synapses. The relation between the training frequency and the best synapse is plotted in Figure 5B. Higher training frequencies select smaller synapse indices, corresponding to shorter PSP time constants and dendritic delays.

Figure 5:

(A) Value of the learning integral in equation 3.3 plotted against the synapse index k for different training frequencies: 30, 50, and 70 Hz, in order from darker to lighter gray. Synapses with larger indices correspond to larger PSP time constants and dendritic delays. The curves have been centered vertically. (B) Index of the “best” synapse as a function of the training frequency, corresponding to the respective maxima of the curves in panel A. (C) Gain in amplitude of the trained neuron to the presentation of an input frequency f, given by the modulus of the neuronal response of equation 3.7 evaluated using the “best” synapse for each training frequency in panel B. The thick curves correspond to those in panel A with the same shading. The thin curves correspond to higher training frequencies, spaced by 5 Hz. The dashed curve represents the initial frequency response before learning for homogeneous synapses. (D) Plot of the preferred frequency or BMF in equation 3.8, corresponding to the maxima of the response curves represented in panel C, for training frequencies between 10 and 100 Hz. The dotted line indicates a perfect match between the BMF and the training frequency. (E) Plot of the synaptic parameters obtained from our optimization procedure: time constants (black open circles), dendritic delays (gray crosses), and axonal delays (gray pluses). The temporal range is similar to that in Figure 3D, so the spatial range on the x-axis (arbitrary units here) corresponds to several hundred micrometers.


From the indices of the best synapses, we obtain the corresponding frequency response curves, which are plotted in Figure 5C. Their respective maxima give the BMFs. As shown in Figure 5D, it is possible to find parameters such that the training frequency and the BMF match closely in the range 20 to 90 Hz. In other words, a suitable distribution of PSP time constants and delays can be found that leads to the theoretical unsupervised scheme in which the neuronal bandpass filter tunes itself to its training frequency.

3.4.  Numerical Simulation of a Single Neuron.

The procedure in the previous section has led us to choose the parameters in Table 1, which we verify using numerical simulation. In order to find those parameters, we simply performed an exploration of randomly chosen sets of parameters within a given range (between 0 and 10 ms) and kept the best ones. We constrained the search to configurations where the excitatory synapses have faster time constants than the inhibitory synapses (see Figure 3). It was found that an equidistant distribution of the parameters between two extremal values (e.g., a=0.8 and b=3.8 ms for ddk in Table 1) requires more fine-tuning. Instead, we use nonuniformly distributed parameters for the inhibitory synapses in order to have more synapses with short than with large time constants (see Figure 5E). Rather than a+(b−a)(k−1)/(MI−1) for synapse k, we use a+(b−a)(k−1)^1.6/(MI−1)^1.6 (see the short sketch after Table 1). The specific value 1.6 has been chosen for illustrative purposes only. When the PSP time constants and dendritic delays increase almost linearly as in Figure 3D, this situation corresponds to a larger density of proximal synapses compared to distal ones.

Table 1:
Parameters Used in Numerical Simulation.
Excitatory synapses  
   Number of synapses ME 50 
   Dendritic delays dE 0.8 ms 
   Time constants  0.9 ms 
   Fixed weights  
Inhibitory synapses  
   Number of synapses MI 100 
   Dendritic delays ddk 0.8–3.8 ms 
   Axonal delays dak 4.8–3.3 ms 
   Time constants  1–9 ms 
   Initial weights  
Learning window  
   cP 20 
    10 ms 
   cD 
    30 ms 
Rate-based plasticity  
   win 1.5 
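
For concreteness, here is a minimal sketch of this nonuniform spacing, using the parameter ranges of Table 1; applying the same fraction to the delays and the time constants mirrors the joint variation shown in Figure 5E.

```python
import numpy as np

M_I = 100                                      # number of inhibitory synapses (Table 1)
frac = (np.arange(M_I) / (M_I - 1)) ** 1.6     # nonuniform spacing: (k-1)^1.6 / (M_I-1)^1.6

d_dend = 0.8e-3 + (3.8e-3 - 0.8e-3) * frac     # dendritic delays, 0.8 to 3.8 ms
d_axon = 4.8e-3 + (3.3e-3 - 4.8e-3) * frac     # axonal delays, 4.8 down to 3.3 ms
tau_psp = 1.0e-3 + (9.0e-3 - 1.0e-3) * frac    # PSP time constants, 1 to 9 ms
# More synapses end up with short time constants than with long ones,
# i.e., a larger density of proximal synapses, as stated above.
```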

To maintain mathematical tractability in our prior calculations, we have made a number of assumptions that do not hold in simulations. First, the homeostatic stability is qualitatively satisfied, but the corresponding equilibrium value significantly differs from the prediction in equation 3.2. Two factors explain such discrepancies: the nonlinearity of the Poisson neuron for negative soma potential (i.e., when inhibition overpowers excitation) and the presence of spike-time correlations that are neglected in the derivation of equation 3.2. Because input-to-neuron correlations depend on the input frequency (see equation 2.10b), the homeostatic equilibrium value differs across training frequencies, meaning different levels of balance between inhibition and excitation. In our simulations, we observed a stronger response in firing rate for larger training frequencies, which arises because the effect of STDP vanishes for large frequencies (see Figure 4) and thus leads to a weaker potentiation of inhibitory weights. Second, in order to predict the frequency response of the trained neuron, we have considered the ideal winner-take-all situation where a single synapse is selected by STDP (i.e., unique values for the time constant and delays). Together, these two factors mean that the calculation in equation 3.6 only approximates the neuronal filtering after the STDP learning in the more realistic situation that is simulated. The theory, however, still provides sufficient information on the trends of our learning system to determine the PSP properties and delays such as to obtain the desired unsupervised learning outcome.

Figure 6A shows a typical evolution of the weights when the neuron is excited by a single training frequency (here, 30 Hz). A subgroup of synapses with similar parameters is potentiated while the remainder is depressed in a winner-share-all fashion. With our choice of learning rate, a typical STDP update corresponds to a small fraction of the total range, as weights are constrained between the bounds 0 and 0.1. Here the neuron is trained for 1000 s, which corresponds to many thousands of spikes per synapse. Despite the weight dynamics being significantly noisy, the functional weight structure remains relatively stable after 600 s. For different training frequencies, distinct subsets of synapses are selected, as shown in Figures 6B to 6D for the training frequencies 30, 50, and 70 Hz; the arrow indicates the index of the best synapse predicted by the theory.

Figure 6:

(A) Evolution of the weights over 1000 s for a single trial with f = 30 Hz. The thick lines represent the mean for four illustrative groups of two synapses with the same parameters; darker gray indicates shorter time constants; the second-brightest group is the most potentiated of the four. Synapses are potentiated differently depending on their parameters. (B–D) Weight distribution after learning for the training frequency equal to (B) 30, (C) 50, and (D) 70 Hz. Weights Jk are plotted against their respective PSP time constants. Each distribution corresponds to an average over 10 trials of 1000 s similar to panel A. In each panel, the arrow indicates the predicted winner in Figure 5B. The mismatch in panel D indicates that the final weight specialization slightly differs from the initial weight splitting from which the predicted winner is evaluated.


Now we measure the frequency response via the neuronal firing rate (averaged over 50 s) when presenting successively all test frequencies between 0 and 100 Hz (with steps of 5 Hz) while freezing the weight structure obtained after 1000 s of STDP learning. The corresponding curves averaged over 10 trials are represented in Figure 7A for the same frequencies as in Figure 6, namely, 30, 50, and 70 Hz. The error bars represent the standard deviation of the frequency responses, which indicates the high variability across the 10 trials. The summary for all training frequencies from 20 to 90 Hz is illustrated in Figure 7B, where the white circles represent the maximum for each curve similar to those in Figure 7A. The background color for the “column” corresponding to each training frequency corresponds to the intensity of the frequency response curve: a darker shading indicates a stronger response, whose maximum corresponds to the white circle; note that the curve for each training frequency has been rescaled independently of the other training frequencies. In addition to the BMF predicted from the trial-averaged response (represented by the circles), we evaluate another estimator for the BMF: we interpolate the frequency response for each trial using a polynomial of degree 4 and then determine its maximum. The error bars indicate the mean of those maxima averaged over all trials and their standard deviation. Our results satisfactorily agree with the theoretical predictions (dashed gray curve) for the range between 30 and 70 Hz, but they become poorer beyond this range. We stress that the neuronal system studied here is very noisy, owing to our input model, which contains no temporal correlations other than those arising from the oscillatory firing rates, and to the stochastic nature of the Poisson neuron. The results presented here provide strong support for the hypothesis that STDP is capable of producing frequency selectivity in this context that assumes minimal spike-time information.
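
The interpolation-based estimator can be sketched as follows; the synthetic response values are placeholders standing in for the measured firing rates of one trial on the tested frequency grid.

```python
import numpy as np

def bmf_from_response(test_freqs, rates, degree=4):
    """Fit a degree-4 polynomial to one noisy frequency response curve and
    return the frequency at which the fitted curve is maximal."""
    coeffs = np.polyfit(test_freqs, rates, degree)
    fine = np.linspace(test_freqs.min(), test_freqs.max(), 1000)
    return fine[np.argmax(np.polyval(coeffs, fine))]

# Placeholder data: a noisy response peaking near 50 Hz, standing in for one trial.
rng = np.random.default_rng(2)
freqs = np.arange(0.0, 105.0, 5.0)
rates = 30.0 * np.exp(-((freqs - 50.0) / 25.0) ** 2) + 2.0 * rng.standard_normal(freqs.size)
print(bmf_from_response(freqs, rates))
```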

Figure 7:

(A) Frequency response curves of the trained neuron for training frequencies of 30, 50, and 70 Hz, in order from darker to lighter gray. Averaged response over 10 training trials with the standard deviation indicated by the error bars. (B) Matching between the training frequency and the BMF for a single neuron trained with all frequencies between 20 and 90 Hz, with a step of 5 Hz. The background shading indicates, for each training frequency, the level of the neuronal firing rate, with a color coding ranging from the minimum (white) to the maximum (black), averaged over the 10 training trials. Two methods were used to evaluate the BMF. Circles: maximum of the curves in panel A that represent the trial-averaged response. Curve with error bars: trial-to-trial variability of the maximum response evaluated using an interpolation of the smoothed response curve with a polynomial of degree 4 for each trial. The theoretical predictions (dashed gray curve) correspond to the crosses in Figure 5D.


4.  Discussion

We have examined the unsupervised emergence of frequency selectivity when STDP trains a single neuron. Synaptic competition arises from the interaction of STDP, the inhomogeneous postsynaptic responses, and the periodicity of the input spike trains. In a simple configuration where the neuron has fixed excitatory and plastic inhibitory synapses (see Figure 8), this weight selection scheme causes the neuron to become maximally responsive to a particular input frequency, termed its BMF. Our mathematical framework allows the derivation of conditions on the synaptic parameters such that the BMF after learning matches the training frequency presented to the neuron, and this for a range of training frequencies (20–90 Hz).

Figure 8:

Schematic representation of the trained neuron. The soma (gray-filled circle) supports two dendrites (thick straight lines) that receive inputs from axons. A single training frequency (gray sinusoidal wavelets) stimulates both the inhibitory plastic synapses (open circles at the end of each axon) and a fixed excitatory synapse (filled circle). (A) When a low frequency is presented, the more distal synapses are potentiated (thick line), whereas the more proximal ones are depressed (dashed line). (B) Conversely, a high input frequency leads to the selection of the more proximal synapses. (C) Equivalent phase delay of the inhibitory oscillation with respect to the excitatory signal, given for each synapse by the axonal delay multiplied by the training frequency. The three curves correspond to three training frequencies, in increasing order: dashed-dotted, dashed (50 Hz), and solid (80 Hz). The crosses indicate the phase delays of the selected synapses for all training frequencies in the range [10, 100] Hz.


Our results aim to shed light on the tuning of neuronal circuitry required to perform a frequency decomposition of input signals. Our theoretical contribution lies in analyzing the interaction between oscillatory spiking activity and STDP. First, we have shown how oscillatory spike trains induce temporal correlations and how STDP captures this short-time spiking information, in particular, the effect of shifting the STDP curve by means of delays (Morrison et al., 2008; Gilson et al., 2010). This has allowed us to determine conditions on the PSP time constants and delays under which STDP results in neuronal frequency selection. Note that excitatory synapses have faster time constants and delays than inhibitory synapses. With the STDP model used here that results in homeostasis, synapses compete with each other, which means that only the maximally potentiated synapses (when evaluating the spike-based terms involving the convolution with the learning window W) will be effectively potentiated. In the end, the synaptic structure obeys a detailed balance as described by Vogels et al. (2011) that is determined by the training frequency. Second, the analysis of the weight dynamics is similar to that used in other applications of STDP, such as auditory ITD processing (Leibold & van Hemmen, 2005; Leibold et al., 2002; Gerstner et al., 1996) or selection of the “effective” synaptic delay (Senn et al., 2002), but the desired frequency selectivity implies further constraints on the parameters compared to those previous studies. Although not formulated as an STDP rule, the Hebbian learning rule that Grigorév and Bibikov (2010) used favors excitatory synapses whose delays correspond to the periodicity of the presented signal. That configuration is the converse of our proposed scheme based on inhibitory STDP. Importantly, our version of STDP is completely local, as compared to many previous models. Relying on similar dynamics, another recent theoretical study investigated how to desynchronize a recurrent neuronal network using STDP on excitatory connections, in the context of deep-brain stimulation (Pfister & Tass, 2010). There the goal was to use periodic stimulation so that STDP weakens the recurrent feedback in the network (by shifting the fixed point of the synaptic dynamics). However, that study did not consider inhomogeneous synaptic responses, and although the resulting STDP dynamics was frequency dependent, the input frequency did not play a significant role in the competition between synapses there. Note also that beyond the application to frequency learning, the equations derived here can be straightforwardly adapted to study other situations where inhomogeneous synaptic properties interplay with STDP.

In our learning neuronal system, both dendritic delays and PSP time constants were assumed to increase with the distance to the soma (see Figure 5E), which is supported by experimental observations (Miles, Toth, Gulyas, Hajos, & Freund, 1996). Assuming a passive dendrite, we have shown that the temporal range assumed here is biologically plausible. The cable equation, 3.9, nevertheless relies on strong assumptions, such as an infinite dendrite of constant diameter (Koch & Segev, 1998). In particular, because dendrites become thinner toward their extremities, the length constant decreases, meaning that in a more realistic configuration shorter dendritic lengths than those calculated in Figure 3D may be sufficient. We also did not take into account any specific ion channel or neurotransmitter. The solution presented in section 3.4 is not unique, so alternative location-dependent distributions of synaptic properties may also be adequate. Nevertheless, the general upshot is that more proximal and more distal synapses specialize to high and low frequencies, respectively, as illustrated in Figures 8A and 8B. This relationship between the functional response of synapses (their corresponding BMF) and their location on the dendrites is the main prediction of our theory. It contrasts with experimental observations in the auditory cortex that synapses responding to frequencies above 200 Hz (i.e., related to the postcochlear tonotopic organization) do not exhibit a clustered organization on dendrites (Chen, Leischner, Rochefort, Nelken, & Konnerth, 2011).

For the postsynaptic neuron to tune its BMF to the stimulating frequency, axonal delays on the presynaptic side must be adjusted to the dendritic delays and PSP time constants on the postsynaptic side, as explained in section 3.3. (If, instead, we only require the postsynaptic neuron to specialize to some BMF, weaker constraints that potentiate clustered synapses on the dendritic tree may be sufficient.) It remains to be understood which mechanisms may lead to this coordination of dendritic and axonal delays. A possible candidate is the experimentally observed modification of delays during plasticity (Lin & Faber, 2002); a full study of the combined dynamics is left to subsequent work. Another interpretation of the axonal delay is a phase shift of the oscillatory signal at the inhibitory synapses with respect to that at the excitatory synapses. For our choice of parameters, these delays correspond to small phase lags of the inhibitory inputs that depend on the training frequency, as illustrated in Figure 8C. Finally, similar results can be obtained using our theory for a bimodal “asymmetric” STDP learning window, that is, potentiation when the effect of the presynaptic spike precedes that of the postsynaptic spike and depression otherwise, which has also been observed experimentally for inhibitory synapses (Haas et al., 2006). In practice, however, the many opposing weight updates generated by such a window add noise to the synaptic dynamics, which dramatically weakens the weight specialization. We therefore present results only for the symmetric learning window in Figure 1.
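To make the phase-shift interpretation explicit, an axonal delay d translates into a frequency-dependent phase lag of the inhibitory inputs,
\[
\varphi \;=\; 2\pi f_{\mathrm{mod}}\, d \pmod{2\pi},
\]
so that, for a purely illustrative delay of d = 1 ms (not a value taken from this letter), the lag is about 0.1\pi rad (18 degrees) at f_{\mathrm{mod}} = 50 Hz but 0.4\pi rad (72 degrees) at 200 Hz; the same delay therefore corresponds to different phase relationships at different training frequencies.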

A further step would be to obtain a map of neurons similar to the single neuron trained here in order to perform a decomposition of the modulation frequencies conveyed by the input spike trains. By this, we mean a topographically organized specialization such that neighboring neurons have similar BMFs. In this respect, our proposed scheme differs from some other STDP models (Izhikevich & Desai, 2003; Burkitt, Meffin, & Grayden, 2004; Pfister & Gerstner, 2006), whose dynamics also depend on the output frequency of the postsynaptic neuron. In those models, a higher training frequency would systematically be favored because shorter interspike intervals lead to stronger potentiation. This contrasts with our study based on the tuning of synaptic filtering, which allows low frequencies to be learned as well. A more complete model of the auditory pathway in the midbrain would also incorporate details of realistic inputs coming from the cochlea. The cochlea mechanically decomposes sound vibrations into bands of high frequencies, from 0.5 to 20 kHz. Outgoing fibers thus convey high-frequency-specific spike trains that also comprise modulating frequencies (as studied in this letter) and excite specific subregions of the auditory midbrain, thereby forming a tonotopic map. Two types of correlations are thus superimposed in the input spike trains coming from the cochlea, but the correlations related to the high frequencies do not interact with STDP as in our scheme (this would require submillisecond synaptic activation timescales). Our learning scheme therefore appears to be a plausible ingredient of a more elaborate model that produces both BMF and tonotopic neuronal maps. Experiments have shown how the neuronal response to high frequencies is shaped by the maturation of inhibitory circuitry (Dorrn et al., 2010) and modified following specific stimulation protocols (Dahmen, Hartley, & King, 2008). Our model predicts similar changes in the low-frequency range (e.g., shifting the BMF toward a different stimulating frequency).

Beyond the neuronal identification of amplitude modulations within one cochlear frequency channel, our study has more general implications for brain areas that exhibit periodic spiking activity, such as the cortex and hippocampus. It suggests that inhomogeneous postsynaptic responses and spike-based plasticity form a natural substrate for neurons to extract frequencies conveyed by spike trains. Along similar lines, a neuron was successfully trained by triplet STDP to detect stimulus velocities of moving bars in a model visual field (Gjorgjieva, Clopath, Audet, & Pfister, 2011). The speed of the bar introduces delays in the inputs, and the synaptic competition is equivalent to selecting delays so as to shape the global postsynaptic response with a preference for a given velocity. In summary, the location-dependent properties of somatic PSPs in Figure 3, which can also be supplemented by location-dependent STDP as observed for excitatory synapses (Froemke, Poo, & Dan, 2005; Letzkus et al., 2006; Froemke, Letzkus, Kampa, Hang, & Stuart, 2010), appear to provide a genuine learning device for extracting complex spiking information at a short timescale. Nonlinearities in PSP integration (London & Häusser, 2005) may play a further role by shaping the input-to-neuron correlations. Altogether, this theoretical work stresses the importance of incorporating morphological features of dendrites for learning and computation with spikes.

Acknowledgments

We are grateful to the reviewers for comments that greatly improved the manuscript. M. B. benefited from a stay at Tomoki Fukai's Lab for Neural Circuit Theory (Riken Brain Science Institute, Japan).

References

Appleby, P. A., & Elliott, T. (2006). Stable competitive dynamics emerge from multispike interactions in a stochastic model of spike-timing-dependent plasticity. Neural Comput., 18, 2414–2464.
Banerjee, A., & Ellender, T. J. (2009). Oscillations in the developing cortex: A mechanism for establishing and synchronizing an early network? J. Neurosci., 29, 15029–15030.
Bender, K. J., & Trussell, L. O. (2011). Synaptic plasticity in inhibitory neurons of the auditory brainstem. Neuropharmacology, 60, 774–779.
Bi, G. Q., & Poo, M. M. (1998). Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci., 18, 10464–10472.
Bidet-Caulet, A., Fischer, C., Besle, J., Aguera, P.-E., Giard, M.-H., & Bertrand, O. (2007). Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex. J. Neurosci., 27, 9252–9261.
Bienenstock, E. L., Cooper, L. N., & Munro, P. W. (1982). Theory for the development of neuron selectivity: Orientation specificity and binocular interaction in visual cortex. J. Neurosci., 2, 32–48.
Bürck, M., & van Hemmen, J. L. (2009). Neuronal identification of signal periodicity by balanced inhibition. Biol. Cybern., 100, 261–270.
Burkitt, A. N., Meffin, H., & Grayden, D. B. (2004). Spike-timing-dependent plasticity: The relationship to rate-based learning for models with weight dynamics determined by a stable fixed point. Neural Comput., 16, 885–940.
Caporale, N., & Dan, Y. (2008). Spike-timing-dependent plasticity: A Hebbian learning rule. Annu. Rev. Neurosci., 31, 24–46.
Cateau, H., Kitano, K., & Fukai, T. (2008). Interplay between a phase response curve and spike-timing-dependent plasticity leading to wireless clustering. Phys. Rev. E, 77, 051909.
Chen, X., Leischner, U., Rochefort, N. L., Nelken, I., & Konnerth, A. (2011). Functional mapping of single spines in cortical neurons in vivo. Nature, 475, 501–505.
Dahmen, J. C., Hartley, D. E., & King, A. J. (2008). Stimulus-timing-dependent plasticity of cortical frequency representation. J. Neurosci., 28, 13629–13639.
Dan, Y., & Poo, M. M. (2006). Spike timing-dependent plasticity: From synapse to perception. Physiol. Rev., 86, 1033–1048.
Dorrn, A. L., Yuan, K., Barker, A. J., Schreiner, C. E., & Froemke, R. C. (2010). Developmental sensory experience balances cortical excitation and inhibition. Nature, 465, 932–936.
Eggert, J., & van Hemmen, J. L. (2002). Activity-gating attentional networks. In J. C. E. Domany & J. L. van Hemmen (Eds.), Models of neural networks IV: Early vision and attention (pp. 277–308). New York: Springer.
Friedel, P., Bürck, M., & van Hemmen, J. L. (2007). Neuronal identification of acoustic signal periodicity. Biol. Cybern., 97, 247–260.
Fries, P., Reynolds, J. H., Rorie, A. E., & Desimone, R. (2001). Modulation of oscillatory neuronal synchronization by selective visual attention. Science, 291, 1560–1563.
Froemke, R. C., & Dan, Y. (2002). Spike-timing-dependent synaptic modification induced by natural spike trains. Nature, 416, 433–438.
Froemke, R. C., Poo, M. M., & Dan, Y. (2005). Spike-timing-dependent plasticity depends on dendritic location. Nature, 434, 221–225.
Froemke, R. C., Letzkus, J., Kampa, B. M., Hang, G. B., & Stuart, G. J. (2010). Dendritic synapse location and neocortical spike-timing-dependent plasticity. Front. Syn. Neurosci., 2, 29.
Gerstner, W., Kempter, R., van Hemmen, J. L., & Wagner, H. (1996). A neuronal learning rule for sub-millisecond temporal coding. Nature, 383, 76–78.
Gilson, M., Burkitt, A. N., Grayden, D. B., Thomas, D. A., & van Hemmen, J. L. (2009). Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks I: Input selectivity—strengthening correlated input pathways. Biol. Cybern., 101, 81–102.
Gilson, M., Burkitt, A. N., Grayden, D. B., Thomas, D. A., & van Hemmen, J. L. (2010). Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks V: Self-organization schemes and weight dependence. Biol. Cybern., 103, 365–386.
Gilson, M., & Fukai, T. (2011). Stability versus neuronal specialization for STDP: Long-tail weight distributions solve the dilemma. PLoS ONE, 6, e25339.
Gjorgjieva, J., Clopath, C., Audet, J., & Pfister, J.-P. (2011). A triplet spike-timing-dependent plasticity model generalizes the Bienenstock-Cooper-Munro rule to higher-order spatiotemporal correlations. Proc. Natl. Acad. Sci. USA, 108, 19383–19388.
Gonzalez Andino, S. L. G., Michel, C. M., Thut, G., Landis, T., & Grave de Peralta, R. (2005). Prediction of response speed by anticipatory high-frequency (gamma band) oscillations in the human brain. Hum. Brain Mapp., 24, 50–58.
Gray, C. M., König, P., Engel, A. K., & Singer, W. (1989). Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature, 338, 334–337.
Grigorév, D., & Bibikov, N. (2010). Model of a neuron trained to extract periodicity. Acoust. Phys., 56, 720–728.
Grothe, B. (1994). Interaction of excitation and inhibition in processing of pure tone and amplitude-modulated stimuli in the medial superior olive of the mustached bat. J. Neurophysiol., 71, 706–721.
Gütig, R., Aharonov, R., Rotter, S., & Sompolinsky, H. (2003). Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. J. Neurosci., 23, 3697–3714.
Haas, J., Nowotny, T., & Abarbanel, H. (2006). Spike-timing-dependent plasticity of inhibitory synapses in the entorhinal cortex. J. Neurophysiol., 96, 3305–3313.
Highley, M. J., & Contreras, D. (2006). Balanced excitation and inhibition determine spike timing during frequency adaptation. J. Neurosci., 26, 448–457.
Izhikevich, E. M. (2001). Resonate-and-fire neurons. Neural Networks, 14, 883–894.
Izhikevich, E. M., & Desai, N. S. (2003). Relating STDP to BCM. Neural Comput., 15, 1511–1523.
Joris, P. X., Schreiner, C. E., & Rees, A. (2004). Neural processing of amplitude-modulated sounds. Physiol. Rev., 84, 541–577.
Kempter, R., Gerstner, W., & van Hemmen, J. L. (1999). Hebbian learning and spiking neurons. Phys. Rev. E, 59, 4498–4514.
Knudsen, E. I. (2007). Fundamental components of attention. Annu. Rev. Neurosci., 30, 57–78.
Koch, C., & Segev, I. (1998). Methods in neural modeling: From ions to networks (2nd ed.). Cambridge, MA: MIT Press.
Krebs, B., Lesica, N. A., & Grothe, B. (2008). The representation of amplitude modulations in the mammalian auditory midbrain. J. Neurophysiol., 100, 1602–1609.
Krishna, B., & Semple, M. (2000). Auditory temporal processing: Response to sinusoidally amplitude-modulated tones in the inferior colliculus. J. Neurophysiol., 84, 255–273.
Langner, G. (1992). Periodicity coding in the auditory system. Hearing Res., 60, 115–142.
Langner, G., & Schreiner, C. (1988). Periodicity coding in the inferior colliculus of the cat. I. Neuronal mechanisms. J. Neurophysiol., 60, 1799–1822.
Large, E. W., & Crawford, J. D. (2002). Auditory temporal computation: Interval selectivity based on post-inhibitory rebound. J. Comput. Neurosci., 13, 125–142.
Las, L., Stern, E. A., & Nelken, I. (2005). Representation of tone in fluctuating maskers in the ascending auditory system. J. Neurosci., 25, 1503–1513.
Leibold, C., & van Hemmen, J. L. (2005). Spiking neurons learning phase delays: How mammals may develop auditory time-difference sensitivity. Phys. Rev. Lett., 94, 168102.
Leibold, C., Kempter, R., & van Hemmen, J. L. (2002). How spiking neurons give rise to a temporal-feature map: From synaptic plasticity to axonal selection. Phys. Rev. E, 65, 051915.
Letzkus, J., Kampa, B., & Stuart, G. (2006). Learning rules for spike timing-dependent plasticity depend on dendritic synapse location. J. Neurosci., 26, 10420–10429.
Licklider, J. (1951). A duplex theory of pitch perception. Experientia, 7, 128–134.
Lin, J. W., & Faber, D. S. (2002). Modulation of synaptic delay during synaptic plasticity. Trends Neurosci., 25, 449–455.
London, M., & Häusser, M. (2005). Dendritic computation. Annu. Rev. Neurosci., 28, 503–532.
Lubenov, E. V., & Siapas, A. G. (2008). Decoupling through synchrony in neuronal circuits with propagation delays. Neuron, 58, 118–131.
Markram, H., Lübke, J., Frotscher, M., & Sakmann, B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275, 213–215.
Masquelier, T., Guyonneau, R., & Thorpe, S. J. (2008). Spike-timing-dependent plasticity finds the start of repeating patterns in continuous spike trains. PLoS One, 3, e1377.
Meddis, R., & O'Mard, L. (2006). Virtual pitch in a computational physiological model. J. Acoust. Soc. Am., 120, 3861–3869.
Miles, R., Toth, K., Gulyas, A. I., Hajos, N., & Freund, T. F. (1996). Differences between somatic and dendritic inhibition in the hippocampus. Neuron, 16, 815–823.
Morrison, A., Diesmann, M., & Gerstner, W. (2008). Phenomenological models of synaptic plasticity based on spike timing. Biol. Cybern., 98, 459–478.
Nelson, P. C., & Carney, L. H. (2004). A phenomenological model of peripheral and central neural responses to amplitude-modulated tones. J. Acoust. Soc. Am., 116, 2173–2186.
Pfister, J.-P., & Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. J. Neurosci., 26, 9673–9682.
Pfister, J.-P., & Tass, P. A. (2010). STDP in oscillatory recurrent networks: Theoretical conditions for desynchronization and applications to deep brain stimulation. Front. Comput. Neurosci., 4, 22.
Rees, A., & Møller, A. (1983). Responses of neurons in the inferior colliculus of the rat to AM and FM tones. Hearing Res., 10, 301–330.
Rees, A., & Møller, A. (1987). Stimulus properties influencing the responses of inferior colliculus neurons to amplitude-modulated sounds. Hearing Res., 27, 129–143.
Rees, A., & Palmer, A. (1989). Neuronal responses to amplitude-modulated and pure-tone stimuli in the guinea pig inferior colliculus, and their modification by broadband noise. J. Acoust. Soc. Am., 85, 1978–1994.
Roberts, P., & Bell, C. (2000). Computational consequences of temporally asymmetric learning rules: II. Sensory image cancellation. J. Comput. Neurosci., 9, 67–83.
Roelfsema, P. R. (2006). Cortical algorithms for perceptual grouping. Annu. Rev. Neurosci., 29, 203–227.
Scarpetta, S., Zhaoping, L., & Hertz, J. (2002). Hebbian imprinting and retrieval in oscillatory neural networks. Neural Comput., 14, 2371–2396.
Schreiner, C., & Langner, G. (1988). Periodicity coding in the inferior colliculus of the cat. II. Topographical organization. J. Neurophysiol., 60, 1823–1840.
Sejnowski, T. J. (1977). Storing covariance with nonlinearly interacting neurons. J. Math. Biol., 4, 303–321.
Senn, W. (2002). Beyond spike timing: The role of nonlinear plasticity and unreliable synapses. Biol. Cybern., 87, 344–355.
Senn, W., Schneider, M., & Ruf, B. (2002). Activity-dependent development of axonal and dendritic delays, or, why synaptic transmission should be unreliable. Neural Comput., 14, 583–619.
Shinn-Cunningham, B. G., & Wang, D. (2008). Influences of auditory object formation on phonemic restoration. J. Acoust. Soc. Am., 123, 295–301.
Sjöström, P. J., Turrigiano, G. G., & Nelson, S. B. (2001). Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron, 32, 1149–1164.
Song, S., & Abbott, L. F. (2001). Cortical development and remapping through spike timing-dependent plasticity. Neuron, 32, 339–350.
Song, S., Miller, K. D., & Abbott, L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci., 3, 919–926.
Speck-Hergenröder, J., & Barth, F. (1987). Tuning of vibration sensitive neurons in the central nervous system of a wandering spider, Cupiennius salei Keys. J. Comp. Physiol. A, 160, 467–475.
Tzounopoulos, T., & Kraus, N. (2009). Learning to encode timing: Mechanisms of plasticity in the auditory brainstem. Neuron, 62, 463–469.
Tzounopoulos, T., Kim, Y., Oertel, D., & Trussell, L. O. (2004). Cell-specific, spike timing-dependent plasticities in the dorsal cochlear nucleus. Nat. Neurosci., 7, 719–725.
van Hemmen, J. L. (2001). Theory of synaptic plasticity. In F. Moss & S. Gielen (Eds.), Handbook of biological physics, Vol. 4. Neuro-informatics, neural modelling (pp. 771–823). Amsterdam: Elsevier.
Vogels, T., Sprekeler, H., Zenke, F., Clopath, C., & Gerstner, W. (2011). Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science, 334, 1569–1573.
Woodin, M. A., Ganguly, K., & Poo, M. M. (2003). Coincident pre- and postsynaptic activity modifies GABAergic synapses by postsynaptic changes in Cl- transporter activity. Neuron, 39, 807–820.

Author notes

Matthieu Gilson and Moritz Bürck contributed equally to this letter.