Terry Elliott
1–20 of 22 results
Neural Computation (2020) 32 (6): 1069–1143.
Published: 01 June 2020
First Passage Time Memory Lifetimes for Multistate, Filter-Based Synapses
Abstract
Models of associative memory with discrete state synapses learn new memories by forgetting old ones. In contrast to non-integrative models of synaptic plasticity, models with integrative, filter-based synapses exhibit an initial rise in the fidelity of recall of stored memories. This rise to a peak is driven by a transient process and is then followed by a return to equilibrium. In a series of papers, we have employed a first passage time (FPT) approach to define and study memory lifetimes, incrementally developing our methods, from both simple and complex binary-strength synapses to simple multistate synapses. Here, we complete this work by analyzing FPT memory lifetimes in multistate, filter-based synapses. To achieve this, we integrate out the internal filter states so that we can work with transitions only in synaptic strength. We then generalize results on polysynaptic generating functions from binary strength to multistate synapses, allowing us to examine the dynamics of synaptic strength changes in an ensemble of synapses rather than just a single synapse. To derive analytical results for FPT memory lifetimes, we partition the synaptic dynamics into two distinct phases: the first, pre-peak phase studied with a drift-only approximation, and the second, post-peak phase studied with approximations to the full strength transition probabilities. These approximations capture the underlying dynamics very well, as demonstrated by the extremely good agreement between results obtained by simulating our model and results obtained from the Fokker-Planck or integral equation approaches to FPT processes.
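As a rough, illustrative companion to the first passage time (FPT) definition used above, the following Monte Carlo sketch simulates an ensemble of bounded, multistate synapses whose strength steps are gated by an internal filter and records when the tracked memory's perceptron signal first falls below threshold. It is not the article's analytical treatment, and the ladder size, filter threshold, induction statistics, and firing threshold are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values from the article)
N, Q, T = 1000, 5, 4          # synapses, strength states per synapse, filter threshold
theta = 0.0                   # perceptron firing threshold
n_trials, t_max = 100, 5000

def fpt_one_trial():
    xi = rng.choice([-1, 1], size=N)            # stored (tracked) memory pattern
    s = np.where(xi > 0, Q - 1, 0)              # encode the memory at the strength extremes
    f = np.zeros(N, dtype=int)                  # internal filter state of each synapse
    values = np.linspace(-1.0, 1.0, Q)          # strength value attached to each ladder state
    for t in range(1, t_max + 1):
        induction = rng.choice([-1, 1], size=N) # ongoing storage of later memories
        f += induction
        up, down = f >= T, f <= -T              # filter threshold crossed
        s[up] = np.minimum(s[up] + 1, Q - 1)    # express one potentiation step
        s[down] = np.maximum(s[down] - 1, 0)    # express one depression step
        f[up | down] = 0                        # reset the filter after expression
        h = xi @ values[s] / np.sqrt(N)         # tracked memory's perceptron activation
        if h < theta:
            return t                            # first passage below the firing threshold
    return t_max

lifetimes = [fpt_one_trial() for _ in range(n_trials)]
print("estimated MFPT memory lifetime:", np.mean(lifetimes), "storage steps")
```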
Neural Computation (2019) 31 (11): 2212–2251.
Published: 01 November 2019
Dynamic Integrative Synaptic Plasticity Explains the Spacing Effect in the Transition from Short- to Long-Term Memory
Abstract
Repeated stimuli that are spaced apart in time promote the transition from short- to long-term memory, while massing repetitions together does not. Previously, we showed that a model of integrative synaptic plasticity, in which plasticity induction signals are integrated by a low-pass filter before plasticity is expressed, gives rise to a natural timescale at which to repeat stimuli, hinting at a partial account of this spacing effect. The account was only partial because the important role of neuromodulation was not considered. We now show that by extending the model to allow dynamic integrative synaptic plasticity, the model permits synapses to robustly discriminate between spaced and massed repetition protocols, suppressing the response to massed stimuli while maintaining that to spaced stimuli. This is achieved by dynamically coupling the filter decay rate to neuromodulatory signaling in a very simple model of the signaling cascades downstream from cAMP production. In particular, the model's parameters may be interpreted as corresponding to the duration and amplitude of the waves of activity in the MAPK pathway. We identify choices of parameters and repetition times for stimuli in this model that optimize the ability of synapses to discriminate between spaced and massed repetition protocols. The model is very robust to reasonable changes around these optimal parameters and times, but for large changes in parameters, the model predicts that massed and spaced stimuli cannot be distinguished or that the responses to both patterns are suppressed. A model of dynamic integrative synaptic plasticity therefore explains the spacing effect under normal conditions and also predicts its breakdown under abnormal conditions.
Neural Computation (2019) 31 (1): 8–67.
Published: 01 January 2019
First Passage Time Memory Lifetimes for Simple, Multistate Synapses: Beyond the Eigenvector Requirement
Abstract
Models of associative memory with discrete-strength synapses are palimpsests, learning new memories by forgetting old ones. Memory lifetimes can be defined by the mean first passage time (MFPT) for a perceptron's activation to fall below firing threshold. By imposing the condition that the vector of possible strengths available to a synapse is a left eigenvector of the stochastic matrix governing transitions in strength, we previously derived results for MFPTs and first passage time (FPT) distributions in models with simple, multistate synapses. This condition permits jump moments to be computed via a 1-dimensional Fokker-Planck approach. Here, we study memory lifetimes in the absence of this condition. To do so, we must introduce additional variables, including the perceptron activation, that parameterize synaptic configurations, permitting Markovian dynamics in these variables to be formulated. FPT problems in these variables require solving multidimensional partial differential or integral equations. However, the FPT dynamics can be analytically well approximated by focusing on the slowest eigenmode in this higher-dimensional space. We may also obtain a much better approximation by restricting to the two dominant variables in this space, the restriction making numerical methods tractable. Analytical and numerical methods are in excellent agreement with simulation data, validating our methods. These methods prepare the ground for the study of FPT memory lifetimes with complex rather than simple, multistate synapses.
Neural Computation (2017) 29 (12): 3219–3259.
Published: 01 December 2017
First Passage Time Memory Lifetimes for Simple, Multistate Synapses
Abstract
Memory models based on synapses with discrete and bounded strengths store new memories by forgetting old ones. Memory lifetimes in such memory systems may be defined in a variety of ways. A mean first passage time (MFPT) definition overcomes much of the arbitrariness and many of the problems associated with the more usual signal-to-noise ratio (SNR) definition. We have previously computed MFPT lifetimes for simple, binary-strength synapses that lack internal, plasticity-related states. In simulation we have also seen that for multistate synapses, optimality conditions based on SNR lifetimes are absent with MFPT lifetimes, suggesting that such conditions may be artifactual. Here we extend our earlier work by computing the entire first passage time (FPT) distribution for simple, multistate synapses, from which all statistics, including the MFPT lifetime, may be extracted. For this, we develop a Fokker-Planck equation using the jump moments for perceptron activation. Two models are considered that satisfy a particular eigenvector condition that this approach requires. In these models, MFPT lifetimes do not exhibit optimality conditions, while in one but not the other, SNR lifetimes do exhibit optimality. Thus, not only are such optimality conditions artifacts of the SNR approach, but they are also strongly model dependent. By examining the variance in the FPT distribution, we may identify regions in which memory storage is subject to high variability, although MFPT lifetimes are nevertheless robustly positive. In such regions, SNR lifetimes are typically (defined to be) zero. FPT-defined memory lifetimes therefore provide an analytically superior approach and also have the virtue of being directly related to a neuron's firing properties.
Neural Computation (2017) 29 (6): 1468–1527.
Published: 01 June 2017
Mean First Passage Memory Lifetimes by Reducing Complex Synapses to Simple Synapses
Abstract
Memory models that store new memories by forgetting old ones have memory lifetimes that are rather short and grow only logarithmically in the number of synapses. Attempts to overcome these deficits include “complex” models of synaptic plasticity in which synapses possess internal states governing the expression of synaptic plasticity. Integrate-and-express, filter-based models of synaptic plasticity propose that synapses act as low-pass filters, integrating plasticity induction signals before expressing synaptic plasticity. Such mechanisms enhance memory lifetimes, leading to an initial rise in the memory signal that is in radical contrast to other related, but nonintegrative, memory models. Because of the complexity of models with internal synaptic states, however, their dynamics can be more difficult to extract compared to “simple” models that lack internal states. Here, we show that by focusing only on processes that lead to changes in synaptic strength, we can integrate out internal synaptic states and effectively reduce complex synapses to simple synapses. For binary-strength synapses, these simplified dynamics then allow us to work directly in the transitions in perceptron activation induced by memory storage rather than in the underlying transitions in synaptic configurations. This permits us to write down master and Fokker-Planck equations that may be simplified under certain, well-defined approximations. These methods allow us to see that memory based on synaptic filters can be viewed as an initial transient that leads to memory signal rise, followed by the emergence of Ornstein-Uhlenbeck-like dynamics that return the system to equilibrium. We may use this approach to compute mean first passage time–defined memory lifetimes for complex models of memory storage.
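The reduction idea can be pictured with a toy Markov chain (a sketch under simplifying assumptions, not the article's derivation): take a joint chain over (strength, internal filter) configurations and marginalize the filter states, here by weighting with their stationary occupancies given each strength, to obtain an effective transition matrix over synaptic strength alone. The binary-strength synapse with a threshold-T filter below is an assumed example.

```python
import numpy as np

T = 4                     # filter threshold (assumption)
F = 2 * T - 1             # internal filter values -(T-1), ..., +(T-1)
S = 2                     # binary synaptic strength: 0 = weak, 1 = strong
q = 0.5                   # probability that an induction signal is potentiating (assumption)

def idx(s, f):
    """Index of the joint state (strength s, filter value f)."""
    return s * F + (f + T - 1)

P = np.zeros((S * F, S * F))
for s in range(S):
    for f in range(-(T - 1), T):
        # Potentiating induction: filter steps up; reaching +T expresses a strength step
        if f + 1 == T:
            P[idx(s, f), idx(min(s + 1, S - 1), 0)] += q
        else:
            P[idx(s, f), idx(s, f + 1)] += q
        # Depressing induction: filter steps down; reaching -T expresses a strength step
        if f - 1 == -T:
            P[idx(s, f), idx(max(s - 1, 0), 0)] += 1 - q
        else:
            P[idx(s, f), idx(s, f - 1)] += 1 - q

# Stationary distribution of the joint chain (left eigenvector for eigenvalue 1)
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# Integrate out the filter: effective strength-to-strength transition probabilities
P_eff = np.zeros((S, S))
for s in range(S):
    w = np.array([pi[idx(s, f)] for f in range(-(T - 1), T)])
    w /= w.sum()
    for k, f in enumerate(range(-(T - 1), T)):
        for s2 in range(S):
            P_eff[s, s2] += w[k] * sum(P[idx(s, f), idx(s2, f2)]
                                       for f2 in range(-(T - 1), T))

print("effective strength transition matrix:\n", P_eff)
```

The lumping is exact only when the filter occupancies sit at their stationary values, so this is a heuristic illustration rather than the reduction performed in the article.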
Neural Computation (2016) 28 (11): 2393–2460.
Published: 01 November 2016
Variations on the Theme of Synaptic Filtering: A Comparison of Integrate-and-Express Models of Synaptic Plasticity for Memory Lifetimes
Abstract
Integrate-and-express models of synaptic plasticity propose that synapses integrate plasticity induction signals before expressing synaptic plasticity. By discerning trends in their induction signals, synapses can control destabilizing fluctuations in synaptic strength. In a feedforward perceptron framework with binary-strength synapses for associative memory storage, we have previously shown that such a filter-based model outperforms other, nonintegrative, “cascade”-type models of memory storage in most regions of biologically relevant parameter space. Here, we consider some natural extensions of our earlier filter model, including one specifically tailored to binary-strength synapses and one that demands a fixed, consecutive number of same-type induction signals rather than merely an excess before expressing synaptic plasticity. With these extensions, we show that filter-based models outperform nonintegrative models in all regions of biologically relevant parameter space except for a small sliver in which all models encode memories only weakly. In this sliver, which model is superior depends on the metric used to gauge memory lifetimes (whether a signal-to-noise ratio or a mean first passage time). After comparing and contrasting these various filter models, we discuss the multiple mechanisms and timescales that underlie both synaptic plasticity and memory phenomena and suggest that multiple, different filtering mechanisms may operate at single synapses.
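The difference between the two filter variants mentioned above can be made concrete with a small simulation (an illustrative sketch; the threshold and induction statistics are assumptions): an "excess" filter expresses plasticity once one type of induction signal outnumbers the other by T, whereas a "run" filter demands T consecutive signals of the same type.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5          # filter threshold / required run length (assumption)
p = 0.5        # probability that an induction signal is potentiating (assumption)

def time_to_express_excess():
    acc = 0                                   # signed accumulator of +/-1 signals
    for t in range(1, 10**6):
        acc += 1 if rng.random() < p else -1
        if abs(acc) >= T:                     # excess of T reached: express plasticity
            return t

def time_to_express_run():
    run, last = 0, 0                          # length and sign of the current run
    for t in range(1, 10**6):
        sig = 1 if rng.random() < p else -1
        run = run + 1 if sig == last else 1
        last = sig
        if run >= T:                          # T consecutive same-type signals: express
            return t

n = 5000
print("mean steps to express, excess filter:",
      np.mean([time_to_express_excess() for _ in range(n)]))
print("mean steps to express, run filter:   ",
      np.mean([time_to_express_run() for _ in range(n)]))
```

For unbiased signals the excess filter's mean expression time is the standard random-walk result of T**2 steps, which gives a quick sanity check on the simulation.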
Neural Computation (2016) 28 (9): 1927–1984.
Published: 01 September 2016
The Enhanced Rise and Delayed Fall of Memory in a Model of Synaptic Integration: Extension to Discrete State Synapses
Abstract
Integrate-and-express models of synaptic plasticity propose that synapses may act as low-pass filters, integrating synaptic plasticity induction signals in order to discern trends before expressing synaptic plasticity. We have previously shown that synaptic filtering strongly controls destabilizing fluctuations in developmental models. When applied to palimpsest memory systems that learn new memories by forgetting old ones, we have also shown that with binary-strength synapses, integrative synapses lead to an initial memory signal rise before its fall back to equilibrium. Such an initial rise is in dramatic contrast to nonintegrative synapses, in which the memory signal falls monotonically. We now extend our earlier analysis of palimpsest memories with synaptic filters to consider the more general case of discrete state, multilevel synapses. We derive exact results for the memory signal dynamics and then consider various simplifying approximations. We show that multilevel synapses enhance the initial rise in the memory signal and then delay its subsequent fall by inducing a plateau-like region in the memory signal. Such dynamics significantly increase memory lifetimes, defined by a signal-to-noise ratio (SNR). We derive expressions for optimal choices of synaptic parameters (filter size, number of strength states, number of synapses) that maximize SNR memory lifetimes. However, we find that with memory lifetimes defined via mean-first-passage times, such optimality conditions do not exist, suggesting that optimality may be an artifact of SNRs.
Neural Computation (2014) 26 (9): 1873–1923.
Published: 01 September 2014
Memory Nearly on a Spring: A Mean First Passage Time Approach to Memory Lifetimes
Abstract
We study memory lifetimes in a perceptron-based framework with binary synapses, using the mean first passage time for the perceptron's total input to fall below firing threshold to define memory lifetimes. Working with the simplest memory-related model of synaptic plasticity, we may obtain exact results for memory lifetimes or, working in the continuum limit, good analytical approximations that afford either much qualitative insight or extremely good quantitative agreement. In one particular limit, we find that memory dynamics reduce to the well-understood Ornstein-Uhlenbeck process. We show that asymptotically, the lifetimes of memories grow logarithmically in the number of synapses when the perceptron's firing threshold is zero, reproducing standard results from signal-to-noise ratio analyses. However, this is only an asymptotically valid result, and we show that extending its application outside the range of its validity leads to a massive overestimate of the minimum number of synapses required for successful memory encoding. In the case that the perceptron's firing threshold is positive, we find the remarkable result that memory lifetimes are strictly bounded from above. Asymptotically, the dependence of memory lifetimes on the number of synapses drops out entirely, and this asymptotic result provides a strict upper bound on memory lifetimes away from this asymptotic regime. The classic logarithmic growth of memory lifetimes in the simplest, palimpsest memories is therefore untypical and nongeneric: memory lifetimes are typically strictly bounded from above.
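For orientation only, the sketch below (all parameters assumed, not taken from the article) illustrates the first passage time definition in the Ornstein-Uhlenbeck limit mentioned above: the memory signal is simulated as an OU process relaxing from its encoded value toward zero, and the lifetime is the first time it falls below the perceptron's firing threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed parameters: relaxation time, noise amplitude, encoded signal, firing threshold
tau, sigma, x0, theta = 50.0, 0.3, 3.0, 0.0
dt, t_max, n_trials = 0.1, 10_000.0, 2000

def first_passage_time():
    x, t = x0, 0.0
    while t < t_max:
        # Euler-Maruyama step for dx = -(x / tau) dt + sigma dW
        x += -(x / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x < theta:
            return t          # memory signal has dropped below the firing threshold
    return t_max

fpts = [first_passage_time() for _ in range(n_trials)]
print("estimated MFPT memory lifetime:", round(np.mean(fpts), 1))
```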
Neural Computation (2014) 26 (9): 1924–1972.
Published: 01 September 2014
Sparseness, Antisparseness and Anything in Between: The Operating Point of a Neuron Determines Its Computational Repertoire
Abstract
A recent model of intrinsic plasticity coupled to Hebbian synaptic plasticity proposes that adaptation of a neuron's threshold and gain in a sigmoidal response function to achieve a sparse, exponential output firing rate distribution facilitates the discovery of heavy-tailed or supergaussian sources in the neuron's inputs. We show that the exponential output distribution is irrelevant to these dynamics and that, furthermore, while sparseness is sufficient, it is not necessary. The intrinsic plasticity mechanism drives the neuron's threshold large and positive, and we prove that in such a regime, the neuron will find supergaussian sources; equally, however, if the threshold is large and negative (an antisparse regime), it will also find supergaussian sources. Away from such extremes, the neuron can also discover subgaussian sources. By examining a neuron with a fixed sigmoidal nonlinearity and considering the synaptic strength fixed-point structure in the two-dimensional parameter space defined by the neuron's threshold and gain, we show that this space is carved up into sub- and supergaussian-input-finding regimes, possibly with regimes of simultaneous stability of sub- and supergaussian sources or regimes of instability of all sources; a single gaussian source may also be stabilized by the presence of a nongaussian source. A neuron's operating point (essentially its threshold and gain coupled with its input statistics) therefore critically determines its computational repertoire. Intrinsic plasticity mechanisms induce trajectories in this parameter space but do not fundamentally modify it. Unless the trajectories cross critical boundaries in this space, intrinsic plasticity is irrelevant and the neuron's nonlinearity may be frozen with identical receptive field refinement dynamics.
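To picture what "operating point" means here, the short sketch below (an assumed sigmoidal response driven by a generic gaussian input; it does not implement the article's plasticity dynamics) shows how the threshold moves a neuron's output distribution between sparse and antisparse regimes.

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal(100_000)        # assumed gaussian total synaptic input

def output_stats(theta, gain=3.0):
    y = 1.0 / (1.0 + np.exp(-gain * (u - theta)))   # sigmoidal response function
    return y.mean(), np.mean(y < 0.05), np.mean(y > 0.95)

for theta in (+3.0, 0.0, -3.0):
    mean_y, frac_silent, frac_saturated = output_stats(theta)
    print(f"threshold {theta:+.1f}: mean output {mean_y:.3f}, "
          f"silent {frac_silent:.3f}, saturated {frac_saturated:.3f}")
```

A large positive threshold gives mostly silent outputs (sparse), a large negative threshold mostly saturated outputs (antisparse), and intermediate thresholds sit in between.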
Neural Computation (2012) 24 (10): 2604–2654.
Published: 01 October 2012
The Rise and Fall of Memory in a Model of Synaptic Integration
Abstract
Plasticity-inducing stimuli must typically be presented many times before synaptic plasticity is expressed, perhaps because induction signals gradually accumulate before overt strength changes occur. We consider memory dynamics in a mathematical model with synapses that integrate plasticity induction signals before expressing plasticity. We find that the memory trace initially rises before reaching a maximum and then falling. The memory signal dissociates into separate oblivescence and reminiscence components, with reminiscence initially dominating recall. In radical contrast, related but nonintegrative models exhibit only a highly problematic oblivescence. Synaptic integration mechanisms possess natural timescales, depending on the statistics of the induction signals. Together with neuromodulation, these timescales may therefore also begin to provide a natural account of the well-known spacing effect in the transition to late-phase plasticity. Finally, we propose experiments that could distinguish between integrative and nonintegrative synapses. Such experiments should further elucidate the synaptic signal processing mechanisms postulated by our model.
Neural Computation (2012) 24 (2): 455–522.
Published: 01 February 2012
Cross-Talk Induces Bifurcations in Nonlinear Models of Synaptic Plasticity
Abstract
Linear models of synaptic plasticity provide a useful starting-point for examining the dynamics of neuronal development and learning, but their inherent problems are well known. Models of synaptic plasticity that embrace the demands of biological realism are therefore typically nonlinear. Viewed from a more abstract perspective, nonlinear models of synaptic plasticity are a subset of nonlinear dynamical systems. As such, they may therefore exhibit bifurcations under the variation of control parameters, including noise and errors in synaptic updates. One source of noise or error is the cross-talk that occurs during otherwise Hebbian plasticity. Under cross-talk, stimulation of a set of synapses can induce or modify plasticity in adjacent, unstimulated synapses. Here, we analyze two nonlinear models of developmental synaptic plasticity and a model of independent component analysis in the presence of a simple model of cross-talk. We show that cross-talk does indeed induce bifurcations in these models, entirely destroying their ability to acquire either developmentally or learning-related patterns of fixed points. Importantly, the critical level of cross-talk required to induce bifurcations in these models is very sensitive to the statistics of the afferents’ activities and the number of afferents synapsing on a postsynaptic cell. In particular, the critical level can be made arbitrarily small. Because bifurcations are inevitable in nonlinear models, our results likely apply to many nonlinear models of synaptic plasticity, although the precise details vary by model. Hence, many nonlinear models of synaptic plasticity are potentially fatally compromised by the toxic influence of cross-talk and other sources of noise and errors more generally. We conclude by arguing that biologically realistic models of synaptic plasticity must be robust against noise-induced bifurcations and that biological systems may have evolved strategies to circumvent their possible dangers.
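One generic way to picture cross-talk (an assumed form, not necessarily the article's exact model) is as a mixing matrix applied to the vector of intended Hebbian updates, so that a fraction eps of each synapse's update leaks uniformly onto the others. The sketch below applies such mixing to Oja's rule, used purely as a stand-in nonlinear plasticity rule.

```python
import numpy as np

rng = np.random.default_rng(4)
n, eta, eps, steps = 10, 0.02, 0.2, 50_000   # synapses, learning rate, cross-talk, updates

# Assumed input statistics: independent channels with unequal variances
sds = np.linspace(1.0, 2.0, n)

# Cross-talk: a fraction eps of every update spreads evenly over the other synapses
E = (1 - eps) * np.eye(n) + (eps / (n - 1)) * (np.ones((n, n)) - np.eye(n))

def train(mix):
    w = rng.standard_normal(n) * 0.1
    for _ in range(steps):
        x = sds * rng.standard_normal(n)
        y = w @ x
        dw = eta * y * (x - y * w)           # Oja's rule (stand-in nonlinear rule)
        w += dw if mix is None else mix @ dw
    return w

print("weights, no cross-talk  :", np.round(train(None), 2))
print("weights, with cross-talk:", np.round(train(E), 2))
```

Without cross-talk the weights settle onto the largest-variance input channel; mixing the updates perturbs that fixed point, and the article above analyzes when such perturbations become outright bifurcations.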
Neural Computation (2011) 23 (3): 674–734.
Published: 01 March 2011
Stability Against Fluctuations: Scaling, Bifurcations, and Spontaneous Symmetry Breaking in Stochastic Models of Synaptic Plasticity
Abstract
In stochastic models of synaptic plasticity based on a random walk, the control of fluctuations is imperative. We have argued that synapses could act as low-pass filters, filtering plasticity induction steps before expressing a step change in synaptic strength. Earlier work showed, in simulation, that such a synaptic filter tames fluctuations very well, leading to patterns of synaptic connectivity that are stable for long periods of time. Here, we approach this problem analytically. We explicitly calculate the lifetime of meta-stable states of synaptic connectivity using a Fokker-Planck formalism in order to understand the dependence of this lifetime on both the plasticity step size and the filtering mechanism. We find that our analytical results agree very well with simulation results, despite having to make two approximations. Our analysis reveals, however, a deeper significance to the filtering mechanism and the plasticity step size. We show that a filter scales the step size into a smaller, effective step size. This scaling suggests that the step size may itself play the role of a temperature parameter, so that a filter cools the dynamics, thereby reducing the influence of fluctuations. Using the master equation, we explicitly demonstrate a bifurcation at a critical step size, confirming this interpretation. At this critical point, spontaneous symmetry breaking occurs in the class of stochastic models of synaptic plasticity that we consider.
Neural Computation (2011) 23 (1): 124–159.
Published: 01 January 2011
The Mean Time to Express Synaptic Plasticity in Integrate-and-Express, Stochastic Models of Synaptic Plasticity Induction
Abstract
Stochastic models of synaptic plasticity propose that single synapses perform a directed random walk of fixed step sizes in synaptic strength, thereby embracing the view that the mechanisms of synaptic plasticity constitute a stochastic dynamical system. However, fluctuations in synaptic strength present a formidable challenge to such an approach. We have previously proposed that single synapses must interpose an integration and filtering mechanism between the induction of synaptic plasticity and the expression of synaptic plasticity in order to control fluctuations. We analyze a class of three such mechanisms in the presence of possibly non-Markovian plasticity induction processes, deriving expressions for the mean expression time in these models. One of these filtering mechanisms constitutes a discrete low-pass filter that could be implemented on a small collection of molecules at single synapses, such as CaMKII, and we analyze this discrete filter in some detail. After considering Markov induction processes, we examine our own stochastic model of spike-timing-dependent plasticity, for which the probability density functions of the induction of plasticity steps have previously been derived. We determine the dependence of the mean time to express a plasticity step on pre- and postsynaptic firing rates in this model, and we also consider, numerically, the long-term stability against fluctuations of patterns of neuronal connectivity that typically emerge during neuronal development.
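As a concrete picture of the discrete low-pass filter (a sketch with assumed parameters, not the article's derivation), the accumulator below integrates +/-1 induction signals and expresses a plasticity step only when it reaches +T or -T, after which it resets; the simulation estimates how the mean time to express depends on the bias of the induction stream.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_trials = 6, 20_000       # filter threshold and number of trials (assumptions)

def expression_time(r):
    """Steps until a +/-1 induction stream, potentiating with probability r,
    drives the filter from 0 to +T or -T, where a plasticity step is expressed."""
    acc, t = 0, 0
    while abs(acc) < T:
        acc += 1 if rng.random() < r else -1
        t += 1
    return t

for r in (0.5, 0.6, 0.75):
    mean_t = np.mean([expression_time(r) for _ in range(n_trials)])
    print(f"potentiating probability {r:.2f}: mean expression time {mean_t:5.1f} steps")
print("unbiased random-walk prediction: T**2 =", T**2, "steps")
```

The unbiased case should land near T**2 = 36 steps (the standard exit time of a symmetric random walk from (-T, T) started at 0), and biasing the induction stream shortens the expression time.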
Neural Computation (2010) 22 (5): 1180–1230.
Published: 01 May 2010
A Non-Markovian Random Walk Underlies a Stochastic Model of Spike-Timing-Dependent Plasticity
Abstract
A stochastic model of spike-timing-dependent plasticity (STDP) proposes that spike timing influences the probability but not the amplitude of synaptic strength change at single synapses. The classic, biphasic STDP profile emerges as a spatial average over many synapses presented with a single spike pair or as a temporal average over a single synapse presented with many spike pairs. We have previously shown that the model accounts for a variety of experimental data, including spike triplet results, and has a number of desirable theoretical properties, including being entirely self-stabilizing in all regions of parameter space. Our earlier analyses of the model have employed cumbersome spike-to-spike averaging arguments to derive results. Here, we show that the model can be reformulated as a non-Markovian random walk in synaptic strength, the step sizes being fixed as postulated. This change of perspective greatly simplifies earlier calculations by integrating out the proposed switch mechanism by which changes in strength are driven and instead concentrating on the changes in strength themselves. Moreover, this change of viewpoint is generative, facilitating further calculations that would be intractable, if not impossible, with earlier approaches. We prepare the machinery here for these later calculations but also briefly indicate how this machinery may be used by considering two particular applications.
Neural Computation (2010) 22 (1): 244–272.
Published: 01 January 2010
Discrete States of Synaptic Strength in a Stochastic Model of Spike-Timing-Dependent Plasticity
Abstract
A stochastic model of spike-timing-dependent plasticity (STDP) postulates that single synapses presented with a single spike pair exhibit all-or-none quantal jumps in synaptic strength. The amplitudes of the jumps are independent of spike timing, but their probabilities do depend on spike timing. By making the amplitudes of both upward and downward transitions equal, synapses then occupy only a discrete set of states of synaptic strength. We explore the impact of a finite, discrete set of strength states on our model, finding three principal results. First, a finite set of strength states limits the capacity of a single synapse to express the standard, exponential STDP curve. We derive the expression for the expected change in synaptic strength in response to a standard, experimental spike pair protocol, finding a deviation from exponential behavior. We fit our prediction to recent data from single dendritic spine heads, finding results that are somewhat better than exponential fits. Second, we show that the fixed-point dynamics of our model regulate the upward and downward transition probabilities so that these are on average equal, leading to a uniform distribution of synaptic strength states. However, third, under long-term potentiation (LTP) and long-term depression (LTD) protocols, these probabilities are unequal, skewing the distribution away from uniformity. If the number of states of strength is at least of order 10, then we find that three effective states of synaptic strength appear, consistent with some experimental data on ternary-strength synapses. On this view, LTP and LTD protocols may therefore be saturating protocols.
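A minimal ensemble-average sketch of this postulate (with assumed exponential forms and constants for the jump probabilities; the article derives the probabilities from its own model) shows how a smooth, biphasic STDP curve can emerge from all-or-none, fixed-amplitude jumps on a bounded, discrete strength ladder.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed profiles: fixed-size jumps whose probabilities depend on the spike time difference
tau_plus, tau_minus = 20.0, 20.0      # ms (assumptions)
p_plus, p_minus = 0.6, 0.4            # peak jump probabilities (assumptions)
Q = 11                                # number of discrete strength states
n_syn = 50_000                        # ensemble size

def mean_change(dt):
    """Average strength change over an ensemble of synapses, one spike pair each."""
    s = rng.integers(0, Q, size=n_syn)                      # initial discrete strengths
    if dt > 0:   # pre before post: potentiation, probability decays with dt
        jump = rng.random(n_syn) < p_plus * np.exp(-dt / tau_plus)
        s_new = np.minimum(s + jump, Q - 1)
    else:        # post before pre: depression, probability decays with |dt|
        jump = rng.random(n_syn) < p_minus * np.exp(dt / tau_minus)
        s_new = np.maximum(s - jump, 0)
    return (s_new - s).mean()

for dt in (-40, -20, -5, 5, 20, 40):
    print(f"dt = {dt:+4d} ms -> mean change {mean_change(dt):+.4f} strength steps")
```

Because the ladder is bounded, synapses already sitting at an extreme state cannot jump further, which slightly flattens the averaged curve; that boundary effect is of the same general kind as the deviation from exponential behavior described in the abstract.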
Neural Computation (2009) 21 (12): 3363–3407.
Published: 01 December 2009
Taming Fluctuations in a Stochastic Model of Spike-Timing-Dependent Plasticity
Abstract
A stochastic model of spike-timing-dependent plasticity proposes that single synapses express fixed-amplitude jumps in strength, the amplitudes being independent of the spike time difference. However, the probability that a jump in strength occurs does depend on spike timing. Although the model has a number of desirable features, the stochasticity of response of a synapse introduces potentially large fluctuations into changes in synaptic strength. These can destabilize the segregated patterns of afferent connectivity characteristic of neuronal development. Previously we have taken these jumps to be small relative to overall synaptic strengths to control fluctuations, but doing so increases developmental timescales unacceptably. Here, we explore three alternative ways of taming fluctuations. First, a calculation of the variance for the change in synaptic strength shows that the mean change eventually dominates fluctuations, but on timescales that are too long. Second, it is possible that fluctuations in strength may cancel between synapses, but we show that correlations between synapses emasculate the law of large numbers. Finally, by separating plasticity induction and expression, we introduce a temporal window during which induction signals are low-pass-filtered before expression. In this way, fluctuations in strength are tamed, stabilizing segregated states of afferent connectivity.
Neural Computation (2008) 20 (9): 2253–2307.
Published: 01 September 2008
Temporal Dynamics of Rate-Based Synaptic Plasticity Rules in a Stochastic Model of Spike-Timing-Dependent Plasticity
Abstract
In a recently proposed, stochastic model of spike-timing-dependent plasticity, we derived general expressions for the expected change in synaptic strength, ΔS_n, induced by a typical sequence of precisely n spikes. We found that the rules ΔS_n, n ≥ 3, exhibit regions of parameter space in which stable, competitive interactions between afferents are present, leading to the activity-dependent segregation of afferents on their targets. The rules ΔS_n, however, allow an indefinite period of time to elapse for the occurrence of precisely n spikes, while most measurements of changes in synaptic strength are conducted over definite periods of time during which a potentially unknown number of spikes may occur. Here, therefore, we derive an expression, ΔS(t), for the expected change in synaptic strength of a synapse experiencing an average sequence of spikes of typical length occurring during a fixed period of time, t. We find that the resulting synaptic plasticity rule ΔS(t) exhibits a number of remarkable properties. It is an entirely self-stabilizing learning rule in all regions of parameter space. Further, its parameter space is carved up into three distinct, contiguous regions in which the exhibited synaptic interactions undergo different transitions as the time t is increased. In one region, the synaptic dynamics change from noncompetitive to competitive to entirely depressing. In a second region, the dynamics change from noncompetitive to competitive without the second transition to entirely depressing dynamics. In a third region, the dynamics are always noncompetitive. The locations of these regions are not fixed in parameter space but may be modified by changing the mean presynaptic firing rates. Thus, neurons may be moved among these three different regions and so exhibit different sets of synaptic dynamics depending on their mean firing rates.
Neural Computation (2007) 19 (5): 1362–1399.
Published: 01 May 2007
Multispike Interactions in a Stochastic Model of Spike-Timing-Dependent Plasticity
Abstract
Recently we presented a stochastic, ensemble-based model of spike-timing-dependent plasticity. In this model, single synapses do not exhibit plasticity depending on the exact timing of pre- and postsynaptic spikes, but spike-timing-dependent plasticity emerges only at the temporal or synaptic ensemble level. We showed that such a model reproduces a variety of experimental results in a natural way, without the introduction of various, ad hoc nonlinearities characteristic of some alternative models. Our previous study was restricted to an examination, analytically, of two-spike interactions, while higher-order, multispike interactions were only briefly examined numerically. Here we derive exact, analytical results for the general n-spike interaction functions in our model. Our results form the basis for a detailed examination, performed elsewhere, of the significant differences between these functions and the implications these differences have for the presence, or otherwise, of stable, competitive dynamics in our model.
Neural Computation (2006) 18 (10): 2414–2464.
Published: 01 October 2006
Stable Competitive Dynamics Emerge from Multispike Interactions in a Stochastic Model of Spike-Timing-Dependent Plasticity
Abstract
In earlier work we presented a stochastic model of spike-timing-dependent plasticity (STDP) in which STDP emerges only at the level of temporal or spatial synaptic ensembles. We derived the two-spike interaction function from this model and showed that it exhibits an STDP-like form. Here, we extend this work by examining the general n-spike interaction functions that may be derived from the model. A comparison between the two-spike interaction function and the higher-order interaction functions reveals profound differences. In particular, we show that the two-spike interaction function cannot support stable, competitive synaptic plasticity, such as that seen during neuronal development, without including modifications designed specifically to stabilize its behavior. In contrast, we show that all the higher-order interaction functions exhibit a fixed-point structure consistent with the presence of competitive synaptic dynamics. This difference originates in the unification of our proposed “switch” mechanism for synaptic plasticity, coupling synaptic depression and synaptic potentiation processes together. While three or more spikes are required to probe this coupling, two spikes can never do so. We conclude that this coupling is critical to the presence of competitive dynamics and that multispike interactions are therefore vital to understanding synaptic competition.
Neural Computation (2005) 17 (11): 2316–2336.
Published: 01 November 2005
Synaptic and Temporal Ensemble Interpretation of Spike-Timing-Dependent Plasticity
Abstract
We postulate that a simple, three-state synaptic switch governs changes in synaptic strength at individual synapses. Under this switch rule, we show that a variety of experimental results on timing-dependent plasticity can emerge from temporal and spatial averaging over multiple synapses and multiple spike pairings. In particular, we show that a critical window for the interaction of pre- and postsynaptic spikes emerges as an ensemble property of the collective system, with individual synapses exhibiting only a minimal form of spike coincidence detection. In addition, we show that a Bienenstock-Cooper-Munro-like, rate-based plasticity rule emerges directly from such a model. This demonstrates that two apparently separate forms of neuronal plasticity can emerge from a much simpler rule governing the plasticity of individual synapses.
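For a concrete, though hypothetical, picture of how a minimal per-synapse switch can yield an ensemble-level timing window, the sketch below implements a generic three-state coincidence switch; the states, transition rules, and constants are illustrative assumptions and not necessarily the article's switch rule. The first spike of a pair primes the switch (one way for a presynaptic spike, the other for a postsynaptic spike), the primed state relaxes stochastically back to OFF, and the second spike triggers an all-or-none strength change only if the switch is still primed. Averaging over many synapses then produces a graded, exponential-looking window even though each synapse performs only bare coincidence detection.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative constants (assumptions): relaxation probability per ms, ensemble size
lam = 0.05            # chance per millisecond that a primed switch relaxes back to OFF
n_syn = 100_000

def mean_change(dt):
    """Average strength change across the ensemble for one spike pair per synapse,
    separated by dt ms (dt > 0: pre leads post; dt < 0: post leads pre)."""
    primed = np.ones(n_syn, dtype=bool)        # first spike primes every switch
    for _ in range(int(abs(dt))):              # primed state decays independently each ms
        primed &= rng.random(n_syn) >= lam
    # Second spike reads the switch out: all-or-none +1 (potentiation) or -1 (depression)
    change = np.where(primed, 1 if dt > 0 else -1, 0)
    return change.mean()

for dt in (-60, -30, -10, 10, 30, 60):
    print(f"dt = {dt:+4d} ms -> mean change {mean_change(dt):+.3f}")
```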