Abstract

Memory models based on synapses with discrete and bounded strengths store new memories by forgetting old ones. Memory lifetimes in such systems may be defined in a variety of ways. A mean first passage time (MFPT) definition overcomes much of the arbitrariness and many of the problems associated with the more usual signal-to-noise ratio (SNR) definition. We have previously computed MFPT lifetimes for simple, binary-strength synapses that lack internal, plasticity-related states. In simulation we have also seen that for multistate synapses, optimality conditions based on SNR lifetimes are absent with MFPT lifetimes, suggesting that such conditions may be artifactual. Here we extend our earlier work by computing the entire first passage time (FPT) distribution for simple, multistate synapses, from which all statistics, including the MFPT lifetime, may be extracted. For this, we develop a Fokker-Planck equation using the jump moments for perceptron activation. Two models are considered that satisfy a particular eigenvector condition that this approach requires. In these models, MFPT lifetimes do not exhibit optimality conditions, while in one but not the other, SNR lifetimes do exhibit optimality. Thus, not only are such optimality conditions artifacts of the SNR approach, but they are also strongly model dependent. By examining the variance in the FPT distribution, we may identify regions in which memory storage is subject to high variability, although MFPT lifetimes are nevertheless robustly positive. In such regions, SNR lifetimes are typically zero by definition. FPT-defined memory lifetimes therefore provide an analytically superior approach and also have the virtue of being directly related to a neuron's firing properties.

1  Introduction

Imposing limits on synaptic strengths turns an otherwise catastrophically forgetting Hopfield (1982) network into a “palimpsest” memory that learns new memories by forgetting old ones (Nadal, Toulouse, Changeux, & Dehaene, 1986; Parisi, 1986). Models of palimpsest memory with discrete, multistate synapses using feedforward or recurrent networks have become the subject of intensive study in recent years (Tsodyks, 1990; Amit & Fusi, 1994; Fusi, Drew, & Abbott, 2005; Leibold & Kempter, 2006, 2008; Rubin & Fusi, 2007; Barrett & van Rossum, 2008; Huang & Amit, 2010, 2011; Elliott & Lagogiannis, 2012; Lahiri & Ganguli, 2013; Elliott, 2016a, 2016b). Such models may be based on “simple” synapses that lack internal, plasticity-related states, or “complex” synapses that possess internal states that may affect the expression of synaptic plasticity.

To be viable models of biological memory, memories in palimpsest models must be sufficiently long-lived. Several approaches to defining palimpsest memory lifetimes exist, including the signal-to-noise ratio (SNR) (Tsodyks, 1990) and equivalent so-called ideal observer variants (Fusi et al., 2005; Lahiri & Ganguli, 2013; see Elliott, 2016b, for a discussion of their complete equivalence); signal detection theory (Leibold & Kempter, 2006, 2008); and retrieval probabilities (Huang & Amit, 2010, 2011). In a feedforward setting with a single perceptron for simplicity, we have also considered the mean first passage time (MFPT) for the perceptron's activation to fall below firing threshold (Elliott, 2014). An MFPT approach to memory lifetimes overcomes many of the difficulties of an SNR approach and shows that the latter is asymptotically valid only in the limit of a large number of synapses (Elliott, 2014). We have also observed in simulation that conditions on the number of states of synaptic strength that appear to optimize SNR memory lifetimes are not respected by MFPT lifetimes, suggesting that such optimality conditions are artifacts of the SNR approach (Elliott, 2016a).

We may obtain exact analytical results for MFPT lifetimes for any synaptic model, but the results are essentially useless for explicit computations. For the specific case of simple, binary-strength synapses, we may reduce the difficulty of the calculations by considering transitions in the perceptron's activation at successive memory storage steps (Elliott, 2014). This allows us to derive approximation methods and reduce the dynamics of memory decay to an Ornstein-Uhlenbeck (OU) process (Uhlenbeck & Ornstein, 1930). It is also possible to make some progress in understanding MFPT memory lifetimes for complex synapses with binary strengths by integrating out the internal states and working directly with the transitions in synapses' strengths (Elliott, 2017). For general, multistate synapses, however, whether simple or complex, we cannot work directly with the transitions in the perceptron's activation, as discussed below. Here, we show that for simple synapses, we can obtain the entire first passage time (FPT) distribution from a Fokker-Planck equation when the vector of strengths available to a synapse is an eigenvector of the stochastic matrix governing changes in synapses' strengths. Provided that the actual vector of possible synaptic strengths is sufficiently close to an eigenvector, our results give good approximations, so this eigenvector requirement is not too restrictive.

Our letter is organized as follows. In section 2, we define our general formalism and review the derivation of analytical results for MFPTs for simple, binary-strength synapses. In section 3, for simple, multistate synapses, we set up a Fokker-Planck equation, derive the required jump moments, and then obtain the FPT distribution. In section 4, we consider two different synaptic models respecting the eigenvector requirement. In section 5, we derive SNR memory lifetimes for the purposes of comparison with MFPT memory lifetimes. We examine our results in section 6, comparing analytical and simulation results and considering the differences between SNR and MFPT memory lifetimes, but also considering the variance in FPT-defined lifetimes. Finally, in section 7, we briefly discuss our approach.

2  General Formalism and Previous Results

We first summarize our general approach to studying memory lifetimes in a feedforward, perceptron-based formulation. We then discuss the simplest possible model of synaptic plasticity for palimpsest memory. We finally briefly review our previous analysis of MFPT memory lifetimes for simple, binary-strength synapses. Full details may be found elsewhere (Elliott, 2014).

2.1  Perceptron Memory

A single perceptron with N synapses of strengths S_i(t), i = 1, …, N, at time t s, and input vector with components x_i has normalized total input or activation or unthresholded output defined by

h(t) = (1/N) ∑_{i=1}^{N} x_i S_i(t).   (2.1)

We are concerned only with whether h(t) is above the perceptron's firing threshold, defined as ϑ. The synaptic strengths take values from a discrete set. For binary-strength synapses, these values are taken to be ±1. For multistate synapses with n ≥ 2 discrete levels of strength, so that S_i(t) ∈ {s_1, …, s_n}, we will consider different possible choices of this set of values.

The perceptron sequentially stores memories x^α, indexed by α ≥ 1, with components x_i^α. These memories may be presented as a discrete time process or, more realistically for biological memory storage, as a continuous time process, which we take to be a Poisson process of rate r. The first memory x^1 is always presented at time t = 0⁻ s, where we use this formal device of 0⁻ s rather than 0 s so that we may refer to the time immediately after the storage of x^1 as time t = 0 s. The components x_i^α take binary values ±1 with probabilities g and 1 − g, respectively, with distinct components statistically independent. Any particular memory x^α is deemed to be stored at time t provided that the perceptron's activation upon re-presentation of the memory exceeds threshold, (1/N) ∑_i x_i^α S_i(t) > ϑ. As we will assume that ϑ = 0, the perceptron's output is required to be positive for memory storage. The component x_i^α is therefore the plasticity induction signal to synapse i upon storage of memory x^α. Consistent with our previous work, we set g = 1/2, so that potentiation (x_i^α = +1) and depression (x_i^α = −1) processes are balanced.
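For concreteness, the following minimal sketch in Python (all parameter values purely illustrative, not taken from our results) generates a Poisson sequence of memory presentation times and evaluates the activation of equation 2.1:

```python
import numpy as np

rng = np.random.default_rng(0)

N, r, T, g = 1000, 1.0, 10.0, 0.5   # synapses, storage rate (Hz), duration (s), P(x_i = +1)

def activation(x, S):
    """Normalized activation h = (1/N) sum_i x_i S_i (equation 2.1)."""
    return np.dot(x, S) / len(S)

S = rng.choice([-1.0, 1.0], size=N)                  # binary strengths at equilibrium
x1 = rng.choice([1.0, -1.0], size=N, p=[g, 1 - g])   # tracked memory x^1

n_later = rng.poisson(r * T)                         # number of later memories in [0, T]
times = np.sort(rng.uniform(0.0, T, size=n_later))   # their Poisson presentation times
print("h(0) before any plasticity:", activation(x1, S))
```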

To assess memory lifetimes, we track the fidelity of recall of the first memory x^1 as the later memories x^α, α ≥ 2, are stored. The storage of these later memories leads to changes in synaptic strengths that may affect the recall of x^1. We refer to memory x^1 as the tracked memory, and we define
h(t) = (1/N) ∑_{i=1}^{N} x_i^1 S_i(t)   (2.2)

and refer to h(t) as the tracked memory signal. As the memories are stochastic in nature and the Poisson times at which they are stored are random variables, the memory signal is a random variable governed by a probability distribution. Its mean and variance,

μ(t) = E[h(t)],   (2.3a)
σ²(t) = E[h(t)²] − E[h(t)]²,   (2.3b)

are used to define the SNR μ(t)/σ(t), and the SNR memory lifetime τ_snr of any particular model is typically defined as the solution of μ(τ_snr)/σ(τ_snr) = 1. Some variants of the SNR approach use σ(∞) rather than σ(t) in the denominator of the SNR, but this approach is less well justified from a statistical point of view (Elliott, 2016b).

The SNR definition of memory lifetime suffers from a number of difficulties that we have previously described (Elliott, 2014). First, there is some arbitrariness in defining τ_snr via μ(τ_snr)/σ(τ_snr) = 1; we could use any other positive number on the right-hand side instead. Second, the SNR considers only the variance as a possible source of fluctuations that may render the memory signal indistinguishable from its equilibrium value. Third, SNR memory lifetimes differ depending on whether memories are stored as a discrete time process or a continuous time process. Fourth, because the SNR mixes different signal statistics, it is not a quantity that can be read out directly from a neuron's membrane potential, and so it is not a quantity of immediate relevance to the system whose memory dynamics are being studied.

2.2  Stochastic Updater Synapses

The simplest possible model of synaptic plasticity for memory storage is based on a simple binary-strength synapse that expresses with probability p a change in synaptic strength (if possible) when the synapse experiences a plasticity induction signal (Tsodyks, 1990). We refer to such a synapse as a “stochastic updater.” The strength S_i(t) of synapse i is a random variable. For a binary-strength synapse, the probability distribution of a synapse's strength is represented by a two-dimensional vector, where the first (respectively, second) entry of the vector is the probability that S_i(t) = −1 (respectively, S_i(t) = +1). The stochastic transitions in a synapse's strength in response to plasticity induction signals are represented by stochastic or transition matrices given by

M⁺ = [1−p, 0; p, 1],   M⁻ = [1, p; 0, 1−p],   (2.4)

for potentiation (x_i^α = +1) and depression (x_i^α = −1) induction signals, respectively, with rows separated by semicolons. Because we average over the sequence of memories rather than consider any particular realization, the relevant transition matrix for the storage of the non-tracked memories x^α, α ≥ 2, is

M = (1/2)(M⁺ + M⁻) = [1−p/2, p/2; p/2, 1−p/2].   (2.5)

As t → ∞, any synapse's strength state asymptotes to the equilibrium distribution defined by the eigenvector associated with the unit eigenvalue of M, π_∞ = (1/2, 1/2)ᵀ, where the superscript T denotes the transpose. The tracked memory is stored against the background of this equilibrium distribution at t = 0 s. For synapses experiencing x_i^1 = +1 (respectively, x_i^1 = −1), their states at t = 0 s are governed by the probability distribution π⁺ = M⁺π_∞ (respectively, π⁻ = M⁻π_∞). Because we average over the initial memory x^1, any synapse is initially in a state that is an equiprobable mixture of the two distributions π^±.
At some future time t > 0 s, the distribution of strengths of synapse i is given by exp[rt(M − I)]π^±, depending on the sign of x_i^1, where I is the identity matrix. Computing these two distributions explicitly and defining S̃_i(t) = x_i^1 S_i(t), we obtain

Prob[S̃_i(t) = ±1] = (1/2)(1 ± p e^{−rpt}),   (2.6)

regardless of the sign of x_i^1, so that all of the variables S̃_i(t) are identically distributed. Because the tracked memory signal in equation 2.2 is just a (normalized) sum over these tilded strength variables, it is therefore just a (normalized) sum over identically distributed random variables. For balanced potentiation and depression processes, g = 1/2, the mixture of states governed by the two distributions π^± therefore collapses in terms of their contribution to the evolution of h(t). This result is in fact quite general for synaptic plasticity processes that treat potentiation and depression completely symmetrically (Elliott, 2016b) and holds not only for n = 2, binary-strength stochastic updater synapses but also for their generalization below to multistate, n > 2 synapses. We therefore need not consider the initial synaptic state immediately after the storage of x^1 to be a mixture of the two distributions π^± but can instead consider, say, only π⁺ and work directly with the variables S_i(t) rather than their tilded forms S̃_i(t), in effect simply setting x_i^1 ≡ +1 for all synapses. This dramatic simplification is possible only for balanced and symmetric processes.
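The collapse to the single distribution π⁺ is easy to check numerically. The following sketch (illustrative parameter values; discrete storage steps rather than Poisson times) simulates binary stochastic updaters with x_i^1 ≡ +1 and compares the empirical mean signal with p(1 − p)^k, the discrete-time analogue of equation 2.6 after k storage events:

```python
import numpy as np

rng = np.random.default_rng(1)

N, p = 500, 0.1
n_trials, n_steps = 500, 100

signals = np.zeros((n_trials, n_steps + 1))
for trial in range(n_trials):
    S = rng.choice([-1.0, 1.0], size=N)      # equilibrium distribution pi_infty
    expr = rng.random(N) < p                 # storage of x^1 with x_i^1 = +1
    S[expr] = 1.0                            # expression with probability p
    signals[trial, 0] = S.mean()             # h(0), since x^1 = (+1, ..., +1)
    for k in range(1, n_steps + 1):
        x = rng.choice([-1.0, 1.0], size=N)  # induction signals of a later memory
        expr = rng.random(N) < p
        S[expr] = x[expr]                    # stochastic updater transition
        signals[trial, k] = S.mean()

empirical = signals.mean(axis=0)
theory = p * (1 - p) ** np.arange(n_steps + 1)
print("max |empirical - theory|:", np.abs(empirical - theory).max())
```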

2.3  MFPTs for Binary Stochastic Updater Synapses

To overcome the shortcomings in the SNR approach discussed above, we consider the FPT for the perceptron's activation to fall below threshold (Elliott, 2014). For any particular realization of the sequence of memories x^α, α ≥ 2, h(t) will first fall (to or) below threshold at some definite time. We average over all possible realizations of the memories to obtain the MFPT, and this defines the MFPT memory lifetime τ_mfpt. The MFPT memory lifetime overcomes all the shortcomings of the SNR memory lifetime (Elliott, 2014).

To calculate the MFPT memory lifetime for stochastic updater synapses, we observe that the tracked memory signal is a (normalized) sum over variables taking values of either −1 or +1 for binary-strength synapses. Its value is therefore uniquely determined by the number of these variables taking value +1: if m of them take value +1, then h = (2m − N)/N. We may use this observation to compute the transition probability for the perceptron activation between successive memory storage steps (Elliott, 2014). Let h_α denote the perceptron activation immediately after the storage of memory x^α. The initial distribution immediately after the storage of x^1 is

Prob[h_1 = (2m − N)/N] = C(N, m) [(1+p)/2]^m [(1−p)/2]^{N−m},   (2.7)

where C(N, m) denotes a binomial coefficient. The transition probability between successive values of the activation is

Prob[h_{α+1} = (2m′−N)/N | h_α = (2m−N)/N]
   = ∑_j C(N−m, j) (p/2)^j (1−p/2)^{N−m−j} C(m, m+j−m′) (p/2)^{m+j−m′} (1−p/2)^{m′−j},   (2.8)

where the usual conventions regarding binomial coefficients apply. Using these transitions in perceptron activation, we derived an expression for the MFPT for the activation to fall (to or) below ϑ from an initial activation h. Letting T(h) denote this MFPT, we have

T(h) = 1/r + ∑_{h′>ϑ} Prob[h′|h] T(h′).   (2.9)
We may move to a continuum limit for h when N is large enough, in excess of around 100. In this limit, the two distributions in equations 2.7 and 2.8 may be replaced with gaussian distributions with matched (conditional) means and variances. In this limit, equation 2.9 becomes

T(h) = 1/r + ∫_ϑ dh′ K(h′|h) T(h′),   (2.10)

where K(h′|h) is the continuum gaussian kernel corresponding to equation 2.8. Assuming that we can solve the equation for T(h), then τ_mfpt = ⟨T(h)⟩, where ⟨·⟩ denotes an average over the initial distribution for h in equation 2.7.
MFPT equations of the form in equation 2.10 are rarely soluble except for a handful of particular kernels. Previously we replaced the gaussian kernel with a formal expansion using the Dirac delta function δ(h′ − h),

K(h′|h) = δ(h′−h) + ph δ′(h′−h) + (1/2)[p(2−p)/N + p²h²] δ″(h′−h),   (2.11)
where the primes denote differentiation with respect to the argument and we write the conditional mean change in h as −ph and its conditional variance as p(2−p)/N. This formal kernel has the same conditional mean and variance as equation 2.8. Equation 2.10 then becomes the differential equation

(1/2)[p(2−p)/N + p²h²] T″(h) − ph T′(h) = −1/r,   (2.12)

for h > ϑ and the solution for T(h). For p small enough, equation 2.12 becomes

(p/N) T″(h) − ph T′(h) = −1/r,   (2.13)

which is the equation governing the MFPT for the OU process. We defer discussion of the solutions of these equations to the next section.
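Equation 2.9 can also be solved exactly as a finite linear system, which provides a useful check on the continuum approximations. The sketch below (illustrative parameters) builds the transition matrix of equation 2.8 by convolving the two binomial distributions, solves equation 2.9 for T, and compares the resulting lifetime with the large-N OU asymptotics discussed in section 3:

```python
import numpy as np
from scipy.stats import binom

N, p, r = 400, 0.2, 1.0

# State m = number of synapses with strength +1, so h = (2m - N)/N.
P = np.zeros((N + 1, N + 1))
for m in range(N + 1):
    down = binom.pmf(np.arange(m + 1), m, p / 2)        # +1 -> -1 flips
    up = binom.pmf(np.arange(N - m + 1), N - m, p / 2)  # -1 -> +1 flips
    P[m, :] = np.convolve(down[::-1], up)               # pmf of m' = m - D + U

above = np.arange(N + 1) > N / 2                        # states with h > theta = 0
A = np.eye(above.sum()) - P[np.ix_(above, above)]
T = np.linalg.solve(A, np.full(above.sum(), 1.0 / r))   # equation 2.9

init = binom.pmf(np.arange(N + 1), N, (1 + p) / 2)      # equation 2.7
tau_exact = init[above] @ T                             # T = 0 at or below threshold
tau_ou = (np.log(2 * N * p**2) + np.euler_gamma) / (2 * r * p)
print(tau_exact, tau_ou)
```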

3  Fokker-Planck Approach to FPT Distribution

The ability to work directly with the transitions in the perceptron's activation with each memory storage event and essentially ignore the details of the underlying transitions in all synapses' strengths is critical to our derivation of MFPT results for binary-strength synapses. In this way, we need consider only transition matrices that are (N+1) × (N+1) rather than 2^N × 2^N in size. For binary-strength synapses, this is possible because the number of synapses with (tilded) strength +1 uniquely determines the perceptron's activation and, conversely, the perceptron's activation uniquely determines the number of such synapses. For n > 2, however, although the configuration of synaptic strengths uniquely determines the perceptron's activation, the perceptron's activation does not in general uniquely (even up to trivial permutation symmetries) determine the configuration of synaptic strengths. For example, for n = 3 and with s = (−1, 0, +1)ᵀ, any pair of synapses may have (tilded) strengths of −1 and +1 (in any order), or both may have strengths of 0: both of these strength configurations contribute identically to the perceptron's activation. This degeneracy only increases as n increases. To determine the statistics of the FPT process for the perceptron's activation for general n, we therefore cannot directly use the transitions in the perceptron's activation and must find a different method.

3.1  Fokker-Planck Formulation

Let P(h, t|h₀, t₀) denote the transition probability from initial activation h₀ at time t₀ (here t₀ = 0 s) to a final activation h at time t. The Fokker-Planck or forward Chapman-Kolmogorov equation is then

∂P/∂t = −(∂/∂h)[A(h)P] + (1/2)(∂²/∂h²)[B(h)P],   (3.1)

while the adjoint or backward Chapman-Kolmogorov equation is

∂P/∂t = A(h₀) ∂P/∂h₀ + (1/2) B(h₀) ∂²P/∂h₀².   (3.2)

The functions A(h) and B(h) are the infinitesimal jump moments,

A(h) = lim_{δt→0} (1/δt) E[δh | h],   B(h) = lim_{δt→0} (1/δt) E[(δh)² | h],   (3.3)

with δh = h(t+δt) − h(t) and E[·|h] denoting an expectation value conditioned on h(t) = h. Because the transitions in the perceptron's activation are subject to potentially large jump processes in which the activation can jump across the firing threshold ϑ, the use of the Fokker-Planck equation constitutes a diffusion limit approximation.
If we impose the absorbing boundary condition P(ϑ, t|h₀, 0) = 0 on the solution of the Fokker-Planck equation, the density for the system to escape from the interval h > ϑ for the first time at time t from an initial state h₀ (at time 0 s) in this interval is given by

f(t|h₀) = −(d/dt) ∫_ϑ dh P(h, t|h₀, 0).   (3.4)

Using the backward equation, we obtain

∂f/∂t = A(h₀) ∂f/∂h₀ + (1/2) B(h₀) ∂²f/∂h₀².   (3.5)

Since T(h₀) = ∫₀^∞ dt t f(t|h₀), we have

(1/2) B(h₀) T″(h₀) + A(h₀) T′(h₀) = −1,   (3.6)
for h₀ > ϑ. Comparing equation 3.6 to equation 2.12 for the binary case, we see that they are structurally identical, indicating that the use of the kernel in equation 2.11 constitutes a diffusion approximation in which jump processes have been neglected. In addition to the MFPT, we can also obtain all the FPT statistics from equation 3.5. Laplace-transforming this equation, with f̃(s|h₀) denoting the Laplace transform of f(t|h₀) with transformed variable s, and using the fact that f(0|h₀) = 0 for h₀ > ϑ, we have

(1/2) B(h₀) ∂²f̃/∂h₀² + A(h₀) ∂f̃/∂h₀ = s f̃.   (3.7)

This equation is solved subject to the two boundary conditions f̃(s|ϑ) = 1 and ∂f̃(s|h₀)/∂h₀ = 0 at h₀ = b, and then we take the limit b → ∞ in order to remove the influence of the second boundary at h₀ = b. As the moment generating function (MGF) of a density is just its Laplace transform up to the sign of s, f̃(s|h₀) is just the MGF of the FPT distribution. To be able to determine the FPT distribution and all its moments, we need the jump moments A(h) and B(h).

3.2  Determination of Jump Moments

With μ(t) = E[h(t)] as before and defining χ(t) = E[h(t)²], we can obtain the evolution of these moments from the Fokker-Planck equation using

dμ/dt = E[A(h)],   (3.8a)
dχ/dt = 2E[hA(h)] + E[B(h)].   (3.8b)

If we can derive these two evolution equations via another method, we can deduce the form of the jump moments A(h) and B(h).
Since h(t) = (1/N) ∑_i S̃_i(t), we have that

μ(t) = E[S̃₁(t)],   (3.9a)
χ(t) = (1/N) E[S̃₁(t)²] + [(N−1)/N] E[S̃₁(t)S̃₂(t)],   (3.9b)

where we have used the fact that all the S̃_i(t) are identically distributed to single out any particular pair of synapses, here just i = 1 and i = 2. We can also simply set x_i^1 ≡ +1 and so compute the expectation values using only strength rather than tilded strength variables. It is a property of the synaptic plasticity models considered below that for any choice of n, E[S_i(t)²] in equation 3.9b is independent of time. We denote by s the n-dimensional vector of possible synaptic strengths available to a multistate synapse so that S_i(t) ∈ {s₁, …, s_n}. These components are ordered weakest to strongest. Then we will show below that

E[S_i(t)²] = (1/n) ∑_{a=1}^{n} s_a² = sᵀs/n ≡ ⟨s²⟩,   (3.10)

where we use the final form as convenient shorthand notation.
The evolution of the quantities on the right-hand sides of equation 3.9 involves only the dynamics of a single synapse via the mean E[S₁(t)] and the joint dynamics of a pair of synapses via the correlation function E[S₁(t)S₂(t)]. For general n, let the general transition matrix for a synapse's strength be M, which for n > 2 will generalize the particular form of M in equation 2.5. Let π(t) denote the probability distribution of any single synapse's strength, and let Π(t) denote the joint probability distribution of any pair of synapses' strengths. Since E[S₁(t)] = sᵀπ(t) and E[S₁(t)S₂(t)] = (s⊗s)ᵀΠ(t), we obtain

dE[S₁]/dt = r sᵀ(M − I) π(t),   (3.11a)
dE[S₁S₂]/dt = r (s⊗s)ᵀ(M⊗M − I⊗I) Π(t),   (3.11b)

which follow directly from the evolution equations dπ/dt = r(M − I)π and dΠ/dt = r(M⊗M − I⊗I)Π for π(t) and Π(t); a pair of synapses updates at shared memory storage events and so evolves under M⊗M.
For n = 2, s = (−1, +1)ᵀ is a left eigenvector of M in equation 2.5. For n = 2, the right-hand sides of equation 3.11 therefore simplify, generating a closed system of equations for μ(t) and χ(t) from which A(h) and B(h) follow. For general n, unless s is a left eigenvector of M, the right-hand sides of equation 3.11 do not close. To make progress, we must assume that s is a left eigenvector of M,

sᵀM = λsᵀ,   (3.12)

where λ is the eigenvalue of M associated with its left eigenvector s. In the following section, we construct models of synaptic plasticity satisfying this eigenvector requirement.
With the exact eigenvector requirement on M and s, we obtain

dμ/dt = −r(1−λ) μ,   (3.13a)
dχ/dt = −r(1−λ²) (χ − ⟨s²⟩/N),   (3.13b)

with explicit solutions

μ(t) = μ(0) e^{−r(1−λ)t},   (3.14a)
χ(t) = ⟨s²⟩/N + [χ(0) − ⟨s²⟩/N] e^{−r(1−λ²)t},   (3.14b)

where the initial mean memory signal μ(0) immediately after the storage of x^1 depends on the details of the model of synaptic plasticity. We will write μ₀ ≡ μ(0) throughout for convenience. By comparing equations 3.8 and 3.13, we can read off the expectation values of the jump moments,

E[A(h)] = −r(1−λ) E[h],   (3.15a)
E[2hA(h) + B(h)] = −r(1−λ²) (E[h²] − ⟨s²⟩/N),   (3.15b)

from which we finally deduce that

A(h) = −r(1−λ) h,   (3.16a)
B(h) = r [(1−λ²) ⟨s²⟩/N + (1−λ)² h²].   (3.16b)

For n = 2, λ = 1 − p and ⟨s²⟩ = 1, so these jump moments reduce identically to the coefficients of the MFPT equation in equation 2.12 for binary synapses.
Although the eigenvector requirement in equation 3.12 may appear to be very strong, in general even if s is not an exact left eigenvector of M but is sufficiently close to one, then we would expect to obtain a good approximation by using s as an approximate eigenvector of M. If a symmetric M has a complete set of orthonormal eigenvectors v_k, k = 1, …, n, with, say, v₂ the eigenvector to which s is close, then we can write

s = ∑_{k=1}^{n} (v_kᵀs) v_k.   (3.17)

If the contribution from the first term involving v₂ dominates the contributions from the other eigenvectors, then we can write sᵀM ≈ λ₂sᵀ with λ ≈ λ₂, where λ₂ is the eigenvalue associated with the closest eigenvector v₂. In general, then, provided that M has an eigenvector close enough to the actual vector of possible synaptic strengths s, we would expect to obtain good quantitative agreement between our analytical results below, where we assume that s is an exact left eigenvector of M, and numerical or simulation-based results, for which we may relax this assumption.
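This dominance is easy to quantify numerically. The sketch below (anticipating the matrix M_C = I + (p/2)Θ_C constructed in section 4.1; parameter values illustrative) expands the uniformly spaced strength vector over the eigenvectors of M_C and reports the fraction of its squared norm carried by the dominant eigenvector:

```python
import numpy as np

def M_C(n, p):
    """Multistate stochastic updater matrix M_C = I + (p/2) Theta_C (section 4.1)."""
    M = np.eye(n) * (1 - p)
    M[0, 0] = M[-1, -1] = 1 - p / 2
    for a in range(n - 1):
        M[a + 1, a] += p / 2
        M[a, a + 1] += p / 2
    return M

n, p = 10, 0.1
lam, V = np.linalg.eigh(M_C(n, p))   # symmetric, so eigenvectors are orthonormal
s_lin = np.linspace(-1.0, 1.0, n)    # uniformly spaced strength vector
c = V.T @ s_lin                      # expansion coefficients of equation 3.17
k = np.argmax(np.abs(c))
print("fraction of s in dominant eigenvector:", c[k]**2 / (c @ c))
```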

3.3  Extraction of FPT Distribution

Although it is possible to obtain the FPT distribution for the full forms of A(h) and B(h) in equation 3.16, we may consider a simpler form for B(h) and obtain extremely good agreement with the full results. Specifically, we write B(h) only to first order in 1−λ, so that

B(h) ≈ 2r(1−λ) ⟨s²⟩/N.   (3.18)

This is equivalent to considering an OU limit; for n = 2, 1−λ = p, so this is just the small p limit. Below we frequently refer to dynamics for large enough N. By this, we only mean N large compared, say, to 100, but not so large that the simpler form for B(h) is invalidated. Biologically, large N means N of order 10⁴ or 10⁵; larger values are irrelevant. In this OU limit, equation 3.7 becomes

[r(1−λ)⟨s²⟩/N] ∂²f̃/∂h₀² − r(1−λ) h₀ ∂f̃/∂h₀ = s f̃.   (3.19)

The parameter n enters in two ways. First is through a rescaling of the rate r via r(1−λ), as λ will depend on n. Both r and 1−λ rescale the time t; we define τ = r(1−λ)t and write s̃ = s/[r(1−λ)]. Second is through a rescaling of N via the quantity ⟨s²⟩ to generate an effective number of synapses N_e = N/⟨s²⟩. We can then rewrite equation 3.19 as

(1/N_e) ∂²f̃/∂h₀² − h₀ ∂f̃/∂h₀ = s̃ f̃.   (3.20)

The solution of this equation, subject to the boundary conditions at h₀ = ϑ and h₀ = b and taking b → ∞, is

f̃(s̃|h₀) = H_{−s̃}(h₀√(N_e/2)) / H_{−s̃}(ϑ√(N_e/2)),   (3.21)

where H_ν is a Hermite polynomial of possibly noninteger order.
In general, we cannot invert equation 3.21, although we can expand it as a power series in s̃ to obtain the moments of the FPT distribution. However, for the particular case of ϑ = 0, we can explicitly write down the solution of the original Fokker-Planck equation in the OU limit satisfying the absorbing boundary condition P(0, t|h₀, 0) = 0. If P_free(h, t|h₀, 0) is the standard OU solution of the Fokker-Planck equation in the absence of an absorbing boundary, then a standard image construction gives P(h, t|h₀, 0) = P_free(h, t|h₀, 0) − P_free(h, t|−h₀, 0) as the solution for ϑ = 0 satisfying the boundary condition. From equation 3.4, we then obtain

f(t|h₀) = √(2N_e/π) r(1−λ) h₀ [e^{−r(1−λ)t}/(1 − e^{−2r(1−λ)t})^{3/2}] exp[−(N_e h₀²/2) e^{−2r(1−λ)t}/(1 − e^{−2r(1−λ)t})],   (3.22)

as an explicit form of the FPT distribution for ϑ = 0.
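As a consistency check, the following sketch (illustrative parameters) simulates the OU limit directly by the Euler-Maruyama method, with absorption at ϑ = 0, and compares the empirical MFPT with the mean implied by the survival function erf(h₀√(N_e/2) e^{−r(1−λ)t}/√(1 − e^{−2r(1−λ)t})) obtained by integrating equation 3.22:

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(3)

rho, N_e, h0 = 1.0, 1000.0, 0.2   # rho = r(1 - lambda); illustrative values
dt, n_paths, t_max = 1e-3, 50_000, 50.0

h = np.full(n_paths, h0)
fpt = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
for step in range(1, int(t_max / dt) + 1):
    idx = np.flatnonzero(alive)
    if idx.size == 0:
        break
    h[idx] += -rho * h[idx] * dt \
              + np.sqrt(2 * rho / N_e * dt) * rng.standard_normal(idx.size)
    died = idx[h[idx] <= 0.0]
    fpt[died] = step * dt
    alive[died] = False

t = np.linspace(1e-6, t_max, 500_000)
S = erf(h0 * np.sqrt(N_e / 2) * np.exp(-rho * t)
        / np.sqrt(1.0 - np.exp(-2 * rho * t)))
print(fpt.mean(), S.sum() * (t[1] - t[0]))   # Monte Carlo vs integrated survival
```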
Expanding equation 3.21 to second order in s̃, we obtain expressions for the lowest-order statistics of the FPT distribution. Defining F(z) via

F(z) = (π/2) erfi(z) − z² ₂F₂(1, 1; 3/2, 2; z²),   (3.23)

where erfi is the imaginary error function and ₂F₂ is a hypergeometric function, the MFPT is

T(h₀) = [F(h₀√(N_e/2)) − F(ϑ√(N_e/2))] / [r(1−λ)].   (3.24)
We have derived this form before for n = 2 (Elliott, 2014), but equation 3.24 generalizes the result to n > 2. For the mean squared FPT, and thence the variance denoted by σ²_fpt, we can obtain exact results, but they are in general very messy and we do not reproduce them here. However, for N large enough, the results for both T and σ²_fpt simplify dramatically. They differ qualitatively between ϑ > 0 and ϑ = 0 because μ(t) → 0 as t → ∞, so ϑ = 0 is a special case tuned precisely to match the asymptotic mean memory signal (Elliott, 2014). For large enough N and for ϑ > 0, we obtain

T(h₀) ≈ [1/(r(1−λ))] ln(h₀/ϑ),   (3.25a)
σ²_fpt(h₀) ≈ [1/(r(1−λ))²] (1/(N_e ϑ²)) (1 − ϑ²/h₀²),   (3.25b)

while for ϑ = 0,

T(h₀) ≈ [1/(2r(1−λ))] [ln(2N_e h₀²) + γ],   (3.26a)
σ²_fpt(h₀) ≈ [1/(r(1−λ))²] (π²/8),   (3.26b)

where γ is Euler's constant. For ϑ = 0, the behavior of the MFPT is logarithmic in N for large enough N, but for ϑ > 0, the N-dependence drops out. The variance in the FPT for ϑ > 0 approaches zero as N increases; for ϑ = 0, it approaches a nonzero constant. Equations 3.25a and 3.26a generalize our previous results for n = 2, but we have not derived results for σ²_fpt before.
The results above are averaged over all realizations of the later memories x^α, α ≥ 2, but they have a fixed initial value h₀. For the FPT distribution also averaged over the tracked memory x^1, we must evaluate ⟨T(h₀)⟩, where we average over the initial distribution of h₀ with h₀ > ϑ. From equation 3.14, this distribution has mean μ₀ and variance (⟨s²⟩ − μ₀²)/N, and for N large enough, the distribution is gaussian. For the models of synaptic plasticity that we consider below, we can typically assume that μ₀² ≪ ⟨s²⟩. For n = 2, for example, this is just the requirement that p² ≪ 1. Thus, it is convenient, although not necessary, to make the approximation that the initial variance is ⟨s²⟩/N = 1/N_e. We write

τ̄_mfpt = ⟨T(h₀)⟩ = [1/(r(1−λ)√π)] ∫_{u_ϑ}^{∞} du e^{−(u−u₀)²} [F(u) − F(u_ϑ)],   (3.27)

where we have defined the scaled forms u = h₀√(N_e/2) and u₀ = μ₀√(N_e/2), with u_ϑ = ϑ√(N_e/2). For ϑ ≠ 0, it does not appear to be possible to evaluate this integral in terms of known functions, so we must resort to numerical or approximation methods in this case. For ϑ = 0, the integral can be evaluated, but we may also explicitly average equation 3.22 over the distribution of h₀ with variance 1/N_e, giving

f̄(t) = −(d/dt) erf(u₀ e^{−r(1−λ)t}) = (2/√π) r(1−λ) u₀ e^{−r(1−λ)t} exp(−u₀² e^{−2r(1−λ)t}),   (3.28)
where erf is the error function. We may then obtain τ̄_mfpt and the variance in the FPT, which we denote by σ̄²_fpt or ⟨σ²_fpt⟩, where we use this latter as a convenient shorthand for ⟨T₂(h₀)⟩ − ⟨T(h₀)⟩², where T₂(h₀) is the second moment of the FPT distribution for a definite value of h₀. The full results for ϑ = 0 are fairly simple but unenlightening. However, for large u₀, retaining just the first few terms in the expansions, they reduce to

τ̄_mfpt ≈ [1/(2r(1−λ))] [ln(2N_e μ₀²) + γ]   (3.29)

and

σ̄²_fpt ≈ [1/(r(1−λ))²] (π²/8).   (3.30)

These statistics averaged over x^1 for large N coincide with those for a definite value of h₀ for large enough N in equation 3.26 when we replace h₀ by μ₀. This reflects the fact that when N is large enough, h₀ can be replaced by its mean field form μ₀.
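The following sketch (illustrative parameters) integrates the density in equation 3.28 numerically and compares the resulting mean and variance with the reduced forms of equations 3.29 and 3.30:

```python
import numpy as np

rho, N_e, mu0 = 1.0, 10_000.0, 0.1   # rho = r(1 - lambda); illustrative values
u0 = mu0 * np.sqrt(N_e / 2)

t = np.linspace(0.0, 40.0 / rho, 400_001)
dt = t[1] - t[0]
fbar = (2 / np.sqrt(np.pi)) * rho * u0 * np.exp(-rho * t) \
       * np.exp(-u0**2 * np.exp(-2 * rho * t))          # equation 3.28

mean = np.sum(t * fbar) * dt                            # immediate absorptions: T = 0
var = np.sum(t**2 * fbar) * dt - mean**2
print(mean, (np.log(2 * N_e * mu0**2) + np.euler_gamma) / (2 * rho))   # vs eq. 3.29
print(var, np.pi**2 / (8 * rho**2))                                    # vs eq. 3.30
```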

4  Simple Synapses Satisfying an Eigenvector Constraint

We now construct two models of synaptic plasticity satisfying the requirement sᵀM = λsᵀ. In the first, we pick s to be an eigenvector of M_C, where M_C is a generalized form of the transition matrix M given in equation 2.5 for n = 2. In the second, we modify M so that it has as an eigenvector an arrangement of synaptic strengths that is uniformly spaced.

4.1  Modifying s

The simplest generalization of the stochastic updater synapse is one that expresses plasticity with fixed probability p regardless of its strength state (unless saturated at its upper or lower value). The matrices M^± in equation 2.4 then become the n × n matrices

(M⁺)_{ab} = (1 − p + pδ_{bn}) δ_{ab} + p δ_{a,b+1},   (4.1a)
(M⁻)_{ab} = (1 − p + pδ_{b1}) δ_{ab} + p δ_{a,b−1},   (4.1b)

where δ_{a,b+1} and δ_{a,b−1} refer to the lower and upper diagonals, respectively. The superposed matrix M = (1/2)(M⁺ + M⁻) can then be written as

M_C = I + (p/2) Θ_C,   (4.2)

where the matrix Θ_C is

(Θ_C)_{ab} = δ_{a,b+1} + δ_{a,b−1} − (2 − δ_{b1} − δ_{bn}) δ_{ab}.   (4.3)

We use the symbol C (for constant) because its defining off-diagonal elements are all the same constant. The spectrum of Θ_C is standard (e.g., Elliott, 2016a), so we just state its eigenvalues, expressed as the corresponding eigenvalues of M_C,

λ_k = 1 − p [1 − cos((k−1)π/n)],   k = 1, …, n,   (4.4)

and its orthonormal eigenvectors v_k with components v_{ka}, a = 1, …, n,

v_{1a} = 1/√n,   v_{ka} = √(2/n) cos[(k−1)(2a−1)π/(2n)],   k = 2, …, n.   (4.5)

These eigenvectors of Θ_C are of course also eigenvectors of M_C, with the eigenvalues λ_k, and as M_C is symmetric, its left and right eigenvectors are identical. The eigenvector with eigenvalue unity, corresponding to k = 1, is the equilibrium eigenvector. Defining the vector e = (1, …, 1)ᵀ, an n-dimensional vector, the equilibrium distribution is just π_∞ = e/n.
For a multistate synapse, it is standard to consider a uniformly spaced sequence of synaptic strengths. We define the vector s^L (L for linear) to have components

s^L_a = (2a − n − 1)/(n − 1),   (4.6)

which are uniformly spaced in [−1, +1]. Except for n = 2 and n = 3, however, s^L is not an eigenvector of M_C. We require instead an eigenvector of M_C whose components monotonically increase (with a change in sign if necessary) and are antisymmetrically arranged around zero. The requisite eigenvector is v₂, and we define the vector s^S (S for sigmoidal or sinusoidal) to have components

s^S_a = −cos[(2a−1)π/(2n)] / cos[π/(2n)].   (4.7)

For n = 2 and n = 3, s^S = s^L. Viewed from the middle of the strength range, for n > 3 this arrangement is sinusoidal, effecting saturation-like dynamics at the lower and upper ends of its range. In many respects, such dynamics may be considered to be more desirable than uniformly spaced strengths.

For this standard form of M, we therefore set s = s^S, and we have λ = λ₂ = 1 − p[1 − cos(π/n)]. We find that ⟨s²⟩ = 1/[2cos²(π/(2n))]. To compute the initial signal μ₀, we require π(0) = M⁺e/n, in which only the first and last components are modified compared to e/n. Since sᵀe = 0, we have the initial signal μ₀ = sᵀπ(0) = 2p/n. We note that because of the structure of π(0), the initial signal is 2p/n whether the strength vector is s^S or s^L.
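The following sketch (illustrative parameters) verifies numerically that s^S in equation 4.7 is indeed a left eigenvector of M_C with the eigenvalue λ₂ quoted above:

```python
import numpy as np

n, p = 10, 0.1
M = np.eye(n) * (1 - p)                       # M_C = I + (p/2) Theta_C
M[0, 0] = M[-1, -1] = 1 - p / 2
for a in range(n - 1):
    M[a + 1, a] += p / 2
    M[a, a + 1] += p / 2

a = np.arange(1, n + 1)
s_sin = -np.cos((2 * a - 1) * np.pi / (2 * n)) / np.cos(np.pi / (2 * n))  # eq. 4.7
lam2 = 1 - p * (1 - np.cos(np.pi / n))                                    # eq. 4.4, k = 2
print(np.max(np.abs(s_sin @ M - lam2 * s_sin)))   # ~1e-16: left eigenvector check
```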

We must verify that E[S_i(t)²] = ⟨s²⟩, independent of t. First, we write π(0) = e/n + pξ, where ξ is antisymmetric, ξ_a = −ξ_{n+1−a} for any a. We then observe that because Θ_C is symmetric under the simultaneous reversal of its rows and columns, if the vector ξ is antisymmetric, then so is the vector M_C ξ. Thus, the distribution of any synapse's strengths at time t can be written as π(t) = e/n + pξ(t) with ξ(t) antisymmetric. Then

E[S_i(t)²] = ∑_a s_a² π_a(t) = (1/n) ∑_a s_a² + p ∑_a s_a² ξ_a(t) = ⟨s²⟩,   (4.8)

because the components s_a² are symmetric, s_a² = s_{n+1−a}², while ξ(t) is antisymmetric. This confirms the stated result in equation 3.10 for this model.
Finally, we consider relaxing the requirement that s = s^S by examining the overlap between the eigenvectors of M_C and the linear strength vector s^L. Specifically, we compute the overlap v_kᵀs^L for k > 2 relative to that for k = 2, corresponding to v₂ ∝ s^S. We obtain

v_kᵀs^L / v₂ᵀs^L → 1/(k−1)²   for k even, the overlap vanishing for k odd,

where the limit is taken for n → ∞. In this limit, the maximum relative overlap occurs for k = 4, giving a factor of 1/9, or about 11%, and the total relative overlap over all k > 2 gives π²/8 − 1, or about 23%. These relatively small contributions to the expansion in equation 3.17 suggest that using s^L instead of the exact eigenvector s^S should incur an error of at most around 25%. In fact, we find that the error is typically much smaller.

4.2  Modifying M

Above we retained the standard form of M and modified the strength vector, setting s = s^S. Now we consider retaining s = s^L and instead modifying M. We write M^± = I + pΘ^±_Q, where p is the parameter that controls the overall probability that any given change in synaptic strength is expressed. Then M_Q = I + (p/2)Θ_Q, where we write Θ_Q = Θ⁺_Q + Θ⁻_Q. We require a matrix Θ_Q that treats potentiation and depression processes symmetrically; that has e as a right eigenvector so that the equilibrium distribution of synaptic strengths is uniform and that also has the vector s^L as a left eigenvector. Writing

(Θ⁺_Q)_{ab} = q_b (δ_{a,b+1} − δ_{ab}),   (4.9a)
(Θ⁻_Q)_{ab} = q_{b−1} (δ_{a,b−1} − δ_{ab}),   (4.9b)

where q₀ = q_n = 0, we see that the structures of Θ^±_Q ensure that potentiation and depression are treated symmetrically provided that q_a = q_{n−a}. The vector e is always a left eigenvector of Θ_Q, and the easiest way to ensure that it is also a right eigenvector is for Θ_Q to be symmetric, requiring the probability of an increase in strength from state a to match that of a decrease from state a + 1. The simplest nonconstant form for q_a is therefore quadratic, q_a ∝ a(n−a), and the symmetric treatment of potentiation and depression forces q_a = q_{n−a}, which this form respects. By setting

q_a = a(n−a)/(n−1),   a = 1, …, n−1,   (4.10)

for the off-diagonal elements, we may confirm that s^L is also an eigenvector of Θ_Q. The overall normalization is chosen so that Θ_Q = Θ_C for n = 2 and n = 3 since q_a = 1 for these special cases. The off-diagonal elements of Θ_Q are arranged quadratically, hence our use of the symbol Q (for quadratic), in contrast to Θ_C above.
Let the eigenvalues of Θ_Q be ω_k, k = 1, …, n, with associated unnormalized (but orthogonal) eigenvectors u_k having components u_{ka}. We write G_k(z) = ∑_a u_{ka} z^a for the generating function of these components and then derive an equation for G_k(z). Because of the natural boundaries at a = 0 and a = n+1, at which the hopping rates vanish, we may extend the sum defining G_k(z) over all a. After lengthy but straightforward algebra, the eigenvalue equation for the eigenvalues ω̃_k = (n−1)ω_k of the scaled matrix (n−1)Θ_Q can then be written as the differential equation

z(1−z)² G_k″ + (1−z)[(n−1)z − (n+1)] G_k′ + (n+1)(1−z)z^{−1} G_k = −ω̃_k G_k,   (4.11)

where primes denote differentiation with respect to z. Demanding a terminating power series solution determines the eigenvalues as ω̃_k = −k(k−1) and hence

ω_k = −k(k−1)/(n−1),   so that   λ_k = 1 − pk(k−1)/[2(n−1)],   k = 1, …, n,   (4.12)
and the corresponding power series solution is
formula
4.13
Clearly G₁(z) = z(1 + z + ⋯ + z^{n−1}), so that u₁ ∝ e, and explicitly evaluating G₂(z), we find u_{2a} ∝ 2a − n − 1, or u₂ ∝ s^L, so that s^L is indeed an eigenvector of Θ_Q, as advertised. Its eigenvalue is ω₂ = −2/(n−1), corresponding to λ₂ = 1 − p/(n−1) for M_Q.
Because M_Q is a stochastic matrix, its elements must be nonnegative. The diagonal elements of M_Q take the form 1 − (p/2)(q_{a−1} + q_a) for a = 1, …, n. We therefore require

p ≤ 4(n−1)/(n²−2) for n even,   p ≤ 4/(n+1) for n odd.   (4.14)

For any given choice of p, we must have n small enough, restricting the number of states of strength available to a synapse; conversely, for any given choice of n, p cannot exceed an upper limit. From a biological perspective, we can circumvent this bound by imposing a nonlinearity on the matrices M^± so that the diagonal elements 1 − pq_a are replaced by max(1 − pq_a, 0). This is equivalent to potentiation or depression being inevitable in certain synaptic strength states. Mathematically, determining the spectrum of Θ_Q with such a nonlinearity would in general be difficult, so for simplicity, we restrict to the above bound on p for convenience, but with the understanding that in principle, there is no obstacle to larger values.
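A minimal sketch (assuming the normalization of q_a in equation 4.10; illustrative parameters) that assembles M_Q and checks both the left-eigenvector property of s^L and the stochasticity bound:

```python
import numpy as np

def M_Q(n, p):
    """M_Q = I + (p/2) Theta_Q with off-diagonal elements q_a = a(n - a)/(n - 1)."""
    M = np.eye(n)
    for a in range(1, n):                 # hop between strength states a and a + 1
        q = a * (n - a) / (n - 1)
        M[a - 1, a - 1] -= p * q / 2
        M[a, a] -= p * q / 2
        M[a, a - 1] += p * q / 2
        M[a - 1, a] += p * q / 2
    return M

n, p = 8, 0.25
M = M_Q(n, p)
s_lin = np.linspace(-1.0, 1.0, n)
lam2 = 1 - p / (n - 1)
print(np.max(np.abs(s_lin @ M - lam2 * s_lin)))       # ~1e-16: left eigenvector
print(M.min() >= 0.0, np.allclose(M.sum(axis=0), 1))  # stochastic for small enough p
```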
To calculate the initial signal μ₀, we require

π(0) = M⁺ e/n = e/n + (p/n) Θ⁺_Q e.   (4.15)

Because (Θ⁺_Q e)_a = q_{a−1} − q_a = s^L_a, we have π(0) = e/n + (p/n)s^L, and so

μ₀ = sᵀπ(0) = (p/n) sᵀs^L,   (4.16)

since sᵀe = 0; for s = s^L, this gives μ₀ = p⟨(s^L)²⟩ = p(n+1)/[3(n−1)]. Unlike for M_C, here, for M_Q, μ₀ does depend on whether the strength vector is s^L or s^S. For s = s^S, μ₀ in equation 4.16 would become (p/n)s^{Sᵀ}s^L. The antisymmetry of π(0) − e/n also immediately establishes E[S_i(t)²] = ⟨s²⟩ by the same arguments as above and as required by equation 3.10.
Finally, we examine the relative overlap between the sinusoidal strength vector s^S and the normalized eigenvectors u_k/√(u_kᵀu_k), k ≠ 2, of this modified form of M, relative to the overlap with u₂ ∝ s^L. We cannot obtain useful expressions for general even k (for odd k the overlap vanishes), so we state results only for small even k, for which the overlap is greatest. For large n, we obtain
formula
For k = 4, the relative overlap is around 12%, for k = 6 it is under 0.5%, and all other values are negligible. Again, then, we expect our analytical results for s = s^L to provide good quantitative agreement with simulation results obtained using s = s^S.

4.3  Summary of Both Plasticity Models

In Table 1, we assemble for convenience the key quantities in the two models of synaptic plasticity above that satisfy the eigenvector condition sᵀM = λsᵀ. In Figure 1, we explicitly illustrate the key properties of the vectors s^L and s^S and the matrices Θ_C and Θ_Q for a particular choice of the number n of states of synaptic strength. The saturation-like behavior of s^S is apparent compared to s^L, although in practice, these two vectors are quite similar. The quadratic behavior of the off-diagonal elements of Θ_Q is transparent, showing that the expression of synaptic plasticity has greatest overall probability for synaptic strengths that are of intermediate sizes, while those at the extremes of the strength range have the lowest overall probability. In contrast, for Θ_C the probability of the expression of plasticity is independent of synaptic strength.

Table 1:
Summary of Key Quantities for the Two Models of Synaptic Plasticity Satisfying an Eigenvector Constraint.

Quantity              M = M_C, s = s^S          M = M_Q, s = s^L
λ = λ₂                1 − p[1 − cos(π/n)]       1 − p/(n−1)
⟨s²⟩ = sᵀs/n          1/[2cos²(π/(2n))]         (n+1)/[3(n−1)]
N_e = N/⟨s²⟩          2N cos²(π/(2n))           3N(n−1)/(n+1)
μ₀                    2p/n                      p(n+1)/[3(n−1)]
Restriction on p      none (p ≤ 1)              p ≤ p_max

Note: Here p_max refers to an upper limit on p arising from the nonnegativity of the elements of M_Q (see equation 4.14).

Figure 1:

Illustration of major features of synaptic strengths and transition matrices, for a particular choice of n. (A) Synaptic strengths s^L_a and s^S_a plotted against a. (B) Off-diagonal elements of Θ_C and Θ_Q, enumerated down the off-diagonal indexed by a.


5  μ(t) and σ²(t) for General M

Consider any symmetric stochastic matrix M that treats potentiation and depression processes symmetrically and has a complete set of orthonormal eigenvectors v_k with associated eigenvalues λ_k. Then because

E[S₁(t)] = sᵀ exp[rt(M − I)] π(0),   (5.1a)
E[S₁(t)S₂(t)] = (s⊗s)ᵀ exp[rt(M⊗M − I⊗I)] [π(0)⊗π(0)],   (5.1b)

the spectral decomposition of M allows us to write

μ(t) = ∑_k (sᵀv_k)(v_kᵀπ(0)) e^{−r(1−λ_k)t},   (5.2a)
σ²(t) = (1/N)⟨s²⟩ + (1 − 1/N) ∑_{k,l} (sᵀv_k)(v_kᵀπ(0))(sᵀv_l)(v_lᵀπ(0)) e^{−r(1−λ_k λ_l)t} − μ(t)².   (5.2b)

Using μ(t) and σ(t), we may write down the SNR. We now consider the two explicit forms for M above for general n, and then we may write down SNR memory lifetimes for a variety of models.

5.1  Results for M = M_Q

For this form of M, we have π(0) = e/n + (p/n)s^L, where both e and s^L are eigenvectors of M_Q. The sums over eigenvectors in equation 5.2 therefore collapse to just sums involving v₁ ∝ e and v₂ ∝ s^L for π(0), regardless of s. However, sᵀe = 0 always, since we assume that the vector of possible synaptic strengths is antisymmetric, while e is symmetric. Hence, the sums collapse to only the v₂ mode. We are then left with

μ(t) = μ₀ e^{−r(1−λ₂)t},   (5.3a)
σ²(t) = (1/N)⟨s²⟩ + (1 − 1/N) μ₀² e^{−r(1−λ₂²)t} − μ₀² e^{−2r(1−λ₂)t},   (5.3b)

for any s, where μ₀ = (p/n)sᵀs^L. We have written the expression for μ(t) in a form so that we transparently recover μ₀ in Table 1 as the initial memory signal when s = s^L. Strikingly, only a single eigenmode contributes to these statistics, regardless of the vector of possible synaptic strengths s. This eigenmode is, moreover, the most slowly decaying mode. This remarkable behavior is entirely due to the very special form of the synaptic configuration immediately after the storage of x^1.
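The SNR lifetime implied by equations 5.3 is easily obtained numerically; the sketch below (illustrative parameters, with quantities taken from Table 1) also compares it with the large-N form derived in section 5.3:

```python
import numpy as np
from scipy.optimize import brentq

N, n, p, r = 10_000, 10, 0.2, 1.0
lam2 = 1 - p / (n - 1)                    # lambda_2 for M_Q
s2 = (n + 1) / (3 * (n - 1))              # <s^2> for s = s^L
mu0 = p * s2                              # initial signal, equation 4.16

def snr(t):
    mu = mu0 * np.exp(-r * (1 - lam2) * t)
    var = (s2 / N + (1 - 1 / N) * mu0**2 * np.exp(-r * (1 - lam2**2) * t)
           - mu0**2 * np.exp(-2 * r * (1 - lam2) * t))     # equation 5.3b
    return mu / np.sqrt(var)

tau_snr = brentq(lambda t: snr(t) - 1.0, 1e-9, 1e6)
tau_large_N = np.log((N / s2) * mu0**2) / (2 * r * (1 - lam2))   # equation 5.5
print(tau_snr, tau_large_N)
```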

5.2  Results for M = M_C

The spectral decomposition of M_C is given explicitly in equations 4.4 and 4.5. Because π(0) = e/n + pξ, where ξ is not an eigenvector of M_C except for n = 2 and n = 3, the sums over eigenvectors in equation 5.2 do not collapse for general s. However, for the specific choice s = s^S, the sums do collapse down to just v₂, corresponding to λ₂. In general, however, since we are interested in memory lifetimes, we are interested specifically in the large-time behavior of μ(t) and σ²(t). We may therefore simplify by considering an approximation that includes just the most slowly decaying eigenmode, which also corresponds to the k = 2 mode and thus v₂. Asymptotically, this approximation becomes exact. We then obtain

μ(t) ≐ μ₀ e^{−r(1−λ₂)t},   (5.4a)
σ²(t) ≐ (1/N)⟨s²⟩ + (1 − 1/N) μ₀² e^{−r(1−λ₂²)t} − μ₀² e^{−2r(1−λ₂)t},   (5.4b)

where μ₀ = (sᵀv₂)(v₂ᵀπ(0)) and where we use the symbol “≐” to indicate that we have equality for s = s^S (exact equality for all times t) and asymptotic equality otherwise (asymptotic equality only at large times).

5.3  SNR Memory Lifetimes

We use these results to obtain SNR memory lifetimes for either choice of M and for either choice of s, giving four combinations. As τ_snr is the solution of μ(τ_snr)/σ(τ_snr) = 1, in general it must be obtained numerically, but for large N, we may approximate σ²(t) ≈ ⟨s²⟩/N. We write equations 5.3a and 5.4a in the common form μ(t) = μ₀ e^{−r(1−λ₂)t}, where μ₀ depends on the choice of M and s. We have exact equality in this equation for three combinations, for which the collapse to the v₂ mode is exact, and asymptotic equality for the remaining combination. We then obtain

τ_snr ≈ [1/(2r(1−λ₂))] ln(N_e μ₀²).   (5.5)

This result is identical, up to additive constants, to the asymptotic form for τ_mfpt in equation 3.26a for ϑ = 0 with h₀ replaced by μ₀, where μ₀ is exact for three combinations. In Table 2, we give the explicit results for τ_snr for all four combinations, in the full form for any