Abstract
The neuronal system exhibits the remarkable ability to dynamically store and organize incoming information into a web of memory representations (items), which is essential for the generation of complex behaviors. Central to memory function is that such memory items must be (1) discriminated from each other, (2) associated with each other, or (3) brought into a sequential order. However, how these three basic mechanisms are robustly implemented in an input-dependent manner by the underlying complex neuronal and synaptic dynamics is still unknown. Here, we develop a mathematical framework that directly links different synaptic mechanisms, which determine the neuronal and synaptic dynamics of the network, to the network's ability to implement the above mechanisms. Combining correlation-based synaptic plasticity and homeostatic synaptic scaling, we demonstrate that these mechanisms enable the reliable formation of sequences and associations between two memory items but lack the capability for discrimination. We show that this shortcoming can be removed by additionally considering inhibitory synaptic plasticity. Thus, the framework presented here provides a new, functionally motivated link between different known synaptic mechanisms that leads to the self-organization of fundamental memory mechanisms.
Author Summary
Higher-order animals are permanently exposed to a variety of environmental inputs that have to be processed and stored such that the animal can react appropriately. Thereby, the ongoing challenge for the neuronal system is to continuously store novel and meaningful stimuli and, dependent on their content, to integrate them into the existing web of knowledge or memories. The smallest organizational entity of such a web of memories is described by the functional relation of two interconnected memories: they can be either unrelated (discrimination), mutually related (association), or unidirectionally related (sequence). However, the neuronal and synaptic dynamics underlying the formation of such structures are mainly unknown. To investigate possible links between physiological mechanisms and the organization of memories, in this work, we develop a general mathematical framework enabling an analytical approach. Thereby, we show that the well-known mechanisms of synaptic plasticity and homeostatic scaling, in conjunction with inhibitory synaptic plasticity, enable the reliable formation of all basic relations between two memories. This work provides a further step in the understanding of the complex dynamics underlying the organization of knowledge in neural systems.
INTRODUCTION
Learning and memorizing various pieces of information from the environment are vital functions for the survival of living beings. In addition, the corresponding neuronal system has to learn the environmental relations between these different pieces. For this, the neuronal system has to form memory representations of the information and to organize them accordingly. However, the neuronal and synaptic dynamics determining the organization of these representations are widely unknown.
The synaptic-plasticity-and-memory hypothesis relates the formation of memory representations to the underlying neuronal and synaptic mechanisms (Martin, Grimwood, & Morris, 2000; Martin & Morris, 2002). Namely, a to-be-learned piece of information activates, via an environmental stimulus, a certain population of neurons, triggering synaptic plasticity. Synaptic plasticity, in turn, changes the weights of the synapses between the activated neurons such that these neurons become strongly interconnected and form a memory representation—a so-called Hebbian cell assembly (CA)—of the presented information (Hebb, 1949; Palm, 1981; Buzsaki, 2010; Palm, Knoblauch, Hauser, & Schütz, 2014). Besides the formation of a memory representation, the newly learned piece of information is also related to already stored information (Hebb, 1949; Wickelgren, 1999; Tse et al., 2007, 2011). Thereby, the relations or functional organizations between different memory representations can be organized in three different, fundamental ways: they can be unrelated (discrimination), mutually related (association), or unidirectionally related (sequence). However, although the link between the formation of a single memory representation and the underlying neuronal and synaptic mechanisms is already well established (Garagnani, Wennekers, & Pulvermüller, 2009; Tetzlaff, Kolodziejski, Timme, Tsodyks, & Wörgötter, 2013; Litwin-Kumar & Doiron, 2014; Zenke, Agnes, & Gerstner, 2015), it is largely unknown which mechanisms enable the self-organized formation of relations between memory representations.
In this study, we have developed the first theoretical framework that enables one to analyze the ability of diverse neuronal and synaptic mechanisms to form memory representations and, in addition, to form the different types of memory relations. Thereby, our analysis indicates that the interaction of correlation-based synaptic plasticity with homeostatic synaptic scaling is not sufficient to form all types of memory relations, although it enables the formation of individual memory representations (Tetzlaff et al., 2013; Tetzlaff, Dasgupta, Kulvicius, & Wörgötter, 2015). However, our analysis shows that, if the average level of inhibition within the memory representations is significantly lower than the average level in the remaining network, the neuronal system is able, on the one hand, to form memory representations and, on the other hand, to organize them into the fundamental types of memory relations in an input-dependent, self-organized manner.
Several theoretical studies (Tetzlaff et al., 2013; Litwin-Kumar & Doiron, 2014; Zenke et al., 2015; Tetzlaff et al., 2015; Chenkov, Sprekeler, & Kempter, 2017) investigated the formation of individual memory representations in neuronal systems, identifying correlation-based synaptic plasticity as an essential mechanism. In addition, homeostatic plasticity, such as synaptic scaling (Turrigiano, Leslie, Desai, Rutherford, & Nelson, 1998), is required to keep the system in an adequate dynamic regime (Dayan & Abbott, 2001; Tetzlaff, Kolodziejski, Timme, & Wörgötter, 2011; Zenke, Hennequin, & Gerstner, 2013). Further studies indicate that synaptic plasticity and homeostatic plasticity also yield the formation of sequences of representations (Chenkov et al., 2017; Lazar, Pipa, & Triesch, 2009; Tully, Lindén, Hennig, & Lansner, 2016). However, it remains unclear whether the interaction of synaptic and homeostatic plasticity also enables the formation of the further memory relations described above. Interestingly, several theoretical studies (Wickelgren, 1999; Palm, 1982; Byrne & Huyck, 2010) indicate that a neural system with the ability to form all described memory relations has an algorithmic advantage in processing the stored information. Furthermore, the neuronal dynamics resulting from interconnected memory representations match experimental results on the psychological (Romani, Pinkoviezky, Rubin, & Tsodyks, 2013) and single-neuron level (Griniasty, Tsodyks, & Amit, 1993; Amit, Brunel, & Tsodyks, 1994). However, these studies consider neural systems after completed learning; thus, it is unclear how neuronal systems form the required relations between memory representations in a self-organized manner.
We consider a neuronal network model with plastic excitatory connections, which are governed by the interaction of correlation-based and homeostatic plasticity. As already shown in previous studies, this interaction enables the self-organized formation of individual memory representations (Tetzlaff et al., 2013, 2015). Similar to these studies, we use methods from the field of nonlinear dynamics (Glendinning, 1994; Izhikevich, 2007) to derive the underlying mechanisms yielding the self-organized formation of relations between memory representations. Thus, we analyze the ability of the plastic network to form different types of relations between two memory representations—namely, discrimination, sequences, and association. Please note that this is a high-dimensional problem of order N² (given N neurons). Standard approaches to reduce this complexity, such as mean-field analysis, are not feasible, as they obliterate the different memory representations involved. Thus, we developed a new theoretical framework by considering the mean equilibrium states of the relevant system variables and by comparing them to constraints given for the different memory relations. Thereby, we map the constraints onto the long-term average activity levels of the neuronal populations involved, reducing the problem to a two-dimensional one, which can be analyzed graphically and analytically. With this framework, we optimized the parameters of the system and identified that correlation-based and homeostatic plasticity do not suffice to form all three types of memory relations. Instead, if the average inhibitory level within the memory representations is below the control level, memory representations can be formed, maintained, and related to each other. In addition, we show that the required state can also be reached in a self-organized, dynamic way by the interplay between excitatory and inhibitory synaptic plasticity. Thus, the results presented here provide a next step toward understanding the complex dynamics underlying the formation of memory relations in neuronal networks.
RESULTS
In our work, we analyze the ability of two neuronal populations p ∈ {1, 2} to become memory representations and, in parallel, to reliably build up different functional organizations such as discrimination, sequence, and association (Table 1). In general, the external input to population p should trigger synaptic changes within the population such that it becomes a memory representation of its specific input. Individual input events can have different amplitudes, durations, and probabilities of occurrence (Figure 1Ai); however, synaptic changes are slow compared with the presentation of single input events such that the average over all input events determines the formation of a memory representation (Figure 1Aii). Thus, throughout this study, we consider the average input stimulation a population receives, whereby a reduced number of input events and/or reduced amplitudes and shorter durations map to a lower average input (compare Figure 1A with Figure 1B).
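As a minimal illustration of this averaging (not the stimulation protocol of the paper; the function name and all numbers are assumptions), the time-averaged input of a train of rectangular events can be computed as the event amplitude times the expected fraction of time an event is present:

```python
def average_input(amplitude, duration, rate):
    """Time-averaged input of a train of rectangular input events.

    amplitude : amplitude of a single event (arbitrary units)
    duration  : duration of a single event (s)
    rate      : expected number of events per second
    """
    # Expected fraction of time during which an event is present
    # (assuming non-overlapping events for simplicity).
    active_fraction = min(rate * duration, 1.0)
    return amplitude * active_fraction

# Fewer, weaker, or shorter events all map onto a lower average input:
print(average_input(amplitude=1.0, duration=0.5, rate=1.0))   # 0.5
print(average_input(amplitude=0.5, duration=0.5, rate=1.0))   # 0.25
print(average_input(amplitude=1.0, duration=0.25, rate=0.5))  # 0.125
```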
| Memory representation | Functional organization | Abbreviation | Color code |
| --- | --- | --- | --- |
| ✗ | none | nm | grey |
| ✓ | discrimination | disc | blue |
| ✓ | sequence 12 | s12 | yellow |
| ✓ | sequence 21 | s21 | green |
| ✓ | association | asc | red |
| ✓ | bistable (various) | bs | pink |
Given two populations of neurons, dependent on the input properties, connections between the populations should also be altered to form the neuronal substrate underlying the diverse functional organizations described above. In accordance with the synaptic-plasticity-and-memory hypothesis (Martin et al., 2000; Hebb, 1949), we define a neuronal population as being a memory representation if its neurons are strongly interconnected. In other words, the average excitatory synaptic strength between all neurons within the population has to be larger than the average inhibitory synaptic strength. Thus, because of the dominant excitation, neuronal activity within the population will be amplified. We define the relation between two memory representations in a similar manner, based on the relation of excitation and inhibition between the corresponding neuronal populations: in general, if the average excitatory synaptic strength from one population to the other is larger than the average inhibitory synaptic strength, an increased level of activity in the former population triggers an increased activation in the latter. This can differ between the two directions such that, for instance, the net connection from population 1 to 2 can be excitatory while that from 2 to 1 is inhibitory. This case is defined as a sequence from 1 to 2. Similarly, an association is present if both connections are excitation-dominated, and a discrimination consists of both directions being zero or inhibition-dominated.
To analyze the self-organized formation of memory representations and their functional organization, we consider a plastic recurrent neuronal network model 𝒩 consisting of rate-coded neurons interconnected via plastic excitatory and static inhibitory connections (Figure 2A). Within the recurrent network, there are two distinct populations of neurons (p ∈ {1, 2}; black and yellow dots, respectively), within each of which the neurons receive the same external input (red layer ε). All remaining neurons are summarized as background neurons ℬ (blue) such that the neuronal network can be described as the interaction of three different neuronal populations.
Thus, an external input to populations 1 and 2 alters neural activities within the corresponding populations and, furthermore, triggers changes in the corresponding synaptic weights (see Figure 2C for an example). In the first phase, all neurons of the network receive a noisy input (Figure 2C, panel i) such that neural activities (panel ii) and synaptic weights (panels iii and iv) are at base level. At t = 10, both populations 1 and 2 receive a strong external input (panel i). In more detail, each neuron in a specific population receives an input from 10 input neurons, each modeled by its own Ornstein-Uhlenbeck process (grey lines; yellow and black lines indicate the averages). The mean of these processes is the same for all input neurons transmitting to one population (given by the population-specific stimulation parameter; see Methods). After a brief transition phase, the system reaches a new equilibrium state. Here, for both populations the intrapopulation synapses are stronger than the average inhibitory synaptic weights (panel iii), indicating the formation of two memory representations. Furthermore, the excitatory synapses connecting both populations are adapted and also become stronger than the average inhibition level (panel iv). This implies that both populations or memory representations are strongly linked with each other; thus, an association has been formed. Therefore, given a certain stimulus, the equilibrium state of the synaptic weights determines the functional organization of the corresponding memory representations.
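For illustration, such an input could be generated as a set of independent Ornstein-Uhlenbeck processes sharing a common mean, one per input neuron. The sketch below uses Euler-Maruyama integration; all parameter values are assumptions, not those used in the paper.

```python
import numpy as np

def ou_inputs(n_inputs=10, mean=1.0, tau=0.05, sigma=0.2, dt=1e-3, t_total=1.0, seed=0):
    """Generate n_inputs independent Ornstein-Uhlenbeck processes with a common mean."""
    rng = np.random.default_rng(seed)
    steps = int(t_total / dt)
    x = np.full(n_inputs, mean)
    trace = np.empty((steps, n_inputs))
    for t in range(steps):
        # Euler-Maruyama step: relaxation toward the mean plus Gaussian noise
        x += (mean - x) / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_inputs)
        trace[t] = x
    return trace

inputs_pop1 = ou_inputs(mean=0.8)   # all 10 input neurons of population 1 share this mean
inputs_pop2 = ou_inputs(mean=0.4)   # population 2 receives a weaker average input
print(inputs_pop1.mean(), inputs_pop2.mean())
```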
Memory Representation and Functional Organization
As the impact of single synapses on the overall network dynamics is small, we will consider in the following the equilibrium states of the average synaptic weights of inter- and intrapopulation synapses (indicated by 〈x〉). Thus, these synaptic states determine whether a neuronal population is a memory representation, and how several of these representations are functionally organized (discrimination, sequence, or association).
Given that both neuronal populations p ∈ {1, 2} are stable memory representations (Figure 3Ai, left panel, white area), they can form different functional organizations (discrimination, sequences, or association). Thereby, the average interpopulation synaptic weights (p, p′ ∈ {1, 2}, p ≠ p′) define the different functional organizations, dependent on their relation to the average inhibitory synaptic weight strength (Table 1). Thus, for two interconnected memories 1 and 2, we can define four different functional organizations with different weight-dependent conditions (Figure 3Ai, right panel; summarized in the classification sketch after the list):
- ▪ Discrimination: both average interpopulation synaptic weights are weaker than the inhibitory weights (blue, disc; Equation 3).
- ▪ Sequence 21: the average interpopulation synaptic weight from memory 1 to memory 2 is stronger than the inhibitory weights, while the interpopulation synaptic weight from 2 to 1 is weaker (green, s21; Equation 4).
- ▪ Sequence 12: the interpopulation synaptic weight from memory 1 to memory 2 is weaker than the inhibitory weights, while the interpopulation synaptic weight from 2 to 1 is stronger (yellow, s12; Equation 5).
- ▪ Association: both average interpopulation synaptic weights are stronger than the average inhibitory synaptic weight (red, asc; Equation 6).
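The four weight-dependent conditions above can be collected into a small classification routine. The following sketch is illustrative only; the function and variable names are assumptions, and the weights stand for the average weights 〈x〉 used in the text.

```python
def classify_organization(w_1to2, w_2to1, w_inh):
    """Classify the functional organization of two memory representations from the
    average excitatory interpopulation weights and the average inhibitory weight."""
    if w_1to2 < w_inh and w_2to1 < w_inh:
        return "disc"   # discrimination (Equation 3)
    if w_1to2 > w_inh and w_2to1 < w_inh:
        return "s21"    # sequence from memory 1 to memory 2 (Equation 4)
    if w_1to2 < w_inh and w_2to1 > w_inh:
        return "s12"    # sequence from memory 2 to memory 1 (Equation 5)
    return "asc"        # association (Equation 6)

print(classify_organization(w_1to2=0.7, w_2to1=0.2, w_inh=0.5))  # 's21'
```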
Full-network analysis.
To assess under which input conditions the plastic neuronal network is able to form memory representations and diverse functional organizations, the whole set of differential equations, which represents a mathematical problem of order n² (with n neurons), has to be solved numerically for each input condition (i.e., for each pair of external inputs). Thereby, each simulation runs until the system reaches its equilibrium state. In this equilibrium state, excitatory synaptic weights are analyzed and compared with the inhibitory synaptic weights (Figure 3Aii, right panels), enabling a classification according to the functional organizations (Figure 3Aiii). This classification can be mapped to the inputs, providing the resulting functional organization dependent on the specific external inputs (Figure 3Di,ii). Note that, for better comparison with the population model (see the next section), the results (Figure 3Di) are mapped to the population input space defined below (Figure 3Dii, Equation 7). The whole analysis is computationally expensive and, furthermore, it does not provide additional insights into the relation between the synaptic dynamics and the ability to form diverse functional organizations. Thus, in the following, we provide a different approach to solve this complex, high-dimensional mathematical problem.
Population model at equilibrium.
Activity-dependent constraints of memory representation and functional organization.
As we consider the interaction of two interconnected neuronal populations 1 and 2, we obtain four distinct activity regimes enabling the formation of two memory representations (Figure 3Ci). These regimes are defined by all possible combinations of the lower (𝔉low) and upper (𝔉up) activity regimes in both dimensions 𝔉1 and 𝔉2. In other words, these activity regimes are separated by the no-memory phase (nm) in both dimensions (Figure 3Ci, grey regimes).
- Discrimination.
When both activities 𝔉1 and 𝔉2 are below the respective separatrices S21 and S12 (Figure 3Ci, blue area), the system is in a discriminatory functional organization.
- Sequence.
The system establishes a sequence from memory 1 to memory 2, when the activity 𝔉1 is above the corresponding separatrix S21 while the activity 𝔉2 stays below separatrix S12 (Figure 3Ci, green area, s21) and vice versa for a sequence from memory 2 to memory 1 (yellow area, s12).
- Association.
Both memories are organized into an association when both neuronal activities 𝔉1 and 𝔉2 are above their respective separatrices (Figure 3Ci, red area).
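Mapped onto the activity space, the same classification can be written as follows. This is a sketch only, with an illustrative no-memory threshold and constant separatrix values; in the model, the separatrices are derived from the weight-dependent conditions.

```python
def classify_from_activities(F1, F2, S21, S12, F_min):
    """Classify the functional organization from the equilibrium population
    activities F1, F2 relative to the separatrices S21 and S12 (Figure 3Ci).
    Activities below F_min are taken to indicate that no memory
    representation is formed (illustrative simplification)."""
    if F1 < F_min or F2 < F_min:
        return "nm"    # no memory representation
    if F1 <= S21 and F2 <= S12:
        return "disc"  # discrimination
    if F1 > S21 and F2 <= S12:
        return "s21"   # sequence from memory 1 to memory 2
    if F1 <= S21 and F2 > S12:
        return "s12"   # sequence from memory 2 to memory 1
    return "asc"       # association

# Example with illustrative values (separatrices and threshold are assumptions):
print(classify_from_activities(F1=0.7, F2=0.4, S21=0.6, S12=0.6, F_min=0.2))  # 's21'
```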
Functional organizations in activity-space.
To determine which functional organization the system forms for a given external input, we have to calculate the input-dependent average population activities 𝔉1 and 𝔉2 in the equilibrium state. For this, for each pair of inputs ℑ1 and ℑ2, we derive the fixed-point condition for each population (Equation 43) dependent on the activity of the other population (Figure 3Cii; 𝔉1FP(𝔉2), black curve; 𝔉2FP(𝔉1), yellow curve). The intersection between both fixed-point conditions (𝔉1FP = 𝔉2FP) determines the fixed point of the whole system (green dot). The relation of the corresponding activities 𝔉1 and 𝔉2 at this intersection to the separatrices determines the functional organization (Figure 3Ciii). This can be expressed in the input space (Figure 3Diii). Thus, the interaction of synaptic plasticity and scaling enables the formation of sequences in both directions and of associations. Furthermore, there is a regime of input values in which no memory representation is formed.
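Numerically, the fixed point of the coupled system can be found by intersecting the two fixed-point conditions. The generic sketch below uses scipy's fsolve; the two functions F1_fp and F2_fp are illustrative stand-ins for the actual conditions (Equation 43), and all parameters are assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

def F1_fp(F2, I1):
    """Illustrative stand-in for the fixed-point condition of population 1."""
    return 1.0 / (1.0 + np.exp(-8.0 * (0.6 * F2 + I1 - 0.5)))

def F2_fp(F1, I2):
    """Illustrative stand-in for the fixed-point condition of population 2."""
    return 1.0 / (1.0 + np.exp(-8.0 * (0.6 * F1 + I2 - 0.5)))

def system_fixed_point(I1, I2, guess=(0.5, 0.5)):
    """Intersection of both fixed-point conditions, i.e., the equilibrium
    activities (F1, F2) of the coupled two-population system."""
    def residual(F):
        F1, F2 = F
        return [F1 - F1_fp(F2, I1), F2 - F2_fp(F1, I2)]
    return fsolve(residual, guess)

F1_eq, F2_eq = system_fixed_point(I1=0.3, I2=0.1)
print(F1_eq, F2_eq)  # equilibrium activities; classify them as in the previous sketch
```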
Comparing the analytical results from the population model (Figure 3Diii) with the results from the full-network analysis (Figure 3Dii) indicates that the population model matches the full network quite well. In particular, the inherent property of a system to form different functional organizations is precisely predicted by the population model. Remarkably, the mapping of the weight-dependent conditions onto the activity space (Figure 3Ci) alone provides sufficient information to assess the possible organizations of memories for a given system, without requiring the evaluation of the system's fixed points.
Synaptic-plasticity-induced formation of associations.
Proof. Assume that the upper bound of the lower activity regime 𝔉low is smaller than the lower bound 𝔉min of the synaptic-plasticity-dominated activity regime (Equation 18). It then follows that the lower activity regime lies entirely outside the plasticity-dominated regime, that is, 𝔉low ∉ Asp.
Thus, to assure that the activity regime 𝔉low cannot be reached by the system, we have to change the mapping between neuronal activity and inputs such that no reasonable input pair ℑ1, ℑ2 yields population activities within 𝔉low. This activity-input mapping is mainly determined by the inflexion point ϵ of the activity function (Equations 33 and 43). Here, we specify the inflexion point in units of nϵ (ϵ = nϵ umax), with nϵ being the number of maximally active presynaptic neurons (with maximally strong synapses; see Methods). First, we analyze the resulting population activities 𝔉p and corresponding functional organizations for different nϵ given no external inputs (ℑ1 = ℑ2 = 0; Figure 4Bi). For nϵ > 12, the population activities are below 𝔉min, which triggers up-scaling, yielding a scaling-induced formation of an association. Please note that the system analyzed beforehand (Figure 3) has nϵ = 20. For nϵ < 9, neurons are too easily excited such that activities are close to their maximum independent of the input, again yielding the functional organization of association. For 9 ≤ nϵ ≤ 16, the system is in the no-memory state. Thus, to prevent the input-independent association of two interconnected neuronal populations, we consider the inflexion point to be in the regime 9 ≤ nϵ ≤ 16. Thereby, nϵopt = 12 yields activities near the minimum activity level 𝔉min defined above. The same analysis for maximal external input stimulation (ℑ1 = ℑ2 = 1; Figure 4Bii) shows that for nϵopt = 12 the system can nearly reach its maximal firing rate of 𝔉p = 1 such that the whole activity space [𝔉min, 1] can be reached by the system. Please note that for nϵ > 24, the system cannot reach high activity levels, and for nϵ > 27 the system is not able to form memory representations, although it is maximally stimulated by the external input. Furthermore, as can be expected from Figure 4A, the value of nϵopt depends on the target firing rate (Figure 4Biii).
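The effect of the inflexion point on the activity-input mapping can be illustrated with a simple sigmoid. The functional form and all numbers below are assumptions for illustration (the actual activity function is given by Equation 28); the sketch only demonstrates how shifting nϵ makes a population harder or easier to excite.

```python
import numpy as np

F_max = 1.0            # firing rate normalized to F_max

def activity(u_in_umax, n_eps, beta_umax=0.7):
    """Sigmoid with inflexion point at n_eps (both measured in units of u_max);
    beta_umax is the steepness expressed per u_max (assumed value)."""
    return F_max / (1.0 + np.exp(beta_umax * (n_eps - u_in_umax)))

u_rec = 12.0           # assumed network-internal drive, in units of u_max
for n_eps in (8, 12, 20):
    print(n_eps, round(activity(u_rec, n_eps), 3))
# Larger n_eps -> population hard to excite (activity may fall below F_min,
# triggering up-scaling); smaller n_eps -> population saturates regardless of input.
```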
In the following (Figure 4C–E), we will consider nϵ = 12, which implies that an association is only formed by synaptic dynamics dominated by correlation-based synaptic plasticity. The activity regime yielding scaling-dominated learning (hatched area in Figure 4Ci) is theoretically possible; however, the adapted activity-input mapping assures that this regime cannot be reached for given external inputs (Figure 4Cii and D). In the resulting system, low inputs ℑ1, ℑ2 lead to a no-memory state (grey), while in a small regime sequences are formed (yellow and green). Thereby, the sequence is formed from the population receiving the stronger input to the population receiving the weaker input. If both inputs are strong, an association between the memory representations is built (red). Note that there is a small bistable regime with two long-term equilibrium states, both being an association (pink; see two exemplary cross sections in Figure 4E).
Parameter-dependency of functional organizations.
After optimizing the activity-input mapping via nϵ such that the formation of diverse functional organizations is dominated by synaptic plasticity, we will in the following analyze which kinds of functional organizations can be formed by the system dependent on the different system parameters. Thereby, we will focus on the target activity and the average level of inhibition.
Theorem 2 Correlation-based synaptic plasticity in combination with a postsynaptic-activity-dependent synaptic scaling term does not enable the formation of functionally unrelated memories (discrimination).
Thus, a neuronal system with correlation-based synaptic plasticity and postsynaptic-activity-dependent synaptic scaling is not able to form two excitatory connections between two memory representations that are both weaker than the average inhibition. This analysis reveals that applying such a learning rule globally to the neuronal network dynamics is not sufficient to distinguish the two different processes of memory formation and discrimination. Thus, it seems that this learning rule has to be augmented by at least one additional adaptive process that decouples these two processes.
Local Inhibition Enables the Functional Organization of Discrimination
The ability to form a discriminatory relation between memory representations is functionally very important for a neuronal system, as it implies that not all memories, which are anatomically connected with each other, have to be functionally connected with each other. Thus, to overcome the lack of discriminatory functional organizations of memories, we have to “decouple” the discrimination condition from the memory condition (see above).
For this, we introduce an inhibitory synaptic weight strength within the neuronal populations that differs from the inhibitory synaptic weight strength of all other connections (Figure 6A). In other words, the inhibition entering the discrimination condition now differs from the inhibition entering the memory condition. To quantify the influence of this new parameter on the potential to form two discriminated memory representations, we calculate the size of the activity space leading to discrimination (Figure 6B, left). In general, if inhibition within the populations is weaker than for all other connections, the system can form memories that are in a discrimination state (Figure 6B, right). Please note that the other functional organizations are still maintained such that all different types can be obtained (Figure 6C). Interestingly, the state of discrimination is formed if the inputs presented to both populations are weak. A weak input means that, among other things, the probability of occurrence is low, which implies that the chance of both inputs being presented simultaneously is very low (Figure 1). In other words, if both inputs are only coincidentally presented simultaneously, the neuronal system should discriminate their memory representations. Vice versa, if the inputs are often shown together (as for high input levels), the system should associate the representations, as the interplay between synaptic plasticity, scaling, and inhibition does (Figure 6C).
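In terms of the inhibitory weight matrix, this "local inhibition" simply means that entries within each population are set below the global level. A minimal construction sketch follows; the population sizes are taken from the Methods, while the index assignment and all weight values are illustrative assumptions.

```python
import numpy as np

n, n_P = 100, 10                    # network size and population size (see Methods)
pop1 = np.arange(0, n_P)            # indices of population 1 (illustrative assignment)
pop2 = np.arange(n_P, 2 * n_P)      # indices of population 2

w_inh_global = 0.5                  # inhibition for all other connections (illustrative)
w_inh_local  = 0.25                 # weaker inhibition within the populations (illustrative)

W_inh = np.full((n, n), w_inh_global)
for pop in (pop1, pop2):
    W_inh[np.ix_(pop, pop)] = w_inh_local   # intra-population inhibition below control level
np.fill_diagonal(W_inh, 0.0)                # no self-connections

print(W_inh[np.ix_(pop1, pop1)].mean(), W_inh[np.ix_(pop1, pop2)].mean())
```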
According to these conditions, the inhibitory synaptic weights converge either to
- ▪
an up-state (θu), if the sum of neuronal activities is smaller than its threshold (∑F < θF) and/or the difference in the pre- and postsynaptic activities is above its tolerance range (ΔF > δF), or
- ▪
a down-state (θd), if the sum of neuronal activities is larger than its threshold (∑F > θF) and the difference in the pre- and postsynaptic activities is smaller than its tolerance range (ΔF < δF).
This type of inhibitory synaptic plasticity (Equation 44), together with plastic excitatory synapses governed by the interaction of correlation-based synaptic and homeostatic plasticity, enables the reliable formation of memory representations and, in addition, provides the system with the ability to form all basic functional organizations. In other words, our analyses indicate that a self-organized neural network can form all types of functional organizations if the interaction of synaptic plasticity and scaling is complemented by further adaptive processes.
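The two convergence conditions can be summarized as a target-selection rule per inhibitory synapse. The sketch below is a minimal reading of Equation 44; the relaxation dynamics toward the selected target, the use of the absolute activity difference, and all parameter values are assumptions.

```python
def inhibitory_target(F_pre, F_post, theta_F, delta_F, theta_u, theta_d):
    """Select the state an inhibitory weight converges to (sketch of Equation 44).

    Up-state  : sum of activities below threshold OR activity difference
                outside the tolerance range.
    Down-state: sum of activities above threshold AND activity difference
                within the tolerance range.
    """
    sum_F = F_pre + F_post
    diff  = abs(F_pre - F_post)
    if sum_F < theta_F or diff > delta_F:
        return theta_u        # up-state: strong inhibition
    if sum_F > theta_F and diff < delta_F:
        return theta_d        # down-state: weak inhibition
    return None               # boundary case, not specified by the rule

def step_inhibitory_weight(w, F_pre, F_post, theta_F, delta_F,
                           theta_u, theta_d, rate=0.1, dt=1.0):
    """Relax the inhibitory weight toward its selected target (assumed dynamics)."""
    target = inhibitory_target(F_pre, F_post, theta_F, delta_F, theta_u, theta_d)
    if target is None:
        return w
    return w + rate * (target - w) * dt
```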
Generalization of the Interaction Between Multiple Memory Representations
The analyses presented above focused on the functional organization between two memory representations. However, given the results from these analyses, we can infer which types of functional organizations can be formed between three memory representations (Figure 8A). For this, we have to consider the space of possible functional organizations for different levels of activities 𝔉p and 𝔉p′, p, p′ ∈ {1, 2, 3}, between two neuronal populations (e.g., resulting from the interaction between synaptic plasticity and scaling and different levels of inhibition, as shown in Figure 6Cii, left). This space implies that if two populations are in a specific functional organization, the activity levels of the corresponding populations are restricted to specific intervals that, in turn, constrain the functional organization between these populations and a third one. In other words, if we constrain the activity level of population 1 by the external input to, without loss of generality, the interval 𝔉1 ∈ [0.65, 0.8], the spaces of functional organizations between populations 1 and 2 (Figure 8Ai) and between populations 1 and 3 (Figure 8Aii) are restricted to specific regimes such that only a subset of functional organizations can be realized. As long as we do not constrain the activity levels of populations 2 and 3, these two populations are able to form all types of functional organizations (Figure 8Aii). If we also constrain the activity level of, for example, the second population (𝔉2 ∈ [0.5, 0.75]), the functional organization between populations 1 and 2 is basically specified (association) and the space of functional organizations between populations 2 and 3 is limited. If, in addition, the activity level of the third population is constrained (𝔉3 ∈ [0.75, 1] in Figure 8C and 𝔉3 ∈ [0.3, 0.55] in D), all three possible interactions between the three memory representations are defined. By the procedure described above, we can infer which functional organizations between three memory representations can be reliably formed (see Figure 8E for the examples given in C and D). Numerical simulations are required to confirm these results. However, we expect that, by applying procedures as described above, the framework developed here can be extended to investigate the ability of diverse plasticity mechanisms to form different types of webs of memories.
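As a rough illustration of this reasoning, one can sample the constrained activity intervals and tabulate which pairwise organizations remain reachable. The sketch below uses a simplified classifier with a constant separatrix and the activity intervals quoted above; it is purely illustrative and not the analysis performed in the paper.

```python
import itertools
import numpy as np

def classify(Fa, Fb, S=0.6, F_min=0.2):
    """Simplified pairwise classifier (constant separatrix S, illustrative only)."""
    if Fa < F_min or Fb < F_min:
        return "nm"
    if Fa <= S and Fb <= S:
        return "disc"
    if Fa > S and Fb <= S:
        return "seq a->b"
    if Fa <= S and Fb > S:
        return "seq b->a"
    return "asc"

# Constrain the activity of each population to an interval and enumerate which
# pairwise organizations remain reachable (intervals as in the example above).
intervals = {1: (0.65, 0.80), 2: (0.50, 0.75), 3: (0.75, 1.00)}
for a, b in itertools.combinations(intervals, 2):
    samples = itertools.product(np.linspace(*intervals[a], 5), np.linspace(*intervals[b], 5))
    reachable = {classify(Fa, Fb) for Fa, Fb in samples}
    print(f"populations {a}-{b}:", reachable)
```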
DISCUSSION
General Framework
In the present work, we have developed a mathematical framework to investigate the ability of adaptive neural networks to form diverse functional organizations of interconnected memories in a dynamic, input-dependent manner. In contrast to previous studies focusing only on a subset of possible functional organizations (Chenkov et al., 2017; Tully et al., 2016; Griniasty et al., 1993; Abbott & Blum, 1996; Leibold & Kempter, 2006; Herrera-Aguilar, Larralde, & Aldana, 2012), we consider here all possible organizations between two memory representations. Thereby, we define the functional organizations dependent on the relation between the excitatory and inhibitory synaptic weights of the neuronal network. By introducing a population description, we are able to transfer the resulting high-dimensional problem to a low-dimensional problem considering average synaptic weights and activities of the neuronal populations involved. In addition, by considering the long-term equilibrium dynamics, we could further reduce the system complexity, with the input stimulation being a system parameter. Finally, we could map the resulting dynamics onto the two-dimensional activity space, which is sufficient to solve this complex problem of memory interactions (Figure 3). Thus, we gain an easily accessible understanding of the possible states the system can reach as well as of the underlying principles arising from the considered plasticity mechanisms and their limitations. Given the generality of the complete framework, it can be used broadly to investigate the effect of diverse plasticity mechanisms on the formation of and interaction between memory representations.
Analysis of the Interplay Between Synaptic Plasticity and Synaptic Scaling
Given this general mathematical framework, we analyzed the effect of the interplay of correlation-based synaptic plasticity with homeostatic synaptic scaling on the formation of functional organizations of memory. This type of interplay is a quite general formulation of synaptic dynamics (Tetzlaff et al., 2011; Abbott & Nelson, 2000), which is sufficient to form individual memory representations (Tetzlaff et al., 2013, 2015). We have shown that these types of mechanisms provide a neural network with the ability to form several types of functional organization of memory representations, such as sequences and associations (Figures 4 and 5). Furthermore, our method shows that correlation-based plasticity with scaling does not enable the formation of two stable memory representations in a discriminated state.
This shortcoming is due to the purely correlation-based formulation of synaptic plasticity, which mathematically couples the condition for memory formation with the condition for discrimination. Interestingly, the correlation-independent dynamics triggered by synaptic scaling are not sufficient to decouple these conditions. However, these dynamics enable the formation of sequences, providing a further functional role of synaptic scaling besides synaptic stabilization (Tetzlaff et al., 2011; Zenke et al., 2013; Zenke, Gerstner, & Ganguli, 2017) and homeostatic regulation of neuronal activities (Turrigiano & Nelson, 2004; Abbott & Nelson, 2000).
On the basis of our results, we expect that similar mathematical models of synaptic dynamics, which consist of correlation-based plasticity and a homeostatic term dependent on the postsynaptic activity level (e.g., Oja’s rule (Oja, 1982) or BCM rule (Bienenstock, Cooper, & Munro, 1982)) are also not able to form memory representations in a discriminated state. Thus, a further factor determining the synaptic dynamics of the network is required to enable the functional organization of discrimination.
Local Variations of Inhibition
We have shown that local variations in the level of inhibition could serve as such a factor enabling the discrimination between memory representations and other functional organizations (Figure 6). Thereby, the average inhibitory synaptic strength within a memory representation has to be weaker than all other inhibitory synaptic weights. This is in contrast to the idea of an inhibition, which balances the strong excitation within interconnected groups of neurons (Litwin-Kumar & Doiron, 2014; Vogels, Sprekeler, Zenke, Clopath, & Gerstner, 2011). However, despite the local differences in the balance of inhibition and excitation, the network-wide levels of excitation and inhibition can still be in a balanced state (van Vreeswijk & Sompolinsky, 1998; Denève & Machens, 2016). Furthermore, this type of inhibitory weight structure could emerge from an anti-Hebbian-like inhibitory plasticity rule as discovered in the memory-related hippocampus (Woodin, Ganguly, & Poo, 2003).
Possible Extensions of Synaptic Dynamics
Besides inhibition, other mechanisms could be the additional factor yielding all functional organizations. For instance, spike-timing-dependent plasticity (STDP; Gerstner, Kempter, van Hemmen, & Wagner, 1996; Bi & Poo, 1998; Markram, Gerstner, & Sjöström, 2011) adapts the synaptic weights according to the correlation of pre- and postsynaptic spiking dynamics. By considering detailed models of this mechanism (van Rossum, Bi, & Turrigiano, 2000; Song, Miller, & Abbott, 2000; Shouval, Bear, & Cooper, 2002; Graupner & Brunel, 2012), the influence of time-dependent properties of the stimuli, as correlations, on the formation of functional organizations of multiple memory representations can be investigated. Previous studies already indicated that STDP together with other plasticity mechanisms can reliably form memory representations (Litwin-Kumar & Doiron, 2014; Zenke et al., 2015); however, the interaction between such memory representations and the ability to form diverse functional organizations given the mechanism of STDP remains unclear. For a more detailed understanding, the theoretical framework presented in this study could be used. For this, the framework has to be adapted such that it takes the dynamics of STDP and spikes into account. This requires differential equations describing the dynamics of populations of neurons of a certain size, given synaptic plasticity. In a recent study (Schwalger, Deger, & Gerstner, 2017), the authors derive a mathematical model of populations of a certain number of spiking neurons, which also considers the dynamics of short-term synaptic plasticity (Schmutz, Gerstner, & Schwalger, 2018). However, a mathematical model describing the dynamics of a population of spiking neurons (of fixed size) with STDP, which would be essential to extend the here-presented framework by time-dependent properties of stimuli, is still missing.
This methodical gap could be at least partially circumvented by extending the rate-dependent synaptic plasticity model. For instance, spike-timing-triggered LTD (Bi & Poo, 1998; van Rossum et al., 2000), in contrast to firing-rate-dependent LTD (Bienenstock et al., 1982; Sjöström, Turrigiano, & Nelson, 2001; Malenka & Bear, 2004), could serve as a measure of uncorrelated spike trains, decoupling the memory condition from the discrimination condition. In more detail, the LTP part of STDP can be interpreted as a measure of the probability that the pre- and postsynaptic neurons fire correlated spikes during a small time window (Dayan & Abbott, 2001), described in the rate model used here by the correlation-based LTP term, whereas the amount of uncorrelated spike pairs triggering LTD could be described in this rate model by the difference between the pre- and postsynaptic firing rates. We expect that considering such a difference term would be sufficient to enable the formation of memory representations (by correlation-based LTP) in a discrimination state (by non-correlation-based LTD). This has to be verified in subsequent studies.
However, given a population model incorporating detailed dynamics of correlation-based synaptic plasticity, the framework presented here can be extended to investigate the influence of more complex stimulus protocols on the formation of diverse functional organizations. In a more realistic scenario, different stimuli could be presented in a probabilistic manner determined by different sources (hidden causes). The detection of several independent hidden sources from a stream of stimuli is a complex problem that humans or cell cultures are able to solve (Mesgarani & Chang, 2012; Isomura, Kotani, & Jimbo, 2015). Several theoretical studies, mainly focusing on neuronal networks with a feed-forward structure, indicate that the dynamics of synaptic plasticity enable the solving of such types of problems (Dayan & Abbott, 2001; Bell & Sejnowski, 1995; Hyvärinen & Oja, 2000; Isomura & Toyoizumi, 2016; Pehlevan, Mohan, & Chklovskii, 2017; Isomura & Toyoizumi, 2018). Thus, we suppose that plastic feed-forward and recurrent connections enable the neuronal system to detect hidden sources via the feed-forward dynamics and to form memory representations of these via the recurrent synapses. Given relations between these sources, the synapses connecting the memory representations could represent the strength of these relations. Of course, if the number of sources increases, due to the increase in combinations of different functional organizations (Figure 8), the theoretical framework derived here has to be adjusted. For instance, by measuring the mutual information or transfer entropy (Brunel & Nadal, 1998; MacKay, 2003; Vicente, Wibral, Lindner, & Pipa, 2011) between representations, conditioned on the input, one could extract the formed functional organization given different stimulus protocols.
Beyond the scope of reliably forming memory representations of environmental stimuli, it is still unclear how to maintain these representations for a long duration (Dudai, 2004, 2012). Similar to previous studies (Tetzlaff et al., 2013, 2015), here too the interplay between synaptic plasticity and scaling yields a slow decay of the synaptic weight structure after withdrawing the stimuli (see Supporting Information Figure S1 for different rates of synaptic dynamics; Herpich & Tetzlaff, 2019). However, as long as the average synaptic weight remains larger than control, the corresponding neurons and synapses resemble a memory representation of the stimulus. Prolonging the lifetime of such a memory can be achieved by diverse mechanisms of consolidation, such as synaptic consolidation (Frey & Morris, 1997; Clopath, Ziegler, Vasilaki, Büsing, & Gerstner, 2008; Redondo & Morris, 2011; Li, Kulvicius, & Tetzlaff, 2016) or sleep-induced consolidation (Tetzlaff et al., 2013; Diekelmann & Born, 2010; Nere, Hashmi, Cirelli, & Tononi, 2013). We expect that similar mechanisms can also consolidate the intermemory synaptic weights, maintaining the whole functional organization. However, under which conditions memory representations as well as their interconnections are consolidated requires further experimental and theoretical studies.
Please note that there is a multitude of studies indicating the existence of additional factors influencing synaptic plasticity. For instance, neuromodulatory transmitters, such as acetylcholine, noradrenaline, serotonin, and dopamine, can serve as a third factor (Frémaux & Gerstner, 2016; Gu, 2002). However, with the mathematical framework developed in this study, it is now possible to investigate in more detail the effect of such factors on the formation, maintenance, and organization of memory representations in neuronal circuits. Furthermore, given the understanding of the organization between two memory representations, now one can extend this framework to investigate the self-organized formation of webs of memories and the emergence of complex behavior.
MATERIALS AND METHODS
Neuronal Network Model
We consider a recurrent neuronal network model consisting of a set 𝒩 of n rate-coded neurons (𝒩 ≔ {1, …, n}; Figure 2A, dots). The neurons are interconnected via an all-to-all connectivity for the excitatory as well as for the inhibitory connections. Note that, if not stated otherwise, the inhibitory connections are constant while the excitatory synapses are plastic. Within the recurrent network, we define two distinct subsets of neurons 𝒫1 and 𝒫2 as neural population 1 (black dots) and neural population 2 (yellow dots). For simplicity, both neural populations have the same number of neurons (|𝒫1| = |𝒫2| = n𝒫). Furthermore, we assume no overlap between both neuronal populations (𝒫1 ∩ 𝒫2 = ∅). Neurons that are not part of neuronal population 𝒫1 or 𝒫2 are summarized as background neurons (ℬ ≔ 𝒩 ∖ (𝒫1 ∪ 𝒫2)), with size |ℬ| = n − 2n𝒫. Thus, we can describe the neuronal network model as the interaction of three different neuronal populations 𝒫 ∈ {𝒫1, 𝒫2, ℬ} ≕ 𝔅. All neurons i of a neuronal population p receive a population-p-specific input stimulation defined by Fpex via n𝒫ex different neurons k connected via constant excitatory synapses ωex. All these input neurons k are summarized into a population-p-specific input εp ∈ ε. Furthermore, each single neuron k ∈ εp of the external input layer provides an external input stimulus of average strength Ikex ≔ Fkex ωex onto the interconnected neurons of the neuronal network 𝒩, where Fkex is defined by the population-p-specific stimulation parameter Fpex (see Equation 37). Note that we set n𝒫ex equal to n𝒫 such that the input populations have the same order of magnitude as the populations themselves.
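For concreteness, the population structure just described can be represented by index sets over the n neurons. A minimal sketch follows; the sizes are taken from the parameter table, while the 0-based index assignment is an implementation choice.

```python
import numpy as np

n, n_P = 100, 10                 # total number of neurons and population size
neurons = np.arange(n)           # the set N = {0, ..., n-1} (0-based here)
P1 = neurons[:n_P]               # population P1
P2 = neurons[n_P:2 * n_P]        # population P2 (disjoint from P1)
B  = neurons[2 * n_P:]           # background neurons, |B| = n - 2*n_P = 80

n_P_ex = n_P                     # one external input population per memory population,
                                 # with as many input neurons as the population itself
assert len(set(P1) & set(P2)) == 0 and len(B) == n - 2 * n_P
```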
Neuron model.
Synaptic plasticity and synaptic scaling.
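The excitatory synapses are governed by the interaction of correlation-based (Hebbian) plasticity and postsynaptic-activity-dependent synaptic scaling. A common form of such a rule, following Tetzlaff et al. (2011, 2013), is sketched below; the exact equation of the present model is not reproduced here, so the form and all parameter values should be read as assumptions.

```python
def dw_dt(w, F_pre, F_post, F_T, mu=1.0, gamma=0.1):
    """Excitatory weight change: Hebbian correlation term plus postsynaptic
    synaptic scaling (form following Tetzlaff et al., 2011, 2013; parameter
    values are illustrative)."""
    hebb    = mu * F_pre * F_post            # correlation-based LTP
    scaling = gamma * (F_T - F_post) * w**2  # up-/down-scaling toward target rate F_T
    return hebb + scaling

# Euler integration of a single synapse (illustrative):
w, dt = 0.1, 1e-3
for _ in range(10000):
    w += dt * dw_dt(w, F_pre=0.8, F_post=0.8, F_T=0.05)
print(w)  # the weight approaches the value where Hebbian growth and down-scaling balance
```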
Constraining parameters of the activity function.
In the following, we give an interpretation of parameters such as the inflexion point and the steepness of the sigmoidally shaped activity function (Equation 28). To specify the inflexion point of the neuronal activities, we define umax as the maximal membrane potential that a single incoming synapse (ωij) can evoke in the postsynaptic neuron j. Therefore, we set the pre- and postsynaptic neuronal activities to the maximal activity level (Fj = Fi = Fmax) and, by this, calculate the fixed synaptic weight, using Equation 32, and define it as the maximal synaptic weight. Equation 27 then specifies the maximal network-internally (∑Ikex = 0) evoked membrane potential. Using this property of umax, we interpret the inflexion point ϵ of a neuron i as the number nϵ of such maximally wired presynaptic neurons, which leads to ϵ = nϵ umax. For the determination of the precise value nϵ = 12, see the Results section. To specify the steepness of the neuronal activity function, we have to consider the maximal and minimal membrane potential that can be evoked and choose the steepness parameter β according to two constraints: (1) the activity at the minimal membrane potential has to be larger than the target firing rate FT to prevent unstable weight dynamics, and (2) at the maximal evoked membrane potential the neurons have to reach the maximal firing rate Fmax. One steepness parameter that fulfills these two conditions is β = 0.00035 mV−1 for all neurons.
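The two constraints on the steepness can be checked numerically. The sketch below uses the stated values β = 0.00035 mV−1 and Fmax = 100 Hz; the sigmoidal form of the activity function, the target rate, and the minimal/maximal membrane potentials are illustrative assumptions.

```python
import numpy as np

F_max = 100.0      # Hz (from the text)
beta  = 0.00035    # 1/mV (from the text)
F_T   = 5.0        # Hz, assumed target rate

def activity(u, eps):
    """Assumed sigmoidal activity function with inflexion point eps (cf. Equation 28)."""
    return F_max / (1.0 + np.exp(beta * (eps - u)))

def check_steepness(u_min, u_max_total, eps):
    """Check the two constraints on the steepness:
    (1) activity at the minimal potential exceeds the target rate F_T,
    (2) activity at the maximal potential (approximately) reaches F_max."""
    c1 = activity(u_min, eps) > F_T
    c2 = activity(u_max_total, eps) > 0.99 * F_max
    return c1, c2

# u_min, u_max_total, and eps below are illustrative placeholders, not model values.
print(check_steepness(u_min=18000.0, u_max_total=40000.0, eps=24000.0))  # (True, True)
```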
Normalized Neuronal Network Model
Numerical Simulation and Stimulation Protocol
| Network model | | Neuron model | | Syn. plast. & syn. scal. | |
| --- | --- | --- | --- | --- | --- |
| parameter | value | parameter | value | parameter | value |
| n | 100 neurons | τ | 1 s−1 | μ | |
| n𝒫 | 10 neurons | R | 0.1 nΩ | γ | |
| nℬ | 80 neurons | Fmax | 100 Hz | FT | 0.05 Fmax |
| n𝒫ex | 10 neurons | β | 0.00035 mV−1 | | |
Analysis of the System’s Equilibrium State
Inhibitory Synaptic Plasticity
In the last section of this study, we introduce inhibitory synaptic plasticity. This specific plasticity rule depends on a threshold (θF) for the sum of the pre- and postsynaptic activity levels (∑F) and on a tolerance range (δF) for the difference in the pre- and postsynaptic firing rates (ΔF), leading the inhibitory synaptic weight to converge either to an up-state (θu) or a down-state (θd). The synaptic weight converges to:
- •
an up-state (θu), if the sum of neuronal activities is smaller than its threshold (∑F < θF) and/or the difference in the pre- and postsynaptic activities exceeds its tolerance range (ΔF > δF), or
- •
a down-state (θd), if the sum of neuronal activities is greater than its threshold (∑F > θF) and the difference in the pre- and postsynaptic activities is smaller than its tolerance range (ΔF < δF).
AUTHOR CONTRIBUTIONS
Juliane Herpich: Conceptualization; Formal analysis; Investigation; Methodology; Visualization; Writing - Original Draft. Christian Tetzlaff: Conceptualization; Funding acquisition; Methodology; Project administration; Supervision; Validation; Writing - Review & Editing.
FUNDING INFORMATION
Christian Tetzlaff, H2020 Future and Emerging Technologies (http://dx.doi.org/10.13039/100010664), Award ID: 732266. Christian Tetzlaff, Deutsche Forschungsgemeinschaft (http://dx.doi.org/10.13039/501100001659), Award ID: SFB-1286, Project C1.
ACKNOWLEDGMENTS
The authors thank the International Max Planck Research School for Physics of Biological and Complex Systems, Niedersächsisches Vorab, and the University of Göttingen for a stipend to Juliane Herpich.
TECHNICAL TERMS
- Synaptic plasticity:
General term for all kinds of biological mechanisms that adapt the weights of synapses; they often depend on neuronal activities.
- Cell assembly:
A group of neurons being strongly interconnected and essentially active together.
- Correlation-based synaptic plasticity:
Synaptic plasticity mechanisms adapting synaptic weights depending on the correlation of the pre- and postsynaptic neuronal activities.
- Homeostatic plasticity:
Synaptic plasticity mechanism adapting the synaptic weights such that neuronal systems maintain a desired average activity level.
- Synaptic weights:
The average transmission efficacy of a synapse quantified as a single number, which can be adapted by synaptic plasticity.
REFERENCES
Author notes
Competing Interests: The authors have declared that no competing interests exist.