Mild traumatic brain injury (mTBI) presents a significant health concern with potential persisting deficits that can last decades. Although a growing body of literature improves our understanding of the brain network response and corresponding underlying cellular alterations after injury, the effects of cellular disruptions on local circuitry after mTBI are poorly understood. Our group recently reported how mTBI in neuronal networks affects the functional wiring of neural circuits and how neuronal inactivation influences the synchrony of coupled microcircuits. Here, we utilized a computational neural network model to investigate the circuit-level effects of N-methyl D-aspartate receptor dysfunction. The initial increase in activity of injured neurons spread to downstream neurons, but this increase was partially reduced by restructuring the network with spike-timing-dependent plasticity. As a model of network-based learning, we also investigated how injury alters pattern acquisition, recall, and maintenance of a conditioned response to a stimulus. Although pattern acquisition and maintenance were impaired in injured networks, the greatest deficits arose in recall of previously trained patterns. These results demonstrate how one specific mechanism of cellular-level damage in mTBI affects the overall function of a neural network and point to the importance of reversing cellular-level changes to recover important properties of learning and memory in a microcircuit.
Over recent years, the number of reported traumatic brain injuries (TBI) requiring medical attention increased 54% from 1.6 million annually in 2006 to 2.5 million in 2014 (Cancelliere, Coronado, Taylor, & Xu, 2017; Taylor, Bell, Breiding, & Xu, 2017). Reasons for this increase include greater awareness of the consequences and symptoms of TBI, as well as better detection and diagnosis criteria (Bazarian et al., 2018; Jeter et al., 2013). Mild TBI (mTBI) is the most pervasive form of traumatic brain injury, comprising 75% to 80% of emergency room TBI patients (Bazarian et al., 2005; Rutland-Brown, Langlois, Thomas, & Xi, 2006). While most patients show full recovery, approximately 15% of mTBI patients experience persistent cognitive deficits and behavioral changes from even a single injury (DeKosky & Asken, 2017; Gavett, Stern, & McKee, 2011; Johnson et al., 2013).
The increase in incidence coincides with a growth in studies emphasizing the importance of understanding subtle differences produced by mTBI, or concussion (Hawryluk & Manley, 2015; Mullally, 2017). The proximal event in each mTBI is either a primary impact or impulsive loading condition in which the head accelerates or decelerates quickly and inertial forces deform the brain tissue. The primary event is followed by acute and secondary injury sequelae, which range from early synaptic changes and metabolic effects to persisting inflammation (Kumar & Loane, 2012). One acute synaptic change is altered conductive properties of the N-methyl D-aspartate receptor (NMDAR), an ionotropic glutamate receptor (Paoletti, Bellone, & Zhou, 2013). NMDARs comprise several subunits that give rise to the functional character of individual receptors (Paoletti et al., 2013). In the adult brain, the two most important subunits are GluN2A and GluN2B, which are specifically associated with neurological pathologies (Paoletti et al., 2013).
In the context of TBI, initial studies showed how NMDARs responded to stretch injury by modifying the current-voltage relationship (Zhang, Rzigalinski, Ellis, & Satin, 1996). Other work has confirmed that the GluN2B subunit of the NMDAR confers this mechanical sensitivity, at least in part through the coupling of the receptor to the neuronal cytoskeleton (Singh et al., 2012). A primary consequence of the mechanical forces during impact is an alteration in the binding affinity of the magnesium ion within the channel pore of the NMDAR. As a result, the partial loss of the Mg block after mechanical injury leads to an acute phase in which ion flux through the NMDARs is enhanced, creating a condition where the normal balance in signaling across the receptor is altered (Zhang et al., 1996). In the most damaging form, sustained firing patterns within neurons could lead to excess glutamatergic-receptor activation, overactivation of the neuron, or cell death due to excitotoxicity (Lau & Tymianski, 2010; Werner & Engelhard, 2007).
Although this is a known injury mechanism of mTBI, how NMDAR damage contributes to functional impairments in neural circuitry is not well defined. By isolating individual cellular mechanisms and measuring properties that are inaccessible experimentally, computational methods provide an opportunity to examine many consequences of traumatic injury. Neural networks use simplified neuron activity models to assess the interactions that lead to higher-level processing (Izhikevich, 2004, 2006). Recent studies have analyzed the role that plasticity mechanisms may play after deafferentation, which can produce waves of activity resembling posttraumatic epilepsy, as well as the effect of neurodegeneration on rhythmic oscillations (Schumm, Gabrieli, & Meaney, 2020; Volman, Bazhenov, & Sejnowski, 2011). Collectively, these studies examined higher-level, dynamical descriptions of network activity but did not explore changes in the learning capacity of the network. Other work has found that biological neural networks are dynamic and respond to new stimuli by altering their connectivity pattern (Buzsáki & Draguhn, 2004; Ismail, Fatemi, & Johnston, 2017; Rajasethupathy, Ferenczi, & Deisseroth, 2016). Learning new activation patterns is a key feature of such networks, yet it remains relatively unexplored in models of the effects of disease on microcircuit function. Although NMDAR-dependent mechanisms of modifying neural circuitry have been characterized, the effect of alterations to NMDAR properties on developing new patterns in networks remains unclear (Bliss & Collingridge, 1993; Citri & Malenka, 2008; Kleim & Jones, 2008; Turrigiano, 2012). One group assessed pattern separation in a model of dendritic atrophy in the dentate gyrus, but this model did not include plasticity-dependent remodeling of the circuitry (Chavlis, Petrantonakis, & Poirazi, 2017).
Since NMDARs are both important for memory and susceptible to traumatic injury, learning in networks is likely impaired after injury (Paoletti et al., 2013; Ruppin & Reggia, 1995). However, these effects have not been systematically investigated in computational neural networks previously.
Here we used a model of neural activity to determine the effects of NMDAR injury on circuit function and structure. We simulate the effects of mTBI by incorporating pathophysiological properties of the NMDAR—namely, the partial loss of the magnesium receptor block. We show that mild injury immediately increases activation of injured neurons, an effect that propagates to nearby uninjured neurons in the circuit. Spike-timing-dependent plasticity mitigates the functional effects of damage by isolating injured circuitry from the rest of the network. Finally, we assessed the ability of networks to acquire and recall conditioned responses to a stimulus after injury. Injury after training was the most detrimental to learning, causing large deficits in the ability to recall trained patterns. Mild traumatic damage during training or extinction also impaired network capacity to acquire new patterns or retain learned responses, respectively. In total, we show that the integrity of the NMDAR is important to the baseline function and learning capacities of small-scale neural circuits.
2 Materials and Methods
2.1 Computational Model
2.2 NMDAR Modeling and Simulating Injury
Stretch injury in vitro alters the Mg block of the mechanically sensitive NMDAR-NR2B receptors (Patel, Ventre, Geddes-Klein, Singh, & Meaney, 2014; Singh et al., 2012; Zhang et al., 1996). To model this reduced affinity, we lowered the local concentration of Mg ions in the dendrites of injured excitatory neurons, mimicking experimental data of stretch injury (Patel et al., 2014; Zhang et al., 1996). Our initial and injured concentrations were 2 mM and 0.01 mM for “normal” and “injured” receptors, respectively (Jahr & Stevens, 1990b; Zhang et al., 1996). This altered the I-V relationship of these receptors from baseline (see Figure 1C). AMPA receptors retained greater peak amplitude over both NMDAR subtypes, but injury elevated the overall response of GluN2B-containing NMDAR (see Figure 1D). This resulted in an NMDAR-N2B dominant charge transfer, the discrete integral of the ionic flux through the receptor, at injured synapses (see Figure 1E).
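The effect of lowering the Mg concentration on the NMDAR I-V relationship can be illustrated with the voltage-dependent Mg block formalism of Jahr and Stevens (1990). The following sketch is illustrative only (Python rather than the C implementation used in this study, with a unit conductance and a simple linear driving force); it shows how dropping Mg from 2 mM to 0.01 mM removes most of the block at resting potential:

```python
import math

def mg_block(v_mV, mg_mM):
    """Fraction of NMDAR conductance unblocked by Mg2+ at membrane
    potential v_mV (Jahr & Stevens, 1990 formalism)."""
    return 1.0 / (1.0 + (mg_mM / 3.57) * math.exp(-0.062 * v_mV))

def nmdar_current(v_mV, g_max, mg_mM, e_rev_mV=0.0):
    """NMDAR current with the Mg block applied to a linear driving
    force; negative values denote inward (depolarizing) current."""
    return g_max * mg_block(v_mV, mg_mM) * (v_mV - e_rev_mV)

# At resting potential, the "injured" Mg level (0.01 mM) leaves the
# channel largely unblocked, while the "normal" level (2 mM) blocks it.
healthy = mg_block(-65.0, 2.0)
injured = mg_block(-65.0, 0.01)
```

With these values, the injured receptor passes substantially more inward current at hyperpolarized potentials, consistent with the altered I-V curve in Figure 1C.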
2.3 Fundamental Functional and Structural Analysis
To assess the impact of NMDAR-mediated damage on network activity, we measured the average firing rate of every neuron in the network over 6-minute simulation periods. To assess network structure, we computed the aggregate input and output strengths of each neuron and normalized each by its maximum.
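A minimal sketch of these two measures follows; the spike counts and weight matrix here are hypothetical stand-ins for simulation output, not data from our model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, duration_s = 100, 360  # 6-minute simulation period

# Hypothetical spike counts per neuron (Poisson with a 5 Hz mean rate).
spike_counts = rng.poisson(5.0 * duration_s, n_neurons)

# Average firing rate per neuron: spikes / simulated seconds.
rates_hz = spike_counts / duration_s

# Hypothetical weighted connectivity: W[i, j] = strength of synapse i -> j.
W = rng.random((n_neurons, n_neurons))
np.fill_diagonal(W, 0.0)  # no self-connections

# Aggregate input/output strength per neuron, each normalized by its maximum.
in_strength = W.sum(axis=0)
out_strength = W.sum(axis=1)
in_norm = in_strength / in_strength.max()
out_norm = out_strength / out_strength.max()
```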
2.4 Training and Testing a Conditioned Response to External Stimulation
The ability to learn a response from repeated stimulus is a key function of biological neuronal networks. While the specific ways in which networks develop and store complex patterns are unclear, we have some indications about the potential outcomes and mechanisms for developing these patterns. STDP has long been touted as the primary mechanism for Hebbian learning, where paired responses increase the strength of the underlying connection (Bi & Poo, 1998; Song et al., 2000). As a result, output neurons develop a more consistent response given a specific stimulus. There are also data that suggest how neurons learn to respond to a given stimulus. For instance, neurons of the visual cortex develop increased firing to patterns in the visual field, and auditory cortex neurons develop similar responses to tonal patterns (Bracci, Ritchie, & de Beeck, 2017; Connor & Knierim, 2017; Plack, Barker, & Hall, 2014; Victor, Conte, & Chubb, 2017). We took these two known features of learning to develop an unsupervised learning paradigm in which stimulation causes a learned increase in downstream activity. Although researchers have used many methods to understand potential learning mechanisms in the brain, our method makes no assumptions about the underlying circuitry and operates at the neuron scale (Izhikevich, 2006; Moser, Rowland, & Moser, 2015; Rebola, Carta, & Mulle, 2017; Richards & Frankland, 2017; Rolls, 2018). It is also computationally simple enough to apply and assess rapidly in networks of different topologies, unlike past algorithms (Izhikevich, 2006).
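The pair-based STDP rule referenced above can be sketched as follows; the amplitudes and time constant are common illustrative values, not necessarily those used in our model:

```python
import math

# Exponential pair-based STDP (after Bi & Poo, 1998; Song et al., 2000).
# A_PLUS, A_MINUS, and TAU_MS are illustrative parameter values.
A_PLUS, A_MINUS, TAU_MS = 0.01, 0.012, 20.0

def stdp_dw(dt_ms):
    """Weight change for a pre/post spike pair.

    dt_ms = t_post - t_pre: positive (pre fires before post)
    potentiates the synapse; negative depresses it. The magnitude
    decays exponentially with the timing difference."""
    if dt_ms >= 0:
        return A_PLUS * math.exp(-dt_ms / TAU_MS)
    return -A_MINUS * math.exp(dt_ms / TAU_MS)
```

Because potentiation requires the presynaptic spike to precede the postsynaptic one, repeated pairing of a stimulus with a downstream response strengthens exactly the causal connections, which is the basis of the conditioning paradigm below.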
To summarize, our overall learning paradigm consisted of several steps. During training, the network acquired a conditioned response to a patterned input. To test recall, we recorded the response to the input stimulus and compared it to baseline (before training). To evaluate the maintenance of trained patterns, we considered an extinction protocol during which the network adapted with random noise and no training stimulus. We then tested again to determine whether the conditioned response remained. We consider each of these steps in further detail next.
To develop a conditioned response to periodic input within our networks, we initialized each network and allowed the connectivity and activity to stabilize with STDP and random 1 Hz gamma-distributed external noise for 2 hours. We then selected a group of 10 excitatory input neurons that were stimulated simultaneously at 1 Hz in addition to background noisy firing. We primarily used this single-input protocol, but in a subset of simulations, we also tested multiple inputs by training the network on three unique patterns interleaved with 333 msec between each stimulation. The simulation ran for an additional 2 hours to let the network connectivity ingrain learned responses to the exogenous stimulation. Altering the number of input neurons had no effect on our results.
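The stimulation schedule can be sketched as below. This is an illustrative Python sketch, not our C implementation; in particular, the gamma shape parameter is an assumption, as the full noise parameters are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def noise_times(rate_hz, duration_s, shape=2.0):
    """Event times with gamma-distributed inter-event intervals whose
    mean is 1/rate_hz (shape parameter is an illustrative assumption)."""
    mean_interval = 1.0 / rate_hz
    intervals = rng.gamma(shape, mean_interval / shape,
                          size=int(2 * rate_hz * duration_s))
    times = np.cumsum(intervals)
    return times[times < duration_s]

def pattern_times(duration_s, n_patterns=3, offset_s=0.333):
    """Three interleaved 1 Hz training patterns, 333 msec apart."""
    base = np.arange(0.0, duration_s, 1.0)  # one stimulation per second
    return [base + k * offset_s for k in range(n_patterns)]
```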
After training with the periodic stimulus, we assessed whether neurons in the network had developed conditioned responses. For this testing period, we used the network topology after training and again applied the 1 Hz stimulation to the same input neurons for 360 independent trials (6 minutes of simulation activity). We tracked the 100 millisecond (msec) period of activity after each stimulus trial to capture both on-target and off-target responses of the network to stimulation. We normalized the conditioned activity by the initial response to input-neuron stimulation before training. This normalization accounted for neurons that had generally high activity rates that were not a result of training. Because this approach is unsupervised, there were no assumptions about the amount of activation or the specific neurons that would respond to stimulation. Therefore, we defined the neurons with the greatest increase in activity after training as output neurons, accounting for 4% of the overall excitatory network. The output neurons constitute the on-target response of the network to stimulation. All other excitatory neurons were categorized as a nonresponsive hidden layer, where a response to stimulation is considered off-target (see Figure 4A). Output metrics included the relative change of output and hidden-layer neuron activation in the training epoch. We also measured the signal-to-noise ratio as the ratio of the on-target response (i.e., output neuron activation) to the off-target response (i.e., hidden-layer activation).
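The output-neuron selection and signal-to-noise computation described above can be sketched as follows, with hypothetical per-neuron response counts standing in for simulation output:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spikes in the 100 msec post-stimulus window, per excitatory
# neuron, before and after training (illustrative stand-in data).
n_exc = 800
pre = rng.poisson(2.0, n_exc) + 1          # +1 avoids division by zero
post = pre + rng.poisson(1.0, n_exc)
post[:32] += 20                            # a trained subset responds strongly

# Normalize the post-training response by the pre-training response,
# so generally active neurons are not mistaken for trained ones.
gain = post / pre

# Output neurons = top 4% of excitatory neurons by normalized gain;
# all remaining excitatory neurons form the nonresponsive hidden layer.
n_out = int(0.04 * n_exc)
order = np.argsort(gain)[::-1]
output_idx = order[:n_out]
hidden_idx = order[n_out:]

# Signal-to-noise ratio: on-target (output) over off-target (hidden) response.
snr = gain[output_idx].mean() / gain[hidden_idx].mean()
```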
After training, we wanted to assess maintenance of learned responses in the absence of the training stimulus. While the previous testing protocol indicates whether we were able to encode and retrieve a response to input stimulation, it does not determine if that learned response depends on constant application of the training stimulus or is robust and persistent. To test whether the trained networks retain the conditioned input-output relationship after training, we simulated an extinction period by running an additional 2 hours of simulation time with 1 Hz random noise but without the training stimulus. After the extinction period, we tested the networks again with 1 Hz stimulation to input neurons and compared output and hidden-layer activity relative to baseline.
2.5 Injury during Learning: Testing Pattern Recall, Acquisition, and Maintenance
To test the effect of injury on the ability of the network to encode, recall, and maintain a conditioned response, we implemented injury at specific steps in the learning paradigm. Injury occurred in the hidden-layer neurons (not input or output neurons) by altering the Mg concentration in the incoming synaptic connections of selected cells. We limited injury to hidden-layer neurons to investigate signal propagation through the hidden layer and mitigate the confounding effects of directly damaging output neurons. We tested two levels of injury (25% or 50% of hidden-layer neurons) as compared to control (0% injury).
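The injury assignment can be sketched as follows, reusing the Mg concentrations from the injury model; the per-neuron map is an illustrative structure, not our C implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

MG_NORMAL_MM, MG_INJURED_MM = 2.0, 0.01  # from the NMDAR injury model

def injure_hidden(hidden_idx, fraction):
    """Return a neuron -> Mg concentration map with a random fraction of
    hidden-layer neurons switched to the injured value. Input and output
    neurons are excluded by construction (only hidden indices are passed)."""
    mg = {i: MG_NORMAL_MM for i in hidden_idx}
    n_injured = int(round(fraction * len(hidden_idx)))
    for i in rng.choice(hidden_idx, size=n_injured, replace=False):
        mg[int(i)] = MG_INJURED_MM
    return mg

# 25% injury level over a hypothetical hidden layer of 100 neurons.
mg_map = injure_hidden(list(range(100)), 0.25)
```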
Injury paradigm 1: Injury during testing to evaluate recall. To evaluate the ability of the network to recall previously learned patterns, we trained the network as in the initial learning paradigm but injured cells during assessment. (See Figure 4.)
Injury paradigm 2: Injury during training to evaluate acquisition. To test the pattern acquisition of damaged networks, we injured and trained them with input stimulation for 2 hours. To test, we replaced damaged receptors, restoring all cells to healthy function. While such a manipulation is not possible in vitro or in vivo, we sought to isolate the effect of injury during training only. (See Figure 6.)
Injury paradigm 3: Injury during extinction to evaluate maintenance. We assessed the ability of the network to retain trained patterns after a period of extinction without stimulus. We injured the networks after training and assessed any alterations in the total recall of the network after 2 hours. (See Figure 7.)
2.6 Statistical Analysis and Tools
Statistical testing included one-way ANOVA with Tukey-Kramer post hoc for comparison between injury levels or testing periods of the learning paradigm. For comparison between injured and uninjured subpopulations of our networks for structural network analysis, we used two-sample t-tests with Bonferroni correction for multiple comparisons. The model was implemented in the C programming language with network setup and analysis performed in Matlab (MathWorks). All simulations and analysis were conducted on an AMD Ryzen 7 1700X processor with 32 GB of system memory.
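For illustration, the core of these comparisons can be sketched by hand in Python (a one-way ANOVA F statistic and Bonferroni correction; the Tukey-Kramer post hoc step is omitted for brevity, and in practice we used standard Matlab routines):

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    groups = [np.asarray(g, float) for g in groups]
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

def bonferroni(p_values):
    """Bonferroni-corrected p-values: multiply by the number of
    comparisons and cap at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]
```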
3 Results
3.1 Effect of NMDAR Injury on Network Dynamics
In response to mechanical trauma, the NMDAR in neurons exhibits a partial loss of the magnesium block (Patel et al., 2014; Zhang et al., 1996). Accordingly, we simulated the effect of mechanical trauma with a partial loss of this receptor Mg block in a fraction of the neurons in the network. As expected, the network activity rate immediately increased when either 25% or 50% of the excitatory neurons in the network were injured (see Figure 2).
We next isolated each population within our network (excitatory uninjured, excitatory injured, and inhibitory) and assessed activity after injury. A higher activity rate was associated with injured neurons (69.41 ± 9.89 Hz and 103.28 ± 3.85 Hz at the 25% and 50% injury levels) relative to the uninjured excitatory neurons (16.64 ± 2.64 Hz and 24.92 ± 1.69 Hz at 25% and 50% injury). This difference is apparent in the raster plots of raw activity (see Figure 2A). Due to recurrent connectivity with the injured subpopulation, both uninjured excitatory and inhibitory populations also had elevated firing rates (see Figure 2B). That is, the increased firing of excitatory neurons with damaged NMDAR elevates the excitatory drive onto both uninjured excitatory and inhibitory neurons, thereby increasing the firing rates of those populations as well.
After STDP for 2 hours after injury, all neuronal populations showed a significant reduction in their average firing rates at both injury levels (69.41 ± 9.89 Hz and 103.28 ± 3.85 Hz to 49.74 ± 2.59 Hz and 64.33 ± 2.48 Hz, p < 0.001, for injured excitatory neurons; 16.64 ± 2.64 Hz and 24.92 ± 1.69 Hz to 6.87 ± 0.34 Hz and 5.34 ± 0.39 Hz, p < 0.001, for uninjured excitatory neurons; and 23.73 ± 1.67 Hz and 32.55 ± 0.67 Hz to 16.51 ± 0.30 Hz and 23.50 ± 0.27 Hz, p < 0.001, for inhibitory neurons at 25% and 50% injury, respectively). The decrease was most apparent in the injured population, where reweighting reduced the average firing rate by up to 40% at the highest damage level. Shifts in synaptic weights between injured and uninjured populations likely drive the change in activity of injured excitatory neurons. (See Figure S3 in the supplemental material.) Uniquely, the firing rate of uninjured excitatory neurons was not significantly different from baseline at 50% injury (5.34 ± 0.39 Hz versus 5.38 ± 0.24 Hz). This was not the case for other populations. Of note, activity in inhibitory neurons decreased, but not to the extent of the other populations (see Figure 2B).
3.2 NMDAR Dysfunction Alters Network Structure after Injury
After characterizing the network activity post-injury, we determined whether structural differences appeared in injured neurons. The injured and uninjured populations were not significantly different from each other initially, as the injured neurons were selected randomly from the total pool of excitatory neurons in the network (see Figure 3B). However, after plasticity, both groups were significantly different from each other and from their initial baseline connectivity. Injured neurons, which receive more total charge from upstream action potentials, increased their input strength and decreased their output strength relative to baseline (0.61 ± 0.12 from 0.51 ± 0.17 and 0.27 ± 0.21 from 0.45 ± 0.28 for input strength and output strength, respectively; see Figure 3). Interestingly, uninjured neurons had significantly lower normalized input strength and higher output strength (0.43 ± 0.17 and 0.56 ± 0.24) relative to baseline (0.49 ± 0.18 and 0.47 ± 0.27). In general, injured neurons increased their input strength and decreased their output strength relative to excitatory neurons prior to injury, while uninjured neurons showed the opposite effect.
3.3 Developing Learned Patterns in Baseline and Injured Neural Networks
To this point, we have studied neural networks stimulated with random thalamic input, per previous studies (Izhikevich, 2003; Izhikevich & Edelman, 2008; Wiles et al., 2017). However, in vivo neural networks constantly receive extrinsic stimulation from peripheral sensory neurons. STDP is one mechanism of Hebbian learning, so we designed a paradigm to assess the ability of the network to acquire and recall a conditioned response to a regular stimulus. Our unsupervised approach stimulates a group of input neurons with a regular patterned input. These input neurons pass the signal to the rest of the network, which consists of output neurons and hidden-layer neurons. The output neurons represent the desired, targeted response, and the remaining neurons comprise the hidden layer, which represents the off-target response (see Figure 4A). We then compared the response of the output and hidden-layer neurons to the stimulus before and after training.
In response to stimulating input neurons, output neurons significantly increased their firing rates. In contrast, there was no significant increase in relative firing rate in the hidden-layer neurons (1.0 ± 0 versus 0.97 ± 0.04; see Figure 4B; we report these and all subsequent firing rates as normalized to that of control networks before stimulation). Networks adapted without the training stimulus for another 2 hours of simulation time to examine the persistence of the conditioned output. After this period, we tested the firing rate of output neurons under the same stimulus conditions and found that it significantly decreased. However, their firing rate still exceeded that of a network without exposure to an extrinsic training stimulus (1.28 ± 0.11 versus 1.47 ± 0.10 versus 1.0 ± 0; p < 0.001). Across all conditions, hidden-layer neurons exhibited no significant change in average firing rate relative to control networks not receiving any stimulation (0.96 ± 0.04 versus 1.0 ± 0; see Figure 4B).
After establishing the ability to condition networks with an input stimulation and create a reliable and persistent increase in the firing rate of output neurons, we next considered how traumatic injury affects this process. We used networks that were conditioned to the input stimulation and applied an NMDAR-based injury in a fraction of the hidden-layer neurons. (We used the same approach of reducing the effective Mg concentration as implemented and analyzed in Figure 2.) Injuring hidden-layer neurons produced a significant increase in activation of output neurons (2.84 ± 0.42 versus 4.45 ± 0.59 versus 1.47 ± 0.10; p < 0.001; see Figure 4D), suggesting the network became more sensitive to the input stimulus. However, the hidden-layer neurons also had a significantly higher firing rate, an effect that did not occur in uninjured networks (6.71 ± 0.45 versus 16.12 ± 0.56 versus 0.97 ± 0.04; p < 0.001; see Figure 4E). To evaluate whether the output neurons continued to produce a distinct activity pattern, we computed the ratio of output neuron firing rate to hidden-layer neuron firing rate. We found the ratio was highest for uninjured networks and significantly lower for both 25% and 50% injury (1.51 ± 0.06 versus 0.42 ± 0.04 versus 0.28 ± 0.03; p < 0.001; see Figure 4F). This result suggests a loss of the ability to preserve a distinct conditioned output response in injured networks.
3.4 Impaired Circuitry-Based Learning after Injury
A third question we addressed was how injury affects the persistence of the output-neuron response in conditioned networks. After training, we injured neurons and allowed injured networks to adapt over the subsequent 2 hours (see Figure 7A). Similar to simulations where the injury occurred immediately prior to the conditioning stimulus, the output and hidden-layer neurons showed a significant increase in activity relative to uninjured networks (1.94 ± 0.11 and 1.59 ± 0.27 versus 1.28 ± 0.38 and 0.96 ± 0.043; p < 0.001; see Figures 7B and 7C). The ratio of output-neuron activity to hidden-layer activity was lower after injury than in uninjured networks (1.33 ± 0.07), but it was not different between injury levels (1.21 ± 0.12 versus 1.22 ± 0.09; see Figure 7D).
4 Discussion
In this work, we have assessed the impact of NMDA receptor dysfunction on circuit structure and function to understand how the learning capacity of neural circuits adapts after mild traumatic injury. Building on past models of neural dynamics and injury, we assessed functional and structural changes after injury in randomly activated networks and developed a learning paradigm to determine how altered activity from injury can inhibit learning acquisition and recall in the model circuit. Corroborating existing studies, we find that the partial loss of the magnesium block of NMDARs after trauma significantly increases the activity of injured neurons and that this increased activity is transmitted to downstream, uninjured circuitry (Patel et al., 2014; Zhang et al., 1996). Although STDP mitigates these activity changes, there were lasting dynamical differences compared to uninjured networks. Finally, injury most affected the ability to recall conditioned responses, with lesser impairments in pattern acquisition and maintenance after extinction. Together, these results show that the reported changes in physiological properties of the NMDAR after mild mechanical injury can inhibit the proper functioning of network circuitry during the onset of injury and can alter the long-term structure of the network.
This study has three primary limitations: (1) the generalized neuron model and topology, (2) lack of receptor turnover and replacement mechanisms, and (3) unsupervised learning. We address each in turn.
First, we used a computationally simple, generalized model for both neurons and topology in our system. Although we made efforts to create biologically realistic neuron activity and circuit designs, our results may not be entirely generalizable to unique, specialized systems. The Izhikevich neuron model was developed as a computationally efficient, adaptive integrate-and-fire model that can reproduce spike timing of a variety of neuron types and has been widely used in simulating neural activity at the network scale (Izhikevich, 2003, 2004; Izhikevich & Edelman, 2008; Vertes, Alexander-Bloch, & Bullmore, 2014; Vertes & Duke, 2010; Wiles et al., 2017). This neuron model has also been specifically validated with STDP (Izhikevich, 2003). For these reasons, we do not believe modifying the neuron model would significantly change the broad conclusions of our work. There remains an opportunity to rapidly extend our approaches to more specialized microcircuits, better representing specific areas of the brain.
A second limitation involves NMDAR physiology: this study did not incorporate receptor replacement mechanisms. Furthermore, we used an instantaneous switch for NMDAR injury such that excitatory neurons had either normal, functional receptors or injured, dysfunctional NMDARs. Our experimental work demonstrates that mild neuronal injury in vitro does lead to a rapid change in the conductance properties of the receptor (Patel et al., 2014; Zhang et al., 1996). Therefore, instantaneous conversion from normal to pathological NMDARs appears reasonable. Experimental work also shows a transient change in NMDAR subunits over days following TBI in vivo, suggesting that the replacement of damaged NMDA receptors occurs gradually over the days following injury (Biegon et al., 2004). In a subset of simulations not included here, we tested instantaneous replacement of damaged receptors with healthy ones and found that the activity returned to baseline after rewiring with STDP. Further research is necessary to characterize the time course of receptor replacement to better assess this recovery mechanism in our network model. It is also worth noting that these studies focused on NMDAR damage in excitatory neurons only; however, inhibitory neurons also have NMDARs (De Marco García, Karayannis, & Fishell, 2011). It would be interesting to explore NMDAR injury in inhibitory or mixed populations in future work.
Finally, the third limitation is our unsupervised learning paradigm. The general approach was to define learning as a trained adaptation of the network following patterned stimulation in a subset of neurons. The network response was unsupervised, with responsive output neurons selected after training. This approach ensured that output neurons showed greater response to the training stimulus than the hidden-layer neurons did, but it did not define correct or incorrect responses within the circuitry a priori. Of course, there are other algorithms to study learning in computational neural networks. For example, polychronization interrogates the network topology by stimulating each set of three neurons to determine which are able to create a polychronous chain of activity (Izhikevich, 2006). Others have used this methodology to understand the development of specific network topologies and the ability of these networks to retain specific patterns (Vertes et al., 2014; Vertes & Duke, 2010). However, polychronization assumes particular knowledge about the underlying circuit and the existing patterns, as well as the ability to resolve them individually. Alternatively, other studies explored pattern separation within a computational model of dentate circuitry (Chavlis et al., 2017), revealing the challenges of resolving patterns with similar inputs without sparse coding within specialized cell types (Olshausen & Field, 2004). As we used generalized topologies, we designed our paradigm without assumptions about the underlying topology. Future studies might assess the capacity to develop multiple, separable output responses in generalized networks.
Our work demonstrates that one primary consequence of trauma-induced alterations in NMDAR channel conductance is a generalized increase in activity of both injured and uninjured neurons. Moreover, the relative enhancement of ion flux from GluN2B-containing NMDARs is consistent with past experimental work (Patel et al., 2014) and, similar to previous findings by Ferrario and colleagues (Ferrario, Ndukwe, Ren, Satin, & Goforth, 2013), the predicted immediate effect of injury was to produce more pronounced oscillations through increased calcium influx from GluN2B-NMDARs. Interestingly, neurons with greater contributions of N2B-NMDAR actively disconnect from the functional network (Patel et al., 2014), suggesting that the mechanosensitive property of the NMDAR may also change the functional integration of a neuron with its neighbors. With STDP, the high activity rate of injured neurons may contribute to a similar decoupling of injured neurons from the structural network, as evidenced by the lower output strength of injured compared to uninjured neurons. Injured neurons quickly developed a unique structural phenotype characterized by large, aggregate input from the neighboring neuronal population and weak output to downstream neurons. These results further suggest that injury can partition the network between injured and uninjured populations.
Our network model represents a small subnetwork of the brain, but many subnetworks and the communication between them constitute macroscale brain networks. Broadly speaking, coordination among subnetworks is important to critical cognitive processes, many of which are impaired following traumatic brain injury (Clayton, Yeung, & Cohen Kadosh, 2015; Corbetta, 2012; Hanslmayr, Staresina, & Bowman, 2016; Kinnunen et al., 2011; Nakamura, Hillary, & Biswal, 2009; Pandit et al., 2013). Based on the functional changes observed after simulated mild traumatic injury and the way that elevated firing rates propagated from directly damaged neurons to uninjured circuitry, we anticipate a more widespread disturbance of the coordinated signaling throughout the brain. A recent study using the deletion or inactivation of neurons in a simple coupled network of oscillating microcircuits shows a clear impairment in synchronization after neurodegeneration (Schumm et al., 2020). As neurodegeneration decreases the average firing rate but mild NMDAR injury increases activity, one possibility is that NMDAR injury may increase synchrony between networks. To this end, synchronization in networks of different frequency and the impact of adaptive restructuring mechanisms on network order have been recently investigated in oscillator models (Papadopoulos, Kim, Kurths, & Bassett, 2017). Although the result of NMDAR injury suggests an increase in coherence among brain regions, this injury may also make it more difficult for different brain regions to switch their functional coupling, an effect seen in the default mode network after TBI (Bonnelle et al., 2011; Jilka et al., 2014; Johnson et al., 2012). Together, our work suggests that NMDAR-based injury could increase synchronization but may impair the flexibility of these networks after brain injury.
These studies explored the intersection of learning and adaptation to injury via STDP, finding that the mechanism helped restore network function after injury, thereby expanding the role of STDP beyond Hebbian learning (Bi & Poo, 1998; Meliza & Dan, 2006). Despite the inability of STDP to fully repair the damaged circuitry and restore function, there were marked directional improvements in network dynamics. Therefore, STDP appears to play a regulatory role in network dynamics much like classical homeostatic plasticity (HSP) (Turrigiano, 2012; Turrigiano & Nelson, 2004). HSP posits that each neuron has an optimal firing rate that is regulated internally by uniformly adjusting receptor count at the dendritic spines, whereas STDP requires pairs of neurons with correlated activity to increase synaptic strength. Through this mechanism, the network rewires its topology to mitigate the impact of damage. An additional notable difference between HSP and STDP is the timescale over which they operate: STDP acts on a much shorter time frame than does HSP (Zenke, Gerstner, & Ganguli, 2017). Therefore, other homeostatic mechanisms may further mitigate the lingering effects of NMDAR damage at chronic time points after injury. In past work focusing on the role of homeostatic synaptic scaling after trauma, it was shown that HSP can lead to persistent bursting in the network, perhaps providing a cellular substrate for epileptic-like seizures in the cortex (Volman et al., 2011). Although we did not explicitly account for HSP, past work and our simulations suggest that STDP and HSP may provide complementary roles in injured circuits. Specifically, while one recovery mechanism (HSP) might create a risk of developing pathology, a second mechanism (STDP) quickly adjusts the network to avoid such uncontrollable bursting.
It is intriguing to consider how these two mechanisms may interact in future work, potentially cooperating to repair an injured circuit by re-establishing the dynamics of a healthy one.
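The pair-based STDP rule discussed above can be sketched as two exponential windows: causal (pre-before-post) spike pairs potentiate the synapse, anti-causal pairs depress it. The amplitudes and time constants below are illustrative textbook values, not the parameters of our model:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair separated by dt = t_post - t_pre.

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses.
    The magnitude decays exponentially with the pair's separation in time.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

print(stdp_dw(10.0))   # positive: causal pairing strengthens the synapse
print(stdp_dw(-10.0))  # negative: anti-causal pairing weakens it
```

This pair-correlation requirement is exactly what distinguishes STDP from HSP's uniform, cell-wide scaling, and it is why STDP can selectively weaken the outputs of an injured, hyperactive neuron whose spikes no longer correlate with its targets.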
The ability to respond and adapt to incoming stimuli, and to store these patterns for future use, is a key feature of biological networks. Here, we took a minimalistic approach in implementing an unsupervised learning paradigm to assess how injury affects network learning and recall. Currently, there is limited consensus on how to train and assess biological models, which stems from an incomplete understanding of how neurons store information (Josselyn, Köhler, & Frankland, 2015; Sharpee, 2017; Titley, Brunel, & Hansel, 2017; Vertes & Duke, 2010). Our method is based on rate coding, in which sensitivity to firing rates is the primary form of information storage and transmission. Rate coding is central to our understanding of place-cell receptive fields, muscle activation, and time perception (Enoka & Duchateau, 2017; Kraus, Robinson, White, Eichenbaum, & Hasselmo, 2013; MacDonald, Lepage, Eden, & Eichenbaum, 2011; O'Keefe & Dostrovsky, 1971; Pastalkova, Itskov, Amarasingham, & Buzsáki, 2008). Alternatively, newer studies have investigated temporal coding, the storage of information within precise spike timing, as a more efficient and faster coding scheme in brain structures (Buzsáki, Llinas, Singer, Berthoz, & Christen, 1994; Terada, Sakurai, Nakahara, & Fujisawa, 2017; Vertes & Duke, 2010). Implementing temporal coding relies on precise timing and structures within the network, which must be known in advance to be trained (Izhikevich, 2006; Vertes & Duke, 2010).
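A rate-coded readout of the kind described above can be sketched as a spike-count decoder: information is carried by how many spikes arrive in a window, not when they arrive. The spike trains, window, and threshold below are toy values chosen only to illustrate the principle:

```python
def firing_rate(spike_times_ms, window_ms):
    # Rate coding: the message is the spike count per unit time (Hz)
    return len(spike_times_ms) / (window_ms / 1000.0)

def decode(rates_hz, threshold_hz):
    # Classify each channel as "on-target" if its rate exceeds threshold
    return [r > threshold_hz for r in rates_hz]

# Toy example: a trained channel fires densely, a background channel sparsely
background = firing_rate([12.0, 80.0, 150.0], window_ms=500.0)      # 6 Hz
trained = firing_rate(list(range(0, 500, 25)), window_ms=500.0)     # 40 Hz
print(decode([background, trained], threshold_hz=20.0))  # [False, True]
```

Note that shuffling the spike times within the window would leave this decoder's answer unchanged; a temporal-coding scheme, by contrast, would be sensitive to exactly that timing structure.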
We found that the most significant learning impairments after injury occurred in recalling previously trained patterns. In this case, injured networks had distinct increases in their rate-coded response to stimulus in the hidden layer. As in untrained networks with NMDAR dysfunction, injured trained networks indiscriminately increased firing in both the damaged population and the connected circuitry. Therefore, the increased response in injured hidden-layer neurons masked the elevated, trained output. This implies that networks that rely on rate coding are predisposed to deficits in previously trained responses after mild injury. Supporting this idea, there is a marked decrease in task-specific cells found in the hippocampus of rats after injury (Eakin & Miller, 2012; Munyon, Eakin, Sweet, & Miller, 2014). Coupled with decreased performance in previously trained behavioral tasks (Chen, Mao, Yang, Abel, & Meaney, 2014), this could indicate poor separation between on- and off-target activation patterns.
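The masking effect can be illustrated numerically: if injury adds a roughly uniform rate elevation to all hidden-layer neurons, the relative contrast between trained and untrained responses shrinks even though the trained response itself is still elevated. The firing rates below are hypothetical, chosen only to show the arithmetic:

```python
def separation(trained_hz, untrained_hz):
    # Contrast between on- and off-target responses (0 = indistinguishable)
    return (trained_hz - untrained_hz) / (trained_hz + untrained_hz)

# Healthy network: trained neurons at 40 Hz against a 5 Hz background
healthy = separation(40.0, 5.0)
# Injured network: a uniform +20 Hz elevation in all neurons (hypothetical)
injured = separation(60.0, 25.0)
print(healthy, injured)  # contrast drops even though both rates rose
```

Because a fixed additive offset leaves the numerator unchanged while inflating the denominator, any rate-coded readout with a fixed threshold is pushed toward false positives, consistent with the poor separation between on- and off-target activation patterns noted above.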
Together, these simulations demonstrate how stretch-induced alterations in NMDAR physiology propagate to information processing in the circuit. Despite a significant increase in neuronal activity that occurs with this injury mechanism, STDP can reduce the effect of injury by altering the network structure around the damaged neuron population. Finally, this work shows impaired learning capacity in injured networks with specific deficits in pattern acquisition, recall, and maintenance.
| Property | Excitatory Neurons | Inhibitory Neurons |
| --- | --- | --- |
| GABA strength | NA | 10 1 |
| Delay per unit length | 0 | 0 |
Notes: NA = not applicable. Strength values are output strengths for each neuron type, so inhibitory neurons are represented with GABA only, while excitatory neurons have values for AMPA, NMDA-N2A, and NMDA-N2B.
This study was funded by the Paul G. Allen Frontiers Group (grant 12347) and by the National Institutes of Health (grant R01 NS088176).