Abstract
Systems neuroscience is in a headlong rush to record from as many neurons at the same time as possible. As the brain computes and codes using neuron populations, it is hoped these data will uncover the fundamentals of neural computation. But with hundreds, thousands, or more simultaneously recorded neurons come the inescapable problems of visualizing, describing, and quantifying their interactions. Here I argue that network science provides a set of scalable, analytical tools that already solve these problems. By treating neurons as nodes and their interactions as links, a single network can visualize and describe an arbitrarily large recording. I show that with this description we can quantify the effects of manipulating a neural circuit, track changes in population dynamics over time, and quantitatively define theoretical concepts of neural populations such as cell assemblies. Using network science as a core part of analyzing population recordings will thus provide both qualitative and quantitative advances to our understanding of neural computation.
INTRODUCTION
Neurons use spikes to communicate (Rieke, Warland, de Ruyter van Steveninck, & Bialek, 1999). From this communication arises coding and computation within the brain; and so arises all thought, perception, and deed. Understanding neural circuits thus hinges critically on understanding spikes across populations of neurons (Pouget, Beck, Ma, & Latham, 2013; Wohrer, Humphries, & Machens, 2013; Yuste, 2015).
This idea has driven a technological arms race in systems neuroscience to record from as many individual neurons at the same time as physically possible (Stevenson & Kording, 2011). Current technology, ranging from imaging of fluorescent calcium-binding proteins (Chen et al., 2013; Dupre & Yuste, 2017; S. Peron, Chen, & Svoboda, 2015; S. P. Peron, Freeman, Iyer, Guo, & Svoboda, 2015) and voltage-sensitive dyes (Briggman, Abarbanel, & Kristan, 2005; Bruno, Frost, & Humphries, 2015; Frady, Kapoor, Horvitz, & Kristan, 2016) to large-scale multielectrode arrays and silicon probes (Buzsáki, 2004; Jun et al., 2017), now allows us to simultaneously capture the activity of hundreds of neurons in a range of brain systems. These include systems as diverse as invertebrate locomotion, zebrafish oculomotor control, and executive functions in primate prefrontal cortex. With the data captured, the key questions for any system become: How do we describe these spike data? How should we visualize them? And how do we discover the coding and computations therein?
Here I argue that network science provides a set of tools ideally suited to both describe the data and discover new ideas within it. Networks are simply a collection of nodes and links: nodes representing objects, and links representing the interactions between those objects. This representation can encapsulate a wide array of systems, from email traffic within a company, through the social groups of dolphins, to word co-occurrence frequencies in a novel (Newman, 2003). By abstracting these complex systems to a network description, we can describe their topology, compare them, and deconstruct them into their component parts. Moreover, we gain access to a range of null models for testing hypotheses about a network’s structure and about how it changes. I will demonstrate all these ideas below.
First, an important distinction. Networks capture interactions as links, but these links do not necessarily imply physical connections. In some cases, such as the network of router-level connections of the Internet or a power grid, the interaction network exactly follows a physical network. In other cases, such as a Facebook social network, there is no physical connection between the nodes. In still other cases, of which neuroscience is a prime example, the interactions between nodes are shaped and constrained by the underlying physical connections, but are not bound to them. We shall touch on this issue of distinguishing interactions from physical connections throughout.
DESCRIBING MULTINEURON DATA AS A NETWORK
A network description of multineuron recording data rests on two ideas: The nodes are the neurons, and the links are the interactions between the neurons (Figure 1A). Strictly speaking, the nodes are the isolated time series of neural activity, whether spike trains, calcium fluorescence, or voltage-dye expression (with the usual caveats applied to the accuracy of spike-sorting for electrodes or image segmentation and stability for imaging; Harris, Quiroga, Freeman, & Smith, 2016). An immediate advantage of a network formalism is that it separates the details of choosing the interaction from the network topology itself—whatever measure of interaction we choose, the same topological analyses can be applied.
We are free to choose any measure of pairwise interaction we like; and indeed that choice depends on what questions we want to ask of the data. Typical choices include cosine similarity or a rectified correlation coefficient, as these linear measures are familiar, easy to interpret, and not data-intensive. But with sufficient data we could also use nonlinear measurements of interaction including forms of mutual information (Bettencourt, Stephens, Ham, & Gross, 2007; Singh & Lesica, 2010) and transfer entropy (Nigam et al., 2016; Schreiber, 2000; Thivierge, 2014). We could fit an Ising model, so estimating “direct” interactions while factoring out other inputs (S. Yu, Huang, Singer, & Nikolic, 2008). We could even fit a model to each neuron for the generation of its activity time series, such as a generalized linear model (Pillow et al., 2008; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005), and use the fitted weights of the inputs from all other neurons as the interaction values in a network (Gerhard, Pipa, Lima, Neuenschwander, & Gerstner, 2011). In addition, there is a large selection of interaction measures specific for spike trains (e.g., Lyttle & Fellous, 2011; van Rossum, 2001; Victor & Purpura, 1996), whose use in defining interaction networks has yet to be well explored. And we should always be mindful that measures of pairwise interaction alone cannot distinguish between correlations caused by common input from unrecorded neurons and correlations caused by some direct contact between the recorded neurons.
Whatever measure of interaction we use, the important distinction is whether that measure is undirected (e.g., the correlation coefficient) or directed (e.g., transfer entropy), and hence whether we end up with an undirected or directed network (throughout this paper I consider only symmetric measures of interaction, and hence undirected networks). We also end up with a weighted network (Newman, 2004). While much of network science, and its use in neuroscience, is focused on binary networks whose links indicate only whether an interaction between two nodes exists, any measurement of interaction gives us a weight for each link (Figure 1B). Thresholding the weights to construct a binary network inevitably loses information (Humphries, 2011; Zanin et al., 2012). Consequently, multineuron recording data are best captured in a weighted network.
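To make this concrete, here is a minimal sketch of one such pipeline in Python: bin each spike train, compute pairwise Pearson correlations, and rectify them to give a weighted, undirected network. The bin size, the use of rectified correlation, and the function names are illustrative assumptions, not the specific implementation behind the figures in this paper.

```python
import numpy as np

def dynamical_network(spike_times, t_end, bin_size=0.05):
    """Build a weighted, undirected interaction network from spike trains.

    spike_times : list of 1-D arrays, one array of spike times per neuron
    t_end       : duration of the recording (same units as the spike times)
    bin_size    : width of the time bins used to discretize each train

    Returns an (n x n) weight matrix W of rectified Pearson correlations,
    with the diagonal (self-interactions) set to zero.
    """
    edges = np.arange(0.0, t_end + bin_size, bin_size)
    # Bin each spike train into counts per time bin
    counts = np.array([np.histogram(st, bins=edges)[0] for st in spike_times])

    # Pairwise Pearson correlation between the binned trains
    W = np.corrcoef(counts)

    # Keep only positive interactions (rectified correlation), remove self-links
    W[W < 0] = 0.0
    np.fill_diagonal(W, 0.0)
    return W

# Example with synthetic data: 5 neurons firing randomly over 100 s
rng = np.random.default_rng(1)
trains = [np.sort(rng.uniform(0, 100, size=rng.integers(200, 500))) for _ in range(5)]
W = dynamical_network(trains, t_end=100.0)
print(W.round(2))
```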
This weighted network of interactions between neurons need not map to any physical network of connections between neurons. The synaptic connections between neurons in a circuit shape and constrain the dynamics of those neurons, which we capture as population activity in multineuron recordings. But interactions can change independently of the physical network, both because the firing of a single neuron requires inputs from many other neurons, and because physical connections can be modulated on fast timescales, such as short-term plasticity temporarily enhancing or depressing the strength of a synapse. Nonetheless, because physical connections between neurons constrain their dynamics, sustained changes in interactions on timescales of minutes and hours are evidence of some physical change to the underlying circuit (Baeg et al., 2007; Carrillo-Reid, Yang, Bando, Peterka, & Yuste, 2016; Grewe et al., 2017; Laubach, Wessberg, & Nicolelis, 2000; Yamada et al., 2017).
The use of network science to describe interactions between neural elements has been growing in cognitive neuroscience for a decade, and is widely used to analyze EEG, MEG, and fMRI time series data (Achard, Salvador, Whitcher, Suckling, & Bullmore, 2006; Bassett & Bullmore, 2016; Bullmore & Sporns, 2009). Neuroimaging has long used the unfortunate term “functional networks,” with its connotations of causality and purpose, to describe the network of pairwise correlations between time series of neural activity. To avoid any semantic confusion, and to distinguish these networks of interactions from the underlying physical network, I will describe the network of single-neuron interactions here as a “dynamical” network.
What can we do with such dynamical networks of neurons? In the following I show how with them we can quantify circuit-wide changes following perturbations and manipulations; we can track changes in dynamics over time; and we can quantitatively define qualitative theories of computational concepts.
CAPTURING POPULATION DYNAMICS AND THEIR CHANGES BY MANIPULATIONS
Applying network science to large-scale recordings of neural systems allows us to capture their complex dynamics in a compact form. The existing toolbox of network science gives us a plethora of options for quantifying the structure of a dynamical network. We may simply quantify its degree and strength distributions (Figure 1C), revealing dominant neurons (Dann, Michaels, Schaffelhofer, & Scherberger, 2016; Nigam et al., 2016). We can assess the local clustering of the dynamical network, the proportion of a neuron’s linked neighbours that are also strongly linked to each other (Watts & Strogatz, 1998; Figure 1D), revealing the locking of dynamics among neurons (Bettencourt et al., 2007; Sadovsky & MacLean, 2013). We can compute the efficiency of a network (Latora & Marchiori, 2001), a measure of how easily a network can be traversed (Figure 1E), revealing how cohesive the dynamics of a population are—the higher the efficiency, the more structured the interactions amongst the entire population (Thivierge, 2014). We may define structural measures relative to a null model, such as quantifying how much of a small-world the dynamical network is (Dann et al., 2016; Gerhard et al., 2011; S. Yu et al., 2008). Our choice of quantifying measures depends on the aspects of dynamics we are most interested in capturing.
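Given a weight matrix W built as in the sketch above, these descriptive measures are straightforward to compute with standard tools. The following is an illustrative Python sketch using NetworkX; note that the efficiency here is computed on the binarized network, since NetworkX's global_efficiency is defined for unweighted graphs, whereas weighted efficiency (Latora & Marchiori, 2001) needs a dedicated implementation such as the one in the Brain Connectivity Toolbox.

```python
import numpy as np
import networkx as nx

def describe_network(W):
    """Summarize a weighted dynamical network given its weight matrix W."""
    G = nx.from_numpy_array(W)  # undirected graph; matrix entries become 'weight'

    # Strength distribution: total weight of links attached to each node
    strength = W.sum(axis=0)

    # Weighted clustering coefficient per node
    clustering = nx.clustering(G, weight='weight')

    # Global efficiency of the binarized network (any nonzero weight is a link)
    G_bin = nx.from_numpy_array((W > 0).astype(int))
    efficiency = nx.global_efficiency(G_bin)

    return strength, clustering, efficiency

# Example usage with a random symmetric weight matrix
rng = np.random.default_rng(0)
A = rng.uniform(0, 1, size=(10, 10))
W = np.triu(A, 1)
W = W + W.T                      # symmetric, zero diagonal
strength, clustering, efficiency = describe_network(W)
print(strength.round(2), efficiency)
```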
Having compactly described the dynamics, we are well placed to then characterize the effects of manipulating that system. Manipulations of a neural system will likely cause system-wide changes in its dynamics. Such changes may be the fast, acute effect of optogenetic stimulation (Boyden, 2015; Deisseroth, 2015; Miesenböck, 2009); the sluggish but acute effects of drugs (Vincent, Tauskela, Mealing, & Thivierge, 2013); or the chronic effects of neurological damage (Otchy et al., 2015). All these manipulations potentially change the interactions between neurons, disrupting normal computation. By comparing the dynamical networks before and after the manipulation, one could easily capture the changes in the relationships between neurons.
There have been few studies examining this idea. Srinivas, Jain, Saurav, & Sikdar (2007) used dynamical networks to quantify the changes to network-wide activity in hippocampus caused by the glutamate-injury model of epilepsy, suggesting a dramatic drop in network clustering in the epilepsy model. Vincent et al. (2013) used dynamical networks to quantify the potential neuroprotective effects of drug preconditioning in rat cortex in vitro, finding increased clustering and increased efficiency in the network over days, implying the drugs enriched the synaptic connections between groups of neurons. Quantifying manipulations using network science is an underexplored application, rich in potential.
TRACKING THE EVOLUTION OF DYNAMICS
Neural activity is inherently nonstationary, with population activity moving between different states on a range of timescales, from shifting global dynamics on timescales of seconds (Zagha & McCormick, 2014), to changes wrought by learning on timescales of minutes and hours (Benchenane et al., 2010; Huber et al., 2012). For a tractable understanding of these complex changes, we would ideally like a way to describe the entire population’s dynamics with as few parameters as possible. A recent example of such an approach is population coupling, the correlation over time between a single neuron’s firing rate and the population average rate (Okun et al., 2015). But with dynamical networks we can use the same set of tools above, and more, to easily track changes to the population activity in time.
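As a point of reference, population coupling itself reduces to a short computation: correlate each neuron's binned activity with the average activity of the rest of the population. The sketch below is my illustration of that idea, not the exact procedure of Okun et al. (2015).

```python
import numpy as np

def population_coupling(counts):
    """Correlation of each neuron's binned activity with the rest of the population.

    counts : (n_neurons x n_bins) array of binned spike counts
    Returns one coupling value per neuron.
    """
    n = counts.shape[0]
    coupling = np.zeros(n)
    for i in range(n):
        # Population rate excluding neuron i, per time bin
        others = np.delete(counts, i, axis=0).mean(axis=0)
        coupling[i] = np.corrcoef(counts[i], others)[0, 1]
    return coupling
```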
Figure 2 illustrates the idea of tracking nonstationary activity with data from a study by Peyrache, Khamassi, Benchenane, Wiener, & Battaglia (2009). Rats were required to learn rules in a Y-maze to obtain reward. I use here a single session in which a rat learned the rule “go to the cued arm” (Figure 2A); 52 simultaneously recorded neurons from medial prefrontal cortex were active in every trial of this session. As the rat learned the rule in this session, and activity in medial prefrontal cortex is known to represent changes in behavioral strategy (Durstewitz, Vittoz, Floresco, & Seamans, 2010; Karlsson, Tervo, & Karpova, 2012; Powell & Redish, 2016), we might reasonably expect the population activity to evolve during rule learning. Visualizing trial-by-trial changes using dynamical networks (built as in Figure 1A) shows a stabilization of the interactions between neurons over trials (Figure 2B). Quantifying this by correlating weight matrices on consecutive trials (Figure 2C) confirms there was a rapid stabilization of neuron interactions at the start of this learning session. Plotting the total weight or total number of links in the network over trials (Figure 2D) shows that this stabilization of the dynamical network was not a simple consequence of a global stabilization of the interactions between neurons. These analyses thus track potentially learning-induced changes in the population activity of prefrontal cortex.
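A sketch of the comparison in Figure 2C: given one weight matrix per trial, correlate the link weights of consecutive trials. This is an illustrative Python sketch assuming per-trial matrices built as described above; it is not the original analysis code.

```python
import numpy as np

def consecutive_trial_similarity(weight_matrices):
    """Correlate the link weights of dynamical networks on consecutive trials.

    weight_matrices : list of symmetric (n x n) weight matrices, one per trial
    Returns an array of Pearson correlations between trials t and t+1; high,
    stable values indicate stabilization of the interactions between neurons.
    """
    n = weight_matrices[0].shape[0]
    iu = np.triu_indices(n, k=1)          # count each undirected link once
    similarities = []
    for W_prev, W_next in zip(weight_matrices[:-1], weight_matrices[1:]):
        r = np.corrcoef(W_prev[iu], W_next[iu])[0, 1]
        similarities.append(r)
    return np.array(similarities)
```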
We can also use these data to illustrate the benefits we accrue from the null models in network science. These models define the space of possible networks obtained by some stochastic process. Classically, the null model of choice was the Erdős-Rényi random network, which assumes a uniform probability of a link falling between any pair of nodes. As few if any real-world networks can be described this way, more detailed null models are now available. One common example is the configuration model (Chung & Lu, 2002; Fosdick, Larremore, Nishimura, & Ugander, 2016), in which links are placed at random but the probability of a link between two nodes is proportional to the product of the number of links each has, so every node keeps (on average) its number of links from the data network. Applied to neural time series, this model is a null model for testing whether the existence of interactions between a pair of neurons is simply a result of those neurons having many interactions. Other null model networks include the exponential random graph model (Robins, Pattison, Kalish, & Lusher, 2007), or the stochastic block model and its variants (Newman & Martin, 2014). In general, network null models allow us to test whether features of our dynamical networks exceed those expected by stochastic variation alone.
As an example, consider whether the clustering of interactions between neurons changes over this learning session. Figure 2E plots the average clustering coefficient for the dynamical networks, and we can see that it varies across trials. We can compare this to a suitable null model; here I use a null model that conserves node strength, but randomly reassigns the set of weights between nodes (Rubinov & Sporns, 2011). Plotting the average clustering coefficient for this null model on each trial shows that the clustering in the data-derived dynamical networks is well in excess of that predicted by the null model: the interactions between groups of three neurons are denser than predicted by just their total interactions with all neurons.
But the null model also shows that the average local clustering does not change over learning. The ratio of the data and model clustering coefficients is approximately constant (Figure 2F), showing that trial-by-trial variation in clustering is largely accounted for by variations in the overall interactions between neurons (one source of these might be finite-size effects in estimating the interactions on trials of different durations). So we can conclude that changes over behavioral learning in this population of neurons reflected a local reorganization (Figure 2B) and stabilization (Figure 2C) of interactions, but did not change the population-wide distribution of clustering.
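The logic of such a null-model comparison can be sketched with a simpler null than the strength-conserving one used in Figure 2E: keep the links of the data network in place, randomly permute the weights among them, and recompute the average clustering coefficient for each permuted network. The sketch below illustrates the general approach under that assumption; it is not the Rubinov and Sporns (2011) null model itself.

```python
import numpy as np
import networkx as nx

def clustering_vs_weight_shuffle_null(W, n_null=100, seed=0):
    """Compare average weighted clustering in W against a weight-permutation null.

    The null keeps which links exist but shuffles the weights among them, so
    total weight and degree are preserved (node strength only approximately).
    """
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(W.shape[0], k=1)
    weights = W[iu].copy()

    def avg_clustering(M):
        G = nx.from_numpy_array(M)
        return np.mean(list(nx.clustering(G, weight='weight').values()))

    data_clustering = avg_clustering(W)

    null_clustering = np.zeros(n_null)
    for i in range(n_null):
        shuffled = weights.copy()
        nonzero = shuffled > 0
        # Permute the weights over the existing links only
        shuffled[nonzero] = rng.permutation(shuffled[nonzero])
        W_null = np.zeros_like(W)
        W_null[iu] = shuffled
        W_null = W_null + W_null.T
        null_clustering[i] = avg_clustering(W_null)

    return data_clustering, null_clustering.mean(), null_clustering.std()
```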
The rich potential for tracking dynamics with the readily available metrics of network science has not yet been tapped. As just demonstrated, with dynamical networks we can track trial-by-trial or event-by-event changes in population dynamics. For long recordings of spontaneous activity, building dynamical networks in time windows slid over the recorded data allows us to track hidden shifts underlying global dynamics (Humphries, 2011). On slower timescales, we can track changes during development of neural systems, either using ex vivo slices (Dehorter et al., 2011) or in vitro cultures (Downes et al., 2012; M. S. Schroeter, Charlesworth, Kitzbichler, Paulsen & Bullmore, 2015). These studies of development have all shown how maturing neuronal networks move from seemingly randomly distributed interactions between neurons to a structured set of interactions, potentially driven by changes to the underlying connections between them.
Other tools from network science could be readily repurposed to track neural population dynamics. The growing field of network comparison uses distributions of network properties to classify networks (Guimera, Sales-Pardo, & Amaral, 2007; Onnela et al., 2012; Przulj, 2007; Wegner, Ospina-Forero, Gaunt, Deane, & Reinert, 2017). A particularly promising basis for comparison is the distributions of motifs (or graphlets) in the networks (Przulj, 2007). Repurposed to track changes in dynamical networks, by comparing motif distributions between time points, these would provide tangible evidence of changes to the information flow in a neural system.
Ongoing developments in temporal networks (Holme, 2015) and network-based approaches to change-point detection algorithms (Barnett & Onnela, 2016; Darst et al., 2016; Peel & Clauset, 2014) also promise powerful yet tractable ways to track neural population dynamics. Temporal networks in particular offer a range of formalisms for tracking changes through time (Holme, 2015). In one approach, interaction networks for each slice of time are coupled by links between the same node in adjacent time slices; this allows testing for how groups of nodes evolve over time, constrained by their groups in each slice of time (Bassett et al., 2011; Mucha, Richardson, Macon, Porter, & Onnela, 2010). A range of null models are available for testing the evolution of networks in this time-slice representation (Bassett et al., 2013). But such a representation requires coarse-graining of time to capture the interactions between all nodes in each time slice. An alternative approach is to define a network per small time step, comprising just the interactions that exist at each time step (Holme, 2015; Thompson, Brantefors, & Fransson, 2017), and then introduce the idea of reachability: that one node is reachable from another if they both link to an intermediate node on different time steps. With this representation, standard network measures such as path lengths, clustering, and motifs can be easily generalized to include time (Thompson et al., 2017). Thus, a network description of multineuron activity need not just be a frozen snapshot of interactions, but can be extended to account for changes in time.
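To make the reachability idea concrete, the sketch below computes which nodes can be reached from a given source node through a time-respecting path across a sequence of per-time-step networks, using the convention that one link can be traversed per time step. This is a minimal illustration of the concept, not code from the temporal-network literature cited above.

```python
import numpy as np

def temporal_reachability(adjacency_per_step, source):
    """Which nodes are reachable from `source` via a time-respecting path?

    adjacency_per_step : list of symmetric binary (n x n) matrices, one per time step
    A node is reachable if a sequence of links can be traversed in forward time
    order, starting at `source`, with at most one hop per time step.
    """
    n = adjacency_per_step[0].shape[0]
    reached = np.zeros(n, dtype=bool)
    reached[source] = True
    for A in adjacency_per_step:                      # walk forward through time
        # Any node linked at this step to an already-reached node becomes reached
        newly = (A[reached].sum(axis=0) > 0)
        reached = reached | newly
    return reached
```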
NETWORK THEORY QUANTITATIVELY DEFINES COMPUTATIONAL CONCEPTS OF NEURAL POPULATIONS
The mathematical framework of networks can also provide precise quantitative definitions of important but qualitative theories about neural populations. A striking example is the theory of neural ensembles (Harris, 2005). An ensemble is qualitatively defined as a set of neurons that are consistently coactive (Harris, 2005), thereby indicating they code or compute the same thing. This qualitative definition leaves open key quantitative questions: What defines coactive, and what defines consistent?
The network science concept of modularity provides answers to these questions. Many networks are modular, organized into distinct groups: social networks of friendship groups, or collaboration networks of scientists. Consequently, the problem of finding modules within networks in an unsupervised way is an extraordinarily fecund research field (Fortunato & Hric, 2016). Most approaches to finding modules are based on the idea of finding the division of the network that maximizes its modularity Q = {number of links within a module} − {expected number of such links} (Newman, 2006). Maximizing Q thus finds a division of a network in which the modules are densely linked within themselves, and weakly linked between them.
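In Newman's standard weighted form, Q = (1/2m) Σ_ij [W_ij − s_i s_j / (2m)] δ(c_i, c_j), where W_ij is the weight of the link between nodes i and j, s_i is the strength of node i, 2m is the total weight in the network, and δ(c_i, c_j) = 1 when the two nodes are assigned to the same module. The sketch below finds a high-Q division of a weighted network using NetworkX's greedy modularity maximization; it is an illustrative stand-in for the consensus community-detection algorithms of the Spike-Train Communities Toolbox listed under Supporting Information.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

def find_ensembles(W):
    """Detect putative ensembles as modules of the weighted dynamical network.

    W : symmetric (n x n) weight matrix of pairwise interactions
    Returns the detected modules (as sets of node indices) and their modularity Q.
    """
    G = nx.from_numpy_array(W)
    # Greedy agglomerative maximization of weighted modularity
    modules = community.greedy_modularity_communities(G, weight='weight')
    Q = community.modularity(G, modules, weight='weight')
    return [set(m) for m in modules], Q
```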
Applied to dynamical networks, modularity defines neural ensembles (Billeh, Schaub, Anastasiou, Barahona, & Koch, 2014; Bruno et al., 2015; Humphries, 2011): groups of neurons that are more coactive with each other than with any other neurons in the population, given the choice of pairwise interaction used. Figure 3 demonstrates this idea using an example recording of 94 neurons from the motor circuit of the sea slug Aplysia during fictive locomotion (Bruno et al., 2015). The weight matrix and network views in Figure 3A clearly indicate some structure within the dynamical network. Applying an unsupervised module-detection algorithm finds a high modularity division of the dynamical network (Figure 3B). When we plot the 94 spike trains grouped by their modules in the dynamical network, the presence of multiple ensembles is clear (Figure 3C).
With this modularity-based approach, we can also easily check how robust these ensembles are to the choice of timescale of coactivity. When computing pairwise interactions, we often have a choice of temporal precision, such as bin size or Gaussian width (Figure 1A): Choosing small values emphasizes spike-time precision; large values emphasize covarying firing rates. As shown in Figure 3D, we can also use Q to look for timescales at which the population dynamics are most structured (Humphries, 2011): This view suggests a clear peak timescale at which the ensembles are structured. Nonetheless, we can also see a consistent set of modules at all timescales: The weight matrix W at the smallest and largest Gaussian width are similar (Figure 3E); and the majority of neurons are placed in the same group at every timescale (Figure 3F). Modularity not only defines ensembles, but also lets us quantify their timescales and find consistent structure across timescales.
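A sketch of the timescale scan in Figure 3D: rebuild the dynamical network over a range of bin sizes and record the maximized Q at each. This reuses the illustrative helper functions dynamical_network and find_ensembles from the earlier sketches (both my own hypothetical names), and the range of bin sizes is also an assumption.

```python
import numpy as np

def scan_timescales(spike_times, t_end, bin_sizes):
    """Modularity of the dynamical network across a range of binning timescales.

    Returns one (maximized) Q value per bin size; peaks suggest timescales at
    which the population activity is most strongly structured into ensembles.
    Relies on dynamical_network() and find_ensembles() from the earlier sketches.
    """
    Q_values = []
    for bin_size in bin_sizes:
        W = dynamical_network(spike_times, t_end, bin_size=bin_size)
        modules, Q = find_ensembles(W)
        Q_values.append(Q)
    return np.array(Q_values)

# Example: scan bin sizes from 10 ms to 1 s (assuming spike times in seconds)
# Q_per_scale = scan_timescales(trains, t_end=100.0, bin_sizes=np.logspace(-2, 0, 10))
```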
The complexity of the population activity will determine whether a consistent set of ensembles appears across timescales, or whether there are different ensembles at different timescales (see Humphries, 2011, for more examples). We can see this when running the same module-detection analysis on a session from the medial prefrontal cortex data (Figure 3G–I). For this cortical data there are modules present at every timescale, but no consistent timescale at which the neural activity is most structured (Figure 3G–H). Consequently, there is not a consistent set of modules across timescales (Figure 3I).
Such multiscale structure is potentially a consequence of the order-of-magnitude distribution in firing rates (Dann et al., 2016; Wohrer et al., 2013), for which more work is needed on suitable measures of interaction. It may also indicate that some neurons are members of more than one ensemble, which are active at different times during the recording. Consequently, these neurons’ correlations with others will depend on the timescale examined. Examining the detected modules for nodes that participate in more than one module (Guimera & Amaral, 2005; Guimera et al., 2007) may reveal these shared neurons. Clearly, such multiscale structure means that tracking changes in the structure of population activity should be done at a range of timescales, and comparisons made based on similar timescales.
As a final step, we can now quantitatively define a Hebbian cell assembly (Holtmaat & Caroni, 2016). By definition, a cell assembly is an ensemble of neurons that become coactive because of changes to synaptic connections into and between them during learning (Carrillo-Reid et al., 2016). Thus, by combining the ideas of tracking dynamical networks and of module detection, we can test for the formation of assemblies: If we find dynamical network modules that appear during the course of learning, then we have identified potential cell assemblies.
OUTLOOK
The dynamics of neural populations are emergent properties of the wiring within their microcircuits. We can of course use network science to describe physical networks of the microcircuit too (Humphries, Gurney, & Prescott, 2006; Lee et al., 2016; M. Schroeter, Paulsen, & Bullmore, 2017), gaining insight into the mapping from wiring to dynamics. But dynamical networks need not map to any circuit. Indeed, while dynamical networks are constrained by their underlying physical connections, they can change faster than their corresponding physical networks. A clear example is the action of neuromodulators—these can increase or decrease the effective strength of connections between neurons and the responsiveness of individual neurons (Nadim & Bucher, 2014), so changing the dynamical network without changing the underlying physical network. More broadly, rapid, global changes in brain state can shift the dynamics of a neural population (Zagha & McCormick, 2014). Thus, dynamical networks describing the simultaneous activity of multiple neurons capture the moment-to-moment changes in population dynamics.
There are of course other analysis frameworks for visualizing and describing the activity of large neural populations. The detection of neural ensembles is an unsupervised clustering problem, for which a number of neuroscience-specific solutions exist (Feldt, Waddell, Hetrick, Berke, & Zochowski, 2009; Fellous, Tiesinga, Thomas, & Sejnowski, 2004; Lopes-dos-Santos, Conde-Ocazionez, Nicolelis, Ribeiro, & Tort, 2011; Russo & Durstewitz, 2017). Some advantages of network science here are that the detection of ensembles is but one application of the same representation of the population activity; that a range of null models is available for testing hypotheses of clustering; and that the limitations of module detection are well established, allowing comparatively safe interpretation of the results (Fortunato & Hric, 2016; Good, de Montjoye, & Clauset, 2010). More generally, analyses of neural population recordings have used dimension-reduction approaches in order to visualize and describe the dynamics of the population (Cunningham & Yu, 2014; Pang, Lansdell, & Fairhall, 2016). As discussed in Box 1, both network and dimension-reduction approaches offer powerful, complementary views of complex neural dynamics.
Dimension-reduction approaches to neural population recordings aim to find a compact description of the population’s activity using many fewer variables than neurons (Pang et al., 2016). Typical approaches include principal components analysis (PCA) and factor analysis, both of which aim to find a small set of dimensions in which the population activity can be described with minimal loss of information (Ahrens et al., 2012; Bartho, Curto, Luczak, Marguet, & Harris, 2009; Briggman et al., 2005; Bruno et al., 2015; Kato et al., 2015; Levi, Varona, Arshavsky, Rabinovich, & Selverston, 2005; Mazor & Laurent, 2005; Wohrer et al., 2013). More complex variants of these standard approaches can cope with widely varying timescales in cortical activity (B. M. Yu et al., 2009), or aim to decompose multiplexed encodings of stimulus variables by the population’s activity into different dimensions (Kobak et al., 2016).
Both network and standard dimension-reduction approaches have in common the starting point of a pairwise interaction matrix. PCA, for example, traditionally uses the covariance matrix as its starting point. Consequently, both approaches assume that the relationships between neurons are static over the duration of the data from which the matrix is constructed. (This assumption is also true for dimension-reduction methods that fit generative models, such as independent component analysis or Gaussian process factor analysis [B. M. Yu et al., 2009], as fitting the model also assumes stationarity in the model’s parameters over the duration of the data.)
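For comparison with the network-based sketches above, here is the textbook starting point for dimension reduction: compute the covariance matrix of the binned population activity and project onto its leading eigenvectors (the principal components). This is a generic PCA sketch, not the specific pipeline of any study cited here.

```python
import numpy as np

def population_pca(counts, n_components=3):
    """Project binned population activity onto its leading principal components.

    counts : (n_neurons x n_bins) array of binned spike counts
    Returns the (n_components x n_bins) low-dimensional trajectory and the
    fraction of total variance captured by each retained component.
    """
    centered = counts - counts.mean(axis=1, keepdims=True)
    cov = np.cov(centered)                       # (n_neurons x n_neurons) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # returned in ascending order
    order = np.argsort(eigvals)[::-1]            # sort descending by variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    trajectory = eigvecs[:, :n_components].T @ centered
    variance_explained = eigvals[:n_components] / eigvals.sum()
    return trajectory, variance_explained
```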
Where the approaches diverge is in their advantages and limitations. Dimension-reduction approaches offer the advantage of easy visualization of the trajectories of the population activity over time. This in turn allows for potentially strong qualitative conclusions, either about the conditions under which the trajectories differ—such as in encoding different stimuli (Kobak et al., 2016; Mazor & Laurent, 2005) or making different decisions (Briggman et al., 2005; Harvey, Coen, & Tank, 2012)—or about the different states repeatedly visited by the population during movement (Ahrens et al., 2012; Bruno, Frost, & Humphries, 2017; Kato et al., 2015; B. M. Yu et al., 2009). By contrast, there are not yet well-established ways of drawing quantitative conclusions from standard dimension-reduction approaches, nor of how to track changes in the population dynamics over time, such as through learning. Further, while reducing the dimensions down to just those accounting for a high proportion of the variance (or similar) in the population activity can remove noise, it also risks removing some of the higher-dimensional, and potentially informative, dynamics in the population. Finally, to date, most applications of dimension-reduction approaches have been based on just the pairwise covariance or correlation coefficient.
As I have demonstrated here, network-based approaches take a different slant on simplifying complex dynamics. The network description maintains a representation of every neuron, and so potentially captures all dynamical relationships that might be removed by dimension reduction. It is simple to use any measure of pairwise interaction, without changing the analysis. Quantitative analyses of either static (Figure 1) or changing (Figure 2) population activity are captured in simple, compact variables. And we have access to a range of null models for testing the existence of meaningful interactions between neurons and changes to those interactions. However, interpreting some of these quantifying variables, such as efficiency, in terms of neural activity is not straightforward. And it is not obvious how to visualize trial-by-trial population activity, nor how to draw qualitative conclusions about different trajectories or states of the activity. Consequently, combining both network and dimension-reduction approaches could offer complementary insights into a neural population’s dynamics (Bruno et al., 2015).
One motivation for turning to network science as a toolbox for systems neuroscience is rooted in the extraordinarily rapid advances in recording technology, now scaling to hundreds or thousands of simultaneously recorded neurons (Stevenson & Kording, 2011). Capturing whole nervous systems of even moderately complex animal models will require scaling by further orders of magnitude (Ahrens et al., 2012; Lemon et al., 2015). And here is where network science has its most striking advantage: These tools have been developed to address social and technological networks of millions of nodes or more, so easily scale to systems neuroscience problems now and in the foreseeable future.
This is not a one-way street. Systems neuroscience poses new challenges for network science. Most studies in network science concern a handful of static or slowly changing data networks. Neural populations have nonstationary dynamics, which change rapidly compared with the temporal resolution of our recordings. And systems neuroscience analysis requires quantitatively comparing multiple defined networks within and between brain regions, within and between animals, and across experimental conditions—stimuli, decisions, and other external changes. More work is needed, for example, on appropriate null models for weighted networks (Palowitch, Bhamidi, & Nobel, 2016; Rubinov & Sporns, 2011); and on appropriate ways to regularise such networks, in order to separate true interactions from stochastic noise (MacMahon & Garlaschelli, 2015). Bringing network science to bear on challenges in systems neuroscience will thus create a fertile meeting of minds.
SUPPORTING INFORMATION
Visualizations and analyses here drew on a range of open-source MATLAB (MathWorks) toolboxes:
- Brain Connectivity Toolbox (Rubinov & Sporns, 2010): https://sites.google.com/site/bctnet/
- Network visualizations used the MATLAB code of Traud et al. (2009), available here: http://netwiki.amath.unc.edu/VisComms. This also needs the MatlabBGL library: http://uk.mathworks.com/matlabcentral/fileexchange/10922-matlabbgl. Mac OSX 64-bit users will need this version: https://dgleich.wordpress.com/2010/07/08/matlabbgl-osx-64-bit/
- Spike-Train Communities Toolbox (Bruno et al., 2015; Humphries, 2011), implementing unsupervised consensus algorithms for module detection: https://github.com/mdhumphries/SpikeTrainCommunitiesToolBox
AUTHOR CONTRIBUTION
Mark D Humphries: Conceptualization; Formal Analysis; Investigation; Visualization; Writing
ACKNOWLEDGMENTS
I thank Silvia Maggi for reading a draft, Adrien Peyrache for permission to use the rat medial prefrontal cortex data, and Angela Bruno and Bill Frost for permission to use the Aplysia pedal ganglion data.
COMPETING INTERESTS
The author has declared that no competing interests exist.

Handling Editor: Olaf Sporns