Abstract

We introduce a framework for simulating signal propagation in geometric networks (networks that can be mapped to geometric graphs in some space) and developing algorithms that estimate (i.e., map) the state and functional topology of complex dynamic geometric networks. Within the framework, we define the key features typically present in such networks and of particular relevance to biological cellular neural networks: dynamics, signaling, observation, and control. The framework is particularly well suited for estimating functional connectivity in cellular neural networks from experimentally observable data and has been implemented using high-performance graphics processing unit (GPU) computing. Computationally, the framework can simulate cellular network signaling close to or faster than real time. We further propose a standard test set of networks to measure performance and compare different mapping algorithms.

1.  Introduction

Complex dynamic networks permeate many real-world engineering and biological applications. The development of mathematical and computational tools for understanding and predicting network dynamics will be key to manipulating and interacting with such real-world networks. Network theory is a subset of graph theory where the connections between vertices have a numerical value describing some attribute of that connection, such as bandwidth, flow rates, or a cost function. Complex networks are defined to have a nonstandard topology (i.e., the functional links between nodes in the network), implying some structure in the connectivity pattern of the network beyond a simple lattice or complete random connectivity. Biological cellular neural networks are both complex and dynamic, meaning that the connection attribute between any two vertices may change with respect to time and, more important, individual vertices exhibit their own nonlinear signaling dynamics. Complex functional interactions of networks made up of large numbers of neurons and glia presumably produce emergent systems-level phenomena such as consciousness and self-awareness and are responsible for how neural information is represented and processed. Changes in the structure of such networks underlie the development of multidimensional central nervous system disorders. For example, hypersynchronous neuronal and glial activity in networks of neurons is associated with the paroxysmal depolarization shifts that underlie epilepsy (Tian et al., 2005; Sudbury & Avoli, 2007; Wetherington, Serrano, & Dingledine, 2008; Feldt, Osterhage, Mormann, Lehnertz, & Zochowski, 2007). Ultimately, the physiological behavior of a neural cell network is dependent on both its functional topology and the dynamics of individual cells.

Within a complex dynamic network, there are two topologies: a static structural topology that describes all the possible connections within the network and a dynamic functional topology that establishes how a signal propagates through the static topology. Functional topologies are subsets of the structural topology and vary depending on the functional connectivity, internal dynamics of individual vertices, and the specific stimulus to the network. In other words, cells that are physically connected need not necessarily signal each other. Having said this, though, in cellular neural circuits and networks, structure and function influence each other, and the states of cells and the connections between them may change with time as a function of plasticity mechanisms. However, structural changes in the physical connectivity of a cellular neural network leading to changes in the connectivity topology occur on a very different timescale than functional changes, which can be influenced relatively quickly by plasticity mechanisms that produce changes in signaling efficacy between cells (i.e., changes in connectivity weights). While the observation of the structural network topology of cellular neural networks may be experimentally very challenging (and is the focus of much intense research), it is at least a conceptually straightforward task. The observation of functional topologies in biological neural networks, however, poses additional experimental and theoretical challenges that need to be considered. Signaling events and resultant networks may be unique and be observable only once as a signal propagates through a network. The functional topology is dynamic and may change during observation. Noise and unknown external factors limit observability and reduce repeatability. These factors make the estimation of functional connectivity from observed activity a difficult task, though a critical one for systems neuroscience if we are to understand how dynamic functional signaling in the brain at the level of networks and circuits produces responses and behaviors in the organism.

Current approaches for studying cellular neural networks can be roughly classified into three categories. The first and most popular among experimentalists are statistical methods that correlate the activities of two or more neurons in a network. This provides purely descriptive statistics about the behavior of cells. For the most part, statistical approaches make no underlying assumptions about the cellular and systems dynamics that give rise to observed signals in a network of cells. Or, put differently, correlations do not imply causality. Another way to study networks is through simulation of networks with known connectivities and dynamic parameters in order to reproduce real-world observed system-level phenomena such as vision and audition. Using well-established environments like NEURON or Genesis, many real-world phenomena have been described through simulation. However, dynamic parameters and functional connections are manually specified in simulation environments such as these in order to achieve results that mimic biological function, requiring the estimation of experimentally unobservable variables. The third category is in some ways the reverse process to simulation, where temporal data are used with appropriate models in order to estimate parameters. Within this third category, we introduce a modeling framework for using real-world data to map the functional topology of complex dynamic networks. While not a mapping algorithm or simulation environment, the framework formally defines key features of cellular neural network signaling and experimental constraints associated with observation and stimulus control and can accommodate any appropriate model of intracellular dynamics. Alongside the definition of the framework, a test set of synthetic networks with known connectivities is provided to help the development of mapping algorithms by providing a common benchmark any such algorithm should be able to map. In a subsequent article, we will introduce an approach that takes advantage of the results introduced here to estimate and map the functional topology of complex networks with unknown connectivities, given limited and often noisy observations.

The proposed framework has a number of unique properties that make it particularly applicable to the constraints and experimental limitations imposed by real biological cellular neural networks. First, dynamic activity and signaling is modeled at the individual node (i.e., cell) scale. The dynamics of individual cells are modeled as state sets, with transition functions describing their evolution across discrete time steps. Cellular resolution was chosen because it represents the best compromise between observability, dynamics, and complexity. Large numbers of individual cells can now be observed in parallel in functional neural networks using optical microscopy (Homma et al., 2009; Benninger, Hao, & Piston, 2008; Garaschuk et al., 2006; Nikolenko, Poskanzer, & Yuste, 2007). Single-cell neuronal dynamics are well understood, and many models exist (see Dayan, Abbott, & Abbott, 2001; Trappenberg, 2002), while similar models of single-cell astrocyte dynamics are beginning to emerge (Stamatakis & Mantzaris, 2006; Nadkarni, Jung, Levine, & Graham, 2008; Pittà, Volman, Levine, & Ben-Jacob, 2009; Bennett, 2005; Lavrentovich & Hemkin, 2008). Attempting to go to a finer, subcellular compartmental resolution dramatically increases the complexity of the model and computational demand, and is generally not experimentally observable at a network level. Second, cells are located in physical space, and their positions are easily determinable during experimental observation. When connected, cellular networks form geometric networks. Third, the effect of a signal on a target cell is defined as a state change in the target cell in response to the influence of a source cell that connects to it. That influence is not instantaneous and is delayed by the physical distance between cells and the speed of transmission. Signals are modulated in strength by functional weights that establish the magnitude of the influence. Fourth, to more realistically simulate experimental conditions and measurements, noise can be added at multiple levels within the framework, from parameters to state and observation variables. Finally, experimental user-defined controls at the individual cell level are defined within the framework. Controls should be designed to make observations more informative of the network dynamics, but should not change the underlying parameters and connectivities. The framework is described in detail in section 2. The results section (section 3) shows how single-cell dynamic models are integrated within the framework and how network connectivity is established from individual cells. In section 3.2 we also describe how the framework accommodates plasticity mechanisms. Section 3.3 discusses experimental observability associated with optical calcium imaging, and section 3.4 discusses the practical implementation of the framework using high-performance graphics processing unit (GPU) computing.

In section 3.5 we use the framework to propose a standard set of benchmark test networks of varying sizes and topologies to evaluate and compare different network mapping algorithms. Mapping algorithms would have access to simulated observable data (i.e., simulated experimental data) generated by the framework as a function of chosen test networks and be required to derive the unobservable parameters and functional network connectivity. The concept of a standardized test to gauge the effectiveness of an algorithm is not new, especially for optimization algorithms. For example, in the field of nonlinear programming and optimization, a standard benchmark set was established in a landmark collection of test problems (Hock & Schittkowski, 1980) that are used for testing any nonlinear optimization algorithm. Test collections have grown and developed into problem environments, providing the underlying problem code to be used directly by the optimizers (Bondarenko, Bortz, & More, 1999; Gould, 1995). By providing a set of problems with known solutions, algorithm developers have a standard by which to measure solution accuracies, convergence rates, computation times, and suitability to different problem types. We propose that a similar test set for algorithms designed to identify and map functional cellular neural networks and circuits will be just as useful. To address this, we have developed computer code that generates observable data from a known network and connectivity. The code encompasses all the elements of the framework, runs in real time for all the test networks, and is designed for parallel computation. It can therefore be used as a starting point for mapping algorithms.

2.  A Framework for Dynamics, Signaling, Control, and Observation in Geometric Networks

We develop the proposed framework using standard graph-theoretic and set-theoretic concepts and notation. In the most general sense, a network is a type of graph. A graph G is defined as an ordered pair of finite disjoint sets (V, E) such that V is the set of J vertices of G and E is the set of edges of G; V = V(G) is the vertex set of G, while E = E(G) is the edge set of G. An edge e_ij is defined if there is a directed connection from vertex i to vertex j. Geometric graphs are graphs where the relative positions of vertices are assigned coordinates in some geometric space. While this is the most generic description of a graph, dynamic geometric networks, as we use the term here, are more specialized cases of generalized geometric graphs defined as follows. Vertices in a network have two attributes, a known and static position in physical Cartesian space denoted by $x_j$ for a given vertex j and a time-variant state set $y_j(t)$ of $K_j$ state variables:
$$y_j(t) = \{\, y_{j,k}(t) : k = 1, \ldots, K_j \,\} \qquad (2.1a)$$
such that formally,
$$y_{j,k}(t) : \mathbb{R}_{\geq 0} \to \mathbb{R}, \qquad k = 1, \ldots, K_j \qquad (2.1b)$$
Next, for all vertices i other than j, let the set $Y_j(t)$ be the union of all states for all vertices i (that is, the collection of states of all vertices in the network excluding vertex j), weighted and delayed relative to vertex j, in the sense that every vertex i has the potential to pass information (e.g., a signal) to vertex j with varying amounts of influence as determined by a collection of weights $w_{ij}$ that modulate any directed edges from i to j. Furthermore, such information will be delayed by some finite time $\tau_{ij}$ as a function of the geometric position of vertex i in the network relative to j and the finite speed of information propagation. We define
$$Y_j(t) = \bigcup_{i = 1,\; i \neq j}^{J} \{\, w_{ij} \cdot y_i(t - \tau_{ij}) \,\} \qquad (2.2)$$
where, without loss of generality, we define
$$w_{ij} = \{\, w_{ij,k} : k = 1, \ldots, K_i \,\} \qquad (2.3)$$
and restrict equation 2.1b for vertex i with temporal delays as vector sets:
$$y_i(t - \tau_{ij}) = \{\, y_{i,k}(t - \tau_{ij}) : k = 1, \ldots, K_i \,\} \qquad (2.4)$$
The delays $\tau_{ij} \geq 0$ are nonnegative values representing the delay of information passing from i to j. In all cases, we adopt the convention that indexing subscripts ij mark a variable as linking the vertex pair from i to j.
We then define a transition function $H_j$ with parameter set $P_j$ that describes the temporal progression or evolution of $y_j(t)$ in discrete time increments $\Delta t$:
$$y_j(t + \Delta t) = H_j\big(\, y_j(t),\; Y_j(t),\; u_j(t);\; P_j \,\big) \qquad (2.5)$$
where $H_j$ is given by
$$H_j = \{\, H_{j,k} : k = 1, \ldots, K_j \,\} \qquad (2.6)$$
$u_j(t)$ is a user control or experimental input,
$$u_j(t) = \{\, u_{j,k}(t) : k = 1, \ldots, K_j \,\} \qquad (2.7)$$
and $P_j$ is a parameter set
$$P_j = \{\, P_{j,k} : k = 1, \ldots, K_j \,\} \qquad (2.8a)$$
$$P_{j,k} = \{\, p_{j,k,l} : l = 1, \ldots, L_k \,\} \qquad (2.8b)$$
$$j = 1, \ldots, J \qquad (2.8c)$$
$L_k$ is the number of parameters for a given state variable, $K_j$ is the number of state variables for a given vertex j, and J represents the size of the network (i.e., the total number of vertices). Note that the functions comprising the set $H_j$ each advance their respective variables in time:
$$y_{j,k}(t + \Delta t) = H_{j,k}\big(\, y_j(t),\; Y_j(t),\; u_j(t);\; P_{j,k} \,\big)$$
Similarly, we define a function $G_{ij}$ that describes the time course of the weighting sets $w_{ij}(t)$ with parameter sets $Q_{ij}$ as follows:
$$w_{ij}(t + \Delta t) = G_{ij}\big(\, w_{ij}(t),\; y_i(t - \tau_{ij}),\; y_j(t);\; Q_{ij} \,\big) \qquad (2.9)$$
where, analogous with equation 2.6, $G_{ij}$ is given by
$$G_{ij} = \{\, G_{ij,k} : k = 1, \ldots, K_i \,\}$$
with parameters
$$Q_{ij} = \{\, Q_{ij,k} : k = 1, \ldots, K_i \,\} \qquad (2.10a)$$
$$Q_{ij,k} = \{\, q_{ij,k,l} : l = 1, \ldots, L_k^w \,\} \qquad (2.10b)$$
where $L_k^w$ is the number of parameters for the weight parameter set associated with a particular state k.
The delays between vertex pairs are defined as functions of the positions of the two vertices:
$$\tau_{ij} = D\big(\, x_i, x_j;\; R_{ij} \,\big) \qquad (2.11)$$
where the set $R_{ij}$ is the set of parameters of the nonnegative function D, specific to the pair ij.
If the temporal evolutions of $y_j(t)$ and $w_{ij}(t)$ are continuous, they can be expressed as discrete-delay differential equations with delays $\tau_{ij}$ for all vertices connecting to vertex j, so that the differential forms of equations 2.5 and 2.9 are
$$\dot{y}_j(t) = h_j\big(\, y_j(t),\; Y_j(t),\; u_j(t);\; P_j \,\big) \qquad (2.12)$$
and
$$\dot{w}_{ij}(t) = g_{ij}\big(\, w_{ij}(t),\; y_i(t - \tau_{ij}),\; y_j(t);\; Q_{ij} \,\big) \qquad (2.13)$$
Since experimental measurements are typically in discrete time increments, the differential forms of 2.12 and 2.13 must be integrated to the discrete time forms of 2.5 and 2.9. As such, in this letter, we do not pursue further the interesting theoretical implications of the continuous forms given by equations 2.12 and 2.13.
Finally, we define an observation set $z_j(t)$ composed of M variables that operates directly on the state set $y_j(t)$:
$$z_j(t) = F_j\big(\, y_j(t);\; S_j \,\big) \qquad (2.14)$$
The observation function $F_j$ is vector valued:
$$F_j = \{\, F_{j,m} : m = 1, \ldots, M \,\} \qquad (2.15)$$
with parameter set given by
$$S_j = \{\, S_{j,m} : m = 1, \ldots, M \,\} \qquad (2.16a)$$
$$S_{j,m} = \{\, s_{j,m,l} : l = 1, \ldots, L_m^z \,\} \qquad (2.16b)$$

The framework presented here is general, as it allows communication between any two state variables between any two vertices. Transition functions and their parameters are defined specific to each vertex j or to communication between each vertex pair ij. This produces a large set of functions and parameters, though in practice, one or two different functions are applied to all cells or combinations. The weighting set $w_{ij}$ can operate on all state variables of the connecting vertex i into target j, though usually one state of i is transmitted to one state in j. In the next section, we describe how several dynamics and communication models used to describe individual neurons and cellular neural networks fit within this framework to reproduce observable quantities similar to experimentally measured data.
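To make the discrete-time mechanics concrete, the following sketch steps a network forward per equations 2.2, 2.5, and 2.14. It is a minimal serial illustration, not the GPU implementation described in section 3.4; all names (simulate_network, H, F, and so on) are ours, with H and F standing in for any transition and observation functions of the forms defined above.

import numpy as np

def simulate_network(H, F, y0, u, P, S, w, tau, T, dt):
    # y0: initial states [J, K]; u(t, j): control input; P, S: parameter sets;
    # w, tau: [J, J] weight and delay matrices. A circular buffer of past states
    # supplies each vertex j with the delayed, weighted set Y_j (equation 2.2).
    J = len(y0)
    steps = int(T / dt)
    lag = int(np.ceil(tau.max() / dt)) + 1              # history needed for the longest delay
    hist = np.tile(np.asarray(y0, float), (lag, 1, 1))  # buffer initialized to y0
    z = []
    for n in range(steps):
        cur, nxt = hist[n % lag], np.empty_like(hist[0])
        for j in range(J):
            # assemble Y_j: every other vertex, delayed by tau[i, j], scaled by w[i, j]
            Yj = [w[i, j] * hist[(n - int(tau[i, j] / dt)) % lag, i]
                  for i in range(J) if i != j]
            nxt[j] = H(cur[j], Yj, u(n * dt, j), P[j])   # equation 2.5
        hist[(n + 1) % lag] = nxt
        z.append([F(nxt[j], S[j]) for j in range(J)])    # equation 2.14
    return np.array(z)

A static weight matrix is used here for brevity; replacing w[i, j] with a state variable updated per equation 2.9 adds plasticity, as discussed in section 3.2.3.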

3.  Results

The framework can accommodate essentially all models of both neuronal and astrocytic dynamics. Independent of the specifics of any single-cell model chosen, the framework provides a compact mathematical structure that quantitatively describes signaling and information propagation and flow in geometrically defined networks. The geometry and physical connectivity topology of the network can be simulated (e.g., random, scale free, or small world) or measured from experimental data using methods such as optical imaging. Regardless of how one chooses to set up the network, the framework provides a description of information flow through the network given knowledge of temporal signaling delays and chosen single-cell models, or can be used to identify and map unknown functional connectivities and parameters in real neural circuits and networks. In all cases, the framework is able to provide an estimate of the complete description of the functional network and the interaction between all observable and hidden state variables and parameters. Figure 1 illustrates a simple five-vertex example that summarizes everything that is needed to describe the functional dynamics of information flow through a network. Figure 2 provides a specific example of the network from Figure 1 using a Hodgkin-Huxley model and simulating 1 second's worth of data. Note how the framework provides experimentally measurable variables (calcium and membrane voltage) for every cell in the network in the temporal sequence dictated by the geometry and connectivity of the network.

Figure 1:

A five-vertex dynamic network. Each vertex j has a position in physical Cartesian space denoted by vector $x_j$ and a dynamic state set $y_j(t)$. A vertex's dynamic state varies in discrete time steps and is influenced by its own previous state and the states of other vertices connecting into it with a delay $\tau_{ij}$ and a functional connection weight $w_{ij}$. The time delays between vertices are a function of their positions in space. The magnitudes of the connection weights, $w_{ij}$, are estimated from vertices' known positions and the observed dynamics.

Figure 2:

Signaling dynamics of the network from Figure 1. A Hodgkin-Huxley model is used for single-cell dynamics in a 1 second simulation. The delays (written in bold blue) are in milliseconds and are based on the Cartesian distances between cells. Functional weights, shown as relative strengths in green, are chosen arbitrarily for the purposes of this example. The right panel shows the network dynamics for each of the five cells following a sequential pulse stimulation (red horizontal bar) at each cell. Experimentally observable calcium traces are shown as solid black lines on an arbitrary vertical scale. Voltage, a hidden variable, is shown as gray spikes and constitutes the neurons' action potentials.

3.1.  Individual Cell Dynamics.

In this section we discuss how neuronal models of single-cell dynamics, synaptic connections, plasticity, and observation fit within the framework. We begin by showing how to construct the state transition of an isolated (unconnected) vertex and then build the full transition function by connecting multiple vertices into a network. In an isolated vertex with no incoming connections, the state transition reduces to
$$y_j(t + \Delta t) = H_j\big(\, y_j(t),\; u_j(t);\; P_j \,\big) \qquad (3.1)$$
This generic form encompasses neuronal models described in differential equation form, as well as those with a state reset based on some threshold value. Some neuronal models are expressed in differential form as
$$\dot{y}_j(t) = h_j\big(\, y_j(t),\; u_j(t);\; P_j \,\big) \qquad (3.2)$$
Converting equation 3.2 into the state transition form given by equation 3.1 is a matter of numerical integration with an integration method of choice. Using Euler's method, for example, the state transition function Hj of the system in equation 3.2 becomes
$$y_j(t + \Delta t) = y_j(t) + \Delta t \; h_j\big(\, y_j(t),\; u_j(t);\; P_j \,\big) \qquad (3.3)$$
Here we used the Euler method of integration for its simplicity and clarity, but more complex integration methods like trapezoidal or Runge-Kutta can also be used to generate the next time step from the current step.
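As a one-line illustration of equation 3.3 (a sketch in our notation, with a placeholder leaky-decay model standing in for $h_j$):

def euler_step(h, y, u, p, dt):
    # y(t + dt) = y(t) + dt * h(y(t), u(t); p), equation 3.3
    return y + dt * h(y, u, p)

# example: a single leaky state variable relaxing toward a rest value
h = lambda y, u, p: -(y - p["rest"]) / p["tau"] + u
y = 1.0
for _ in range(100):
    y = euler_step(h, y, 0.0, {"rest": 0.0, "tau": 20e-3}, 1e-3)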
As an example, consider the simple FitzHugh-Nagumo oscillator used to model neurons. In its differential form, the model is given by this pair of equations:
$$\dot{V}(t) = a_1 V(t) - a_2 V(t)^3 - a_3 W(t) + u_V(t)$$
$$\dot{W}(t) = b_1 V(t) + b_2 - b_3 W(t) \qquad (3.4)$$
The state set y(t) comprises two state variables, $y(t) = \{V(t), W(t)\}$. The experimental control set is composed of only one variable, affecting the V(t) state variable, so $u(t) = \{u_V(t)\}$. This system can be expressed in state transition form as $y(t + \Delta t) = H\big(y(t), u(t); P\big)$, with
$$H_V: \; V(t + \Delta t) = V(t) + \Delta t \,\big[\, a_1 V(t) - a_2 V(t)^3 - a_3 W(t) + u_V(t) \,\big]$$
$$H_W: \; W(t + \Delta t) = W(t) + \Delta t \,\big[\, b_1 V(t) + b_2 - b_3 W(t) \,\big] \qquad (3.5)$$
Note that we index the state transition functions based on the state variables they operate on; for example, $H_V$ advances V(t). The parameter set for this system is composed of the parameters for each of the state transition equations in equation 3.5:
$$P = \{\, P_V, P_W \,\} = \big\{\, \{a_1, a_2, a_3\},\; \{b_1, b_2, b_3\} \,\big\} \qquad (3.6)$$
This system has two state variables and six parameters.
Another class of neuronal models are those with a hard reset. These models are also described in differential equation form but contain a hard reset when a state variable reaches a certain value. As an example, consider the Izhikevich simple model, written in differential form as
$$C \, \dot{V}(t) = k \,\big(V(t) - v_r\big)\big(V(t) - v_t\big) - W(t) + u_V(t)$$
$$\dot{W}(t) = a \,\big[\, b \,\big(V(t) - v_r\big) - W(t) \,\big]$$
$$\text{if } V(t) \geq V_{peak}: \quad V(t) \leftarrow c, \;\; W(t) \leftarrow W(t) + d \qquad (3.7)$$
This model consists of a membrane voltage with amplifying currents (V(t)) and a resonant gating variable (W(t)). The system has up to nine parameters and resets both state variables when a certain voltage threshold (the $V_{peak}$ parameter) is surpassed. Mapped onto our framework, the model and its transition functions are
$$H_V: \; V(t + \Delta t) = \begin{cases} c & \text{if } V(t) \geq V_{peak} \\ V(t) + \dfrac{\Delta t}{C} \big[\, k \,(V(t) - v_r)(V(t) - v_t) - W(t) + u_V(t) \,\big] & \text{otherwise} \end{cases}$$
$$H_W: \; W(t + \Delta t) = \begin{cases} W(t) + d & \text{if } V(t) \geq V_{peak} \\ W(t) + \Delta t \, a \,\big[\, b \,(V(t) - v_r) - W(t) \,\big] & \text{otherwise} \end{cases} \qquad (3.8)$$
Here, the parameter set is $P = \{C, k, v_r, v_t, a, b, c, d, V_{peak}\}$, and just like the FitzHugh-Nagumo model, $y(t) = \{V(t), W(t)\}$ and $u(t) = \{u_V(t)\}$. When the parameter values of the individual models are changed, different classes of neurons can be simulated with the same transition function.
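A runnable sketch of the transition functions in equation 3.8, using the nine-parameter form reconstructed above; the regular-spiking parameter values are the ones commonly quoted for this model and are included only for illustration:

def izhikevich_step(V, W, u, p, dt):
    # hard reset of both state variables once V crosses Vpeak (equation 3.8)
    if V >= p["vpeak"]:
        return p["c"], W + p["d"]
    dV = (p["k"] * (V - p["vr"]) * (V - p["vt"]) - W + u) / p["C"]
    dW = p["a"] * (p["b"] * (V - p["vr"]) - W)
    return V + dt * dV, W + dt * dW

# commonly quoted regular-spiking parameters (V in mV, t in ms, u in pA)
p = dict(C=100.0, k=0.7, vr=-60.0, vt=-40.0, a=0.03, b=-2.0, c=-50.0, d=100.0, vpeak=35.0)
V, W = p["vr"], 0.0
spikes = []
for n in range(1000):                        # 1 second at dt = 1 ms, constant 100 pA drive
    V, W = izhikevich_step(V, W, 100.0, p, 1.0)
    spikes.append(V >= p["vpeak"])           # mark the step at which a spike occurred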

Similarly, any model can be accommodated and fit into the framework, from the simplest to the most complex. Traditionally all neuronal models have membrane voltage as a state variable and propagate a discrete signal in the form of an action potential when the membrane voltage rises past some threshold value at the axon hillock in response to depolarizing and hyperpolarizing currents in dendrites mediated by spatial and temporal summation of presynaptic currents. The simplest neuronal model, the leaky integrate and fire (LIF), has voltage as a single state variable that decays to a target value and is perturbed by incoming currents. If the voltage rises past a threshold value, it is reset at the next time step to a reset value. One of the most complex and realistic single-cell models is the Hodgkin-Huxley (HH) model, which relies on four state variables to describe the dynamics responsible for the generation of action potentials. The number of parameters increases with the number of state variables, from 4 in the LIF model to 22 for the HH model. Additionally, the required time step is shorter for HH models, being on the order of 0.03 milliseconds compared to as much as 5.0 milliseconds for the LIF model. The increased number of state variables and parameters along with shorter time steps puts a significant computational burden on any simulation or mapping algorithm. The question of which model and how much complexity is required to best describe real-world data is not trivial and depends on the purpose and intent of the modeling and mapping.
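For comparison with the Izhikevich sketch above, the LIF transition just described reduces to a single state variable with a hard reset; the four parameters here match the parameter count cited in the text, though their names are ours:

def lif_step(V, u, p, dt):
    # reset at the step after a threshold crossing; otherwise leaky integration
    if V >= p["V_th"]:
        return p["V_reset"]
    return V + dt * (-(V - p["V_rest"]) / p["tau_m"] + u)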

Although considerably more limited in number than the many existing and studied neuronal dynamic models, a few astrocyte dynamic models emphasize differing aspects and processes of astrocyte signaling (Stamatakis & Mantzaris, 2006; Nadkarni et al., 2008; Pittà et al., 2009; Bennett, 2005; Lavrentovich & Hemkin, 2008; Macdonald, Yu, Buibas, & Silva, 2008). Astrocyte models are expressed in differential forms similar to equation 3.2. Further research into astrocytic models is important, though, because astrocytes have been shown to play a direct role in the bidirectional communication between themselves and neurons via intracellular calcium transients and intercellular calcium waves under controlled experimental conditions (Fields & Stevens-Graham, 2002; Agulhon et al., 2008; Verkhratsky, 2006; Coco et al., 2003; Macdonald et al., 2008; Scemes & Giaume, 2006) and, more recently, under physiologic conditions in the neural retina (Kurth-Nelson, Mishra, & Newman, 2009) and cerebellum (Hoogland, Kuhn, Göbel, & Huang, 2009). Pathophysiologically, intercellular calcium waves in astrocytes independent of neuronal hyperactivity have recently been shown to occur spontaneously in vivo in the APP/PS1 transgenic mouse model of Alzheimer's disease (Kuchibhotla, Lattarulo, Hyman, & Bacskai, 2009); and our own group has shown that amyloid beta is sufficient to trigger complex temporally delayed intercellular calcium waves in isolated astrocyte networks (Chow, Yu, Macdonald, Buibas, & Silva, 2009).

The state transition framework handles all single-cell dynamic models, as well as heterogeneous systems of different cell types, by either different parameter sets or state transition function sets, or both. Whatever the dynamics of individual neurons or astrocytes, all perform the same general task whereby processes and inputs generate outputs to other cells in a connected network.

3.2.  Cellular Network Signaling.

There are three components to cellular signaling: how long it takes for information from one cell to reach another, what the effects of one cell on another are, and how those effects change through time given the relative dynamics of the two connected cells. In this section we describe how signaling delays, functional connectivity, and plasticity are accommodated by the framework.

3.2.1  Signaling Delays.

Neurons and astrocytes form signaling networks that pass and process information between cells, and the state transition function must be extended to include signal propagation between functionally connected cells. In biological cellular networks, signal propagation occurs at a finite and relatively slow speed (i.e., compared to electronic circuit networks). Information in cell networks propagates on the order of microns per second for astrocytes to meters per second for myelinated axons in neurons. Thus, the influence of the dynamics of one cell is felt by another cell after some delay $\tau_{ij}$. While the general form for the delay is given by equation 2.11, the simplest form it can take is when signaling is a geodesic between vertices (or between the centers of cell bodies or centers of other cellular compartments as needed in morphologic models) and the transmission speed s is constant:
$$\tau_{ij} = \frac{\| x_j - x_i \|}{s} \qquad (3.9)$$
that is,
$$D\big(\, x_i, x_j;\; R_{ij} \,\big) = \frac{\| x_j - x_i \|}{s}, \qquad R_{ij} = \{ s \}$$
(see equation 2.11).

Here the delay is simply the Euclidean distance between the cell centers divided by the transmission speed. For a diffusive network, as is the case with astrocytes, delays are proportional to the square of the distance between vertices. A more complex delay function may take into account knowledge about the particular physiology of the network, curved paths between cells, and nonuniform speeds, for example. The dependence of the framework on the delays is critical to its ability to describe how and when information within the network is processed, ultimately to a significant degree dictating the intercellular dynamics of the overall neural circuit or network.
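In practice the delays for a whole network can be computed once from the vertex positions; a sketch of equation 3.9 (the positions and speed below are placeholders, and the squared-distance variant for diffusive networks is noted in a comment):

import numpy as np

def delay_matrix(X, s):
    # tau[i, j] = ||x_j - x_i|| / s (equation 3.9); X has one row per vertex
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return dist / s            # for a diffusive network, use dist**2 / s instead

tau = delay_matrix(np.random.rand(100, 3) * 100.0, 20.0)   # 100 vertices, 20 units/ms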

Figure 3 illustrates the dependency of network dynamics on signaling speed and delay times in a 100-vertex three-dimensional network by varying the intercellular signaling speed, with everything else, including its geometry (i.e., its physical connectivity), the functional connectivity, and input stimulus, remaining the same. We stimulated all cells with 500 ms of depolarizing current. The delays are inversely proportional to the signal propagation speed. We illustrate the effects of three signaling speeds: 2, 20, and 200 pixels per ms. At the lowest signaling speed, 2 pixels per ms, a low-frequency periodic activity was produced that qualitatively resembles a central pattern generator. At a speed of 20 pixels per ms, fewer cells exhibited low-frequency oscillations, and signaling became more sporadic. For both the 2- and 20-pixel per ms propagation speeds, however, signaling continued past the period of stimulation. At a propagation speed of 200 pixels per ms, however, there was no signaling past the stimulus period. In fully recurrent networks such as the one illustrated here, delays serve as a form of signal storage, essentially giving cells time to recover from a refractory period between activations, which maintains recurrent signaling propagation well beyond an initial stimulus. For some appropriate range of signaling speeds, and therefore delay times, this recurrent signaling can settle into a repeatable pattern. If, however, the signaling speeds are too fast, incoming signaling from upstream cells never has an opportunity to activate downstream cells because the cells are still refractory and do not respond. As a result, signaling in the network quickly dies away since it is not sustained without being driven by an external stimulus, as is the case with speeds of 200 pixels per ms in this example. A full discussion of the dependency of the network dynamics on the variables that govern it is very involved and beyond the scope of this letter. However, this example serves to illustrate that along with a network's connectivity topology and individual vertex dynamics, signaling delays play a crucial role in its overall response and dynamics and must be part of any network simulation or modeling framework that attempts to capture the inherent behaviors of neurobiological networks.

Figure 3:

Effects of signaling speed on network dynamics. (A) The spatial locations of vertices and physical connections. (B) Functional connections are assigned random weights uniformly distributed between −1 and 1 on each physical edge. An Izhikevich simple model of bursting neurons was used to model the individual vertex dynamics. (C) When the speed of signal propagation is varied, the delay distributions are scaled and have a substantial impact on the spike dynamics (see text).

3.2.2  Functional Connectivity.

The other component of signaling is the functional connectivity of the network, or how the state of one vertex influences the state of another. In equation 2.2, the set $Y_j(t)$ collects all the states of all the vertices in the network except j, delayed by a time value $\tau_{ij}$ relative to j. When $Y_j(t)$ is passed into the transition function $H_j$, the information contained in every state of every other vertex is made available to all state variables in vertex j. Within the framework, this is the broadest possible scope of connectivity, though in practice, typically information from one variable affects one or more variables in another vertex.

As an example, consider a neuron and its presynaptic (chemical) inputs that induce postsynaptic currents. The postsynaptic current (PSC) is modeled with additional state variables in the neuron's state vector yj(t). There are two PSC models in wide use that we have tested within the framework, although it is in no way limited to these two examples. In general, a presynaptic neuron causes a PSC in a postsynaptic neuron that ultimately affects membrane voltage. Modeling the effect of an incoming signal on a target cell is key to establishing connectivity based on the observed cell dynamics. Postsynaptic current is represented as a function of the form
$$s_j(t) = g_{\max} \, r(t)$$
where the resultant signals $s_j(t)$ are summed and passed to the voltage state variable of the connected target vertex. $g_{\max}$ is the maximum conductance, and in the case of a specific synapse, it can be expressed as a product of the maximum allowable conductance and the (instantaneous) functional connection weight $w_{ij}(t)$. r(t) describes the time course of the current and is generally one of two forms, depending on the neuron type and neurotransmitter release (Destexhe, Mainen, & Sejnowski, 1994, 1998),
$$r(t) = e^{-a t} \qquad \text{or} \qquad r(t) = a \, t \, e^{-a t}$$
where a is the time decay constant. In both cases, time starts at the moment of activation, in this case the time of the activation of the presynaptic cell plus the delay to the postsynaptic cell, $\tau_{ij}$. For either case, the expressions for r(t) can be written as linear differential systems, with the spike as the impulse. For the simple exponential, the differential equation is
$$\dot{r}(t) = -a \, r(t) + w_{ij} \, \sigma_i(t - \tau_{ij}) \qquad (3.10)$$
For the α-function, another state variable p(t) is used:
$$\dot{p}(t) = -a \, p(t) + w_{ij} \, \sigma_i(t - \tau_{ij}), \qquad \dot{r}(t) = -a \, r(t) + p(t) \qquad (3.11)$$
Shifting the arrival delays is simply a matter of shifting the spike detection function of the presynaptic neuron, so the arrival time from vertex i to vertex j is effectively the time-shifted spike detection function $\sigma_i(t - \tau_{ij})$ of the voltage of cell i (see equation 3.15). The synaptic current is a decreasing exponential with rate a, incremented by a weight value $w_{ij}$ on arrival of a spike occurring $\tau_{ij}$ time units ago at another cell i.
For example, extending the state set for the Izhikevich model given by equation 3.8 for an arbitrary vertex j to include an α-function PSC model of postsynaptic currents produces $y(t) = \{V(t), W(t), r(t), p(t)\}$, where
$$H_{V,j}: \; V(t + \Delta t) = \begin{cases} c & \text{if } V(t) \geq V_{peak} \\ V(t) + \dfrac{\Delta t}{C} \big[\, k \,(V(t) - v_r)(V(t) - v_t) - W(t) + r(t) + u_V(t) \,\big] & \text{otherwise} \end{cases} \qquad (3.12)$$
$$H_{r,j}: \; r(t + \Delta t) = r(t) + \Delta t \,\big[ -a \, r(t) + p(t) \big] \qquad (3.13)$$
$$H_{p,j}: \; p(t + \Delta t) = p(t) - \Delta t \, a \, p(t) + \sum_{i \neq j} w_{ij}(t) \, \sigma_i(t - \tau_{ij}) \qquad (3.14)$$
The spike detection function is given by
$$\sigma_i(t) = \begin{cases} 1 & \text{if } V_i(t) \geq V_{peak} \\ 0 & \text{otherwise} \end{cases} \qquad (3.15)$$
Here, the spike inputs from other neurons are passed into the state variable p(t) with transition function $H_{p,j}$, integrated by variable r(t), and passed into the voltage variable V(t). The functional weights operate on only the voltage spikes from other neurons, so only information from the voltage state variable V(t) is passed to other neurons. A similar set can be constructed with the PSC model in equation 3.10.
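A sketch of the α-function PSC chain in equations 3.13 and 3.14 as reconstructed above, for a single target cell j; spikes_delayed stands in for the ring-buffer lookup of $\sigma_i(t - \tau_{ij})$, and all names are ours:

import numpy as np

def psc_step(r, p_var, spikes_delayed, w_in, a, dt):
    # delayed, weighted presynaptic spikes enter p(t) (equation 3.14) ...
    p_next = p_var - dt * a * p_var + float(w_in @ spikes_delayed)
    # ... and p(t) is integrated into r(t), which feeds the voltage update (equation 3.13)
    r_next = r + dt * (-a * r + p_next)
    return r_next, p_next

# w_in: incoming weights for cell j; spikes_delayed: 0/1 arrivals at this time step
r, p_var = 0.0, 0.0
r, p_var = psc_step(r, p_var, np.array([1.0, 0.0]), np.array([0.5, -0.2]), a=0.5, dt=1.0)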

3.2.3  Plasticity.

Up to this point, we have described models with fixed functional connectivity strengths. This assumption is valid for networks observed over short periods, where connection strengths can be assumed to be constant for the purposes of mapping or simulating, since the plasticity mechanisms that modulate connection strengths operate on longer timescales. If the weights change as a function of the activities of the cells they connect, the framework can be used to modulate connective strengths (see equations 2.9 and 2.10a). Just as with single-cell dynamics and network connectivity, there are many models of plasticity, and we will not attempt to list or review them all here. Rather, we describe how a simple spike-time-dependent plasticity model reviewed by Bi and Poo (2001) that incorporates long-term potentiation (LTP) and long-term depression (LTD) can be easily implemented within the proposed framework.

In equation 2.9, we described the functional strength or weight $w_{ij}(t)$ with transition function $G_{ij}$, analogous to the state set for individual neurons. When the weight is constant, the set contains only one variable, so $w_{ij}(t) = w_{ij}$, and there is no temporal change in $w_{ij}$ and no transition function or parameters. However, if the weight is modulated by the activity of cell i on j (the connection is directional, so $w_{ij}$ modulates information flowing from i to j), then the states $y_i(t - \tau_{ij})$ and $y_j(t)$ affect how $w_{ij}(t)$ changes in time.

The neuronal plasticity model in Bi and Poo (2001) describes strengthening and weakening of synaptic conductance based on the timing of spikes in pre- and postsynaptic neuronal spiking. If a postsynaptic spike occurs immediately after a presynaptic spike, the synaptic conductance is increased and the functional connection is effectively strengthened. If the postsynaptic neuron spikes before the presynaptic neuron, the connection is weakened. Other conditions, like spike coincidence or long times between the spikes of pre- and postsynaptic neurons, have no effect on the synaptic conductance.

To incorporate this model into our framework, we first augment the set of cell states with another state variable, $s_j(t)$, that stores the time from the last spike. This state variable has transition function $H_{s,j}$:
$$H_{s,j}: \; s_j(t + \Delta t) = \begin{cases} 0 & \text{if } V_j(t) \geq V_{peak} \\ s_j(t) + \Delta t & \text{otherwise} \end{cases} \qquad (3.16)$$
The model in Bi and Poo (2001) uses two exponential curves to describe changes in synaptic conductance based on the spike times. By defining the function $q_{ij}(t)$ as
$$q_{ij}(t) = \begin{cases} A_+ \, e^{-s_i(t - \tau_{ij}) / \tau_+} & \text{if } V_j(t) \geq V_{peak} \text{ (postsynaptic spike)} \\ -A_- \, e^{-s_j(t) / \tau_-} & \text{if } V_i(t - \tau_{ij}) \geq V_{peak} \text{ (presynaptic spike arrival)} \\ 0 & \text{otherwise} \end{cases} \qquad (3.17)$$
we can reconstruct the plasticity model within the framework as
$$\Delta w_{ij}(t) = q_{ij}(t) \qquad (3.18)$$
Thus, the transition function for dynamic synaptic weights is
$$G_{ij}: \; w_{ij}(t + \Delta t) = w_{ij}(t) + q_{ij}(t) \qquad (3.19)$$
The parameter set for connection ij is $Q_{ij} = \{A_+, A_-, \tau_+, \tau_-\}$. This way, a static weight $w_{ij}$ can be converted into a dynamical one, $w_{ij}(t)$, with behaviors governed by any arbitrary plasticity model and its associated parameter set $Q_{ij}$. It is important to note that while here we describe only one scalar weight between two vertices, the framework as defined can accommodate as many weights as there are state variables for a given vertex, thereby describing different classes of intercellular signaling between vertex pairs. For example, in networks of neurons, signaling may occur by gap-junction-mediated electrical synapses in addition to chemical synapses, while in astrocyte networks, intercellular signaling is typically mediated by diffusional processes (e.g., vesicularly released adenosine triphosphate, ATP).
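A sketch of the weight transition in equations 3.17 to 3.19 as reconstructed above. The timers s_pre (time since the delayed presynaptic spike arrived) and s_post (time since the last postsynaptic spike) correspond to the s variables of equation 3.16; the constants A_plus, A_minus, tau_plus, and tau_minus are illustrative placeholders:

import math

def stdp_step(w, s_pre, s_post, pre_arrived, post_spiked, q):
    # w(t + dt) = w(t) + q_ij(t), equation 3.19
    dw = 0.0
    if post_spiked:     # post fires after a pre arrival: potentiate (LTP)
        dw += q["A_plus"] * math.exp(-s_pre / q["tau_plus"])
    if pre_arrived:     # pre arrives after the post fired: depress (LTD)
        dw -= q["A_minus"] * math.exp(-s_post / q["tau_minus"])
    return w + dw

q = dict(A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0)  # ms-scale constants
w = stdp_step(0.5, s_pre=5.0, s_post=40.0, pre_arrived=False, post_spiked=True, q=q)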

3.3.  Experimental Observability Through Calcium Observation.

Typically only one or a few of the state variables in the state vector are available for observation (i.e., are experimentally measurable). This is certainly the case with cellular networks, especially in neurons, where voltage is measured as an indicator of signaling activity. But simultaneous voltage measurements are difficult for networks of many neurons, where the geometry of the network may be important to the analysis or interpretation of the data. While high-density planar multielectrode arrays can record from a few hundred cells at once, it is typically not possible to correlate recorded activity with the native geometry of the network. (There is one notable exception to this that we are aware of, which is the ganglion cell monolayer in the peripheral retina. Chichilnisky and colleagues are able to computationally infer the geometry and functional activity of these retinal ganglion cells due to their unique planar arrangement. See Pillow, Paninski, Uzzell, Simoncelli, & Chichilnisky, 2005; Shlens, Rieke, & Chichilnisky, 2008; Chichilnisky & Baylor, 1999. However, in the brain, and even in the retina where multilayered ganglion cells receive incoming macular inputs, this is not possible to do with electrode arrays.) Given the challenges associated with direct electrophysiological measurements of large neuronal ensembles, calcium fluorescence imaging has been used as an indirect measure of the dynamics of large neuronal networks. The dynamics of calcium, often influenced by voltage spiking, are modeled as state variables and associated transition functions added to the state set. A few models describe the time course of calcium as it is driven by changes in membrane voltage, and research to develop more refined and robust calcium dynamics models specifically for the study of neural microcircuits and networks is very active. Our intent here is not to describe ongoing efforts or the state of the art but simply to illustrate the integration of one such model within our framework.

The simplest model of calcium dynamics can be expressed as a linear system with a spike input,
$$\dot{c}(t) = -a \, c(t) + \sigma_j(t) \qquad (3.20)$$
This model is integrated into the framework by the addition of another state variable c(t), describing the cytosolic calcium concentration, to the state set, with transition function
$$H_{c,j}: \; c(t + \Delta t) = c(t) - \Delta t \, a \, c(t) + \sigma_j(t)$$
This model has one parameter, a, which describes the removal rate of calcium after an input caused by a spike. This is the simplest model used for calcium dynamics based on neuronal spiking and is often used to extract spikes from calcium (Vogelstein, Watson, & Packer, 2009). More complex and nonlinear models of calcium dynamics have been developed, and these can also readily be integrated within the framework.
Finally, there is the issue of observable variables. In equation 2.14, we defined an observable set $z_j(t)$ as some function of the current state set $y_j(t)$. When fluorescent calcium indicators are used, the sole observation variable is the recorded intensity $I_j(t)$ for cell j at a particular pixel, reflecting some linear multiplier of the cytosolic calcium concentration at that point in the visual field, based on the dye loading in a cell, as well as the camera, microscope, and illumination setup. Defining $z_j(t) = \{ I_j(t) \}$ with observation function $F_j$,
$$F_j: \; I_j(t) = \alpha \, c_j(t) + \beta + N(0, \sigma)$$
The parameter set $S_j = \{\alpha, \beta, \sigma\}$ represents the scaling, offset, and noise standard deviation of the observation function. The function $N(0, \sigma)$ generates a normally distributed, random value with zero mean and standard deviation $\sigma$. The noise term models the type of frame-to-frame variation typically seen in the amplification of the CCD signal prior to digitization. The size of $\sigma$ is proportional to the magnitude of the noise, itself affected by camera type and gain settings.

Since the framework is defined at cellular resolution, we are making the simplifying assumption that the recorded intensity represents the average intensity for the region of interest demarcating a specific cell j within a larger field. Additionally, if the camera records at a slower frequency than the transition dynamics, that is, if $f_{camera} < 1/\Delta t$, where $f_{camera}$ is the camera recording frequency and $\Delta t$ is the time increment (see equation 2.5), one is averaging intensity values for the duration that the camera shutter stays open. If this is the case, then multiple sequential calcium concentration values would be averaged to produce a single intensity reading.
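Putting the pieces of this section together, a sketch of the calcium transition (equation 3.20), the noisy intensity observation, and the shutter averaging just described; all parameter values shown are placeholders:

import numpy as np

rng = np.random.default_rng()

def calcium_step(c, spiked, a, dt):
    # c(t + dt) = c(t) - dt * a * c(t) + spike input (equation 3.20, discretized)
    return c - dt * a * c + (1.0 if spiked else 0.0)

def observe(c, alpha, beta, sigma):
    # I(t) = alpha * c(t) + beta + N(0, sigma): scaled, offset, noisy intensity
    return alpha * c + beta + rng.normal(0.0, sigma)

# camera slower than the dynamics: average the ~33 dt-steps spanned by one 30 Hz frame
c, frame = 0.2, []
for n in range(33):
    c = calcium_step(c, spiked=(n == 0), a=2.0, dt=1e-3)
    frame.append(observe(c, alpha=1.0, beta=0.1, sigma=0.02))
intensity_reading = float(np.mean(frame))   # one recorded camera sample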

3.4.  GPU Implementation and Benchmarks.

The practical application of the theoretical framework for both simulation and, as will be described in a subsequent article, mapping the unknown functional connectivity of experimentally observed cellular networks necessitates its implementation in an appropriate computing environment. We have taken advantage of emerging high-performance general-purpose graphics processing unit (GPU) parallel computing, although the code can also run serially on a normal central processing unit (CPU), which we have also tested. Within the GPU environment, the code has been designed to run on nVIDIA graphics cards equipped with the CUDA interface (see http://www.nvidia.com/cuda). In this way, we can parallelize vertex dynamics, signaling dynamics, and observation integrations over many processor cores, achieving significant speed-up over CPU or cluster computations. The framework and associated single-cell dynamic and network connectivity models have been coded as compact Matlab-callable libraries. All graphics user interface (GUI) and input-output (I/O) operations are handled through Matlab, and the code has been written in both Matlab and C/C++ libraries that communicate through Matlab. The libraries offer direct control over all parameters for all vertices. Because the framework is expressed in concise mathematical form, any model that can be analytically described within it can be easily coded into a simulation library. From a practical experimental perspective, GPU computation offers unprecedented scalability to larger systems with full access to all state variables and parameters, enabling rapid parallel simulation when the framework is run in the forward direction and real-time dynamic mapping when the framework is applied to the inverse problem of mapping unknown functional connectivities of cellular neural networks. Speed and parallelization are critical for statistical simulation-based identification methods such as particle filtering to operate in real time or near real time.
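The fixed time step is what makes the per-vertex work embarrassingly parallel: every vertex applies the same transition function simultaneously. A vectorized NumPy stand-in for the per-thread GPU kernel (illustrative only; the actual weaver code is Matlab and CUDA):

import numpy as np

def izhikevich_step_all(V, W, I, p, dt):
    # advance every vertex at once; on the GPU each vertex maps to one thread
    spiked = V >= p["vpeak"]
    dV = (p["k"] * (V - p["vr"]) * (V - p["vt"]) - W + I) / p["C"]
    dW = p["a"] * (p["b"] * (V - p["vr"]) - W)
    V_next = np.where(spiked, p["c"], V + dt * dV)    # hard reset where threshold crossed
    W_next = np.where(spiked, W + p["d"], W + dt * dW)
    return V_next, W_next, spiked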

Benchmarks in Figure 4 show the relative speeds of the CPU and GPU implementations of two different dynamic cell models within the framework, an Izhikevich model and a Hodgkin-Huxley model, simulated in 40 test networks (see section 3.5 regarding the test networks). The parallel GPU implementation performed anywhere from 8 to 200 times faster than the same code executed on a single CPU process. Performance was measured as a slowdown or speed-up factor relative to real time. It is important to note that identical code ran on both CPU and GPU, with the former emulating the latter. The Izhikevich model had six state variables and a time step of 1 millisecond. We simulated 10 seconds' worth of data, equivalent to 10,000 time steps. The more computationally intensive Hodgkin-Huxley model had eight state variables and a time step of 0.03 milliseconds. One second's worth of data was simulated for the HH model, corresponding to about 33,333 time steps. Benchmarks were calculated as the dimensionless ratio of the actual period of time simulated (i.e., real time) to the amount of physical computational time it took the GPU or CPU to carry out the simulations. Because of the parallel implementation on the GPU, the performance fall-off was much slower than on the CPU with increasing network size. In both the CPU and GPU cases, performance decreased with increasing network density (the total number of edges relative to the square of the number of vertices), since in denser networks, more information is transferred between edges. This is more apparent in the random networks, which have higher density than the other network classes, producing relatively slower processing speeds.

Figure 4:

CPU and GPU benchmark results for framework simulations using Hodgkin-Huxley and Izhikevich models of single-cell dynamics. Two different network topologies for four different classes of networks were simulated (lattice, small world, scale free, and random), with network sizes (number of vertices) of 10, 30, 100, 300, and 1000 vertices simulated for each of the eight different networks. The top graphs show performance on a single nVIDIA Tesla C1060 GPU, expressed relative to the real-time simulation period for each single-cell dynamic model (see text), while the data for the bottom two graphs show the performance on a single core of an Intel Core2Duo 2.5 GHz processor. Negative values represent a slowdown relative to real time, while positive values represent a speed-up. All 40 networks were tested on both. See section 3.5 for details regarding the test networks and network classes.

In general, the constant time step in the framework makes parallelization easy on GPU architectures, delivering network simulation performance near or even faster than real time. Because of the parallel threading on GPUs, performance is only modestly decreased when going from a 100- to a 1000-vertex network. For the HH model, the GPU was able to carry out the computations in essentially real time for all of the networks tested except for the largest random networks, which were about 10 to 15 times slower than real time for random networks with over a thousand vertices (upper left panel in Figure 4). For computationally simpler models such as the Izhikevich model, the GPU computations were actually faster than the period being simulated (i.e., faster than real time), ranging from roughly 10 to 20 times faster for most networks and about real time for the largest random networks (upper right panel in Figure 4). In contrast, CPU computations were always slower than real time, from 15 to 800 times slower for the HH model depending on the network class and size (lower left panel in Figure 4) and from just under real time for small 10-vertex networks to about 15 times slower for the largest random networks for the Izhikevich model (lower right panel in Figure 4). Additional GPU cards can further improve performance by splitting the task of advancing temporal cell dynamics; however, transfer of information between cells is still limited by memory and bus speeds, so dense networks run slower than sparse networks. It is important to appreciate that the ability to carry out such forward simulations or solve the inverse problem of mapping unknown functional connectivity topologies of networks in near real time or faster than real time using GPU computing is due to the mathematical construction of the framework and its efficient algorithmic implementation. It is impossible to do such network simulations or mappings in real time using biophysical compartmental simulation environments. The weaver suite, composed of the Matlab and CUDA code for the framework, is available online at http://www.silva.ucsd.edu/Silva_Lab/Links.html.

3.5.  Standardized Tests for Connectivity Estimation.

Finally, we propose a standardized test set to evaluate the effectiveness of mapping algorithms. A standard test set is well accepted in the field of nonlinear optimization, providing a standard measure of different algorithms (Hock & Schittkowski, 1980). There are two benefits to having a standard set of networks to use for mapping. First, multiple algorithms can be evaluated against the same network and model, providing relative performance benchmarks. Second, data generated for a network using one dynamic model can be mapped using another dynamic model, and comparisons can be made between the original topology and the mapped topology. This latter approach helps answer questions about model fitness, which is especially useful when trying to map data with multiple models or with uncertainty in models. We emphasize that the test set we propose here should in no way be interpreted as implying that the full complexity and variability of real neural circuits and networks are captured or even described by the set. But we argue that any mathematical method or algorithm that claims to be able to deal with real cellular neural networks of any meaningful size (e.g., on the order of tens to hundreds of cells or larger) must at the very least be able to effectively and efficiently map functional networks derived from this test set, which offers a first-order approximation to the dynamics and complexity displayed by such networks.

Here we offer the foundation for such a test set: the location of vertices in physical space and their physical connections based on different connective classes. The choice of dynamic model, parameters, and functional weights is left open and up to the discretion of the individual investigator, since they are specific to the network being studied and the mapping algorithms being designed, but they can be directly implemented within the framework we have developed. Test networks vary in size from 10 to 1000 vertices, covering the range of cells that can be imaged simultaneously with fluorescence microscopy. At the small end of the scale, networks on the order of 10 vertices are about the limit of existing connectivity estimation methods (Makarov, Panetsos, & Feo, 2005; Eldawlatly, Zhou, Jin, & Oweiss, 2010). The upper end of the scale at 1000 vertices was chosen largely due to limits of computational power available at present.

Each graph is composed of N interconnected vertices located in two- or three-dimensional physical space, with minimum distance constraints and other dynamic parameters as described below for each network class. The physical connectivity between vertices follows one of four different graph-theoretic classifications (discussed below). Since there is no measurable, "real" network spatial geometry in these synthetic networks, vertices were positioned in geometric space randomly, but with a prescribed minimum distance between neighbors. We developed a simple algorithm to populate a physical space with N cells or nodes:

import numpy as np

def place_vertices(N, d, space_range, dim, rng=np.random.default_rng()):
    # randomly place N vertices so that no two are closer than d units apart
    X = np.zeros((N, dim))
    i = 0
    while i < N:
        candidate = rng.uniform(0.0, space_range, dim)   # propose a random position
        found = True
        for j in range(i):                               # check all previously placed vertices
            if np.linalg.norm(X[j] - candidate) < d:     # candidate is less than d from X[j]
                found = False
                break
        if found:
            X[i] = candidate
            i += 1
    return X

X = place_vertices(N=100, d=5.0, space_range=100.0, dim=2)   # example usage

In this algorithm, the space range establishes the dimensions of the space occupied by vertices in the network. For a two-dimensional network, this could be a square area of $L \times L$ distance units. Distance and position units are nondimensional and can be scaled as needed. The minimum distance between vertices is a parameter but can also be expressed as a function of the number of vertices N and the dimension D of the space the graph occupies:
$$d_{\min} = L \, N^{-1/D}$$
Here the minimum distance $d_{\min}$ is proportional to the length of the space L times the number of vertices N raised to the negative inverse of the dimension D. For example, a three-dimensional space of length L = 100 could fit N = 1000 vertices with minimum distance $d_{\min} = 100 \times 1000^{-1/3} = 10$. While this is the absolute minimum distance with cubic packing, when vertices are placed at random, this minimum distance is reduced to allow for some variability in placement. Figure 5 shows examples of vertex placement in two and three dimensions for three different size networks. It is important to note that with this formula, the minimum distance can be prescribed for fractional dimensions, as may be the case for some neural tissues where cell arrangement is neither flat nor fully three-dimensionally space filling. Vertices are numbered from the center outward, so mapping can be performed on a subset of vertices that interact with the complete network. This is a more difficult case but a more realistic scenario, as vertices would be receiving inputs from unobserved vertices in real cellular networks due to experimentally limited windows of observability.
Figure 5:

Random placement of vertices with a minimum distance constraint. Examples of groups of 10, 100, and 1000 vertices placed in two (left column) and three (right column) dimensions are shown here. Vertices were colored randomly for clarity. The minimum distance is a function of both the number of vertices and the spatial dimension. For the same number of vertices, a two-dimensional network will have a smaller minimum distance and thus be packed tighter than a three-dimensional network. See text for details.

Once the vertices are placed in physical space, connections are made using established graph-theoretic classes. We included lattice, small world, scale-free, and random classes in the test set. For each connectivity class, we generated two networks of different edge densities: one with fewer and one with more edges. We chose these four classes because they represent the major graph-theoretic topologies, but any other algorithmically defined class can be used. The physical connectivity of a graph represents a constrained phase space on which dynamic signals propagate in both space and time (i.e., the functional connectivity topology), as determined by the network signaling framework and the chosen model of single-cell dynamics. Specifically, we considered the following classes and parameters for each (a generation sketch in code follows the list):

  • Lattice networks. This class of networks has only nearest-neighbor connections and no long-distance connections. The number of nearest local connections, or the total number of edges, can be specified before construction. In our case, we limited connections for each vertex to its three closest neighbors (low density) or its eight closest neighbors (high density).

  • Small world networks. This is a modification of a lattice network that includes a specified probability of long-range connections (Barabasi & Albert, 1999). The probability of long-range connections ranges between 0 (lattice network) and 1 (random network), but typical values are around 10%, meaning that 10% of all edges are randomly rewired. In our case, we built networks with 5% and 15% probabilities of random rewiring.

  • Scale-free networks. These networks follow a power-law connection (edge) degree distribution, with many cells having few connections and a few cells having many connections. When positional aspects are taken into account, scale-free networks take on some small world properties and become, in essence, scale-free geometric graphs called Apollonian networks (Andrade, Herrmann, Andrade, & Silva, 2005).

  • Random networks. The study of random graphs extends all the way back to the original work of Erdös and Rényi. In a random graph, a specified number of edges are placed between randomly chosen vertices, without regard for vertex position, which carries no meaning for this class. We built random networks of 10% and 20% density, that is, about 0.1 N(N − 1)/2 or 0.2 N(N − 1)/2 edges, where N is the number of vertices in the network. A 100% dense network connects every vertex to every other vertex.
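As an illustration, all four classes can be generated with standard graph libraries. A minimal sketch using NetworkX follows; the particular generators chosen here are our own stand-ins and not necessarily the constructions used to build the test set:

import networkx as nx

N = 100  # number of vertices

# Lattice: ring lattice with k nearest neighbors and no rewiring.
lattice = nx.watts_strogatz_graph(N, k=8, p=0.0)

# Small world: the same lattice with a 5% probability of random rewiring.
small_world = nx.watts_strogatz_graph(N, k=8, p=0.05)

# Scale-free: preferential attachment gives a power-law degree distribution.
scale_free = nx.barabasi_albert_graph(N, m=4)

# Random: Erdos-Renyi graph retaining about 10% of all possible edges.
random_net = nx.gnp_random_graph(N, p=0.10)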

The different classes and densities are shown in Figure 6, along with graph-theoretic statistics on connectivity and wiring lengths. The connections establish the physical edges between vertices along which functional connections are possible. The magnitude of the functional weights should be chosen based on the dynamic model and the specifics of the system studied. A mapping algorithm must identify the functional connections without knowledge of the physical connectivity class. Delays are defined according to the Cartesian distances between connected vertices; three-dimensional spaces generally have narrower and smaller distance distributions than two-dimensional packings. The formula presented in equation 3.9 is used to establish delays, with the speed parameter chosen according to the system being studied. The complete test set comprises 80 networks: all combinations of two dimensions, five sizes, four connectivity classes, and two edge densities, as outlined in Table 1. Figure 7 shows example simulated calcium response raster plots for the networks shown in Figure 6. A mapping algorithm should be able to identify the dynamic parameters and the functional connectivity of each test network, given a chosen single-cell dynamic model. Ultimately, the only kind of measured experimental data available to any such algorithm would be imaged calcium responses such as those simulated in Figure 7, or equivalent data for another marker of functional cellular activity. These are the practical experimental constraints that any theoretical method aimed at mapping functional activity in cellular neural circuits and networks with single-cell resolution must face. The test networks are also available for download from our Web site (http://www.silva.ucsd.edu/Silva_Lab/Links.html).
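Since delays scale with the Cartesian distance between connected vertices, a short sketch of the delay assignment follows, assuming vertex positions from the placement routine above and an arbitrary signal propagation speed:

import numpy as np

def edge_delays(positions, edges, speed=1.0):
    # Delay of each edge is the Euclidean distance between its
    # endpoints divided by the signal propagation speed.
    return {(i, j): float(np.linalg.norm(positions[i] - positions[j])) / speed
            for (i, j) in edges}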

Figure 6:

Test network topologies for a 100-vertex two-dimensional network. For each of the four topologies included in the test set, two edge densities were considered: one with fewer edges or less randomness (green) and one with more edges or more randomness (blue; left two columns). The third and fourth columns from the left show the degree and geometric length distributions of each network, respectively. For each network, the total number of edges and the total wiring length can be computed from the areas under the degree and geometric-length distribution plots. The higher-edge-density networks (blue) have both a higher total number of edges and a greater total wiring length than the lower-edge-density networks (green). For a constant signal speed, propagation delays are proportional to wiring lengths in a network.

Table 1:
Range of Parameters Specified by the Test Set.
Parameter            Range
Network size N       10, 30, 100, 300, 1000
Geometric dimension  2D, 3D (may even be fractal, i.e., 2.5D)
Connectivity type    Lattice, small world, scale-free, random
Edge density         Low, high

Notes: The current set is composed of 80 test networks spanning ranges in network size, geometric dimension, connectivity type, and edge density. The choice of dynamical cell model, parameters, functional weights, observation variable, user inputs, and noise levels is left to the individual user.

Figure 7:

Computed (i.e., simulated) dynamics of experimentally observable calcium signaling for each of the eight networks shown in Figure 6, arranged in the same order. Each network was simulated for 4 seconds using the Izhikevich simple model, with a constant current input to generate activity. The colors of the maps encode calcium concentration, representing the approximate instantaneous spike rate of individual neurons in the network.

4.  Discussion

Within the study of networks, there are two opposing yet deeply interrelated processes: simulation and estimation. Simulation deals with the forward problem of making predictions using an established model and measured parameters and connectivities. The reverse problem, estimation or mapping, uses collected data to infer, estimate, or map the parameters of a model or, for networks, the functional connectivity. The framework we introduce here can be used both for simulating signal propagation in physically realistic networks and for the reverse process of estimating or mapping unknown functional connectivities. The framework is bounded by a set of rules and constraints imposed by the experimental reality of cellular neurobiological methods: complex nonlinear dynamics, limited observability, noise and uncertainty, and experimental control. It is designed around current observation and experimental capabilities, which are shifting from single-neuron, multiple-trial to multiple-neuron, single-trial experiments (Quiroga & Panzeri, 2009).

Within the framework, the mathematical construction for the dynamic model describes the time course of each vertex. A general state transition representation encompasses different model types, from ordinary differential equations to state machines and Markov models, to simplified neuronal models like LIF and Izhikevich. The choice of model will certainly affect the estimation of the dynamic parameters from the collected data; how much the single-cell dynamic model affects the estimation of functional weights is still an open question. The proposed test set should help address this by simulating artificial data with one model and estimating with another (a sketch of such a state transition interface is given below).
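To make the state transition abstraction concrete, here is a minimal sketch of one way such an interface might look in Python. The class names are our own illustration, not the framework's API; the Izhikevich parameter values are the standard regular-spiking choices from the original model:

from abc import ABC, abstractmethod
import numpy as np

class VertexModel(ABC):
    # Any single-cell model reduces to a state transition: advance the
    # state vector one time step given the summed synaptic/external input.
    @abstractmethod
    def step(self, state, input_current, dt):
        ...

class IzhikevichModel(VertexModel):
    def __init__(self, a=0.02, b=0.2, c=-65.0, d=8.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def step(self, state, input_current, dt):
        v, u = state
        # Izhikevich simple model: quadratic membrane equation plus a
        # slow recovery variable, with an after-spike reset.
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + input_current)
        u += dt * self.a * (self.b * v - u)
        if v >= 30.0:  # spike threshold reached: reset membrane and recovery
            v, u = self.c, u + self.d
        return np.array([v, u])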

Mapping a complete functional topology is ultimately a reverse process and will involve some combination of estimation, filtering, and optimization. While some approaches exist for estimating the parameters and dynamics of single neurons (Creveling, Jeanne, & Abarbanel, 2008) or small groups of neurons (Makarov et al., 2005; Eichler, Dahlhaus, & Sandkuhler, 2003; Eldawlatly et al., 2010), mapping large networks within biologically realistic constraints remains a challenge, and we are still a long way from establishing a complete functional connectivity map of even simple processes and tasks. Indeed, from a neurophysiological perspective, it is not even entirely clear what we should be mapping or how to interpret such data in terms of deciphering the neural code. The proposed framework attempts to unify theoretical and practical considerations as an open standard for the development of large-scale functional topology reconstruction algorithms.

Since the goal of mapping is to identify both the dynamic parameters of individual vertices and the connectivity between vertices, a well-designed input control should be used to make the observation as informative as possible, provided it does not alter the parameters and connectivities of interest. Experimentally, input control can take many forms: the dynamics of individual cells can be perturbed optogenetically, pharmacologically using appropriate agonists and antagonists, or electrophysiologically. For single cells, there are different input functions that can be used, and a few approaches describe input function design for extracting the most information (Lewi, Butera, & Paninski, 2009; Benda, Gollisch, Machens, & Herz, 2007). At the network level, the set of input functions for each cell must be designed in parallel and coordinated with observed activity in order to provide the most information to the mapping algorithm.

Acknowledgments

We thank the anonymous reviewers for their tremendous effort in reading and commenting on the various versions of the submitted manuscript. This work was supported by grant R01 NS054736 from the National Institute of Neurological Disorders and Stroke at the National Institutes of Health.

References

Agulhon, C., Petravicz, J., McMullen, A., Sweger, E., Minton, S., Taves, S., et al. (2008). What is the role of astrocyte calcium in neurophysiology? Neuron, 59(6), 932–946.

Andrade, J. S., Herrmann, H. J., Andrade, R. F. S., & Silva, L. R. D. (2005). Apollonian networks: Simultaneously scale-free, small world, Euclidean, space filling, and with matching graphs. Physical Review Letters, 94(1), 4.

Barabasi, A., & Albert, R. (1999). Emergence of scaling in random networks. Science, 286(5439), 509–512.

Benda, J., Gollisch, T., Machens, C. K., & Herz, A. V. (2007). From response to stimulus: Adaptive sampling in sensory physiology. Curr. Opin. Neurobiol., 17, 430–436.

Bennett, M. R. (2005). A quantitative model of purinergic junctional transmission of calcium waves in astrocyte networks. Biophys. J., 89(4), 2235–2250.

Benninger, R. K. P., Hao, M., & Piston, D. W. (2008). Multi-photon excitation imaging of dynamic processes in living cells and tissues. Rev. Physiol. Biochem. Pharmacol., 160, 71–92.

Bi, G., & Poo, M. (2001). Synaptic modification by correlated activity: Hebb's postulate revisited. Annu. Rev. Neurosci., 24, 139–166.

Bondarenko, A., Bortz, D., & More, J. (1999). COPS: Large-scale nonlinearly constrained optimization problems (Tech. Rep. ANL/MCS-TM-237). Argonne, IL: Mathematics and Computer Science Division, Argonne National Laboratory.

Chichilnisky, E. J., & Baylor, D. A. (1999). Receptive-field microstructure of blue-yellow ganglion cells in primate retina. Nat. Neurosci., 2(10), 889–893.

Chow, S.-K., Yu, D., Macdonald, C. L., Buibas, M., & Silva, G. A. (2009). Amyloid-beta directly induces spontaneous calcium transients, delayed intercellular calcium waves, and gliosis in rat cortical astrocytes. ASN Neuro, 2(1), e00026. doi:10.1042/AN20090035.

Coco, S., Calegari, F., Pravettoni, E., Pozzi, D., Taverna, E., Rosa, P., et al. (2003). Storage and release of ATP from astrocytes in culture. Journal of Biological Chemistry, 278(2), 1354–1362.

Creveling, D. R., Jeanne, J. M., & Abarbanel, H. D. I. (2008). Parameter estimation using balanced synchronization. Physics Letters A, 372(12), 2043–2047.

Dayan, P., & Abbott, L. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. Cambridge, MA: MIT Press.

Destexhe, A., Mainen, Z. F., & Sejnowski, T. (1994). An efficient method for computing synaptic conductances based on a kinetic model of receptor binding. Neural Computation, 6(1), 14–18.

Destexhe, A., Mainen, Z., & Sejnowski, T. (1998). Methods in neuronal modeling. Cambridge, MA: MIT Press.

Eichler, M., Dahlhaus, R., & Sandkuhler, J. (2003). Partial correlation analysis for the identification of synaptic connections. Biological Cybernetics, 89(4), 289–302.

Eldawlatly, S., Zhou, Y., Jin, R., & Oweiss, K. G. (2010). On the use of dynamic Bayesian networks in reconstructing functional neuronal networks from spike train ensembles. Neural Computation, 22(1), 158–189.

Feldt, S., Osterhage, H., Mormann, F., Lehnertz, K., & Zochowski, M. (2007). Internetwork and intranetwork communications during bursting dynamics: Applications to seizure prediction. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 76(2, Pt. 1), 021920.

Fields, R. D., & Stevens-Graham, B. (2002). New insights into neuron-glia communication. Science, 298, 556–562.

Garaschuk, O., Milos, R.-I., Grienberger, C., Marandi, N., Adelsberger, H., & Konnerth, A. (2006). Optical monitoring of brain function in vivo: From neurons to networks. Pflugers Arch.: Eur. J. Physiol., 453(3), 385–396.

Gould, N. (1995). CUTE: Constrained and unconstrained testing environment. ACM Transactions on Mathematical Software, 21, 123–160.

Hock, W., & Schittkowski, K. (1980). Test examples for nonlinear programming codes. Journal of Optimization Theory and Applications, 30, 127–129.

Homma, R., Baker, B., Jin, L., Garaschuk, O., Konnerth, A., Cohen, L., et al. (2009). Wide-field and two-photon imaging of brain activity with voltage- and calcium-sensitive dyes. Methods Mol. Biol., 489, 43–80.

Hoogland, T., Kuhn, B., Göbel, W., & Huang, W. (2009). Radially expanding transglial calcium waves in the intact cerebellum. Proceedings of the National Academy of Sciences, 106, 3496–3501.

Kuchibhotla, K., Lattarulo, C., Hyman, B., & Bacskai, B. (2009). Synchronous hyperactivity and intercellular calcium waves in astrocytes in Alzheimer mice. Science, 323(5918), 1211.

Kurth-Nelson, Z. L., Mishra, A., & Newman, E. A. (2009). Spontaneous glial calcium waves in the retina develop over early adulthood. Journal of Neuroscience, 29(36), 11339–11346.

Lavrentovich, M., & Hemkin, S. (2008). A mathematical model of spontaneous calcium(II) oscillations in astrocytes. Journal of Theoretical Biology, 251(4), 553–560.

Lewi, J., Butera, R., & Paninski, L. (2009). Sequential optimal design of neurophysiology experiments. Neural Computation, 21(3), 619–687.

Macdonald, C. L., Yu, D., Buibas, M., & Silva, G. (2008). Diffusion modeling of ATP signaling suggests a partially regenerative mechanism underlies astrocyte intercellular calcium waves. Front. Neuroeng., 1, 1–13.

Makarov, V., Panetsos, F., & Feo, O. (2005). A method for determining neural connectivity and inferring the underlying network dynamics using extracellular spike recordings. Journal of Neuroscience Methods, 144, 265–279.

Nadkarni, S., Jung, P., Levine, H., & Graham, L. J. (2008). Astrocytes optimize the synaptic transmission of information. PLoS Computational Biology, 4(5), e1000088.

Nikolenko, V., Poskanzer, K. E., & Yuste, R. (2007). Two-photon photostimulation and imaging of neural circuits. Nat. Methods, 4(11), 943–950.

Pillow, J. W., Paninski, L., Uzzell, V. J., Simoncelli, E. P., & Chichilnisky, E. J. (2005). Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. J. Neurosci., 25(47), 11003–11013.

Pittà, M. D., Volman, V., Levine, H., & Ben-Jacob, E. (2009). Multimodal encoding in a simplified model of intracellular calcium signaling. Cogn. Process., 10(Suppl. 1), S55–S70.

Quiroga, R. Q., & Panzeri, S. (2009). Extracting information from neuronal populations: Information theory and decoding approaches. Nature Reviews Neuroscience, 10(3), 173–185.

Scemes, E., & Giaume, C. (2006). Astrocyte calcium waves: What they are and what they do. Glia, 54(7), 716–725.

Shlens, J., Rieke, F., & Chichilnisky, E. (2008). Synchronized firing in the retina. Curr. Opin. Neurobiol., 18(4), 396–402.

Stamatakis, M., & Mantzaris, N. (2006). Modeling of ATP-mediated signal transduction and wave propagation in astrocytic cellular networks. Journal of Theoretical Biology, 241(3), 649–668.

Sudbury, J. R., & Avoli, M. (2007). Epileptiform synchronization in the rat insular and perirhinal cortices in vitro. Eur. J. Neurosci., 26(12), 3571–3582.

Tian, G.-F., Azmi, H., Takano, T., Xu, Q., Peng, W., Lin, J., et al. (2005). An astrocytic basis of epilepsy. Nat. Med., 11, 973–981.

Trappenberg, T. (2002). Fundamentals of computational neuroscience. New York: Oxford University Press.

Verkhratsky, A. (2006). Glial calcium signaling in physiology and pathophysiology. Acta Pharmacologica Sinica, 27(7), 773–780.

Vogelstein, J., Watson, B., & Packer, A. (2009). Spike inference from calcium imaging using sequential Monte Carlo methods. Biophysical Journal, 97, 636–655.

Wetherington, J., Serrano, G., & Dingledine, R. (2008). Astrocytes in the epileptic brain. Neuron, 58(2), 168–178.