Abstract

Coupled oscillators are prevalent throughout the physical world. Dynamical system formulations of weakly coupled oscillator systems have proven effective at capturing the properties of real-world systems and are compelling models of neural systems. However, these formulations usually deal with the forward problem: simulating a system from known coupling parameters. Here we provide a solution to the inverse problem: determining the coupling parameters from measurements. Starting from the dynamic equations of a system of symmetrically coupled phase oscillators, given by a nonlinear Langevin equation, we derive the corresponding equilibrium distribution. This formulation leads us to the maximum entropy distribution that captures pairwise phase relationships. To solve the inverse problem for this distribution, we derive a closed-form solution for estimating the phase coupling parameters from observed phase statistics. Through simulations, we show that the algorithm performs well in high dimensions (d = 100) and in cases with limited data (as few as 100 samples per dimension). In addition, we derive a regularized solution to the estimation and show that the resulting procedure improves performance when only a limited amount of data is available. Because the distribution serves as the unique maximum entropy solution for pairwise phase statistics, phase coupling estimation can be broadly applied in any situation where phase measurements are made. Under the physical interpretation, the model may be used for inferring coupling relationships within cortical networks.

1.  Introduction

Many complex natural phenomena can be modeled as networks of coupled oscillators. Examples can be drawn from the physical, chemical, and biological worlds. Oscillator models have been effective at describing the dynamics of coupled pendula, coupled Josephson junctions, reaction diffusion systems, circadian rhythms, oscillating neural networks, and even the coupling of firefly luminescence (see, e.g., Buzsaki, 2006; Kuramoto, 1984; Mirollo & Strogatz, 1990; Strogatz, 2003; Winfree, 2001).

Oscillatory dynamics are prevalent in neural systems, and their computational use is critical to many theories of neural function. There is experimental evidence for oscillations in a variety of neural systems, from primary sensory effectors, through local cortical networks, and up to the cortico-hippocampal network (Buzsaki & Draguhn, 2004). Oscillations in the olfactory system are present in the olfactory bulb (Adrian, 1942) and are thought to subserve the computation of similarity between odorant patterns (Hopfield, 1995). There is growing evidence for oscillations at many levels of the visual system. Some of the first evidence for oscillations in the human brain came from recordings using EEG over the occipital lobe (Berger, 1929), and more recent findings have shown that subpopulations of pyramidal neurons exhibit gamma-band oscillations in response to visual stimulation (Gray, König, Engel, & Singer, 1989; Eckhorn et al., 1988). In retina, it has been found that some retinal ganglion cells exhibit periodic firing, and this periodicity provides an extra channel for information transmission to LGN and cortex (Koepsell et al., 2009). Evidence for oscillatory activity in the auditory system extends from the periphery to the cortex. Inner hair cells exhibit characteristic frequencies tuned to different bandwidths (Crawford & Fettiplace, 1981), cochlear projections transmit phase-locked responses to subcortical regions (Rose, Brugge, Anderson, & Hind, 1967), and stimulus mediated gamma oscillations have been observed in primary auditory cortex (Brosch, Budinger, & Scheich, 2002). In the hippocampus, oscillatory patterns are well studied and are thought to subserve memory formation (Jutras, Fries, & Buffalo, 2009) and communication between hippocampus and entorhinal cortex (Chrobak & Buzsaki, 1996; Colgin et al., 2009).

Although the presence of oscillatory dynamics in the brain is largely accepted, the role of these oscillations is widely debated. While it is likely that neural oscillations assume multiple roles in neural systems, and maybe even multiple roles in the same neural system, we have found that it is useful to keep in mind a model of intracortical communication that is mediated by neural oscillations in different populations of neurons. Under the proposed theory of communication through coherence, only groups of neurons that are coherently oscillating can interact effectively (Varela, Lachaux, Rodriguez, & Martinerie, 2001; Fries, 2005). This implies that if we are able to measure neural synchrony, we can infer the actively communicating cortical populations. Such inter-area cortical synchronization is thought to mediate perceptual binding (Pareti & De Palma, 2004), attentional feedback (Siegel, Donner, Oostenveld, Fries, & Engel, 2008), and the transfer of memory from hippocampus to cortex (Fell et al., 2001). The formation of synchronized networks is also thought to be dynamic and indicative of behavioral and cognitive functional state (Fox et al., 2005).

Many studies that investigate the presence, dynamics, and behavioral dependence of neural synchrony have used measures closely related to phase correlation. Phase correlation is a pairwise measurement that determines the second-order statistical dependency between two oscillations, which can include both the strength of the phase dependency and the phase offset of the dependency. A variety of measurements based on phase correlation have been used recently: phase correlation (Siapas, Lubenov, & Wilson, 2005), phase locking value (Lachaux, Rodriguez, Martinerie, & Varela, 1999; Le Van Quyen et al., 2001; Wang, Hong, Gao, & Gao, 2006; Siegel, Warden, & Miller, 2009), phase locking factor (Palva, Palva, & Kaila, 2005), phase coherence (Spencer et al., 2003; Saalmann, Pigarev, & Vidyasagar, 2007; Womelsdorf et al., 2007), and coherence statistic (Jarvis & Mitra, 2001; Buschman & Miller, 2007). These measurements have provided powerful evidence for phase synchrony in cortex; however, they all fail to define a proper probabilistic model or the dynamics of a system that would produce such measurements. As we will show in section 2.2, phase correlations are only indirectly related to the parameters of a probabilistic distribution and thus indirectly related to the network interactions in the dynamical system we propose. Therefore, taking phase correlations at face value may lead to false inferences about network interactions.

Given the wealth of evidence for neural oscillations, the myriad hypothesized roles for oscillatory dynamics in the brain, and the prevalent use of phase correlation measurements, it is important that we develop tools for inferring the connectivity within oscillatory networks from measurements using probabilistic models. The process of inferring network connectivity from measurements is known as the inverse problem and must be solved if we are to scientifically investigate the role of oscillatory dynamics in networks of neurons.

In statistical mechanics, the inverse problem is typically solved by proposing a probability distribution and estimating the distribution's parameters from measurements. A natural choice for the estimation, a highly underdetermined problem, is given by the unique maximum entropy distribution that reproduces the statistics of the measurements (Jaynes, 1957). A number of such distributions and estimation techniques are used throughout the science and engineering communities. In the real-valued case the multivariate gaussian distribution, and in the binary case the Ising model, serve as widely used multivariate maximum entropy distributions consistent with second-order statistics. Each of these cases has well-known estimation techniques for inferring the distribution's parameters from observations. The availability of these techniques has led to a number of applications; for example, the Ising model and its corresponding estimation techniques have been used to infer the coupling in networks of retinal ganglion cells (Shlens et al., 2006; Schneidman, Berry, Segev, & Bialek, 2006; Cocco, Leibler, & Monasson, 2009). However, for the phase variables that are of interest in networks of oscillators, there has been little work on providing a corresponding multivariate probabilistic distribution or deriving estimation techniques to infer the distribution's parameters from data. We review related work on multivariate phase distributions and solutions to the inverse problem in section 4.

In this letter, we provide a solution to the inverse problem for systems of symmetrically coupled phase oscillators. We begin by presenting the Langevin dynamics for a generalized form of the Kuramoto model of coupled phase oscillators. Solving for the equilibrium distribution yields a multivariate probability distribution of coupled phase variables. This probabilistic formalism allows us to derive a novel estimation technique for the coupling terms from phase variable measurements. We show that this technique performs robustly in high dimensions, and we introduce a regularization on the solution that provides better estimates under limited data. We close with a discussion of the technique as a statistical and physical model and express the limitations of the physical interpretation imposed by the model's assumptions.

2.  Phase Coupling Estimation

2.1.  A Model of Coupled Phase Oscillators.

Consider a network of d identical coupled oscillators with intrinsic frequency ω. In the limit of weak coupling, the amplitude of the oscillators can be assumed to be constant, and the equations of motion can be formulated in terms of d phase variables $\theta_i$, $i = 1, \ldots, d$. A popular choice for the dynamics of such a system is given by the Kuramoto model (Kuramoto, 1984), which has constant coupling between oscillators. Such models have been used to simulate neural systems (Vicente, Arenas, & Bonilla, 1996; Hoppensteadt & Izhikevich, 1995). We can generalize this model to include inhomogeneous couplings and inhomogeneous phase offsets between oscillators. The dynamic equation is then given by
$$\frac{d\theta_i}{dt} \;=\; \omega \;-\; \sum_{j\neq i}\kappa_{ij}\,\sin\!\left(\theta_i-\theta_j-\mu_{ij}\right) \;+\; \xi_i(t), \tag{2.1}$$
where $\kappa_{ij}$ is the coupling strength and $\mu_{ij}$ is the preferred phase offset between two oscillators i and j. We consider only the case of symmetric coupling ($\kappa_{ij}=\kappa_{ji}$, $\mu_{ij}=-\mu_{ji}$). The constant intrinsic frequency, denoted by ω, is identical for all oscillators. The noise fluctuations, $\xi_i(t)$, are zero-mean gaussian distributed with δ covariance functions and variance $\sigma^2$, corresponding to the temperature of the system:
$$\langle \xi_i(t)\rangle = 0, \qquad \big\langle \xi_i(t)\,\xi_j(t')\big\rangle = \sigma^{2}\,\delta_{ij}\,\delta(t-t'). \tag{2.2}$$
The equations of motion given in equation 2.1 for our system of coupled oscillators can be considered as a nonlinear Langevin equation describing Brownian motion on a d-torus in the presence of the potential given by
$$E(\boldsymbol{\theta}) \;=\; -\frac{1}{2}\sum_{i,j}\kappa_{ij}\,\cos\!\left(\theta_i-\theta_j-\mu_{ij}\right), \tag{2.3}$$
where $\boldsymbol{\theta}$ is now a d-dimensional vector with components $\theta_i$ and K is the $d\times d$-dimensional parameter matrix with elements $K_{ij}=\kappa_{ij}\,e^{i\mu_{ij}}$. Note that by applying the transformation $\theta_i \to \theta_i - \omega t$ to equation 2.1, we can assume $\omega = 0$ without loss of generality.
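For concreteness, the dynamics in equation 2.1 can be integrated numerically with an Euler-Maruyama scheme. The sketch below is illustrative rather than the simulation code used for the results in section 3; the function name, step size, and sampling interval are our own choices, and the drift follows the sign conventions of equations 2.1 and 2.3 as written above.

```python
import numpy as np

def simulate_phases(kappa, mu, omega=0.0, sigma=2.0 ** 0.5, dt=0.01,
                    n_steps=200_000, keep_every=100, rng=None):
    """Euler-Maruyama integration of the Langevin dynamics in equation 2.1.

    kappa: symmetric (d, d) array of coupling strengths kappa_ij, zero diagonal.
    mu:    antisymmetric (d, d) array of preferred phase offsets mu_ij.
    sigma**2 is the noise variance (the 'temperature'); with sigma**2 = 2 the
    equilibrium distribution is proportional to exp(-E), otherwise the
    effective coupling is rescaled by 2 / sigma**2.  Returns retained phase
    vectors of shape (n_samples, d); no burn-in is discarded in this sketch.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = kappa.shape[0]
    theta = rng.uniform(0.0, 2.0 * np.pi, size=d)
    samples = []
    for step in range(n_steps):
        diff = theta[:, None] - theta[None, :]        # theta_i - theta_j
        drift = omega - np.sum(kappa * np.sin(diff - mu), axis=1)
        theta = theta + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(d)
        theta = np.mod(theta, 2.0 * np.pi)
        if step % keep_every == 0:
            samples.append(theta)
    return np.array(samples)
```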
By changing the coordinates from the angular representation, $\boldsymbol{\theta}$, to the complex representation, $\mathbf{x}$, with components $x_i = e^{i\theta_i}$, we can rewrite equation 2.3 more compactly as the (real-valued) quadratic Hermitian form:
$$E(\mathbf{x}) \;=\; -\frac{1}{2}\,\mathbf{x}^{\dagger} K\,\mathbf{x}, \tag{2.4}$$
where $\mathbf{x}^{\dagger}$ denotes Hermitian conjugation of the vector $\mathbf{x}$ and K is the Hermitian matrix defined above. This energy function, equation 2.3, is closely related to the XY-model (Barouch, McCoy, & Dresden, 1970), which has only homogeneous nearest-neighbor couplings ($\kappa_{ij}=\mathrm{const.}$) and no phase offsets ($\mu_{ij}=0$). This generalization is analogous to the extension of the homogeneous Ising model to spin glasses.
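The equivalence of the angular form (equation 2.3) and the Hermitian form (equation 2.4) is easy to verify numerically. The following minimal sketch assumes the reconstructed forms of those equations; the function names are illustrative.

```python
import numpy as np

def energy_angular(theta, kappa, mu):
    """Equation 2.3: E(theta) = -1/2 sum_ij kappa_ij cos(theta_i - theta_j - mu_ij)."""
    diff = theta[:, None] - theta[None, :]
    return -0.5 * np.sum(kappa * np.cos(diff - mu))

def energy_hermitian(theta, K):
    """Equation 2.4: the same energy written as -1/2 x^dagger K x with x_i = exp(i theta_i)."""
    x = np.exp(1j * theta)
    return -0.5 * np.real(np.conj(x) @ K @ x)

# With K = kappa * np.exp(1j * mu) (kappa symmetric, mu antisymmetric, zero
# diagonal, so K is Hermitian), the two functions return the same value.
```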
It is known (see, e.g., Risken, 1989) that the probability density of a system governed by Langevin dynamics evolves according to the Fokker-Planck equation,
$$\frac{\partial p(\boldsymbol{\theta},t)}{\partial t} \;=\; \left[-\sum_i \frac{\partial}{\partial\theta_i}\,D_i(\boldsymbol{\theta}) \;+\; \sum_{i,j}\frac{\partial^{2}}{\partial\theta_i\,\partial\theta_j}\,D_{ij}\right] p(\boldsymbol{\theta},t), \tag{2.5}$$
with drift and diffusion coefficients given by
$$D_i(\boldsymbol{\theta}) \;=\; -\frac{\partial E(\boldsymbol{\theta})}{\partial\theta_i}, \qquad D_{ij} \;=\; \frac{\sigma^{2}}{2}\,\delta_{ij}. \tag{2.6}$$
Since the drift coefficient Di is a gradient field and the diffusion coefficient Dij is constant, we can solve the Fokker-Planck equation, given by equation 2.5, for the stationary solution in closed form and obtain
$$p(\boldsymbol{\theta}) \;=\; \frac{1}{Z(K)}\,e^{-E(\boldsymbol{\theta})}, \tag{2.7}$$
with the energy function $E(\boldsymbol{\theta})$ given by equation 2.3 (the temperature factor $2/\sigma^{2}$ arising from the stationary solution is absorbed into the coupling matrix K) and partition function Z(K).

2.2.  Phase Correlations.

Given a probability distribution of multivariate phase variables, we can now show that phase correlations in systems of coupled phase oscillators are only indirectly related to the coupling parameters. Phase correlations, a pairwise measurement often used to characterize oscillator systems, have a direct relationship to the marginal distribution of phase differences but not to coupling parameters. The form of the marginal distribution can be derived by examination of the individual factors in the definition
$$p(\theta_i,\theta_j) \;=\; \frac{1}{Z(K)}\int e^{-E(\boldsymbol{\theta})}\,\prod_{k\neq i,j} d\theta_k, \tag{2.8}$$
in which the integration is over all phases $\theta_k$ with $k\neq i,j$. After a change of variables to phase differences, all terms in equation 2.8 either depend on the phase difference $\theta_i-\theta_j$ or are independent of $\theta_i$ and $\theta_j$. The independent terms integrate to a constant, and the remaining terms combine to a von Mises distribution in the pairwise phase difference,
$$p(\theta_i-\theta_j) \;=\; \frac{1}{Z_{ij}}\,\exp\!\left(\kappa^{\mathrm{emp}}_{ij}\,\cos\!\left(\theta_i-\theta_j-\mu^{\mathrm{emp}}_{ij}\right)\right), \tag{2.9}$$
with mean phase $\mu^{\mathrm{emp}}_{ij}$ and concentration parameter $\kappa^{\mathrm{emp}}_{ij}$. The parameters of the distribution (2.9) can be estimated from the first circular moment $m_{ij}=\langle e^{i(\theta_i-\theta_j)}\rangle$: the mean phase is the complex angle of the first moment, the concentration parameter can be obtained by numerically solving $I_1(\kappa^{\mathrm{emp}}_{ij})/I_0(\kappa^{\mathrm{emp}}_{ij})=|m_{ij}|$, and the normalization constant is given by $Z_{ij}=2\pi I_0(\kappa^{\mathrm{emp}}_{ij})$. $I_0(x)$ and $I_1(x)$ denote the modified Bessel functions of zeroth and first order, respectively. The value of $\kappa^{\mathrm{emp}}_{ij}$ is related to the coupling parameters K through equation 2.8. Therefore, there is a nontrivial relationship between the measured phase correlations and the coupling parameters.
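As an illustration of this fitting procedure, the von Mises parameters of a measured phase difference can be recovered from the first circular moment. The sketch below assumes samples of the phase difference are available; the function name and the bracketing interval used for the numerical inversion are our own choices, and the Bessel ratio is evaluated with exponentially scaled functions to avoid overflow.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import i0e, i1e

def fit_von_mises(delta_theta):
    """Fit the pairwise von Mises marginal (equation 2.9) to phase differences.

    The mean phase is the angle of the first circular moment; the concentration
    is found by numerically inverting I1(kappa) / I0(kappa) = |m1|, using the
    exponentially scaled Bessel functions i1e / i0e (same ratio, no overflow).
    """
    m1 = np.mean(np.exp(1j * np.asarray(delta_theta)))   # first circular moment
    mu_emp = np.angle(m1)                                # empirical mean phase
    r = np.abs(m1)                                       # resultant length
    if r < 1e-12:
        return mu_emp, 0.0                               # no detectable concentration
    kappa_emp = brentq(lambda k: i1e(k) / i0e(k) - r, 1e-9, 1e4)
    return mu_emp, kappa_emp
```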

Because of this nontrivial relationship, pairwise measurements can often lead to false interpretations of the true coupling. To demonstrate this point, we simulated three model systems and present the results in Figure 1. Each system illustrates a case in which taking phase correlations at face value will lead to false conclusions about the network connectivity. The first system (see row 1, column 1 in Figure 1) is a network of three coupled oscillators in which the two oscillators labeled A and B are not directly coupled ($\kappa_{AB}=0$). Although these oscillators are not directly coupled, the empirical distribution of their phase difference shows a strong peak (second column). In the third column, we depict the network coupling that is implied by using phase correlations as a proxy for underlying network coupling. In this case, phase correlations indicate that oscillators A and B are directly coupled when their true coupling is zero.

Figure 1:

Phase coupling estimation recovers the true coupling from measurements in situations where phase correlations alone produce false inferences about network coupling. True coupling (first column): depicts the network coupling used to simulate a system of oscillators; Pairwise distribution (second column): the empirical distribution of the phase difference between oscillators A and B. Phase correlations (third column): inferred network coupling using phase correlations alone. Phase coupling estimation (fourth column): inferred network coupling using our estimation technique. Each row presents a different situation in which phase correlations provide incorrect inference about oscillator interactions. Row 1, spurious coupling: phase correlations indicate coupling between A and B when the true coupling and coupling estimated using our model are 0. Row 2, missing coupling: phase correlations indicate a lack of coupling between A and B when there is an interaction between A and B, which is correctly inferred using our estimation technique. Row 3, incorrect offset: phase correlations indicate that oscillator A leads oscillator B, when the true relationship, and relationship inferred by our technique, is that the interaction between oscillator A and oscillator B is a phase lag.

Phase correlations can also indicate that there is no coupling when the true system is coupled (see Figure 1, row 2) and can indicate a phase lead when the true system interacts with a phase lag (see Figure 1, row 3). In the second example, oscillators A and B are directly coupled. However, the pairwise empirical distribution shows no significant coupling, and inference of the network connectivity using phase correlations (column 3) leads to the incorrect conclusion that oscillators A and B are not coupled. In the final example, oscillators A and B are coupled with a nonzero preferred phase offset. However, the empirical distribution indicates a phase offset of the opposite sign, and the phase correlation indicates that A leads B. These incorrect inferences arise because phase correlations are a measurement and do not provide a model of the interactions within the system. In other words, phase correlations alone fail to account for the interactions between oscillators that arise through intermediate oscillators and detailed network connectivity.

2.3.  Closed-Form Solution.

We wish to solve the inverse problem for the general case of coupled phase oscillators in equation 2.7. Stated explicitly, the problem is to infer the distribution's parameters (coupling strengths $\kappa_{ij}$ and phase offsets $\mu_{ij}$) from measurements of the network's state, $\boldsymbol{\theta}$.

The inverse problem is typically solved by following a maximum likelihood estimation procedure. Given an empirical data distribution defined by a set of N independent measurements $\boldsymbol{\theta}^{(1)},\ldots,\boldsymbol{\theta}^{(N)}$ and a likelihood $p(\boldsymbol{\theta};K)$, which is the probability distribution under the model for a given set of parameters K, the log-likelihood function is defined as the joint density of the observations considered as a function of K:
$$L(K) \;=\; \big\langle \log p(\boldsymbol{\theta};K)\big\rangle \;=\; -\big\langle E(\boldsymbol{\theta};K)\big\rangle \;-\; \log Z(K), \tag{2.10}$$
where $\langle\cdot\rangle$ denotes the expectation value taken over the data distribution. The maximum likelihood estimate of the distribution parameters given the observed data can be computed by setting the derivative of the average log-likelihood function to zero:
$$\frac{\partial L(K)}{\partial K_{ij}} = 0 \quad\Longleftrightarrow\quad \left\langle \frac{\partial E(\boldsymbol{\theta};K)}{\partial K_{ij}}\right\rangle_{\mathrm{data}} \;=\; \left\langle \frac{\partial E(\boldsymbol{\theta};K)}{\partial K_{ij}}\right\rangle_{\mathrm{model}}, \tag{2.11}$$
where $\langle\cdot\rangle_{\mathrm{data}}$ and $\langle\cdot\rangle_{\mathrm{model}}$ denote the expectation values taken over the data and model distributions, respectively. In our situation, a closed-form solution to this equation does not exist. However, we can find a solution by iteratively following the gradient. This approach has a number of drawbacks: it is inherently iterative, estimating the expectation under the model distribution in equation 2.11 involves a computationally expensive sampling procedure, and that sampling may suffer from a variety of problems due to the landscape of the energy function.

To avoid the pitfalls of the maximum likelihood approach, we now derive a closed-form solution to the inverse problem for phase-coupled oscillators. We use the score matching method introduced by Hyvärinen (2005, 2007). Score matching allows the fitting of probability distributions of the exponential form for real-valued data without computation of the normalization constant Z. If the energy depends linearly on the distribution parameters, the solution can be computed in closed form by setting the derivative of the score function to zero (Hyvärinen, 2007).

We follow this approach to estimate the distribution parameters for our distribution in equation 2.7. The score matching estimator of K is given by $\hat{K}=\arg\min_K J_{SM}(K)$, and the score function $J_{SM}(K)$ is given by
$$J_{SM}(K) \;=\; \left\langle \sum_{i=1}^{d}\left[\frac{1}{2}\left(\frac{\partial E(\boldsymbol{\theta};K)}{\partial\theta_i}\right)^{2} \;-\; \frac{\partial^{2} E(\boldsymbol{\theta};K)}{\partial\theta_i^{2}}\right]\right\rangle,$$
with the expectation value, $\langle\cdot\rangle$, taken over the data distribution. Using the quadratic form of the energy in equation 2.7 and the Jacobian $\partial x_i/\partial\theta_i = i\,x_i$, we compute
$$\frac{\partial E}{\partial\theta_i} \;=\; \frac{i}{2}\left[x_i^{*}\,(K\mathbf{x})_i - x_i\,(K^{T}\mathbf{x}^{*})_i\right], \qquad \frac{\partial^{2} E}{\partial\theta_i^{2}} \;=\; \frac{1}{2}\left[x_i^{*}\,(K\mathbf{x})_i + x_i\,(K^{T}\mathbf{x}^{*})_i\right],$$
where $^{*}$ denotes complex conjugation, $M^{T}$ denotes transposition, and $M^{\dagger}$ denotes Hermitian conjugation of the matrix M. The estimator is computed by setting the derivative of the score function to zero. This produces a system of linear equations,
$$\sum_{l}\Big[K_{il}\,C_{lj} \;+\; K_{lj}\,C_{il} \;-\; K_{li}\,C_{iilj} \;-\; K_{jl}\,C_{iljj}\Big] \;=\; 4\,C_{ij}, \tag{2.12}$$
where the expectation values are defined as $C_{ij}=\langle x_i\,x_j^{*}\rangle$ and $C_{ijkl}=\langle x_i\,x_j\,x_k^{*}\,x_l^{*}\rangle$, taken over the data distribution. By setting the diagonal elements of K to zero, we can remove the corresponding equations where i = j from the system of equations. We can solve the resulting system of linear equations using standard techniques.
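As an illustration, the closed-form estimator can also be assembled numerically: because the energy is linear in the free parameters of K (the real and imaginary parts of $K_{pq}$ for p < q), the score-matching objective is quadratic in those parameters, and its minimizer is obtained from a single linear solve. The sketch below builds the normal equations directly from the sampled phases rather than from the moment matrices $C_{ij}$ and $C_{ijkl}$ of equation 2.12; the function names and parametrization are our own choices.

```python
import numpy as np

def build_score_matching_system(theta_samples):
    """Assemble the quadratic (A) and linear (b) terms of the score-matching
    objective for E = -1/2 x^dagger K x, parametrized by the real and imaginary
    parts of K_pq for p < q (K Hermitian, zero diagonal)."""
    theta = np.asarray(theta_samples)
    N, d = theta.shape
    x = np.exp(1j * theta)                            # complex representation
    pairs = [(p, q) for p in range(d) for q in range(p + 1, d)]
    P = 2 * len(pairs)
    G = np.zeros((N, d, P))                           # dE/dtheta_i per parameter
    H = np.zeros((N, d, P))                           # d2E/dtheta_i^2 per parameter
    for a, (p, q) in enumerate(pairs):
        z = np.conj(x[:, p]) * x[:, q]                # exp(i(theta_q - theta_p))
        c, s = 2 * a, 2 * a + 1                       # columns for Re, Im of K_pq
        G[:, p, c] = -z.imag;  G[:, p, s] = -z.real
        G[:, q, c] =  z.imag;  G[:, q, s] =  z.real
        H[:, p, c] = H[:, q, c] = z.real
        H[:, p, s] = H[:, q, s] = -z.imag
    Gm = G.reshape(N * d, P)
    A = Gm.T @ Gm / N                                 # from the 1/2 (dE/dtheta)^2 term
    b = H.sum(axis=1).mean(axis=0)                    # from the d2E/dtheta^2 term
    return A, b, pairs

def params_to_matrix(w, pairs, d):
    """Rebuild the Hermitian coupling matrix from the real parameter vector."""
    K = np.zeros((d, d), dtype=complex)
    for a, (p, q) in enumerate(pairs):
        K[p, q] = w[2 * a] + 1j * w[2 * a + 1]
        K[q, p] = np.conj(K[p, q])
    return K

def estimate_coupling(theta_samples):
    """Closed-form (unregularized) phase coupling estimate from sampled phases."""
    A, b, pairs = build_score_matching_system(theta_samples)
    w = np.linalg.solve(A, b)
    return params_to_matrix(w, pairs, np.asarray(theta_samples).shape[1])
```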

2.4.  Regularized Solution.

In situations with limited or noisy data, it is desirable to use a regularized estimate of the phase coupling parameters. Because the solution in equation 2.12 is a linear system of equations, many standard regularization techniques may be used. Here we provide the commonly used Tikhonov regularizer (Tikhonov, 1963) for phase coupling estimation. This approach is also known as ridge regression. We first rewrite the system of linear equations in matrix form,
$$A\,\operatorname{vec} K \;=\; \operatorname{vec} C, \tag{2.13}$$
where the rows of the matrix A are referenced by the index I≔(i, j) with $i\neq j$, and the columns are referenced by the index K≔(k, l) with $k\neq l$:
$$A_{I,K} \;=\; \frac{1}{4}\Big[\delta_{ik}\,C_{lj} \;+\; \delta_{jl}\,C_{ik} \;-\; \delta_{il}\,C_{iikj} \;-\; \delta_{jk}\,C_{iljj}\Big]. \tag{2.14}$$
The notation vec M denotes the column vector defined by the index I≔(i, j):
$$(\operatorname{vec} M)_{I} \;=\; M_{ij}.$$
In Tikhonov regularization, a regularization term is added to the standard linear least-squares objective, giving
$$\operatorname{vec}\hat{K} \;=\; \arg\min_{\operatorname{vec} K}\left\{\big\|A\,\operatorname{vec} K - \operatorname{vec} C\big\|^{2} + \big\|\Gamma\,\operatorname{vec} K\big\|^{2}\right\}, \tag{2.15}$$
where $\Gamma$ is the Tikhonov matrix. This matrix can be chosen to bias the solution based on a variety of prior information or assumptions about the system. For example, the matrix can be chosen to incorporate information about known anatomical or physical coupling mechanisms between oscillators, or to bias toward local coupling in networks with regular spatial arrangements (e.g., in 2D grids of oscillators). Here we focus on the case where the Tikhonov matrix is chosen to be the identity matrix with scale parameter $\lambda$, $\Gamma=\lambda I$. This choice gives preference to solutions with smaller coupling strengths. The resulting regularized solution is given as
$$\operatorname{vec}\hat{K} \;=\; \left(A^{\dagger}A + \lambda^{2} I\right)^{-1} A^{\dagger}\,\operatorname{vec} C. \tag{2.16}$$
Under a Bayesian interpretation, this regularizer assumes that the coupling strengths $\kappa_{ij}$ are drawn from a Rayleigh distribution and the preferred phase offsets $\mu_{ij}$ are uniformly distributed or, equivalently, that the real and imaginary components of $K_{ij}$ are independent, zero-mean gaussian random variables.
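In the numerical formulation sketched in section 2.3, choosing $\Gamma=\lambda I$ amounts to adding a quadratic penalty on the parameter vector, which shifts the assembled normal equations by λ on the diagonal. The sketch below reuses the helper functions from that earlier sketch; it illustrates the same ridge idea rather than being a literal transcription of equation 2.16.

```python
import numpy as np

def estimate_coupling_ridge(theta_samples, lam):
    """Ridge-regularized phase coupling estimate (identity Tikhonov matrix).

    Reuses build_score_matching_system and params_to_matrix from the sketch in
    section 2.3; lam = 0 recovers the unregularized estimate, and larger lam
    shrinks the estimated coupling strengths toward zero.
    """
    A, b, pairs = build_score_matching_system(theta_samples)
    w = np.linalg.solve(A + lam * np.eye(A.shape[0]), b)
    d = np.asarray(theta_samples).shape[1]
    return params_to_matrix(w, pairs, d)
```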

3.  Results

3.1.  Coupling Estimation in Small Networks of Oscillators.

In the following, we show that phase coupling estimation recovers the parameters of simulated coupled oscillator models. We begin by estimating the coupling parameters of the example systems shown in Figure 1. Given samples of the simulated phase variables $\boldsymbol{\theta}$, we compute the correlations, $C_{ij}$, and the required four-point functions, $C_{ijkl}$, and invert the linear system in equation 2.12. This produces an estimate of the coupling parameters. Phase coupling estimation recovers the true coupling parameters as shown in the fourth column of Figure 1. In each of the three example systems, phase coupling estimation recovers the true system parameters when a direct interpretation of phase correlations would lead to false inferences about the network (as shown in column 3). This is a result of the fact that phase coupling estimation makes explicit an underlying model of the dynamical system and recovers the parameters of the model that are necessary to produce the observed phase correlations.

3.2.  Performance of Phase Coupling Estimation.

We now systematically analyze the performance of phase coupling estimation: the ability of the technique to recover the distribution parameters from data. The procedure is as follows (a minimal sketch of the procedure is given below). We begin by sampling a set of distribution parameters K. Given these parameters, we then sample phase variables $\boldsymbol{\theta}$ by numerically integrating equation 2.1. We then estimate the parameters given the sampled data using equation 2.12. The real and imaginary entries of the complex matrix K are sampled from a normal distribution, and the diagonal entries are set to zero: $K_{ii}=0$. Note that this produces a dense coupling matrix.
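A minimal version of this procedure, reusing the simulation and estimation sketches from sections 2.1 and 2.3, might look as follows. The scale of the sampled coupling entries, the integration length, and the thinning interval are illustrative choices, not the settings used for the reported results.

```python
import numpy as np

rng = np.random.default_rng(0)
d, samples_per_dim = 16, 160                          # N = 2560, as in Figure 2

# sample a dense Hermitian coupling matrix with zero diagonal
# (the scale of the entries here is illustrative)
K_true = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
K_true = 0.5 * (K_true + K_true.conj().T) / np.sqrt(d)
np.fill_diagonal(K_true, 0.0)

# simulate the Langevin dynamics (equation 2.1) and estimate the coupling
theta = simulate_phases(np.abs(K_true), np.angle(K_true),
                        n_steps=d * samples_per_dim * 100, keep_every=100)
K_hat = estimate_coupling(theta)

mse = np.mean(np.abs(K_true - K_hat) ** 2)            # element-wise error
```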

In the first column of Figure 2, we graphically display the element-wise amplitude and phase of a sample matrix K where d = 16. Using this matrix, we sampled N = 2560 phase vectors. The recovered parameters are shown in the second column of Figure 2. While it is clear that these matrices are visually similar, we quantified the error using two different metrics. First, we calculate the mean squared error (mse) of the matrix elements, averaging $|K_{ij}-\hat{K}_{ij}|^{2}$ over all elements, where $\hat{K}$ is the estimated parameter matrix. In the third column of Figure 2 we display the element-wise error before averaging. We also computed a metric indicating the quality of the recovered parameters, $Q_{.95}$, borrowed from Timme (2007); its definition involves the Heaviside step function u and $K_{\max}$, the maximum absolute value of all matrix entries $K_{ij}$ and $\hat{K}_{ij}$. For the example in Figure 2, mse = 0.02 and $Q_{.95} = 0.75$.

Figure 2:

Phase coupling estimation for a system of 16 coupled oscillators with random coupling. (a) True coupling matrix K: true system parameters for d = 16 (first row, element-wise amplitude; second row, element-wise angle with alpha channel scaled by the amplitude of the corresponding element, best viewed in color). (b) Estimated coupling matrix $\hat{K}$: estimated parameters recovered from 2560 time samples of $\boldsymbol{\theta}$ using equation 2.12. (c) Estimation error: first row, element-wise mse (note scaling); second row, estimation error measurements, mse and $Q_{.95}$ (see text for definition).

We computed these error metrics over a range of dimensions and samples per dimension. The error metrics for each dimension and samples per dimension were averaged over 20 trials and are plotted in Figure 3. The algorithm achieves highly accurate parameter recovery for as few as 100 samples per dimension and achieves full recovery of parameters as the number of samples per dimension reaches 1000. This indicates that recovery of true parameters is quite feasible in many real-world settings.

Figure 3:

Performance of phase coupling estimation. The mse and $Q_{.95}$ metrics as a function of samples per dimension for various dimensions, d = 8, 16, 32, 64, 100. The example displayed in Figure 2 is indicated by the black diamond. Values are averaged over 20 trials. For visual clarity, standard error bars are displayed only for d = 8 and d = 100. Phase coupling estimation accurately recovers the system parameters with only 100 samples per dimension and achieves nearly perfect recovery with 1000 samples per dimension.

3.3.  Performance of Regularized Phase Coupling Estimation.

We next examine the performance of the regularized phase coupling estimator in equation 2.16. For systems with 12, 16, and 20 oscillators, we again sampled coupling parameters from a normal distribution, with the diagonal entries set to zero: $K_{ii}=0$. We then simulated the dynamical system given by equation 2.1 and randomly drew phase samples to give 40 samples per dimension, N = 40d. We then solved the regularized system given by equation 2.16 for a range of scale parameters λ. We evaluated the performance of the estimate using the mse of the matrix elements. The results are shown in Figure 4. For graphical display, we normalized the scale parameter λ by the dimensionality of the system and graph the mse as a fraction of the mse for the unregularized solution (λ = 0). A sketch of such a sweep is given below. We leave the methodology of choosing an optimal value of λ to future work, but standard resampling-based procedures that optimize the distance of the estimated parameters to parameters estimated using randomized surrogates may be used (see, e.g., Biessmann et al., 2010).
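Reusing the data and functions from the earlier sketches, a sweep over the regularization scale can be written as follows; the grid of λ values and the correspondence between this numerical λ and the λ of equation 2.16 are illustrative.

```python
import numpy as np

# Sweep the regularization scale and compare against the unregularized
# solution (cf. Figure 4); uses d, K_true, and theta from the earlier sketch.
mse0 = np.mean(np.abs(K_true - estimate_coupling(theta)) ** 2)
for lam_over_d in np.linspace(0.0, 1.0, 11):
    K_hat = estimate_coupling_ridge(theta, lam_over_d * d)
    rel_mse = np.mean(np.abs(K_true - K_hat) ** 2) / mse0
    print(f"lam/d = {lam_over_d:.1f}: mse relative to unregularized = {rel_mse:.3f}")
```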

Figure 4:

Regularized phase coupling estimation. The error of the estimated phase coupling parameters plotted as a function of the regularization scale parameter λ. Error is measured as mean squared error between the recovered matrix elements and the true matrix elements. To aid visualization, the scale parameter is normalized by the dimensionality of the oscillator system (λ/d). The mean squared error is normalized by the mean squared error achieved by the unregularized solution (λ = 0). We simulated systems of 12, 16, and 20 oscillators and estimated the parameters using the regularized solution from 40 phase samples per dimension (480, 640, and 800 samples, respectively, for each system). Note that the unregularized estimate is plotted at λ/d = 0, and more accurate estimates are achieved for λ/d between 0.2 and 0.4.

4.  Discussion

In this section, we discuss the interpretation of the model and other approaches to solving the inverse problem in networks of oscillators. We may interpret the model and estimation technique from a statistical and a physical systems perspective. In practice, the distribution and estimator may be applied in any situation where multivariate phase measurements are taken. There are two important aspects to the interpretation of the resulting parameters. First, the estimated parameters may be viewed as purely statistical quantities, and second, under certain assumptions, the parameters may correspond to interactions of an underlying physical process.

4.1.  Statistical Interpretation.

The pairwise distribution in equation 2.7 provides the most parsimonious statistical model of the joint multivariate phase distribution given only pairwise phase measurements: it is the unique maximum entropy distribution consistent with those measurements, and the corresponding estimator in equation 2.12 recovers its parameters. Maximum entropy solutions serve as the least biased estimate of the distribution possible and can be used when the true joint distribution is unknown. For this reason, phase coupling estimation can potentially provide a contribution in a variety of situations of interest to scientists and engineers. In applications where the true joint distribution is unknown, it is possible to test the validity of the pairwise maximum entropy assumption by comparing higher-order moments of the empirical distribution to those of the maximum entropy distribution. Such tests would indicate to what extent the true joint distribution is captured by the pairwise model.

4.1.1.  Other Statistical Models of Phase.

Our multivariate phase distribution and estimation technique can be compared to a number of previous efforts to characterize statistical dependencies in circular phase variables. Examples from the statistics community include the von Mises distribution, the von Mises–Fisher distribution (Fisher, 1953), the Kent distribution (Kent, 1982), and the cosine model (Mardia, Taylor, & Subramaniam, 2007). The von Mises distribution is a widely used distribution when dealing with circular variables. It is a univariate, unimodal distribution for a circular variable that is analogous to a univariate gaussian for a scalar variable. The von Mises–Fisher distribution is a directional distribution that extends the univariate von Mises distribution to the hypersphere. Unfortunately, the von Mises–Fisher distribution does not model densities on the d-dimensional torus (instead it models distributions on the d-dimensional sphere), and it fails to capture the pairwise dependencies in dynamical systems models of coupled oscillators. The cosine model (Mardia et al., 2007) describes the pairwise statistics between two phase variables. This distribution has a similar functional form to the one in equation 2.7; however, it models only two variables and does not have an efficient estimation procedure. The generalized cosine model (Mardia, Hughes, Taylor, & Singh, 2008) does formalize particular pairwise phase relationships, but estimation techniques for more than two phase variables have not been produced. Other work in the statistics community has addressed univariate phase distributions by extending the unimodal von Mises distribution to multimodal distributions (Mardia, 1975; Jammalamadaka & Sengupta, 2001). This work may be of interest for further alterations to the dynamical system model we have proposed, but it is not directly relevant here because it does not address multivariate distributions.

4.2.  Physical Interpretation.

Physical models can bridge the gap between measurements and physical interactions by formulating the transformation from physical process to measurement. Using such a model, it is then possible to provide concrete predictions that can either advance our understanding or falsify the model. Inevitably, a physical model must make assumptions about the underlying physical processes.

The procedure we have described takes a model of a physical system, which makes specific assumptions, finds the corresponding steady-state distribution, and estimates the parameters of the distribution. The attribution of the parameters in the probabilistic model to physical processes or interactions is predicated on the validity of the assumptions made by the physical model. The mapping from dynamical systems to steady-state distributions is not unique, and thus many physical systems can produce the same estimated coupling parameters.

Although the assumptions made by the physical model in equation 2.1 may be violated in real-world situations, central aspects of the system may still be captured. On a case-by-case basis, the assumptions and predictions of the physical model must be tested to determine if the attribution of the coupling parameters to physical processes is valid. As the assumptions of the model are violated, the degree to which the model parameters correspond to the true physical interactions will deteriorate. We discuss the assumptions imposed by the model and limitations related to physical interpretation in the next section.

4.2.1.  Assumptions and Limitations.

The assumptions of the physical model of coupled oscillators fall into three categories. First, the model assumes that there is no structured external input to the system. In equation 2.1, the external input is modeled as unstructured white noise. Generally it is not possible to predict how structured external inputs would affect the interactions of the oscillators and the estimate of their coupling terms. It may be possible to explicitly model unknown input as unobserved oscillators that interact with the observed oscillators. (For a relevant statistical approach, see Zemel, Williams, & Mozer, 1995.)

Second, the model assumes that the oscillators interact symmetrically and instantaneously. Because our model captures only the steady-state distribution, it is blind to time reversal and cannot model directed interactions. Therefore, the model cannot be used for inferring directionality of interactions. However, extending the analysis to include multiple time slices may provide an extension capable of modeling directed interactions (for a d = 2 example, see Rosenblum & Pikovsky, 2001). Closely related to modeling directed interactions is the study of causality. Like all other statistical models, the distribution and model we have introduced here do not directly address the issue of causality. However, our distribution and estimation technique may be instrumental in attempts to infer causality in high-dimensional phase spaces.

Finally, the model in its current form assumes that the oscillators all share the same intrinsic frequency (ω) and interact through a cosine function. Because differences in the intrinsic frequencies enter the dynamic equations in a similar way as structured external inputs, it is likely that they would lead to misestimation of the network interactions. (For a technique that does attempt to estimate the interaction functions and frequencies of the oscillators, see Tokuda, Jain, Kiss, & Hudson, 2007).

4.3.  Inverse Problems.

Despite great progress in solving the inverse problem for coupled phase oscillators, current methods suffer from a number of drawbacks that our method addresses. Some methods require intervention by perturbing a single isolated oscillator (Kuramoto, 1984; Sakaguchi, Shinomoto, & Kuramoto, 1988; Kiss, Zhai, & Hudson, 2005) or the entire system (Timme, 2007). Such intervention techniques may not be feasible in many real-world situations. Other techniques model only two-oscillator systems and do not generalize to arbitrarily large systems (Galán, Ermentrout, & Urban, 2005; Miyazaki & Kinoshita, 2006; Kralemann, Cimponeriu, Rosenblum, Pikovsky, & Mrowka, 2008). The technique most related to our own is that of Tokuda et al. (2007). This technique addresses the inference of coupling from multivariate statistics; however, in its current form it is restricted to homogeneous coupling.

5.  Conclusion

In this letter, we have introduced a closed-form solution to the inverse problem for systems of coupled phase oscillators using a maximum entropy approach. We derived the solution by formulating a dynamical system and showed that the estimation technique recovers the parameters of the model in high dimensions and from noisy measurements. Phase coupling estimation provides a rigorous framework for analyzing neural systems that exhibit oscillatory dynamics and phase synchronization. Because phase coupling estimation successfully recovers the true network connectivity of the specified dynamical system in situations where phase correlations alone lead to false conclusions, this technique may be instrumental in elucidating the presence and role of dynamic functional networks in cortex.

Acknowledgments

We thank Matthias Bethge, Chris Hillar, Martin Lisewski, Bruno Olshausen, Jascha Sohl-Dickstein, and Marc Timme for helpful comments and discussions. This work has been supported by NSF grants IIS-0705939, IIS-0713657, and IIS-0917342 and NGA grant HM1582-05-1-2017.

References

Adrian, E. (1942). Olfactory reactions in the brain of the hedgehog. Journal of Physiology, 100(4), 459–473.
Barouch, E., McCoy, B., & Dresden, M. (1970). Statistical mechanics of the XY model. I. Physical Review A, 2(3), 1075–1092.
Berger, H. (1929). Über das Elektrenkephalogramm des Menschen. Arch. Psychiatr. Nervenkr., 87, 527–570.
Biessmann, F., Meinecke, F. C., Gretton, A., Rauch, A., Rainer, G., Logothetis, N. K., et al. (2010). Temporal kernel CCA and its application in multimodal neuronal data analysis. Machine Learning, 79(1–2), 5–27.
Brosch, M., Budinger, E., & Scheich, H. (2002). Stimulus-related gamma oscillations in primate auditory cortex. Journal of Neurophysiology, 87(6), 2715–2725.
Buschman, T., & Miller, E. (2007). Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices. Science, 315(5820), 1860–1862.
Buzsaki, G. (2006). Rhythms of the brain. New York: Oxford University Press.
Buzsaki, G., & Draguhn, A. (2004). Neuronal oscillations in cortical networks. Science, 304(5679), 1926–1929.
Chrobak, J., & Buzsaki, G. (1996). High-frequency oscillations in the output networks of the hippocampal-entorhinal axis of the freely behaving rat. Journal of Neuroscience, 16(9), 3056–3066.
Cocco, S., Leibler, S., & Monasson, R. (2009). Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods. Proceedings of the National Academy of Sciences, 106(33), 14058–14062.
Colgin, L. L., Denninger, T., Fyhn, M., Hafting, T., Bonnevie, T., Jensen, O., et al. (2009). Frequency of gamma oscillations routes flow of information in the hippocampus. Nature, 462(7271), 353–357.
Crawford, A., & Fettiplace, R. (1981). An electrical tuning mechanism in turtle cochlear hair cells. Journal of Physiology, 312(1), 377–412.
Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M., et al. (1988). Coherent oscillations: A mechanism of feature linking in the visual cortex? Biological Cybernetics, 60(2), 121–130.
Fell, J., Klaver, P., Lehnertz, K., Grunwald, T., Schaller, C., Elger, C., et al. (2001). Human memory formation is accompanied by rhinal-hippocampal coupling and decoupling. Nature Neuroscience, 4, 1259–1264.
Fisher, R. (1953). Dispersion on a sphere. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 217(1130), 295–305.
Fox, M., Snyder, A., Vincent, J., Corbetta, M., Van Essen, D., & Raichle, M. (2005). The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences, 102(27), 9673–9678.
Fries, P. (2005). A mechanism for cognitive dynamics: Neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9(10), 474–480.
Galán, R. F., Ermentrout, G. B., & Urban, N. N. (2005). Efficient estimation of phase-resetting curves in real neurons and its significance for neural-network modeling. Phys. Rev. Lett., 94(15), 158101.
Gray, C., König, P., Engel, A., & Singer, W. (1989). Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature, 338(6213), 334–337.
Hopfield, J. (1995). Pattern recognition computation using action potential timing for stimulus representation. Nature, 376(6535), 33–36.
Hoppensteadt, F., & Izhikevich, E. (1995). Canonical neural models. In M. Arbib (Ed.), Brain theory and neural networks (pp. 181–186). Cambridge, MA: MIT Press.
Hyvärinen, A. (2005). Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6, 695–709.
Hyvärinen, A. (2007). Some extensions of score matching. Computational Statistics and Data Analysis, 51(5), 2499–2512.
Jammalamadaka, S., & Sengupta, A. (2001). Topics in circular statistics. Singapore: World Scientific.
Jarvis, M., & Mitra, P. (2001). Sampling properties of the spectrum and coherency of sequences of action potentials. Neural Computation, 13(4), 717–749.
Jaynes, E. (1957). Information theory and statistical mechanics. Physical Review, 106(4), 620–630.
Jutras, M., Fries, P., & Buffalo, E. (2009). Gamma-band synchronization in the macaque hippocampus and memory formation. Journal of Neuroscience, 29(40), 12521–12531.
Kent, J. (1982). The Fisher-Bingham distribution on the sphere. Journal of the Royal Statistical Society, Series B (Methodological), 44(1), 71–80.
Kiss, I. Z., Zhai, Y., & Hudson, J. L. (2005). Predicting mutual entrainment of oscillators with experiment-based phase models. Phys. Rev. Lett., 94(24), 248301.
Koepsell, K., Wang, X., Vaingankar, V., Wei, Y., Wang, Q., Rathbun, D., et al. (2009). Retinal oscillations carry visual information to cortex. Frontiers in Systems Neuroscience, 3(4), 1–18.
Kralemann, B., Cimponeriu, L., Rosenblum, M., Pikovsky, A., & Mrowka, R. (2008). Phase dynamics of coupled oscillators reconstructed from data. Phys. Rev. E, 77(6), 066205.
Kuramoto, Y. (1984). Chemical oscillations, waves, and turbulence. Berlin: Springer-Verlag.
Lachaux, J., Rodriguez, E., Martinerie, J., & Varela, F. (1999). Measuring phase synchrony in brain signals. Human Brain Mapping, 8(4), 194–208.
Le Van Quyen, M., Foucher, J., Lachaux, J., Rodriguez, E., Lutz, A., Martinerie, J., et al. (2001). Comparison of Hilbert transform and wavelet methods for the analysis of neuronal synchrony. Journal of Neuroscience Methods, 111(2), 83–98.
Mardia, K. (1975). Statistics of directional data. J. Royal Statistical Society, Series B (Methodological), 37(3), 349–393.
Mardia, K., Hughes, G., Taylor, C., & Singh, H. (2008). A multivariate von Mises distribution with applications to bioinformatics. Canadian Journal of Statistics-Revue Canadienne de Statistique, 36(1), 99–110.
Mardia, K., Taylor, C., & Subramaniam, G. (2007). Protein bioinformatics and mixtures of bivariate von Mises distributions for angular data. Biometrics, 63(2), 505–512.
Mirollo, R., & Strogatz, S. (1990). Synchronization of pulse-coupled biological oscillators. SIAM J. Appl. Math., 50(6), 1645–1662.
Miyazaki, J., & Kinoshita, S. (2006). Determination of a coupling function in multicoupled oscillators. Phys. Rev. Lett., 96(19), 194101.
Palva, J., Palva, S., & Kaila, K. (2005). Phase synchrony among neuronal oscillations in the human cortex. Journal of Neuroscience, 25(15), 3962–3972.
Pareti, G., & De Palma, A. (2004). Does the brain oscillate? The dispute on neuronal synchronization. Neurological Sciences, 25(2), 41–47.
Risken, H. (1989). The Fokker-Planck equation: Methods of solution and applications. Berlin: Springer-Verlag.
Rose, J., Brugge, J., Anderson, D., & Hind, J. (1967). Phase-locked response to low-frequency tones in single auditory nerve fibers of the squirrel monkey. J. Neurophysiol., 30(4), 769–793.
Rosenblum, M., & Pikovsky, A. (2001). Detecting direction of coupling in interacting oscillators. Phys. Rev. E, 64(4), 045202.
Saalmann, Y., Pigarev, I., & Vidyasagar, T. (2007). Neural mechanisms of visual attention: How top-down feedback highlights relevant locations. Science, 316(5831), 1612–1615.
Sakaguchi, H., Shinomoto, S., & Kuramoto, Y. (1988). Mutual entrainment in oscillator lattices with nonvariational type interaction. Progress of Theoretical Physics, 79(5), 1069–1079.
Schneidman, E., Berry, M. J., Segev, R., & Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087), 1007–1012.
Shlens, J., Field, G. D., Gauthier, J. L., Grivich, M. I., Petrusca, D., Sher, A., et al. (2006). The structure of multi-neuron firing patterns in primate retina. J. Neurosci., 26(32), 8254–8266.
Siapas, A., Lubenov, E., & Wilson, M. (2005). Prefrontal phase locking to hippocampal theta oscillations. Neuron, 46(1), 141–151.
Siegel, M., Donner, T., Oostenveld, R., Fries, P., & Engel, A. (2008). Neuronal synchronization along the dorsal visual pathway reflects the focus of spatial attention. Neuron, 60(4), 709–719.
Siegel, M., Warden, M., & Miller, E. (2009). Phase-dependent neuronal coding of objects in short-term memory. Proceedings of the National Academy of Sciences, 106(50), 21341–21346.
Spencer, K., Nestor, P., Niznikiewicz, M., Salisbury, D., Shenton, M., & McCarley, R. (2003). Abnormal neural synchrony in schizophrenia. Journal of Neuroscience, 23(19), 7407–7411.
Strogatz, S. (2003). Sync: The emerging science of spontaneous order. New York: Hyperion.
Tikhonov, A. (1963). Solution of incorrectly formulated problems and the regularization method. Soviet Math. Dokl., 4, 1035–1038.
Timme, M. (2007). Revealing network connectivity from response dynamics. Phys. Rev. Lett., 98(22), 224101.
Tokuda, I. T., Jain, S., Kiss, I. Z., & Hudson, J. L. (2007). Inferring phase equations from multivariate time series. Phys. Rev. Lett., 99(6), 064101.
Varela, F., Lachaux, J., Rodriguez, E., & Martinerie, J. (2001). The brainweb: Phase synchronization and large-scale integration. Nature Reviews Neuroscience, 2(4), 229–239.
Vicente, C., Arenas, A., & Bonilla, L. (1996). On the short-time dynamics of networks of Hebbian coupled oscillators. Journal of Physics A: Mathematical and General, 29, L9–L16.
Wang, Y., Hong, B., Gao, X., & Gao, S. (2006). Phase synchrony measurement in motor cortex for classifying single-trial EEG during motor imagery. In Proc. 28th Int. IEEE EMBS Conf. (pp. 75–78). Piscataway, NJ: IEEE.
Winfree, A. (2001). The geometry of biological time. Berlin: Springer-Verlag.
Womelsdorf, T., Schoffelen, J., Oostenveld, R., Singer, W., Desimone, R., Engel, A., et al. (2007). Modulation of neuronal interactions through neuronal synchronization. Science, 316(5831), 1609–1612.
Zemel, R., Williams, C., & Mozer, M. (1995). Lending direction to neural networks. Neural Networks, 8(4), 503–512.