## Abstract

Coupled oscillators are prevalent throughout the physical world. Dynamical system formulations of weakly coupled oscillator systems have proven effective at capturing the properties of real-world systems and are compelling models of neural systems. However, these formulations usually deal with the forward problem: simulating a system from known coupling parameters. Here we provide a solution to the inverse problem: determining the coupling parameters from measurements. Starting from the dynamic equations of a system of symmetrically coupled phase oscillators, given by a nonlinear Langevin equation, we derive the corresponding equilibrium distribution. This formulation leads us to the maximum entropy distribution that captures pairwise phase relationships. To solve the inverse problem for this distribution, we derive a closed-form solution for estimating the phase coupling parameters from observed phase statistics. Through simulations, we show that the algorithm performs well in high dimensions (*d* = 100) and in cases with limited data (as few as 100 samples per dimension). In addition, we derive a regularized solution to the estimation and show that the resulting procedure improves performance when only a limited amount of data is available. Because the distribution serves as the unique maximum entropy solution for pairwise phase statistics, phase coupling estimation can be broadly applied in any situation where phase measurements are made. Under the physical interpretation, the model may be used for inferring coupling relationships within cortical networks.

## 1. Introduction

Many complex natural phenomena can be modeled as networks of coupled oscillators. Examples can be drawn from the physical, chemical, and biological worlds. Oscillator models have been effective at describing the dynamics of coupled pendula, coupled Josephson junctions, reaction diffusion systems, circadian rhythms, oscillating neural networks, and even the coupling of firefly luminescence (see, e.g., Buzsaki, 2006; Kuramoto, 1984; Mirollo & Strogatz, 1990; Strogatz, 2003; Winfree, 2001).

Oscillatory dynamics are prevalent in neural systems, and their computational use is critical to many theories of neural function. There is experimental evidence for oscillations in a variety of neural systems, from primary sensory effectors, through local cortical networks, and up to the cortico-hippocampal network (Buzsaki & Draguhn, 2004). Oscillations in the olfactory system are present in the olfactory bulb (Adrian, 1942) and are thought to subserve the computation of similarity between odorant patterns (Hopfield, 1995). There is growing evidence for oscillations at many levels of the visual system. Some of the first evidence for oscillations in the human brain came from recordings using EEG over the occipital lobe (Berger, 1929), and more recent findings have shown that subpopulations of pyramidal neurons exhibit gamma-band oscillations in response to visual stimulation (Gray, König, Engel, & Singer, 1989; Eckhorn et al., 1988). In retina, it has been found that some retinal ganglion cells exhibit periodic firing, and this periodicity provides an extra channel for information transmission to LGN and cortex (Koepsell et al., 2009). Evidence for oscillatory activity in the auditory system extends from the periphery to the cortex. Inner hair cells exhibit characteristic frequencies tuned to different bandwidths (Crawford & Fettiplace, 1981), cochlear projections transmit phase-locked responses to subcortical regions (Rose, Brugge, Anderson, & Hind, 1967), and stimulus mediated gamma oscillations have been observed in primary auditory cortex (Brosch, Budinger, & Scheich, 2002). In the hippocampus, oscillatory patterns are well studied and are thought to subserve memory formation (Jutras, Fries, & Buffalo, 2009) and communication between hippocampus and entorhinal cortex (Chrobak & Buzsaki, 1996; Colgin et al., 2009).

Although the presence of oscillatory dynamics in the brain is largely accepted, the role of these oscillations is widely debated. While it is likely that neural oscillations assume multiple roles in neural systems, and maybe even multiple roles in the same neural system, we have found that it is useful to keep in mind a model of intracortical communication that is mediated by neural oscillations in different populations of neurons. Under the proposed theory of communication through coherence, only groups of neurons that are coherently oscillating can interact effectively (Varela, Lachaux, Rodriguez, & Martinerie, 2001; Fries, 2005). This implies that if we are able to measure neural synchrony, we can infer the actively communicating cortical populations. Such inter-area cortical synchronization is thought to mediate perceptual binding (Pareti & De Palma, 2004), attentional feedback (Siegel, Donner, Oostenveld, Fries, & Engel, 2008), and the transfer of memory from hippocampus to cortex (Fell et al., 2001). The formation of synchronized networks is also thought to be dynamic and indicative of behavioral and cognitive functional state (Fox et al., 2005).

Many studies that investigate the presence, dynamics, and behavioral dependence of neural synchrony have used measures closely related to phase correlation. Phase correlation is a pairwise measurement that determines the second-order statistical dependency between two oscillations, which can include both the strength of the phase dependency and the phase offset of the dependency. A variety of measurements based on phase correlation have been used recently: phase correlation (Siapas, Lubenov, & Wilson, 2005), phase locking value (Lachaux, Rodriguez, Martinerie, & Varela, 1999; Le Van Quyen et al., 2001; Wang, Hong, Gao, & Gao, 2006; Siegel, Warden, & Miller, 2009), phase locking factor (Palva, Palva, & Kaila, 2005), phase coherence (Spencer et al., 2003; Saalmann, Pigarev, & Vidyasagar, 2007; Womelsdorf et al., 2007), and coherence statistic (Jarvis & Mitra, 2001; Buschman & Miller, 2007). These measurements have provided powerful evidence for phase synchrony in cortex; however, all fail to define a proper probabilistic model or the dynamics of a system that would produce such measurements. As we will show in section 2.2, phase correlations are only indirectly related to the parameters of a probabilistic distribution and thus indirectly related to the network interactions in the dynamical system we propose. Therefore, taking phase correlations at face value may lead to false inferences about network interactions.

Given the wealth of evidence for neural oscillations, the myriad hypothesized roles for oscillatory dynamics in the brain, and the prevalent use of phase correlation measurements, it is important that we develop tools for inferring the connectivity within oscillatory networks from measurements using probabilistic models. The process of inferring network connectivity from measurements is known as the *inverse problem* and must be solved if we are to scientifically investigate the role of oscillatory dynamics in networks of neurons.

In statistical mechanics, the inverse problem is typically solved by proposing a probability distribution and estimating the distribution's parameters from measurements. A natural choice for the estimation, a highly underdetermined problem, is given by the unique maximum entropy distribution that reproduces the statistics of the measurements (Jaynes, 1957). A number of such distributions and estimation techniques are used throughout the science and engineering communities. In the real-valued case the multivariate gaussian distribution, and in the binary case the Ising model, serve as widely used multivariate maximum entropy distributions consistent with second-order statistics. Each of these cases has well-known estimation techniques for inferring the distribution's parameters from observations. The availability of these techniques has led to a number of applications, for example, the Ising model and its corresponding estimation techniques have been used to infer the coupling in networks of retinal ganglion cells (Shlens et al., 2006; Schneidman, Berry, Segev, & Bialek, 2006; Cocco, Leibler, & Monasson, 2009). However, for the phase variables that are of interest in networks of oscillators, there has been little work on providing a corresponding multivariate probabilistic distribution or deriving estimation techniques to infer the distribution's parameters from data. We review related work on multivariate phase distributions and solutions to the inverse problem in section 4.

In this letter, we provide a solution to the inverse problem for systems of symmetrically coupled phase oscillators. We begin by presenting the Langevin dynamics for a generalized form of the Kuramoto model of coupled phase oscillators. Solving for the equilibrium distribution yields a multivariate probability distribution of coupled phase variables. This probabilistic formalism allows us to derive a novel estimation technique for the coupling terms from phase variable measurements. We show that this technique performs robustly in high dimensions, and we introduce a regularization on the solution that provides better estimates under limited data. We close with a discussion of the technique as a statistical and physical model and express the limitations of the physical interpretation imposed by the model's assumptions.

## 2. Phase Coupling Estimation

### 2.1. A Model of Coupled Phase Oscillators.

Consider a system of *d* identical coupled oscillators with intrinsic frequency ω. In the limit of weak coupling, the amplitude of the oscillators can be assumed to be constant, and the equations of motion can be formulated in terms of *d* phase variables $\phi_i$, $i = 1, \ldots, d$. A popular choice for the dynamics of such a system is given by the Kuramoto model (Kuramoto, 1984), which has constant coupling between oscillators. Such models have been used to simulate neural systems (Vicente, Arenas, & Bonilla, 1996; Hoppensteadt & Izhikevich, 1995). We can generalize this model to include inhomogeneous couplings and inhomogeneous phase offsets between oscillators. The dynamic equation is then given by

$$\dot{\phi}_i = \omega + \sum_{j \neq i} \kappa_{ij}\,\sin(\phi_j - \phi_i + \mu_{ij}) + \xi_i(t), \qquad (2.1)$$

where $\kappa_{ij} \geq 0$ is the coupling strength and $\mu_{ij}$ is the preferred phase offset between two oscillators *i* and *j*. We consider only the case of symmetric coupling ($\kappa_{ij} = \kappa_{ji}$, $\mu_{ij} = -\mu_{ji}$). The constant intrinsic frequency, denoted by ω, is identical for all oscillators. The noise fluctuations, $\xi_i(t)$, are zero-mean gaussian distributed with δ covariance functions and variance corresponding to the temperature *T* of the system:

$$\langle \xi_i(t)\,\xi_j(t') \rangle = 2T\,\delta_{ij}\,\delta(t - t').$$

Note that by applying the transformation $\phi_i \rightarrow \phi_i - \omega t$ to equation 2.1, we can assume $\omega = 0$ without loss of generality. The Langevin dynamics of equation 2.1 then describe a diffusion process on the *d*-torus in the presence of the potential

$$U(\boldsymbol{\phi}) = -\frac{1}{2} \sum_{ij} \kappa_{ij}\,\cos(\phi_i - \phi_j - \mu_{ij}), \qquad (2.3)$$

which can be written compactly as $U(\boldsymbol{\phi}) = -\frac{1}{2}\,\mathbf{x}^{\dagger} \mathbf{K}\,\mathbf{x}$, where $\mathbf{x}$ is now a *d*-dimensional complex vector with components $x_j = e^{i\phi_j}$ and **K** is the $d \times d$-dimensional parameter matrix with elements $K_{ij} = \kappa_{ij}\,e^{i\mu_{ij}}$. The symmetry conditions on $\kappa_{ij}$ and $\mu_{ij}$ make **K** the Hermitian matrix defined above, so the energy is real-valued. Solving the corresponding Fokker–Planck equation yields the equilibrium distribution

$$p(\boldsymbol{\phi}) = \frac{1}{Z} \exp\left( \frac{1}{2T}\,\mathbf{x}^{\dagger} \mathbf{K}\,\mathbf{x} \right), \qquad (2.7)$$

where *Z* is the normalization constant; in the following, the temperature is absorbed into **K**. This energy function, equation 2.3, is closely related to the XY-model (Barouch, McCoy, & Dresden, 1970), which has only homogeneous nearest-neighbor couplings ($\kappa_{ij} = \text{const.}$) and no phase offsets ($\mu_{ij} = 0$). This generalization is analogous to the extension of the homogeneous Ising model to spin glasses.
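The Langevin dynamics of equation 2.1 can be integrated directly with an Euler–Maruyama scheme. The sketch below is our own illustration, not code from the letter; NumPy is assumed, the parameter values are hypothetical, and the couplings are stored as a complex Hermitian matrix with $K_{ij} = \kappa_{ij} e^{i\mu_{ij}}$:

```python
import numpy as np

def simulate_phases(K, T=1.0, dt=0.01, n_steps=20000, rng=None):
    """Euler-Maruyama integration of the coupled phase oscillators of
    equation 2.1 (with omega = 0), where K[i, j] = kappa_ij * exp(1j * mu_ij)
    is a complex Hermitian coupling matrix with zero diagonal."""
    rng = np.random.default_rng(rng)
    d = K.shape[0]
    phi = rng.uniform(0.0, 2.0 * np.pi, size=d)
    traj = np.empty((n_steps, d))
    for t in range(n_steps):
        x = np.exp(1j * phi)
        # drift_i = -dU/dphi_i for the potential U(phi) = -0.5 * Re(x^dag K x)
        drift = -np.imag(x * np.conj(K @ x))
        phi = (phi + drift * dt
               + np.sqrt(2.0 * T * dt) * rng.standard_normal(d)) % (2.0 * np.pi)
        traj[t] = phi
    return traj

# hypothetical example: two oscillators, kappa = 2, mu = 0.5
kappa, mu = 2.0, 0.5
K = np.array([[0.0, kappa * np.exp(1j * mu)],
              [kappa * np.exp(-1j * mu), 0.0]])
traj = simulate_phases(K, n_steps=50000, rng=0)
delta = traj[5000:, 0] - traj[5000:, 1]   # discard burn-in
C = np.mean(np.exp(1j * delta))           # empirical phase correlation
```

For two oscillators with coupling strength κ = 2 at unit temperature, the mean resultant of the phase difference should approach $I_1(2)/I_0(2) \approx 0.70$, with its angle near the preferred offset μ = 0.5.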

### 2.2. Phase Correlations.

A natural pairwise measure of the phase dependency between two oscillators is the phase correlation,

$$C_{ij} = \langle e^{i(\phi_i - \phi_j)} \rangle = \langle x_i \bar{x}_j \rangle.$$

For a single pair of coupled oscillators, the phase difference $\phi_i - \phi_j$ follows a von Mises distribution, and the magnitude of the correlation is given by the ratio

$$|C_{ij}| = \frac{I_1(\kappa_{ij}/T)}{I_0(\kappa_{ij}/T)},$$

where $I_0(x)$ and $I_1(x)$ denote the modified Bessel functions of zeroth and first order, respectively. In a network of more than two oscillators, the value of $C_{ij}$ is related to the full set of coupling parameters **K** through equation 2.8. Therefore, there is a nontrivial relationship between the measured phase correlations and the coupling parameters.

Because of this nontrivial relationship, pairwise measurements can often lead to false interpretations of the true coupling. To demonstrate this point, we simulated three model systems and present the results in Figure 1. Each system demonstrates a case in which taking phase correlations at face value will lead to false conclusions about the network connectivity. The first system (see row 1, column 1 in Figure 1) is a network of three coupled oscillators in which the two oscillators labeled *A* and *B* are not directly coupled ($\kappa_{AB} = 0$). Although these oscillators are not directly coupled, the empirical distribution of their phase difference shows a strong peak (second column). In the third column, we depict the network coupling that is implied by using phase correlations as a proxy for underlying network coupling. In this case, phase correlations indicate that oscillators *A* and *B* are directly coupled when their true coupling is zero.

Phase correlations can also indicate that there is no coupling when the true system is coupled (see Figure 1, row 2) and can indicate a phase lead when the true system interacts with a phase lag (see Figure 1, row 3). In the second example, oscillators *A* and *B* are directly coupled. However, the pairwise empirical distribution shows no significant phase dependency, and inference of the network connectivity using phase correlations (column 3) leads to the incorrect conclusion that oscillators *A* and *B* are not coupled. In the final example, oscillators *A* and *B* are coupled with a preferred phase offset. However, the empirical distribution indicates a phase offset of the opposite sign, and the phase correlation indicates that *A* leads *B*. These incorrect inferences arise because phase correlations are a measurement and do not provide a model of the interactions within the system. In other words, phase correlations alone fail to account for the interactions between oscillators that arise through intermediate oscillators and detailed network connectivity.
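The first failure mode, apparent coupling between oscillators that interact only through an intermediate oscillator, can be reproduced without simulating any dynamics. In this sketch (our own construction, not from the letter; NumPy, with hypothetical concentration values), phases *A* and *B* are generated by adding independent von Mises jitter to a common hub phase *C*, so *A* and *B* interact only through *C*, yet their empirical phase correlation is strong:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000
phi_C = rng.uniform(0.0, 2.0 * np.pi, N)     # hub oscillator C
phi_A = phi_C + rng.vonmises(0.0, 5.0, N)    # A is coupled to C only
phi_B = phi_C + rng.vonmises(0.0, 5.0, N)    # B is coupled to C only

# empirical pairwise phase correlations C_ij = <exp(i(phi_i - phi_j))>
X = np.exp(1j * np.stack([phi_A, phi_B, phi_C]))
C = (X @ X.conj().T) / N

# |C[0, 1]| (the A-B correlation) is large even though A and B
# are only indirectly coupled through the hub C
```

Reading `C[0, 1]` at face value would suggest a direct *A*–*B* connection, exactly the false inference described above.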

### 2.3. Closed-Form Solution.

We wish to solve the inverse problem for the general case of coupled phase oscillators in equation 2.7. Stated explicitly, the problem is to infer the distribution's parameters (coupling strengths $\kappa_{ij}$ and phase offsets $\mu_{ij}$) from measurements of the network's state, $\boldsymbol{\phi}$.

Given *N* independent measurements $\boldsymbol{\phi}^{(1)}, \ldots, \boldsymbol{\phi}^{(N)}$ and a likelihood distribution $p(\boldsymbol{\phi} \mid \mathbf{K})$, which is the probability distribution under the model for a given set of parameters **K**, the log-likelihood function is defined as the joint density of the observations considered as a function of **K**:

$$\ell(\mathbf{K}) = \left\langle \log p(\boldsymbol{\phi} \mid \mathbf{K}) \right\rangle_{\text{data}},$$

where $\langle \cdot \rangle_{\text{data}}$ denotes the expectation value taken over the data distribution. The maximum likelihood estimate of the distribution parameters given the observed data can be computed by setting the derivative of the average log-likelihood function to zero, which yields the moment-matching condition

$$\langle x_i \bar{x}_j \rangle_{\text{data}} = \langle x_i \bar{x}_j \rangle_{\text{model}}, \qquad (2.11)$$

where $\langle \cdot \rangle_{\text{data}}$ and $\langle \cdot \rangle_{\text{model}}$ denote the expectation values taken over the data and model distribution, respectively. In our situation, a closed-form solution to this equation does not exist. However, we can find a solution by iteratively following the gradient of the log-likelihood. This procedure has a number of drawbacks: it is inherently iterative; estimating the model expectation in equation 2.11 involves a computationally expensive sampling procedure; and the sampling procedure may suffer from a variety of problems due to the landscape of the energy function.

To avoid the pitfalls of the maximum likelihood approach, we now derive a closed-form solution to the inverse problem for phase-coupled oscillators. We use the score matching method introduced by Hyvärinen (2005, 2007). Score matching allows the fitting of probability distributions of the exponential form for real-valued data without computation of the normalization constant *Z*. If the energy depends linearly on the distribution parameters, the solution can be computed in closed form by setting the derivative of the score function to zero (Hyvärinen, 2007).

The score, the gradient of the log density with respect to the phase variables, is given by

$$\psi_i(\boldsymbol{\phi}; \mathbf{K}) = \frac{\partial \log p(\boldsymbol{\phi} \mid \mathbf{K})}{\partial \phi_i} = -\operatorname{Im}\!\left[ x_i \left( \mathbf{K} \mathbf{x} \right)^{*}_i \right],$$

and the score function $J_{SM}(\mathbf{K})$ is given by

$$J_{SM}(\mathbf{K}) = \left\langle \sum_{i=1}^{d} \left[ \frac{1}{2}\,\psi_i^2 + \frac{\partial \psi_i}{\partial \phi_i} \right] \right\rangle,$$

with the expectation value, $\langle \cdot \rangle$, taken over the data distribution; here $\mathbf{M}^{*}$ denotes complex conjugation, $\mathbf{M}^{T}$ denotes transposition, and $\mathbf{M}^{\dagger}$ denotes Hermitian conjugation of the matrix **M**. Using the quadratic form of the energy in equation 2.7, the score $\psi_i$ and its derivative $\partial \psi_i / \partial \phi_i$ are linear in the elements of **K**, so $J_{SM}$ is a quadratic function of the coupling parameters. The estimator is computed by setting the derivative of the score function to zero. This produces a system of linear equations,

$$\mathbf{A}\,\operatorname{vec} \mathbf{K} = \operatorname{vec} \mathbf{b}, \qquad (2.12)$$

whose coefficients are built from the expectation values $C_{ij} = \langle x_i \bar{x}_j \rangle$ and $C_{ijkl} = \langle x_i \bar{x}_j x_k \bar{x}_l \rangle$, taken over the data. By setting the diagonal elements of **K** to zero, we can remove the corresponding equations where *i* = *j* from the system of equations. We can solve the resulting system of linear equations using standard techniques.
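The exact coefficients of the linear system in equation 2.12 are not reproduced above, but the construction can be sketched numerically: because the score is linear in the elements of **K**, the score function $J_{SM}$ is quadratic, and its minimizer solves a linear system assembled from data expectations. The following is our own reconstruction of that recipe (NumPy; the function names, the Langevin sampler, and all parameter values are our own assumptions, not the letter's code):

```python
import numpy as np

def pair_basis(d):
    """One real-symmetric and one imaginary-antisymmetric Hermitian basis
    matrix per pair (i < j); any zero-diagonal Hermitian K is a real
    combination of these."""
    basis = []
    for i in range(d):
        for j in range(i + 1, d):
            Ba = np.zeros((d, d), dtype=complex)
            Ba[i, j] = Ba[j, i] = 1.0
            Bb = np.zeros((d, d), dtype=complex)
            Bb[i, j], Bb[j, i] = 1j, -1j
            basis += [Ba, Bb]
    return basis

def score_terms(X, B):
    """Score psi_i = d log p / d phi_i and its derivative d psi_i / d phi_i
    for log p = 0.5 * x^dag B x + const, at unit phasors X of shape (N, d)."""
    C = X @ B.T                       # (B x)_i for every sample
    return -np.imag(X * np.conj(C)), -np.real(X * np.conj(C))

def estimate_K(phases):
    """Closed-form score-matching estimate of the Hermitian coupling matrix."""
    X = np.exp(1j * phases)
    N, d = X.shape
    basis = pair_basis(d)
    terms = [score_terms(X, B) for B in basis]
    S = np.stack([s for s, _ in terms], axis=-1)            # (N, d, P)
    c = np.array([h.sum(axis=1).mean() for _, h in terms])  # (P,)
    M = np.einsum('niq,nir->qr', S, S) / N                  # (P, P)
    theta = np.linalg.solve(M, -c)      # minimizer of the quadratic J_SM
    return sum(t * B for t, B in zip(theta, basis))

def sample_phases(K, n_samples, n_chains=200, burn=2000, thin=10,
                  dt=0.02, rng=None):
    """Approximate equilibrium samples via parallel Langevin chains."""
    rng = np.random.default_rng(rng)
    d = K.shape[0]
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_chains, d))
    rounds = -(-n_samples // n_chains)
    kept = []
    for t in range(burn + thin * rounds):
        x = np.exp(1j * phi)
        drift = -np.imag(x * np.conj(x @ K.T))
        phi = phi + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(phi.shape)
        if t >= burn and (t - burn) % thin == 0:
            kept.append(phi % (2.0 * np.pi))
    return np.concatenate(kept)[:n_samples]

# hypothetical 3-oscillator network: pairs (0,1) and (0,2) coupled, (1,2) not
K_true = np.zeros((3, 3), dtype=complex)
K_true[0, 1] = 1.2 * np.exp(0.4j)
K_true[0, 2] = 0.8 * np.exp(-0.7j)
K_true += K_true.conj().T
phases = sample_phases(K_true, n_samples=60000, rng=2)
K_est = estimate_K(phases)
```

On samples drawn from this small simulated network, the estimator should recover the coupling matrix, including the absent coupling between the pair of oscillators that interact only indirectly.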

### 2.4. Regularized Solution.

To write the linear system of equation 2.12 explicitly, the rows of the matrix **A** are referenced by the double index $I := (i, j)$, where $i \neq j$, and the columns are referenced by the double index $K := (k, l)$, where $k \neq l$. The notation $\operatorname{vec} \mathbf{M}$ denotes the column vector formed from the off-diagonal elements of the matrix **M**, referenced by the index $I := (i, j)$.

When only a limited amount of data is available, the estimate can be improved by introducing a prior over the coupling parameters. We assume that the coupling strengths $\kappa_{ij}$ are chosen from a Rayleigh distribution and the preferred phase offsets $\mu_{ij}$ are uniformly distributed or, equivalently, that the real and imaginary components of $K_{ij}$ are independent, zero-mean gaussian random variables. Under this gaussian prior, the maximum a posteriori estimate is obtained by adding a ridge term with scale parameter λ to the linear system of equation 2.12:

$$\left( \mathbf{A} + \lambda\,\mathbf{I} \right) \operatorname{vec} \mathbf{K} = \operatorname{vec} \mathbf{b}. \qquad (2.16)$$
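Under the gaussian prior, the regularized estimate of equation 2.16 amounts to a ridge solve. The sketch below is our own illustration; the matrix `A` and vector `b` are synthetic stand-ins for the data-dependent system, not quantities computed from phase data, and serve only to show the shrinkage effect on an ill-conditioned system:

```python
import numpy as np

def regularized_solve(A, b, lam):
    """Ridge-regularized solution of A theta = b: the MAP estimate under an
    isotropic zero-mean gaussian prior with precision lam on the parameters."""
    return np.linalg.solve(A + lam * np.eye(A.shape[0]), b)

# synthetic, ill-conditioned stand-in for the system of equation 2.12
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = U @ np.diag(np.logspace(0, -6, 8)) @ U.T         # condition number ~1e6
theta_true = rng.standard_normal(8)
b = A @ theta_true + 1e-4 * rng.standard_normal(8)   # noisy "measurements"

theta_plain = np.linalg.solve(A, b)            # unregularized (lambda = 0)
theta_ridge = regularized_solve(A, b, lam=1e-3)
```

Because the ridge term shrinks each eigencomponent of the solution by a factor $s/(s + \lambda)$, the regularized estimate always has smaller norm than the unregularized one, suppressing the noise-dominated directions associated with small eigenvalues.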

## 3. Results

### 3.1. Coupling Estimation in Small Networks of Oscillators.

In the following, we show that phase coupling estimation recovers the parameters of simulated coupled oscillator models. We begin by estimating the coupling parameters of the example systems shown in Figure 1. Given samples of the simulated phase variables $\boldsymbol{\phi}$, we compute the correlations, $C_{ij}$, and the required four-point functions, $C_{ijkl}$, and invert the linear system in equation 2.12. This produces an estimate of the coupling parameters. Phase coupling estimation recovers the true coupling parameters, as shown in the fourth column of Figure 1. In each of the three example systems, phase coupling estimation recovers the true system parameters in cases where a direct interpretation of phase correlations would lead to false inferences about the network (as shown in column 3). This is a result of the fact that phase coupling estimation makes explicit an underlying model of the dynamical system and recovers the parameters of the model that are necessary to produce the observed phase correlations.

### 3.2. Performance of Phase Coupling Estimation.

We now systematically analyze the performance of phase coupling estimation: the ability of the technique to recover the distribution parameters from data. The procedure is as follows. We begin by sampling a set of distribution parameters **K**. Given these parameters, we then sample phase variables by numerically integrating equation 2.1. We then estimate the parameters given the sampled data using equation 2.12. The real and imaginary entries of the complex matrix **K** are sampled from a zero-mean normal distribution, and the diagonal entries are set to zero: $K_{ii} = 0$. Note that this produces a dense coupling matrix.

In the first column of Figure 2, we graphically display the element-wise amplitude and phase of a sample matrix **K** where *d* = 16. Using this matrix, we sampled *N* = 2560 phase vectors. The recovered parameters are shown in the second column of Figure 2. While it is clear that these matrices are visually similar, we quantified the error using two different metrics. First, we calculated the mean squared error of the matrix elements, $\text{mse} = \frac{1}{d(d-1)} \sum_{i \neq j} |\hat{K}_{ij} - K_{ij}|^2$, where $\hat{K}_{ij}$ is the estimated parameter. In the third column of Figure 2, we display the element-wise error before averaging. Second, we computed a metric, $Q_{\gamma}$, indicating the quality of the recovered parameters, borrowed from Timme (2007), in which *u* is the Heaviside step function and $K_{\text{max}}$ is the maximum absolute value of all matrix entries $K_{ij}$. For the example in Figure 2, mse = 0.02 and $Q_{.95} = 0.75$.
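Both error metrics are straightforward to implement. The precise form of $Q_{\gamma}$ is not reproduced in the text above, so the sketch below (our own reading of Timme, 2007; NumPy) takes $Q_{\gamma}$ to be the fraction of off-diagonal couplings whose absolute error stays below $(1 - \gamma) K_{\text{max}}$, with the Heaviside step function *u* counting the failures:

```python
import numpy as np

def mse(K_true, K_est):
    """Mean squared error over the off-diagonal matrix elements."""
    mask = ~np.eye(K_true.shape[0], dtype=bool)
    return np.mean(np.abs(K_est - K_true)[mask] ** 2)

def quality(K_true, K_est, gamma=0.95):
    """Q_gamma: fraction of off-diagonal couplings recovered to within
    (1 - gamma) * K_max (one plausible reading of Timme, 2007)."""
    mask = ~np.eye(K_true.shape[0], dtype=bool)
    K_max = np.max(np.abs(K_true))
    err = np.abs(K_est - K_true)[mask]
    u = (err - (1.0 - gamma) * K_max > 0).astype(float)  # Heaviside step
    return 1.0 - u.mean()

# perfect recovery gives mse = 0 and Q = 1; a 0.5 perturbation of every
# off-diagonal entry (vs. K_max ~ 1.41) fails the 0.95 criterion
K = np.array([[0.0, 1.0 + 1.0j], [1.0 - 1.0j, 0.0]])
K_bad = K + 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
```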

We computed these error metrics over a range of dimensions and samples per dimension. The error metrics for each dimension and samples per dimension were averaged over 20 trials and are plotted in Figure 3. The algorithm achieves highly accurate parameter recovery for as few as 100 samples per dimension and achieves full recovery of parameters as the number of samples per dimension reaches 1000. This indicates that recovery of true parameters is quite feasible in many real-world settings.

### 3.3. Performance of Regularized Phase Coupling Estimation.

We next examine the performance of the regularized phase coupling estimator in equation 2.16. For systems with 12, 16, and 20 oscillators, we again sampled coupling parameters from a zero-mean normal distribution, with the diagonal entries set to zero: $K_{ii} = 0$. We then simulated the dynamical system given by equation 2.1 and randomly drew phase samples to give 40 samples per dimension, *N* = 40*d*. We then solved the regularized system given by equation 2.16 for a range of scale parameters λ. We evaluated the performance of the estimate using the mse of the matrix elements. The results are shown in Figure 4. For graphical display, we normalized the scale parameter λ by the dimensionality of the system and plot the mse as a fraction of the mse for the unregularized solution (λ = 0). We leave the methodology of choosing an optimal value of λ to future work, but standard resampling-based procedures that optimize the distance of the estimated parameters to parameters estimated using randomized surrogates may be used (see, e.g., Biessmann et al., 2010).

## 4. Discussion

In this section, we discuss the interpretation of the model and other approaches to solving the inverse problem in networks of oscillators. We may interpret the model and estimation technique from a statistical and a physical systems perspective. In practice, the distribution and estimator may be applied in any situation where multivariate phase measurements are taken. There are two important aspects to the interpretation of the resulting parameters. First, the estimated parameters may be viewed as purely statistical quantities, and second, under certain assumptions, the parameters may correspond to interactions of an underlying physical process.

### 4.1. Statistical Interpretation.

The pairwise distribution in equation 2.7 provides the most parsimonious statistical model of the joint multivariate phase distribution given only pairwise phase measurements. The corresponding estimator in equation 2.12 provides the unique maximum entropy solution. Maximum entropy solutions serve as the least biased estimate of the distribution possible and can be used when the true joint distribution is unknown. For this reason, phase coupling estimation can potentially provide a contribution in a variety of situations of interest to scientists and engineers. In applications where the true joint distribution is unknown, it is possible to test the validity of the pairwise maximum entropy assumption by comparing higher-order moments of the empirical distribution to the maximum entropy distribution. Such tests would indicate to what extent the true joint distribution is captured by the pairwise model.

#### 4.1.1. Other Statistical Models of Phase.

Our multivariate phase distribution and estimation technique can be compared to a number of previous efforts to characterize statistical dependencies in circular phase variables. Examples from the statistics community include the von Mises distribution, the von Mises–Fisher distribution (Fisher, 1953), the Kent distribution (Kent, 1982), and the cosine model (Mardia, Taylor, & Subramaniam, 2007). The von Mises distribution is a widely used distribution when dealing with circular variables. It is a univariate, unimodal distribution for a circular variable that is analogous to a univariate gaussian for a scalar variable. The von Mises–Fisher distribution is a directional distribution that extends the univariate von Mises distribution to the hypersphere. Unfortunately, the von Mises–Fisher distribution does not model densities on the *d*-dimensional torus (instead it models distributions on the *d*-dimensional sphere), and it fails to capture the pairwise dependencies in dynamical systems models of coupled oscillators. The cosine model (Mardia et al., 2007) describes the pairwise statistics between two phase variables. This distribution has a functional form similar to the one in equation 2.7; however, it models only two variables and does not have an efficient estimation procedure. The generalized cosine model (Mardia, Hughes, Taylor, & Singh, 2008) does formalize particular pairwise phase relationships, but estimation techniques for more than two phase variables have not been produced. Other work in the statistics community has addressed univariate phase distributions by extending the unimodal von Mises distribution to multimodal distributions (Mardia, 1975; Jammalamadaka & Sengupta, 2001). This work may be of interest for further alterations to the dynamical system model we have proposed, but it is not directly relevant here because these distributions are not multivariate.

### 4.2. Physical Interpretation.

Physical models can bridge the gap between measurements and physical interactions by formulating the transformation from physical process to measurement. Using such a model, it is then possible to provide concrete predictions that can either advance our understanding or falsify the model. Inevitably, a physical model must make assumptions about the underlying physical processes.

The procedure we have described takes a model of a physical system, which makes specific assumptions, finds the corresponding steady-state distribution, and estimates the parameters of the distribution. The attribution of the parameters in the probabilistic model to physical processes or interactions is predicated on the validity of the assumptions made by the physical model. The mapping from dynamical systems to steady-state distributions is not unique, and thus many physical systems can produce the same estimated coupling parameters.

Although the assumptions made by the physical model in equation 2.1 may be violated in real-world situations, central aspects of the system may still be captured. On a case-by-case basis, the assumptions and predictions of the physical model must be tested to determine if the attribution of the coupling parameters to physical processes is valid. As the assumptions of the model are violated, the degree to which the model parameters correspond to the true physical interactions will deteriorate. We discuss the assumptions imposed by the model and limitations related to physical interpretation in the next section.

#### 4.2.1. Assumptions and Limitations.

The assumptions of the physical model of coupled oscillators fall into three categories. First, the model assumes that there is no structured external input to the system. In equation 2.1, the external input is modeled as unstructured white noise. Generally it is not possible to predict how structured external inputs would affect the interactions of the oscillators and the estimate of their coupling terms. It may be possible to explicitly model unknown input as unobserved oscillators that interact with the observed oscillators. (For a relevant statistical approach, see Zemel, Williams, & Mozer, 1995.)

Second, the model assumes that the oscillators interact symmetrically and instantaneously. Because our model captures only the steady-state distribution, it is blind to time reversal and cannot model directed interactions. Therefore, the model cannot be used for inferring directionality of interactions. However, extending the analysis to include multiple time slices may provide an extension capable of modeling directed interactions (for a *d* = 2 example, see Rosenblum & Pikovsky, 2001). Closely related to modeling directed interactions is the study of causality. Like all other statistical models, the distribution and model we have introduced here do not directly address the issue of causality. However, our distribution and estimation technique may be instrumental in attempts to infer causality in high-dimensional phase spaces.

Finally, the model in its current form assumes that the oscillators all share the same intrinsic frequency (ω) and interact through a cosine function. Because differences in the intrinsic frequencies enter the dynamic equations in a similar way as structured external inputs, it is likely that they would lead to misestimation of the network interactions. (For a technique that does attempt to estimate the interaction functions and frequencies of the oscillators, see Tokuda, Jain, Kiss, & Hudson, 2007).

### 4.3. Inverse Problems.

Despite great progress in solving the inverse problem for coupled phase oscillators, current methods suffer from a number of drawbacks that our method addresses. Some methods require intervention by perturbing a single isolated oscillator (Kuramoto, 1984; Sakaguchi, Shinomoto, & Kuramoto, 1988; Kiss, Zhai, & Hudson, 2005) or the entire system (Timme, 2007). Such intervention techniques may not be feasible in many real-world situations. Other techniques model only two-oscillator systems and do not generalize to arbitrarily large systems (Galán, Ermentrout, & Urban, 2005; Miyazaki & Kinoshita, 2006; Kralemann, Cimponeriu, Rosenblum, Pikovsky, & Mrowka, 2008). The technique most related to our own is that of Tokuda et al. (2007). This technique addresses the inference of coupling from multivariate statistics; however, in its current form it is restricted to homogeneous coupling.

## 5. Conclusion

In this letter, we have introduced a closed-form solution to the inverse problem for systems of coupled phase oscillators using a maximum entropy approach. We derived the solution by formulating a dynamical system and showed that the estimation technique recovers the parameters of the model in high dimensions and from noisy measurements. Phase coupling estimation provides a rigorous framework for analyzing neural systems that exhibit oscillatory dynamics and phase synchronization. Because phase coupling estimation successfully recovers the true network connectivity of the specified dynamical system in situations where phase correlations alone lead to false conclusions, this technique may be instrumental in elucidating the presence and role of dynamic functional networks in cortex.

## Acknowledgments

We thank Matthias Bethge, Chris Hillar, Martin Lisewski, Bruno Olshausen, Jascha Sohl-Dickstein, and Marc Timme for helpful comments and discussions. This work has been supported by NSF grants IIS-0705939, IIS-0713657, and IIS-0917342 and NGA grant HM1582-05-1-2017.