Abstract

Recent studies have used graph-theoretical approaches to model complex networks (such as social, infrastructural, or biological networks) and to ask how their hardwired circuitry relates to their dynamic evolution in time. Understanding how configuration reflects on the coupled behavior in a system of dynamic nodes can be of great importance, for example, in the context of how the brain connectome affects brain function. However, the effect of connectivity patterns on network dynamics is far from fully understood. We study the connections between edge configuration and dynamics in a simple oriented network composed of two interconnected cliques (representative of brain feedback regulatory circuitry). In this article our main goal is to study the spectra of the graph adjacency and Laplacian matrices, with a focus on three aspects in particular: (1) the sensitivity and robustness of the spectrum in response to varying the intra- and intermodular edge density, (2) the effects on the spectrum of perturbing the edge configuration while keeping the densities fixed, and (3) the effects of increasing the network size. We study some tractable aspects analytically, then simulate more general results numerically, thus aiming to motivate and explain our further work on the effect of these patterns on the network temporal dynamics and phase transitions. We discuss the implications of these results for modeling brain connectomics. We suggest potential applications to understanding synaptic restructuring in learning networks and the effects of network configuration on the function of regulatory neural circuits.

1  Introduction

1.1  Network Architecture and Dynamics

The study of networks has been the subject of great interest in recent research. Many natural systems are organized as networks, in which the nodes (be they cells, individuals, or web servers) interact in a time-dependent fashion.

One of the particular points of interest has been the question of how the hardwired structure of a network (its underlying graph) affects its function, for example, in the context of optimal information storage or transmission between nodes along time (Bullmore & Sporns, 2009). It has been hypothesized that there are two key conditions for optimal function in such networks: a well-balanced adjacency matrix (the underlying graph should appropriately combine robust features and random edges) and well-balanced connection strengths, driving optimal dynamics in the system. A subsequent line of study is to understand the effects of connectivity patterns on the temporal behavior of the network, seen as a dynamical system in which the node variables are coupled according to a connectivity scheme that obeys certain deterministic constraints but also incorporates random aspects. One can then investigate how the phase space dynamics (and the phase transitions that the system undergoes under perturbation) are affected when perturbing the underlying adjacency graph.

While the general aim of our work is to study the relationship between a network’s hardwired circuitry and its dynamics, this article focuses primarily on understanding the robustness of certain graph-theoretical features as the network is perturbed or as its size increases—and, to a lesser extent, discusses their potential to further affect the vulnerability or robustness of the system’s dynamics (the subject of a related paper: Rădulescu & Verduzco-Flores, 2015). To fix our ideas, we investigate here a bimodular, oriented graph, in which the nodes of two modules connect through fixed numbers of random edges within each module, as well as across modules. Using analytical and numerical computations, we aim to establish whether and why, when fixing the numbers of both intra- and intermodular edges, the adjacency spectrum of the network remains, in general, robust under changes of the particular edge configuration (geometry). This is important, since it would suggest that certain dynamic algorithms in this type of network may also remain unaffected by such constrained geometry changes.

A very large body of work addresses properties of random matrices (Tao, 2012), that is, matrices whose entries are drawn independently from a given (typically normal) probability distribution. If, in addition, the matrix represents the adjacency of a random graph, so that each entry equals 1 with a given probability, there are classical methods for deriving properties of the spectrum (e.g., spectral radius or spectral density). For example, a popular and well-known generative model developed in the early 1980s (Holland, Laskey, & Leinhardt, 1983), whose extensions for directed and weighted edges are still widely used, is the stochastic block model (SBM; Aicher, Jacobs, & Clauset, 2013; Larremore, Clauset, & Jacobs, 2014). SBMs are a powerful way of encoding specific assumptions about the way unknown parameters interact to create edges, and they offer many advantageous features. We argue, however, that generative models do not account (mathematically or realistically) for all potential network connectivity schemes that arise in natural systems.

A lot remains to be clarified in terms of how individual (rather than stochastic) changes in simple parameters affect the properties of a graph. Such deterministic constraints are important, since they are likely to appear in natural systems in conjunction with the stochastic aspects controlling the particular edge distribution. Our model differs from most of the generative approaches in that it conserves the number of edges within and between modules rather than fixing independently the probability of having an edge that connects two given nodes in the same or different modules. We chose the edge density as our parameter of interest, since fixing the total number of node-to-node connections represents a typical normalization condition in networked systems (e.g., a constant sum of weighted edges is a popular normalization scheme in the context of synaptic updating in a neural network (von der Malsburg, 1973)).

While classical results (such as Wigner’s semicircle law) apply in the stochastic case with independent and identically distributed edges, various extensions have been worked out for models that do not necessarily abide by these properties. Consider, for example, the configuration model (Farkas, Derényi, Barabási, & Vicsek, 2001), whose spectral properties have been addressed by numerous studies. Since its edges are not statistically independent, a direct analytical approach is very difficult; existing results range from approximating the full spectrum (Dorogovtsev, Goltsev, Mendes, & Samukhin, 2003) to formally deriving the expected values of the leading eigenvalue, but only in the large N limit (Chung, Lu, & Vu, 2003). In a recent paper, Nadakuditi and Newman (2013) took an indirect approach: they considered a model with the same degree sequence as the configuration model, but in which the number of edges between any two nodes was drawn independently from a Poisson distribution. The spectra of the two models were then shown to agree in the large N limit. In section 4, we apply this idea to our case by carrying out a large N limit comparison between our model and its probabilistic counterpart with independent, stochastic edges, of the kind considered by Nadakuditi and Newman (2012).

1.2  Brain Function from Graph Theory to the Connectome

In the context of the brain, a network of nodes connected by oriented weighted edges may constitute a valid representation of neural architecture at more than one spatiotemporal scale (the brain “fractal” possibly reusing similar organizational and optimization principles at multiple complexity levels). For example, one may think of the nodes as individual neurons connected by synapses, whose activity (e.g., measured as variations in membrane potential of single cells) is determined by the external input together with the pattern and strengths of synaptic coupling. At a coarser level, one may view a node as a synchronized population of neurons, whose activity (e.g., measured as mean field firing rate) is determined by the external input, as well as its mean field connections from other—excitatory or inhibitory—populations. At the macroscopic level, consistent with imaging techniques such as MRI or EEG, one may think of the nodes as anatomical brain regions (e.g., amygdala, prefrontal cortex) whose activity (e.g., measured as event-related potentials or blood-oxygen-level dependent signals) is determined by the external stimulus and the interregional connectivity patterns and connection strengths. The way in which various parts of the brain (from the microscale of neurons to the macroscale of functional regions) are wired together is one of the great scientific challenges of this century, currently being addressed by large-scale research collaborations, such as the Human Connectome Project (Toga, Clark, Thompson, Shattuck, & Van Horn, 2012; Craddock et al., 2013).

Recent studies have used graph-theoretical approaches to investigate brain networks, not only in the context of learning, memory formation, and cognitive performance, but also to understand more general organizational and functional principles used by the brain and interpret the effects of different connectivity patterns between the brain’s coupled components (Bullmore & Sporns, 2009; Sporns, 2002, 2011). With nodes and edges defined at various scales, according to different empirical modalities (Sporns, 2010), these studies support certain generic topological properties of brain architecture, such as modularity, small worldness, the existence of hubs, and other connectivity density patterns (He & Evans, 2010). These properties, if proven consistent with physiological, behavioral, or genetic factors, may provide us with a better understanding of neural processes and may be effective as biomarkers for behavioral traits or neuropsychiatric conditions.

One thought of potential importance to us is that while the brain itself is a gigantic and relatively densely connected network of billions of neuron nodes (each receiving and providing input to tens of thousands of other nodes), it may be both realistic and computationally advantageous to view the brain as a highly hierarchic network in which the behavior of each “node” at a certain complexity level integrates the behavior of a collection of lower-level nodes. Hence, at each complexity level, the size of the networks we need to study experimentally, represent theoretically, or simulate numerically may be in fact relatively small (a few hundred nodes). For example, at the macroscopic level, compatible with imaging techniques in humans, a small region such as the amygdala spans (within typical fMRI acquisition parameters) only about 100 to 200 voxel-nodes. For relatively small networks, the traditional large size limit results obtained in random graph theory may no longer apply directly, and new approaches need to be created to extend the results (see section 4.1 for a more detailed discussion).

A recent trend in human imaging research has been oriented toward developing and using graph-theoretical network measures in conjunction with empirical data in order to identify the effects of abnormal connectivity patterns on the efficiency of brain function. By inferring graph-theoretical information from tractography data and temporal time series, various studies have been investigating the sensitivity of systems to removing or adding nodes or edges to different places in the network structure. Data sets have been mined for global (e.g., characteristic path length, clustering coefficient, small-world ratio parameter) and local (e.g., nodal-betweenness centrality, nodal path length, nodal clustering coefficient) network measures that would optimally differentiate between subject behavioral profiles (GadElkarim et al., 2014; Ajilore et al., 2013). For example, both resting-state and tractography-derived measures have been used to understand behavioral impairments in subjects with compromised connectivity due to existing lesions (Corbetta, 2012) or group differences between healthy controls and patients with mental illnesses associated with abnormal feedback circuitry (Fekete et al., 2013; Leow et al., 2013).

However, no matter how well designed or statistically powerful, purely empirically based analyses cannot explain in and of themselves the mechanisms by which connectivity patterns actually act to change the system’s dynamics, and thus the observed behavior. Substantial research effort is being directed toward constructing an underlying network model that is tractable theoretically or numerically and could therefore be used in conjunction with the data toward interpreting the empirical results and making further predictions. To this aim, the theoretical dependence of dynamics on connectivity (e.g., in the context of stability and synchronization in networks of coupled neural populations) has been investigated both analytically and numerically, in a variety of contexts—from biophysical models (Gray & Robinson, 2009) to simplified systems (Siri, Quoy, Delord, Cessac, & Berry, 2007). These analyses revealed a rich range of potential dynamic regimes and transitions (Brunel, 2000), shown to depend as much on the coupling parameters of the network as on the arrangement of the excitatory and inhibitory connections (Gray & Robinson, 2009).

The successful construction of a useful (biophysical) computational model seems therefore contingent on our understanding of the control that a network’s architecture exercises on its functional dynamic regime. In our work, we are trying to address this question starting with simpler networks and investigating which properties are preserved or emerge in increasingly complex systems. A relatively simple yet general example that we have been considering in previous work is that of an oriented network of two interconnected cliques. For this type of graph geometry and for nodes acting as nonlinear oscillators, we studied how the two intermodular edge densities affect dynamics (Rădulescu & Verduzco-Flores, 2015).

We used this architecture because it is representative of two interacting excitatory and inhibitory densely packed populations, a feedback scheme that provides the underlying control for many brain regulatory loops. For example, this setup successfully informed our human imaging results in the amygdala-prefrontal circuit regulating human emotion (Rădulescu & Mujica-Parodi, 2013). Also, while there are clearly better measures of architecture complexity in a network than edge density, our work was directly motivated by existing hypotheses that relate network functional efficiency precisely to the density of projections between subsets of network nodes. Our analysis is an attempt to provide a formal framework for existing empirical studies, with the potential to reconcile results that may otherwise seem counterintuitive, even mutually contradictory. For example, some studies found that a lack of adequate amygdalar projections to prefrontal regions may be responsible for trait anxiety (Kim & Whalen, 2009; Kim et al., 2011), while other studies correlated the same phenomenon with major depression (Dannlowski et al., 2009). A formal model investigating the effects of density on dynamics seemed therefore an appropriate starting point for addressing these ambiguities in a quantifiable framework.

While our dynamic results have been promising and clinically informative, one important step (and the cornerstone of this work) is to better understand their source. In our two related papers (one studying the graph properties for fixed edge densities, and the other studying their relationship with network dynamics; Rădulescu & Verduzco-Flores, 2015), we investigate the underpinnings of the observed robustness of coupled dynamics to certain changes in the network architecture and its vulnerability to others, as well as the differences between updating connection strengths versus perturbing connection density or geometry. We aim to clarify that this robustness is not a parameter-dependent property or a numerical artifact but rather an intrinsic feature based on network hardwiring. It is the robustness of certain network architectural features (in this case, the narrow distribution of the adjacency spectrum) that reflects the robustness of the temporal systemic dynamics.

Although there are clearly more complex generative models discussed in the literature on random graphs, random matrices, and complex systems, we found our model interesting to analyze and discuss for a few reasons. While its simple structure may invite understating its mathematical behavior, we found that understanding spectral properties of configurations with fixed edge density poses, even in this simple case, an interesting analytical problem and numerical challenges that increase with the size of the network. The question has surfaced before in the literature in a very similar form (Juhász, 1990) but has been dismissed with partial results that do not completely explain our observations, presented here with systematic simulations and additional analytic justifications. In light of natural systems’ predilection for choosing simple schemes to produce complex behavior, we hypothesize that such a setup may contribute, possibly in conjunction with stochastic components, to computational algorithms performed in complex networks such as human brain circuits. Especially in light of our incomplete knowledge of how plastic brain networks perform normalization, it seems as likely to have a global normalization scheme (choosing one network configuration from a distribution of possible options with a certain property) as it is to have local normalization (e.g., assigning a certain probability for each pair of nodes to be joined by an edge).

Even in its simplest form, the model opens questions on the dependence of dynamics on coupling parameters in a network with variable architecture (addressed primarily in Rădulescu & Verduzco-Flores, 2015) and the properties of a bimodular graph with variable edge geometry (e.g., the distribution of adjacency and Laplacian spectra, addressed in this article). In our work, we continue to study more complex architectures (connected hubs and strong components) and other types of coupled dynamic schemes (e.g., discrete map iterations).

1.3  Network Dynamics from Spectral Measures

1.3.1  The Adjacency Spectrum

A variety of studies have examined random graphs with a general given expected degree distribution and have established bounds or other descriptions of their adjacency spectra. While it is well known that the largest eigenvalue of a graph’s adjacency matrix is controlled by its maximum degree m together with the weighted average of the squares of the expected degrees (Chung & Lu, 2006), work on random matrices has delivered more accurate estimates. For example, Chung et al. (2003) investigated an ensemble of random uncorrelated, nonoriented networks and found that in the large N limit, the expected largest eigenvalue is determined by the ratio ⟨k²⟩/⟨k⟩ of the second to first moment of the degree distribution, together with the expected largest degree m. More generally, for directed (oriented) networks without edge degree correlations, a first-order approximation to the leading eigenvalue is given by λ ≈ ⟨d_in d_out⟩/⟨d⟩, where d_in and d_out are, respectively, the in- and out-degrees of the nodes, and ⟨d⟩ is the mean degree (Restrepo, Ott, & Hunt, 2007).
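As a quick numerical illustration of this last approximation, here is a minimal Python sketch; the random directed graph and all parameter choices are ours, for illustration only, and are not taken from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 500, 0.05
# Directed Erdos-Renyi adjacency: entry (i, j) = 1 with probability p,
# read here as an edge from node j into node i.
T = (rng.random((N, N)) < p).astype(float)

d_in, d_out = T.sum(axis=1), T.sum(axis=0)      # row sums / column sums
approx = np.mean(d_in * d_out) / np.mean(d_in)  # <d_in d_out> / <d>
leading = np.max(np.abs(np.linalg.eigvals(T)))  # spectral radius

print(f"leading eigenvalue: {leading:.2f}, approximation: {approx:.2f}")
```

For an uncorrelated random digraph like this one, both numbers land close to pN, as expected in the absence of in/out-degree correlations.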

It is therefore clear that the in- and out-degrees, as well as their correlations, have crucial effects on the leading eigenvalue. In general, a graph’s defining feature is its distribution of edges. Among other properties, edge density, edge clustering, and presence of hubs have been intensely studied. Detecting and interpreting the modularity of a network (i.e., the presence of community structures within the graph, defined as densely connected groups of nodes, with sparser inter-group connections) has been of particular interest recently (Mucha, Richardson, Macon, Porter, & Onnela, 2010; Chauhan, Girvan, & Ott, 2009; Nadakuditi & Newman, 2012, 2013; Sarkar, Henderson, & Robinson, 2013). Whether the graph represents the architecture of a social (Gilbert, Simonetto, Zaidi, Jourdan, & Bourqui, 2011), climate (Donges, Zou, Marwan, & Kurths, 2009), transportation (Zanin & Lillo, 2013) or disease (Barabási, Gulbahce, & Loscalzo, 2011; Van Mieghem, 2011; Supekar, Menon, Rubin, Musen, & Greicius, 2008) network, modularity is reflected in the adjacency properties of the network, controlling its structural and functional properties and, implicitly, the temporal behavior of the system.

1.3.2  The Graph Laplacian Spectrum

The Laplacian matrix L of a graph is defined as the difference between the node degree matrix and the adjacency matrix. In the case of directed graphs, either the in-degree or out-degree can be used, depending on the application. Laplacian dynamics is perhaps the most studied representation of networked systems and is also known as the consensus protocol (Olfati-Saber, Fax, & Murray, 2007), in which the network aims to reach agreement on a certain quantity of interest. Although this model has been explored in more elaborate contexts (Olfati-Saber & Murray, 2004; Rahmani, Ji, Mesbahi, & Egerstedt, 2009), in its simplest form, the dynamics of each node is driven by the sum of differences between its own state and its neighbors’ states, as defined by the adjacency graph. Then the dynamic evolution of the entire system can be appropriately captured by the linear equation ẋ = −Lx.
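In this simplest form, the protocol reduces to a one-line simulation. Below is a minimal sketch; the random graph model, the forward-Euler integration, and all parameter values are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, dt, steps = 20, 0.3, 0.01, 2000
A = (rng.random((n, n)) < p).astype(float)  # random directed graph
np.fill_diagonal(A, 0)
L = np.diag(A.sum(axis=1)) - A              # in-degree Laplacian: L = D - A

x = rng.random(n)                           # random initial node states
for _ in range(steps):
    x = x - dt * (L @ x)                    # Euler step of dx/dt = -L x

print("spread of final states:", x.max() - x.min())  # near 0 => consensus
```

For a strongly connected graph (which this random graph almost surely is at these parameters), the node states contract toward a common consensus value.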

While the consensus protocol has attracted a lot of attention and effort (Wu, 2013), it is not a complete representation of all the recent work on networked dynamic systems. For example, relative sensing networks are an important class of systems whose control has been described using both their incidence matrix (Smith & Hadaegh, 2007) and, more completely, the spanning trees in the connection topology (Sandhu, Mesbahi, & Tsukamaki, 2005). In fact, the dynamical stability of certain networks seems to remain most successfully defined in terms of quantities derived from the eigenspectrum of the adjacency matrix (Small, Judd, & Stemler, 2012).

In our own work, we considered a bimodular oriented network of coupled nodes, each acting as a Wilson–Cowan-type nonlinear oscillator (Rădulescu & Verduzco-Flores, 2015). Even for such a network, one cannot expect either the adjacency or the Laplacian spectrum to be fully predictive of the system’s dynamics. Indeed, both cospectral graphs and Laplacian cospectral graphs may produce different phase- and parameter-space behavior in the corresponding system (examples of this correspondence are shown in appendix A). A stronger requirement, that the graphs be isomorphic, would most likely lead to identical coupled dynamics; but while isomorphic graphs are cospectral and Laplacian cospectral, respectively, the converse is not true in either case (Barghi & Ponomarenko, 2009; Zelazo, 2008). That being said, both the adjacency and the graph Laplacian matrices have properties that reflect into the network dynamic behavior.

Throughout this article, we will be working with oriented graphs composed of two interconnected modules X and Y, each composed in turn of N nodes. Within both X and Y, the edge density is fixed to the same fraction γ (out of the possible maximum of N² edges). The density of the X-to-Y edges is fixed to a fraction α of the N² possible X-to-Y connections, and the density of the Y-to-X edges is fixed to a fraction β of the N² possible Y-to-X connections. The parameters γ, α, and β can take any values of the form k/N², where k is an integer between zero and N² (not necessarily requiring that α = β). In this setup, when γ = 0, the modules are totally disconnected, and when γ = 1, the modules are fully connected (cliques). Most of this article is dedicated to studying interconnected cliques.

As discussed in section 1.2, this graph structure was used in previous work as a schematic architectural representation of a neural circuit, in which X and Y represent the excitatory (resp. inhibitory) modules of a neural feedback loop, so that X projects to Y through a fraction α of excitatory connections, and Y in turn modulates X through a fraction β of feedback, inhibitory connections. In such a circuit, the overall connectivity density may remain constant during a cognitive process such as learning, even though the network may exhibit high plasticity and constantly inspect a variety of edge geometry combinations. Throughout the process, the connectivity profile is constantly remodeled, with existing connections being silenced or disappearing, while new connections are being created or activated.

The adjacency matrix of such an oriented graph is a binary block matrix of the form T = [T_XX, T_XY; T_YX, T_YY], where the diagonal blocks T_XX and T_YY have a fixed fraction γ of 1 versus 0 entries (i.e., edge density), while T_XY and T_YX have densities α and β, respectively. Here, we study the sensitivity and robustness properties of the adjacency and Laplacian spectra for our specific class of oriented graphs. We focus in particular on understanding, for increasing size N, how the eigenvalues are perturbed when changing the density profile and when changing only the edge distribution while keeping densities fixed. We use a combination of analytical and numerical methods to understand the distribution (mean and standard deviation) of each eigenvalue in the adjacency and Laplacian eigenspectrum. In a separate paper (briefly previewed in section 4.2), we investigate the connections between graph properties and the dynamics of a corresponding system of coupled node oscillators.
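A minimal sketch of how one such adjacency matrix can be sampled, using the notation above (the helper names and parameter values are ours, and densities are rounded to realizable values k/N²):

```python
import numpy as np

def random_block(N, density, rng):
    """N x N binary block with exactly round(density * N**2) ones,
    placed uniformly at random (fixed edge count, not fixed probability)."""
    k = int(round(density * N ** 2))
    block = np.zeros(N * N)
    block[rng.choice(N * N, size=k, replace=False)] = 1.0
    return block.reshape(N, N)

def adjacency(N, alpha, beta, rng, gamma=1.0):
    """Block adjacency T = [[T_XX, T_XY], [T_YX, T_YY]]; gamma = 1 gives
    two cliques (with self-loops); alpha, beta are the intermodular densities."""
    T_XX = random_block(N, gamma, rng)
    T_YY = random_block(N, gamma, rng)
    T_XY = random_block(N, alpha, rng)
    T_YX = random_block(N, beta, rng)
    return np.block([[T_XX, T_XY], [T_YX, T_YY]])

rng = np.random.default_rng(2)
T = adjacency(N=8, alpha=0.25, beta=0.5, rng=rng)
```

With γ = 1 the diagonal blocks are deterministic (two cliques with self-loops), so resampling the matrix only reshuffles the intermodular edge geometry at fixed densities—exactly the perturbation studied below.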

1.4  Organization of the Article

Our work is organized as follows. In the following two sections, we study properties of the adjacency and the Laplacian matrix of the graph. In section 2, we focus on the behavior and robustness of the adjacency spectrum when changing the edge density and configuration. (In the text, we restrict ourselves to the case of two interconnected cliques, γ = 1. However, in appendix B, we relax the full connectedness requirement to γ < 1 and analyze how the properties of the spectrum change with the trimming of intramodular edges.) In section 3, we investigate numerically, by looking at increasing network sizes and variable edge densities, whether the same robustness is characteristic to the spectrum of the graph Laplacian. In section 4, we put our results in the context of the existing work on eigenspectra of random graphs. As a preview to our subsequent work in Rădulescu and Verduzco-Flores (2015), we briefly explore connections with the temporal behavior of a coupled dynamical system and discuss the feasibility of dynamic classification based on classes of adjacency or Laplacian spectra. Finally, we discuss the significance of our results in light of neural connectivity and learning plasticity.

2  Dependence of Adjacency Spectrum on Edge Density and Network Size

When considering the case of fully connected cliques X and Y (see Figure 1), the diagonal blocks of the adjacency matrix are T_XX = T_YY = 𝕌 (where 𝕌 is the appropriately sized matrix with all entries equal to one). Note that this scenario includes self-loops at all nodes; eliminating loops is equivalent to subtracting the identity from the adjacency matrix, with the only effect of shifting all the eigenvalues, and preserving the eigenvectors. The off-diagonal blocks T_XY and T_YX are binary matrices, with fractions α and, respectively, β of ones.

Figure 1:

Schematic representation of the network for N nodes per module, as used in our application. Module X is shown on the left; module Y is shown on the right. Both are fully connected, local subgraphs of the full network. The dotted red arrows represent the X-to-Y connections, and the dotted blue arrows represent the Y-to-X connections, generated randomly with low connectivity densities α and β to maintain the clarity of the illustration (Rădulescu & Verduzco-Flores, 2015).


By discussing the effects of edge density we mean analyzing how the spectrum of T changes when the values of α and β are varied; we will represent these changes in the form of surface plots with respect to pairs (α, β). By discussing the effects of geometry, we mean understanding the effects on the spectrum of the edge configuration, under the constraint of fixed densities α and β. We measure these effects by estimating the mean and standard deviation of the eigenvalues of T over the edge geometries admissible by any fixed density pair (α, β). We call λ_1, …, λ_2N the eigenvalues of T, ordered in decreasing order of their magnitudes: |λ_1| ≥ |λ_2| ≥ ⋯ ≥ |λ_2N|.

Let us notice here that λ_1 is guaranteed to be real by an extended version of the Perron–Frobenius theorem, for matrices with nonnegative entries. It is easy to see that the graph with adjacency T is strongly connected, with every two nodes connected by at least one oriented path (unless α = 0 or β = 0, i.e., if there are no X-to-Y or Y-to-X connections). Hence the matrix is irreducible anywhere except on the boundary of the density square [0, 1]². Moreover, since the graph contains cycles of length 1 (the self-loops within each clique), the matrix is also aperiodic. An extension of Perron–Frobenius for irreducible matrices guarantees that there is a unique real positive eigenvalue equal to the spectral radius and that the other eigenvalues are strictly smaller in absolute value (and possibly complex). We will be referring to λ_1 and λ_2 as the two leading eigenvalues of T.

Definition 1.

For fixed (α, β), we call 𝒯_{α,β} the distribution of adjacency matrices T with off-diagonal blocks T_XY and T_YX having densities α and β, respectively. We call Λ^j_{α,β} the corresponding distribution of each of the eigenvalue real parts Re(λ_j) (with 1 ≤ j ≤ 2N).

It is easy to see that the cardinality |𝒯_{α,β}| = (N² choose αN²)(N² choose βN²). While in general the exact eigenvalues of T ∈ 𝒯_{α,β} depend on the representative (i.e., on the actual exact positions of the 1s within the blocks T_XY and T_YX), all spectra are trivial on the boundary (i.e., for α or β in {0, 1}).

Lemma 1.

Fixing α = 1 fixes the eigenvalues of T, so that Λ^j_{1,β} is a single point, for all 1 ≤ j ≤ 2N. More precisely, the eigenvalues of any T ∈ 𝒯_{1,β} are given by (from largest to smallest in absolute value) λ_1 = N(1 + √β), λ_2 = N(1 − √β), and λ_3 = ⋯ = λ_2N = 0. Similarly, for β = 1, the eigenvalues of any T ∈ 𝒯_{α,1} are given by λ_1 = N(1 + √α), λ_2 = N(1 − √α), and λ_3 = ⋯ = λ_2N = 0.

Proof.
We calculate directly, for α = 1, the eigenvalues λ and eigenvectors (u, v)ᵀ (where u ∈ ℝᴺ and v ∈ ℝᴺ):

  [𝕌, 𝕌; T_YX, 𝕌] (u, v)ᵀ = λ (u, v)ᵀ.

Call S_u = Σ_j u_j and S_v = Σ_j v_j, and R_j = the jth row of the block matrix T_YX, with d_j being the number of 1s in that row. We then have that

  S_u + S_v = λ u_j  and  R_j u + S_v = λ v_j,  for all 1 ≤ j ≤ N.

If λ ≠ 0, then u_1 = ⋯ = u_N = a, implying that S_u = Na. It follows that λ v_j = a d_j + S_v for all j. By summing up and using the fact that Σ_j d_j = βN², we get

  λ S_v = aβN² + N S_v.

Clearly a ≠ 0; otherwise, S_u = 0 and S_v = λ u_j = 0, so that λ v_j = R_j u + S_v = 0 for all j, and the eigenvector would be trivial. We then have that S_v = a(λ − N), hence λ a(λ − N) = aβN² + Na(λ − N), so that (λ − N)² = βN² and λ = N(1 ± √β).

In conclusion, any matrix T ∈ 𝒯_{1,β} has one largest eigenvalue λ_1 = N(1 + √β), with eigenvector given by u_j = 1, v_j = (d_j + N√β)/λ_1, and a second largest eigenvalue λ_2 = N(1 − √β), with eigenvector given by u_j = 1, v_j = (d_j − N√β)/λ_2. The rest of the eigenvalues are zero. Note that in the case of β = 1, λ_2 = 0 as well.
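Lemma 1 is easy to check numerically: for α = 1, resampling the Y-to-X block leaves the two leading eigenvalues exactly at N(1 ± √β). A short sketch, reusing the adjacency helper from section 1.3 (parameter values ours):

```python
import numpy as np

# Numerical check of lemma 1 (alpha = 1): the spectrum does not depend on
# the configuration of the Y-to-X block.
rng = np.random.default_rng(3)
N, beta = 6, 0.5
for _ in range(5):
    T = adjacency(N, alpha=1.0, beta=beta, rng=rng)
    eigs = np.sort_complex(np.linalg.eigvals(T))[::-1]  # descending real part
    print(np.round(eigs[:2].real, 6))  # always N(1 + sqrt(beta)), N(1 - sqrt(beta))
print("predicted:", N * (1 + np.sqrt(beta)), N * (1 - np.sqrt(beta)))
```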

Lemma 2.

Fixing α = 0 fixes the eigenvalues of T so that Λ^j_{0,β} is a single point for all j. The eigenvalues of any T ∈ 𝒯_{0,β} are given by λ_1 = λ_2 = N, λ_j = 0 for 3 ≤ j ≤ 2N. Similarly, for β = 0, the eigenvalues of any T ∈ 𝒯_{α,0} are given by λ_1 = λ_2 = N, λ_j = 0 for 3 ≤ j ≤ 2N.

Proof.

The proof is similar to that of lemma 1.

Clearly, the distributions Λ^j_{α,β} are not trivial in general. If we restricted our interest to finding only the leading eigenvalue of the matrix T, a variety of existing tools could assist us. However, even the computations involved in a task such as expanding the powers Tᵏ (equivalent to finding all paths of length exactly k in the graph) or in approximating the leading eigenvalue using perturbation theory become very complex quite fast (see section 2.2 and appendix B). It is in this light that we now proceed numerically to support a few conjectures.

Our goal is to obtain descriptions of Λ^j_{α,β} for all values of (α, β); in particular, we want to estimate their means and standard deviations and observe how these depend on the values of α and β and on the size N of the network. For small network sizes, the mean and standard deviation of the entire distribution Λ^j_{α,β}, for each α, β, and j, can be computed directly quite efficiently (see Figures 2A and 2B). However, for larger values of N, the factorial increase in the distribution size makes inspecting all configurations computationally very expensive (even for moderate N and intermediate densities, the number of configurations is astronomical, although some will produce identical spectra). So for larger Ns, we estimated the means and standard deviations based on a sample of the distribution. Figures 2B and 2C show a comparison between the whole-distribution and sample-based computations of the standard deviation of the leading eigenvalue. Even for larger values of N, samples of moderate size produced numerically consistent results (as explained later in this section).
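The sample-based estimation scheme can be sketched as follows (sample sizes and parameter values are ours; adjacency is the helper sketched in section 1.3):

```python
import numpy as np

def eig_stats(N, alpha, beta, n_samples, rng):
    """Estimate the mean and std of Re(lambda_1) over T_{alpha,beta} by
    sampling (full enumeration grows combinatorially with N)."""
    leads = []
    for _ in range(n_samples):
        T = adjacency(N, alpha, beta, rng)
        eigs = np.linalg.eigvals(T)
        leads.append(eigs[np.argmax(np.abs(eigs))].real)
    leads = np.array(leads)
    return leads.mean(), leads.std()

rng = np.random.default_rng(4)
mean, std = eig_stats(N=8, alpha=0.5, beta=0.25, n_samples=200, rng=rng)
print(f"mean Re(lambda_1) = {mean:.3f}  "
      f"(N(1+sqrt(ab)) = {8 * (1 + np.sqrt(0.5 * 0.25)):.3f})")
print(f"std  Re(lambda_1) = {std:.4f}")
```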

Figure 2:

Mean and standard deviation for the leading eigenvalue of T, as functions of the densities α and β. (A) For each pair (α, β), the mean of the leading eigenvalue real part was calculated over all T ∈ 𝒯_{α,β} (i.e., over all possible combinatorial configurations with the given densities). (B) For each pair (α, β), the corresponding standard deviation was calculated over all combinatorial configurations in each 𝒯_{α,β}. (C) For each pair (α, β), the standard deviation of the leading eigenvalue was also calculated using a sample of the distribution, obtained by choosing configurations for T randomly.


2.1  Numerical Estimates of Eigenvalue Distributions

A few contexts in the literature on eigenspectra of random graphs relate to our problem. The eigenspectrum of the adjacency matrix of a network with communities is known to have leading eigenvalues that are well separated from the rest of the spectrum (Chauhan et al., 2009).

A result more qualitatively related to our question is due to Juhász (1990). Viewed in Juhász’s general framework, the adjacency matrix T is a block matrix with (weighted) density matrix [1, α; β, 1], whose eigenvalues are 1 ± √(αβ). According to the main theorem in Juhász’s paper, T has two eigenvalues that are large (of order N) in magnitude; the other eigenvalues are close to zero. More precisely, λ_{1,2}/N → 1 ± √(αβ) in probability, while the other eigenvalues are of order N^(1/2+ε) in probability (for any ε > 0).

A first thought is that the expressions N(1 ± √(αβ)) may provide in our case the exact formal expressions for the means in terms of the densities α and β. The formulas look particularly promising, since they seem to naturally extend the boundary expressions obtained in the two lemmas (for α or β in {0, 1}) and since they match tightly our numerical results (as shown in Figures 2A and 3A). Simple direct computations of the spectra for small N immediately reveal, however, that the formulas do not give the exact means for the leading eigenvalue magnitudes, although this may be the case only for finite sizes N; the estimates may be in fact improving with increasing size and may become exact in the limit N → ∞. An interesting question to address is that of understanding not only the shape of the leading eigenvalue distributions but also the source of the error terms in their means compared to N(1 ± √(αβ)), and their own behavior with respect to the size N.
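A small numerical experiment of this kind makes the error terms directly visible; the sketch below reuses the eig_stats helper from above, with a grid and sample size that are arbitrary choices of ours:

```python
import numpy as np

# Compare the sampled mean of Re(lambda_1) against N(1 + sqrt(alpha*beta))
# on a small grid of densities.
rng = np.random.default_rng(5)
N, n_samples = 8, 100
for alpha in (0.25, 0.5, 0.75):
    for beta in (0.25, 0.5, 0.75):
        mean, _ = eig_stats(N, alpha, beta, n_samples, rng)
        predicted = N * (1 + np.sqrt(alpha * beta))
        print(f"a={alpha:.2f} b={beta:.2f}  mean={mean:7.4f}  "
              f"formula={predicted:7.4f}  diff={mean - predicted:+.4f}")
```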

Figure 3:

Mean and standard deviation of eigenvalue magnitudes for N = 8, estimated numerically for each pair of densities (α, β) by considering a random sample of matrices T. (A) The mean eigenvalue real parts are represented as surfaces with respect to (α, β). The two top surfaces fit very closely the expressions N(1 ± √(αβ)); the other surfaces are all close to zero. For each of the first (panel B), second (panel C), and third (panel D) leading eigenvalues (in magnitude), we represent the corresponding standard deviation as a surface with respect to (α, β).


For the rest of the section, we gain numerical insight for sizes up to moderately large N, and provide a few numerically based conjectures on the behavior of the spectrum as the size increases. In section 2.2, we back up analytically some of the conjectures formulated in this section based on our simulations.

Figure 2 illustrates, for a small network size, the standard deviation of λ_1 as a function of the densities. For each pair (α, β), we computed the standard deviation of Re(λ_1) over all configurations in 𝒯_{α,β} (see Figure 2B), as well as over a random sample of representatives of 𝒯_{α,β} (see Figure 2C). Figures 3B to 3D show similar results for N = 8; for each pair (α, β), we used samples of 𝒯_{α,β} to estimate numerically the standard deviations of λ_1, λ_2, and λ_3. In all cases, the surfaces decrease toward the edges, illustrating the narrowing of the corresponding distributions when (α, β) gets closer to the boundary of the unit square. We point out the possible confound that the numerical scheme may be introducing by considering the same sample cardinality both for the larger distributions in the center and for the slimmer distributions near the boundary (i.e., the underestimation due to sampling may be more pronounced around the center of the surface than toward the boundary).

For a fixed N, the distribution for each eigenvalue is clearly largest at intermediate values of α and β. Following the same logic (“higher cardinality likely produces higher variance”), one would expect standard deviations to increase when the size N is increased (recall that the cardinality of 𝒯_{α,β} increases factorially with N). Juhász’s estimate goes along the same lines, claiming an almost everywhere correction term of magnitude N^(1/2+ε), which increases with N. This means that there are almost no outliers out of the Juhász range, even though the spread of each Λ^j_{α,β} remains quite large, of order N, as discussed in section 2.2.

In Figure 4, we illustrate specifically the outcome of our numerical simulations of how the standard deviations behave with increasing N (with approximation algorithms based on sample distributions). In Figures 4A to 4C, we show the standard deviations for the three leading eigenvalues, each represented as a surface with respect to density pairs (α, β). Figure 4D tracks the behavior of the maximum of the surface corresponding to each of the first four eigenvalues over the unit square. Our estimates suggest that for j ≥ 3, the standard deviations of λ_j increase as a power function of N (with power close to 1/2). This is not surprising in light of the existing results already described. However, interestingly, the simulations suggest a decreasing power rule for the standard deviation of λ_1 and a logarithmic increase for the standard deviation of λ_2, implying that for the two large eigenvalues, Juhász’s result can be greatly refined in terms of standard deviations. This is a useful fact to investigate, since narrowness of the distributions with N ensures better separation between the leading eigenvalues and the rest of the spectrum, and subsequently more “recognizable” modularity properties (as discussed in section 4.1). This feature can become quite important when the graph operates as a functional network (e.g., as a brain feedback circuit).

Figure 4:

Illustration of the evolution of the standard deviation of the leading eigenvalues when increasing the network size N. (A) Each surface represents the standard deviation of λ_1 with respect to (α, β), for a different size N (from top to bottom: four increasing sizes N). (B) Each surface represents the standard deviation of λ_2 for a different size N (from lowest to highest: three increasing sizes N). (C) Each surface represents the standard deviation of λ_3, for a different size N (from top to bottom: four increasing sizes N). In order to make all surfaces visible, the figure is vertically flipped (we show minus the standard deviation). (D) The dotted plots show how the global maximum value of each surface evolves when increasing the size N. For each curve, we used a Levenberg–Marquardt algorithm to determine the best functional fit, shown as a solid line (in some cases, the solid line is hard to see because of its almost perfect overlap with the simulation data). Top: The maximum for λ_1 decreases with N (dotted green curve), following the fitted power law (yellow solid curve), with residuals norm = 0.0048. Middle: The maximum for λ_2 increases with N (dotted blue curve), following the fitted logarithmic curve (cyan solid curve), with residuals norm = 0.0013. Bottom: The maxima for λ_3 (dotted red curve) and λ_4 (dotted black curve) increase following fitted power laws (solid pink and solid purple), with residual norms = 0.0034 and 0.0014, respectively. The estimates for panels A, B, and C are based on fixed-size samples. The estimates for panel D are based on samples of size 400.


We summarize our initial theoretical and numerical observations in the case of two connected cliques in the form of a conjecture, which remains open to a more rigorous investigation:

Conjecture 1.

In the case of fully connected modules (i.e., γ = 1), the spectrum of the matrix T varies with respect to the intermodular densities α and β of the blocks T_XY and T_YX as follows:

  1. For all density pairs (α, β), the spectrum has two eigenvalues λ_1 and λ_2 whose mean magnitudes are large (of order N), while the other 2N − 2 eigenvalues have small mean magnitudes (close to zero). As αβ → 1, the second largest eigenvalue λ_2 approaches zero as well.

  2. For each size N and each density pair (α, β), the mean real parts of the two leading eigenvalues (over all adjacency configurations corresponding to (α, β)) are given approximately by N(1 ± √(αβ)), with error terms approaching zero as N → ∞.

  3. For any size N, the standard deviation of each eigenvalue’s real part is a “unimodal” surface, with a point of maximum in the open square (0, 1)², and which is zero when α or β lies in {0, 1}.

  4. For the leading eigenvalue λ_1, the standard deviations for all (α, β) are very small. Moreover, the standard deviation of λ_1 decreases monotonically with N for each fixed pair (α, β). The maximum attainable standard deviation of λ_1 over (α, β) decreases approximately as a power of N. (This transcends qualitatively the corresponding Juhász estimate.)

  5. For the second eigenvalue λ_2, the maximum attainable standard deviation of λ_2 over (α, β) increases logarithmically with N. (This transcends quantitatively the corresponding Juhász estimate.)

  6. For the rest of the eigenvalues λ_j, j ≥ 3, the maximum attainable standard deviation of λ_j over (α, β) increases approximately as N^(1/2). (This is the same as the rate of the almost everywhere error term previously obtained by Juhász.)

Remark 1.

We are in particular interested in understanding the robustness of the leading eigenvalues to changes in configuration once the densities have been fixed. First, one might suspect that this robustness is due to a large extent to the existence of the two fully connected cliques in our graph. In appendix B, we investigate how results change when we relax the full connectedness condition. Second, recall that we are ultimately interested in whether robust features in the adjacency spectrum translate into robustness in dynamics (if we consider the corresponding network of coupled oscillators). In our follow-up paper (briefly previewed in section 4.2 and in appendix A), we discuss this aspect further, as well as the potential connections between adjacency and dynamics classes.

2.2  Estimating the Maximal Eigenvalue

One of our goals is to understand the source of the standard deviation of the maximal eigenvalue λ_1 when considered over the matrix distribution 𝒯_{α,β}. There are a few ways in which one typically proceeds to approximate the maximal eigenvalue of a matrix. In this technical section, we present what two such methods entail in our case and justify our preference for a numerical approach to computing the mean and standard deviation. First, we use Taylor series to expand the eigenvalue and corresponding eigenvector around their values for a matrix with simple known spectrum. Second, we use the power method, initiated at the vector 𝕖, to obtain the leading eigenvector and then compute the corresponding eigenvalue.

Throughout this section, 𝕌 denotes the matrix with all entries equal to 1, 𝕖 denotes the column vector with all entries 1, and sum(·) denotes the function that computes the sum of all entries, for a matrix of any size.

2.2.1  Taylor Expansion Approach

The adjacency matrix for our graph is of the form T = [𝕌, P; Q, 𝕌], where sum(P) = αN² and sum(Q) = βN². At the start of section 2, we found the spectrum of T when (α, β) is on the boundary of the unit square. The spectrum is also easy to find for the matrix T_μ = [𝕌, α𝕌; β𝕌, 𝕌], a nonbinary matrix that averages out all configurations for a fixed pair (α, β).

Lemma 3.

The matrix T_μ has eigenvalues:

  • λ_1 = N(1 + √(αβ)), with corresponding eigenvector v_0 = (√α 𝕖, √β 𝕖)ᵀ

  • λ_2 = N(1 − √(αβ)), with corresponding eigenvector (√α 𝕖, −√β 𝕖)ᵀ

  • λ_3 = ⋯ = λ_2N = 0, with corresponding eigenspace spanned by the vectors (w_k, 0)ᵀ and (0, w_k)ᵀ, where the w_k ∈ ℝᴺ are column vectors with sum(w_k) = 0, for 2 ≤ k ≤ N.

Proof.

The proof is direct, and will be omitted.
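For completeness, here is a brief sketch of the omitted computation for the two nonzero eigenvalues: an eigenvector of T_μ with λ ≠ 0 must be constant on each module (u = a𝕖, v = b𝕖), which reduces the 2N-dimensional problem to a 2 × 2 one.

```latex
% Reduction of T_mu to a 2x2 problem on module-constant vectors u = a*e, v = b*e:
\begin{pmatrix} \mathbb{U} & \alpha\mathbb{U} \\ \beta\mathbb{U} & \mathbb{U} \end{pmatrix}
\begin{pmatrix} a\,\mathbb{e} \\ b\,\mathbb{e} \end{pmatrix}
=
\begin{pmatrix} N(a+\alpha b)\,\mathbb{e} \\ N(\beta a + b)\,\mathbb{e} \end{pmatrix}
\quad\Longleftrightarrow\quad
N\begin{pmatrix} 1 & \alpha \\ \beta & 1 \end{pmatrix}
\begin{pmatrix} a \\ b \end{pmatrix}
= \lambda \begin{pmatrix} a \\ b \end{pmatrix}.
```

The 2 × 2 matrix has characteristic equation (λ/N − 1)² = αβ, giving λ = N(1 ± √(αβ)) with (a, b) = (√α, ±√β); the remaining 2N − 2 directions, spanned by zero-sum vectors on each module, are annihilated by T_μ.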

We are interested in the spectrum of the matrix T, which we write in the form T = T_μ + D, where the difference D = T − T_μ has sum(D) = 0 (in fact, each off-diagonal block D_P = P − α𝕌 and D_Q = Q − β𝕌 has zero entry sum, since the densities are fixed). More generally, we consider the matrix family T_t = T_μ + tD (so that T_t = T, for t = 1). The leading eigenvalue λ(t) of T_t (and its corresponding eigenvector v(t)) can then be expanded as a Taylor series around the original leading eigenvalue λ_0 = N(1 + √(αβ)) of T_μ (with corresponding original eigenvector v_0):

  λ(t) = λ_0 + t λ^(1) + t² λ^(2) + ⋯ ,  v(t) = v_0 + t v^(1) + t² v^(2) + ⋯   (2.1)

(we can assume, without loss of generality, that v^(1) is perpendicular to v_0, v^(2) is perpendicular to both v_0 and v^(1), and so on). One can easily compute the first term (of order t) in the expansion of λ(t) by considering the terms of order t in the eigenvalue equation T_t v(t) = λ(t) v(t):

  (T_μ + tD)(v_0 + t v^(1) + ⋯) = (λ_0 + t λ^(1) + ⋯)(v_0 + t v^(1) + ⋯)   (2.2)

Expanding with respect to t and identifying the coefficients of t, we get

  T_μ v^(1) + D v_0 = λ_0 v^(1) + λ^(1) v_0.

In the basis ℬ = {u_1 = v_0, u_2, …, u_2N} of eigenvectors of T_μ, one can write v^(1) = Σ_k z_k u_k and D v_0 = Σ_k A_k u_k, so that T_μ v^(1) = Σ_k λ_k z_k u_k (where λ_1 = λ_0, λ_2 = N(1 − √(αβ)), and λ_k = 0 for k ≥ 3 denote here the eigenvalues of T_μ). Then our equation becomes

  Σ_k λ_k z_k u_k + Σ_k A_k u_k = λ_0 Σ_k z_k u_k + λ^(1) u_1.

Solving in components, this gives us:

  • λ^(1) = A_1, for k = 1

  • z_k = A_k/(λ_0 − λ_k), for k ≥ 2

Given the form of the eigenvectors in the basis ℬ, we can easily calculate the coefficients A_k that are most useful to continue our computation. For example, on one hand,

  D v_0 = (√β D_P 𝕖, √α D_Q 𝕖)ᵀ,

and on the other hand, in components,

  D v_0 = A_1 (√α 𝕖, √β 𝕖)ᵀ + A_2 (√α 𝕖, −√β 𝕖)ᵀ + (terms with zero entry sums on each module).

Recall that sum(D_P) = sum(D_Q) = 0. Applying the sum operator separately over the top and bottom N entries, we get, respectively,

  (A_1 + A_2) √α N = √β sum(D_P) = 0,  (A_1 − A_2) √β N = √α sum(D_Q) = 0.

This implies that A_1 = A_2 = 0, and subsequently λ^(1) = 0. Hence the order one Taylor polynomial in t of the leading eigenvalue of T_t is simply λ_0 (the same as the leading eigenvalue of T_μ). One can continue computing higher-order terms. For example, we can include terms of order t² in the expansions of both eigenvalue and eigenvector from equation 2.1, and aim to calculate λ^(2):
  (T_μ + tD)(v_0 + t v^(1) + t² v^(2)) = (λ_0 + t² λ^(2))(v_0 + t v^(1) + t² v^(2))   (2.3)

Identifying the coefficients of t and t² and using the fact that λ^(1) = 0, we get two more equations, which can be used to completely determine v^(1) and λ^(2):

  T_μ v^(1) + D v_0 = λ_0 v^(1),   T_μ v^(2) + D v^(1) = λ_0 v^(2) + λ^(2) v_0.   (2.4)

Writing equation 2.4 in components, with v^(2) = Σ_k y_k u_k, we have

  Σ_k λ_k y_k u_k + D v^(1) = λ_0 Σ_k y_k u_k + λ^(2) u_1.

Projecting in the direction of u_1, replacing z_k = A_k/(λ_0 − λ_k) for all k ≥ 2, and also recalling that z_1 = 0, we have that

  λ^(2) = [D v^(1)]_1 = Σ_{k≥2} A_k [D u_k]_1 / (λ_0 − λ_k).   (2.5)
We additionally know that A_2 = 0 and that λ_k = 0, for k ≥ 3. Hence:

  λ^(2) = (1/λ_0) Σ_{k≥3} A_k [D u_k]_1.   (2.6)

But Σ_{k≥3} A_k [D u_k]_1 is in fact nothing but the first component of the matrix D² applied to v_0, written in the basis ℬ. In other words, if we write in components D² v_0 = Σ_k 𝒜_k u_k, then λ^(2) = 𝒜_1/λ_0. To calculate 𝒜_1, we can use the fact that the kernel eigenvectors have zero entry sums over the top and bottom N entries, for all k ≥ 3, and calculate:

  D² v_0 = (√α D_P D_Q 𝕖, √β D_Q D_P 𝕖)ᵀ.

On the other hand, D² v_0 = 𝒜_1 (√α 𝕖, √β 𝕖)ᵀ + 𝒜_2 (√α 𝕖, −√β 𝕖)ᵀ + (zero-sum terms), so that

  (𝒜_1 + 𝒜_2) √α N = √α sum(D_P D_Q),   (𝒜_1 − 𝒜_2) √β N = √β sum(D_Q D_P).

Combining the two, we get

  𝒜_1 + 𝒜_2 = sum(D_P D_Q)/N,   𝒜_1 − 𝒜_2 = sum(D_Q D_P)/N,

hence 𝒜_1 = [sum(D_P D_Q) + sum(D_Q D_P)]/(2N).
In conclusion, we obtained the quadratic Taylor polynomial in t for the eigenvalue λ(t):

  P_2(t) = λ_0 + t² λ^(2),  with λ^(2) = [sum(D_P D_Q) + sum(D_Q D_P)]/(2N²(1 + √(αβ))).   (2.7)

Keep in mind, however, that the difference between P_2(1) and λ_1 (the exact leading eigenvalue of T) is the second-order Taylor remainder evaluated at t = 1, which we have as yet no reason to believe is small (and that is assuming the Taylor series even converges at t = 1). While obtaining additional higher-order terms could provide increasingly confident estimates of λ_1, the mean and standard deviation of these terms become harder to calculate (involving more complex, mixed products of the matrices D_P and D_Q). This would make an exact analytic expression of the mean and standard deviation intractable using this particular method. To illustrate, we study the statistics of λ^(2) over the distribution 𝒯_{α,β} and estimate how much of the mean and standard deviation of λ_1 are captured by its quadratic “approximation” P_2(1):
Proposition 1.

The mean of λ^(2) over all matrix configurations in 𝒯_{α,β} is zero.

Proof.

Consider two binary matrices P and Q with densities α and, respectively, β, and let D_P = P − α𝕌 and D_Q = Q − β𝕌. Then E[sum(D_P D_Q)] = E[sum(D_Q D_P)] = 0, where E represents the mean over all configurations in 𝒯_{α,β}. Indeed, we have E[(D_P)_ij] = 0 and E[(D_Q)_ij] = 0, for all i, j. Since the matrices P and Q are independent, we can easily compute E[(D_P D_Q)_ij] = Σ_k E[(D_P)_ik] E[(D_Q)_kj] = 0.

Since the second correction term λ^(2) in the eigenvalue expansion has zero mean, it does not account for any of the difference between the mean of λ_1 and the value N(1 + √(αβ)), which we observed numerically and have been investigating. This difference must therefore be entirely contained within the higher-order terms in the expansion. This is not true for the standard deviation, however: the standard deviation of λ^(2) in fact seems to explain much of the standard deviation of λ_1. Indeed:

  Var(λ^(2)) = E[(λ^(2))²] = E[(sum(D_P D_Q) + sum(D_Q D_P))²] / (2N²(1 + √(αβ)))².

Expanding the square, the means of products of the type E[sum(D_P D_Q) sum(D_Q D_P)] are nontrivial to express analytically (unlike the first-order quantities involving sum(D_P) and sum(D_Q), which are easy to determine). We therefore compute the standard deviation of λ^(2) numerically, for all pairs (α, β). In Figure 5, we compare it with the standard deviation of the leading eigenvalue λ_1 for all values of (α, β).
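The comparison in Figure 5 can be reproduced in a few lines. The sketch below uses the expression for λ^(2) as reconstructed in equation 2.7 and the adjacency helper from section 1.3 (sizes, densities, and sample counts are our choices):

```python
import numpy as np

def lambda2_correction(T, N, alpha, beta):
    """Second-order Taylor term lambda^(2), per equation 2.7 above:
    (sum(Dp Dq) + sum(Dq Dp)) / (2 N^2 (1 + sqrt(alpha*beta)))."""
    P, Q = T[:N, N:], T[N:, :N]
    Dp, Dq = P - alpha, Q - beta   # subtracts alpha*U, beta*U entrywise
    s = (Dp @ Dq).sum() + (Dq @ Dp).sum()
    return s / (2 * N ** 2 * (1 + np.sqrt(alpha * beta)))

rng = np.random.default_rng(6)
N, alpha, beta = 8, 0.5, 0.5
lead, corr = [], []
for _ in range(300):
    T = adjacency(N, alpha, beta, rng)
    eigs = np.linalg.eigvals(T)
    lead.append(eigs[np.argmax(np.abs(eigs))].real)
    corr.append(lambda2_correction(T, N, alpha, beta))
print("std of lambda_1:   ", np.std(lead))
print("std of lambda^(2): ", np.std(corr))
```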
Figure 5:

Comparison between the standard deviations of the leading eigenvalue λ_1 of T (left) and of its quadratic Taylor polynomial P_2(1) (right). The computations were done for all pairs (α, β).


While the computation of the standard deviation can get complicated fast, one can more easily calculate bounds for the leading eigenvalue around its mean, by noticing that:

Proposition 2.

For any two binary matrices P and Q with densities α and β, sum(PQ) ≤ √(αβ) N³. As a consequence, the second-order term λ^(2) is at most of order N.

Proof.
Consider two binary matrices P and Q with densities of ones α and, respectively, β; that is, sum(P) = αN² and sum(Q) = βN². We have that

  sum(PQ) = Σ_i c_i r_i,

where c_i is the sum of the elements in the ith column of P and r_i is the sum of the elements in the ith row of Q, hence Σ_i c_i = αN² and Σ_i r_i = βN². Using the Cauchy–Schwarz inequality, we can see that

  sum(PQ) = Σ_i c_i r_i ≤ (Σ_i c_i²)^(1/2) (Σ_i r_i²)^(1/2).

Furthermore, each c_i ≤ N, so Σ_i c_i² ≤ N Σ_i c_i = αN³. Similarly, each r_i ≤ N, hence Σ_i r_i² ≤ βN³.

In conclusion, sum(PQ) ≤ √(αβ) N³. We finally use the fact that sum(D_P D_Q) = sum(PQ) − αβN³ (and similarly for sum(D_Q D_P)) to bound the magnitude of λ^(2) in equation 2.7 by a quantity of order N.

The inequality in proposition 2 is not very helpful, however, since it estimates only a maximal variability for the eigenvalue around N(1 + √(αβ)). While we desire to narrow the estimates as N increases, this bound increases with size (a similar problem as with Juhász’s estimate) and fails to explain our numerical observations. However, this bound cannot be improved to a lower order of N since, for all N, one can always find outliers in the distribution at a distance of order N from N(1 + √(αβ)). An explanation that reconciles both observations, as well as Juhász’s almost everywhere bounds, is that these outliers become less and less representative as N increases, causing the distribution to remain narrow, with a small standard deviation that decreases with N.

2.2.2  Power Iteration Approach

It can be shown that the recurrent sequence of vectors obtained from the power algorithm x_{n+1} = T x_n / ‖T x_n‖, initiated at x_0 = 𝕖, converges in our case to the leading eigenvector of T. Then the leading eigenvalue can be computed as the Rayleigh quotient λ_1 = lim_{n→∞} ⟨x_n, T x_n⟩/⟨x_n, x_n⟩. To estimate a term like x_n, we calculate powers Tᵏ of the matrix and then compute the limit of Tᵏ𝕖/‖Tᵏ𝕖‖. As with the Taylor method, this computation quickly becomes intractable when facing the combinatorial problem of evaluating the function sum(·) for mixed products of the blocks P, Q, and their transposes.
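Numerically, of course, the power method itself is straightforward; the combinatorial difficulty arises only when seeking closed-form expectations. A minimal sketch (function name ours; adjacency is the helper from section 1.3):

```python
import numpy as np

def power_leading_eig(T, tol=1e-12, max_iter=10_000):
    """Power iteration started at the all-ones vector e; returns the
    Rayleigh-quotient estimate of the leading eigenvalue."""
    x = np.ones(T.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = T @ x
        x_new = y / np.linalg.norm(y)
        lam_new = (x_new @ T @ x_new) / (x_new @ x_new)
        if abs(lam_new - lam) < tol:
            return lam_new
        x, lam = x_new, lam_new
    return lam

rng = np.random.default_rng(7)
T = adjacency(N=8, alpha=0.5, beta=0.25, rng=rng)
print(power_leading_eig(T), np.max(np.linalg.eigvals(T).real))
```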

2.2.3  Numerical Approach

Due to these apparent difficulties to support the conjectured statements with analytic results for the standard deviation, we resorted to numerical illustrations. These present different but surmountable difficulties, such as the increased potential for inaccuracy in computing the standard deviation based on fixed-size sample distributions. Indeed, recall that the size of increases factorially with N, making it unrealistic to explore all configurations in this distribution. Hence any computationally tractable approach based on sample distributions can only increase the sample sizes with N at a much slower rate than the rate at which the actual size of increases, making these samples potentially less and less reliable with larger sizes. For a brief illustration of the appropriateness of our sample-based computations, we compare in Figure 6 the histogram of (containing, for and , a total of configurations) with that produced by a sample of configurations.
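For an even smaller toy analogue of this comparison (not the case shown in Figure 6), the full distribution can be enumerated directly; a sketch:

```python
import itertools
import numpy as np

# Exhaustive distribution of Re(lambda_1) for a very small case:
# N = 2, gamma = 1, alpha = beta = 1/2 (2 ones in each 4-entry block).
N, k = 2, 2
U = np.ones((N, N))

def block_from(ones_idx):
    b = np.zeros(N * N)
    b[list(ones_idx)] = 1.0
    return b.reshape(N, N)

full = []
for idx_p in itertools.combinations(range(N * N), k):
    for idx_q in itertools.combinations(range(N * N), k):
        T = np.block([[U, block_from(idx_p)], [block_from(idx_q), U]])
        eigs = np.linalg.eigvals(T)
        full.append(eigs[np.argmax(np.abs(eigs))].real)
print(len(full), "configurations; std =", np.std(full))
```

A random subsample of this list can then be compared against the full histogram, mirroring the check in Figure 6.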

Figure 6:

Comparison between the full distribution and a sample-based distribution. (A) Histogram of the entire distribution Λ¹_{α,β} for a small network. (B) Histogram for a random subset of values in Λ¹_{α,β}, computed based on a sample of configurations out of the total of 12,870 configurations.


3  Dependence of Laplacian Spectrum on Network Size and Edge Densities

For our oriented graph with adjacency matrix T, we consider the in-node degree diagonal matrix 𝔇, with

  𝔇_ii = Σ_{j=1}^{2N} T_ij,  1 ≤ i ≤ 2N,

so that the corresponding Laplacian matrix is given by L = 𝔇 − T.
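In code, the construction is immediate (a sketch; the orientation convention, with rows indexing the receiving node, is our assumption, and adjacency is the helper from section 1.3):

```python
import numpy as np

def laplacian(T):
    """In-degree Laplacian L = D - T, with D_ii the ith row sum of T."""
    return np.diag(T.sum(axis=1)) - T

rng = np.random.default_rng(8)
T = adjacency(N=8, alpha=0.5, beta=0.25, rng=rng)
L = laplacian(T)
eigs = np.sort(np.linalg.eigvals(L).real)
print("smallest Laplacian eigenvalue (should be ~0):", eigs[0])
```

Since L has zero row sums, L𝕖 = 0, so zero is always an eigenvalue—the rank degeneracy noted in Figure 7.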

The Laplacian eigenvalue spectrum has been used as a measure of system dynamics. For example, the algebraic connectivity, defined as the second smallest eigenvalue of the discrete Laplacian matrix, is known to play an important role in synchronization dynamics and network robustness. In an effort to study the effect of interdependent topologies on the mutual synchronization of networks, Martin-Hernandez, Wang, Van Mieghem, and D’Agostino (2013) focused on computing and approximating the algebraic connectivity of two interdependent networks and on showing that it experiences a phase transition with the addition of a sufficient number of links between the two interdependent networks. Here, we study the dependence of the Laplacian eigenvalues on the densities (α, β).

Following the same numerical scheme as in section 2, we computed the Laplacian eigenvalues for a sample of configurations, chosen randomly from the large distribution of all configurations corresponding to each fixed density pair (α, β). Based on this sample, we estimated for each (α, β) the mean and standard deviation of the real part of the spectrum, as illustrated in Figures 7 and 8 for N = 8.

Figure 7:

Mean of Laplacian eigenvalue real parts for N = 8, estimated numerically for each pair of densities (α, β) by considering the same random sample of adjacency configurations as in Figure 3A. Ordered by their magnitude: (A) the leading eigenvalues (1–6); (B) the following eigenvalues (7–14); (C) the two smallest eigenvalues (15 and 16). The smallest eigenvalue is zero (the Laplacian matrix is always rank degenerate).


Figure 8:

Standard deviations of the Laplacian eigenvalue real parts. The panels represent, from top to bottom and left to right, the standard deviations for eigenvalues 1, 2, 3, 7, 10, and 14 (ordered by magnitude).

The behavior of the standard deviations of the real parts of the Laplacian eigenvalues with respect to the density pair is very different from that of the standard deviations of the adjacency spectrum. While the adjacency standard deviation surfaces were unimodal over the density domain, decreasing from a central peak toward the boundary, in the case of the Laplacian the surfaces are rippled (Figure 8), with the amplitude and distribution of the ripples depending on a variety of factors (as illustrated in Figure 10 and discussed below).

Such variability in the standard deviation values makes it easier for the system to switch from robust regimes (with a narrow distribution of eigenvalues) to more scattered regimes (with a wider distribution of potential eigenvalues) by introducing a small change in the density pair. Scattered regimes are more sensitive to configuration, since wide changes in the Laplacian spectrum (and implicitly in Laplacian-driven dynamics) are accessible, even under the same density pair, by slightly altering the configuration. This could in principle be viewed as an adaptability feature that makes Laplacian-driven dynamics desirable.

However, the emergent robustness observed in the case of the adjacency leading eigenvalue (standard deviation of the real part decreasing with the size N) does not hold in the case of the leading Laplacian eigenvalue. In fact, the maximum standard deviations over the density domain seem to increase as powers of N for all the eigenvalues in the Laplacian spectrum, after an initial transient phase for very small N (see Figures 9, 10, and 11). As N increases, the central regions of the surface, which rise with N, smooth out and in the process push the ripples toward the borders.

Figure 9:

Illustration of the evolution of the standard deviation of the Laplacian eigenvalue real parts when increasing the network size N. (A) Surfaces for the first eigenvalue, computed for a smaller network size (lower surface) and a larger one (higher surface). (B) Surfaces for the second eigenvalue, for the same two sizes (lower and higher surfaces, respectively).

Figure 10:

Illustration of the evolution of the standard deviation of the Laplacian eigenvalue real parts when increasing the network size N. The surfaces for the first, second, and third eigenvalues are shown top to bottom, as pcolor plots for three increasing values of N (left to right).

Figure 11:

Curve fitting that illustrates the evolution of the Laplacian eigenvalue standard deviation as the size increases. The dotted plots show how the global maximum value of each surface evolves with increasing size N. For each curve, we used a Levenberg-Marquardt algorithm to determine the best power-law fit in N, shown as a solid line, together with the corresponding residual norm. Top: the maximum standard deviation for the first eigenvalue (dotted green curve) and its fit (solid yellow curve). Middle: the maximum for the second eigenvalue (dotted blue curve) and its fit (solid cyan curve). Bottom: the maxima for the third (dotted red curve) and fourth (dotted black curve) eigenvalues, with fits shown as solid pink and solid purple curves, respectively. The estimates are based on fixed-size random samples of configurations.
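The fitting step itself can be sketched as follows, using scipy's curve_fit (whose default solver for unconstrained problems is Levenberg-Marquardt). The data array below is a synthetic stand-in for the measured maxima, not the values behind Figure 11:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(N, a, b):
    return a * N**b

# Synthetic stand-in for the global maxima of the std surfaces at each size.
sizes = np.arange(4.0, 21.0, 2.0)
max_std = 0.3 * sizes**1.4

# Levenberg-Marquardt fit of the power law and its residual norm.
params, _ = curve_fit(power_law, sizes, max_std, p0=(1.0, 1.0), method="lm")
residual_norm = np.linalg.norm(max_std - power_law(sizes, *params))
print("fit a*N**b with a=%.3f, b=%.3f; residual norm %.2e"
      % (params[0], params[1], residual_norm))
```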

Comparing the behavior of the two spectra (adjacency and Laplacian) when varying the densities and increasing N, one could say that the desirable feature of the adjacency model is the robustness of its leading eigenvalue, which strengthens with size, while that of the Laplacian model is its ability to switch swiftly between robust and loose regimes, which degrades with increasing size.

4  Discussion

4.1  Comparison with Random Graph Approaches to Modularity

Nadakuditi and Newman (2012) considered a stochastic, nonoriented network with two communities and computed the ensemble means for the two large eigenvalues of its symmetric adjacency matrix in the large N limit. The method involved first finding the eigenvalues of the modularity matrix and then showing that these are identical, in the large N limit, to the eigenvalues of the adjacency matrix. Their asymptotic expressions z1 and z2 were computed in terms of $c_{\mathrm{in}} = np_{\mathrm{in}}$ and $c_{\mathrm{out}} = np_{\mathrm{out}}$ (where the notations in the original text are $n$ for the matrix size, $p_{\mathrm{in}}$ for the probability of two nodes within a module to be directly connected, and $p_{\mathrm{out}}$ for the probability of two nodes that are not in the same module to be directly connected). More precisely:
$$z_1 = \frac{c_{\mathrm{in}} + c_{\mathrm{out}}}{2} + 1, \qquad z_2 = \frac{c_{\mathrm{in}} - c_{\mathrm{out}}}{2} + \frac{c_{\mathrm{in}} + c_{\mathrm{out}}}{c_{\mathrm{in}} - c_{\mathrm{out}}}.$$
With our notation, the two modules play the role of the two communities, and the adjacency matrix is symmetric. Accounting for the presence of loops at all the nodes in our network (which were excluded in the Nadakuditi-Newman model) amounts to adding the identity matrix to $A$, which shifts every eigenvalue by one:
$$\tilde{z}_1 = z_1 + 1, \qquad \tilde{z}_2 = z_2 + 1,$$
so that the ordering of the two leading eigenvalues is preserved.
In Figure 12, we show a comparison between our results and those of Nadakuditi and Newman (2012) when applied to a nonoriented graph with two fully connected communities, by illustrating on the same axes the values of $z_1$ and $z_2$, the formal means of our two leading eigenvalues, and their close approximations obtained earlier. The approximations approach exactness in the large N limit, at least up to the density at which $z_1$ attains its global minimum, after which it shoots up, detaching from the graph of the formal mean.
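For reference, a small sketch evaluating these asymptotic expressions, as reconstructed above from Nadakuditi and Newman (2012); the parameter values in the sweep are illustrative:

```python
def nn_eigenvalues(n, p_in, p_out):
    # Large-n estimates z1, z2 of the two top adjacency eigenvalues of a
    # two-community stochastic network (our reconstruction of the
    # Nadakuditi-Newman expressions), with c_in = n*p_in, c_out = n*p_out.
    c_in, c_out = n * p_in, n * p_out
    c = (c_in + c_out) / 2.0  # mean degree
    z1 = c + 1.0
    z2 = (c_in - c_out) / 2.0 + (c_in + c_out) / (c_in - c_out)
    return z1, z2

# Sweep the intermodular probability at fixed intramodular density
# (requires p_in != p_out, since z2 diverges as the two approach).
for p_out in (0.05, 0.1, 0.2):
    print(p_out, nn_eigenvalues(100, 0.5, p_out))
```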
Figure 12:

Comparison between our results and those of Nadakuditi and Newman (2012) in the case of a bimodular, nonoriented graph. We compare the values of $z_1$ and $z_2$ computed by Nadakuditi and Newman (solid curves in purple and brown, respectively) with the formal means (dotted curves in blue and green) and their close approximations (solid curves in yellow and cyan). (A) Comparison for a smaller network size. (B) Comparison for a larger network size; here, we used only the approximations, since the formal means are computationally too expensive.
