Abstract

Living systems such as gene regulatory networks and neuronal networks have been hypothesized to operate close to dynamical criticality, where their information-processing ability is optimal at the whole-system level. We investigate how this global information-processing optimality is related to local information transfer at the level of each individual unit. In particular, we introduce an internal adjustment process for the local information transfer and examine whether the former can emerge from the latter. We propose an adaptive random Boolean network model in which each unit rewires its incoming arcs from other units to balance the stability of its information processing, based on a measurement of the local information transfer pattern. First, we show numerically that random Boolean networks can self-organize toward near dynamical criticality in our model. Second, the proposed model is analyzed by a mean-field theory. We find that the rewiring rule has a bootstrapping feature. The stationary indegree distribution is calculated semi-analytically and is shown to be close to dynamical criticality in a broad range of model parameter values.

1 Introduction

A unique and ubiquitous feature of living systems is information processing. For example, in a gene regulatory network, each gene receives inputs from other genes as protein binding patterns on its regulatory sequence, and this information is integrated into the expression level of the protein that it encodes. In a neuronal network, each neuron processes chemical signals and converts them to its electrical activity.

It has been suggested that the information-processing activity in living systems is in a sense balanced at the whole-system level: It is adjusted close to dynamical criticality [22, 27]. Although there has been criticism of this “edge of chaos” hypothesis [39], recent studies based on real-world data seem to support it, at least in some gene regulatory networks [2, 42, 44] and neuronal networks [3, 43, 25] (see [18] for further references). The advantages of criticality for information-processing optimality at the whole-system level have been studied extensively: optimal computational ability [4, 12], maximal sensitivity to external stimuli [24], maximal memory capacity [15], maximal information coordination [45], and optimal balance between information transfer and storage [30], to name but a few.

However, information processing proceeds and is adjusted at the level of each unit in living systems such as gene regulatory and neuronal networks. Lizier et al. [31] proposed a method to quantify the local information transfer associated with information processing at each unit by localizing the transfer entropy [48]. In [31], however, the local transfer entropy is measured by an external observer watching the whole system from outside. Indeed, it has been applied to filter coherent structures in cellular automata patterns to capture their information-theoretic meaning for external observers. The technique of localizing information-theoretic quantities has been extended to identify other features of information processing, such as information storage [33] and information modification [32]. In order to further extend it for quantifying the information transfer for each individual unit, the local information transfer measure should be internalized. Namely, it should be calculated using only information that is available to each individual unit through its links to other units.

How is the local information transfer at each individual-unit level related to information-processing optimality at the whole-system level? In this article, we introduce the internal adjustment process of the local information transfer in the framework of adaptive networks [13, 14] and examine whether it can give rise to optimal information processing at the whole-system level, namely, dynamical criticality. We propose an adaptive random Boolean network model in which each unit balances its connectivity based on the local information transfer pattern measured by the internalized local mutual information. A similar model based on the internalized local transfer entropy was proposed by the author previously [17]. It has been shown that random Boolean networks self-organize toward near dynamical criticality by numerical simulation in that model. However, its theoretical analysis has been elusive so far. In order to make theoretical approaches feasible, here we employ the mutual information instead of the transfer entropy. Another difference from our previous work [17] is that we probabilistically address the case when the measurement of the local information transfer pattern is not informative in the present model.

We adopt a random Boolean network [21, 10] as a test bed because of its simplicity and nontrivial behaviors. Random Boolean networks (RBNs) are a simple model of a system consisting of many interacting units, such as gene regulatory networks and neuronal networks. An RBN consists of N units that can take two states, 0 and 1. We denote the state of the ith unit at time step t by xi(t). Each unit i receives inputs from several units in the system. Let us denote their number by ki. The N units form a directed network in which they are nodes and the input-output relationships are arcs. The number ki is called the indegree of node i. The average value of ki over all nodes is called the average indegree and is denoted by z. The state of the ith unit at time step t + 1 is determined by a Boolean function fi : {0, 1}^ki → {0, 1}:
xi(t + 1) = fi(xi1(t), xi2(t), …, xiki(t)).    (1)
Each Boolean function can be chosen randomly from a specified ensemble of Boolean functions [10]. Here, we consider the simplest case, in which fi is chosen randomly with a bias p (0.5 ≤ p < 1) in the following way: for each input x ∈ {0, 1}^ki, the output fi(x) is 1 with probability p and 0 with probability 1 − p. The Boolean functions fi are quenched. That is, once chosen, they are fixed over all time steps, and the dynamics can be run from multiple initial conditions. All units are updated simultaneously.
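The definitions above can be made concrete in a few lines of Python. This is a minimal sketch, not code from the original model; the function names random_boolean_function and step are ours.

```python
import random

def random_boolean_function(k, p):
    # Quenched lookup table: the output for each of the 2**k input
    # patterns is 1 with probability p, 0 with probability 1 - p.
    return [int(random.random() < p) for _ in range(2 ** k)]

def step(state, inputs, functions):
    # Synchronous update: x_i(t+1) = f_i(x_{i1}(t), ..., x_{iki}(t)).
    new_state = []
    for srcs, f in zip(inputs, functions):
        idx = 0
        for s in srcs:  # encode the input pattern as an integer index
            idx = (idx << 1) | state[s]
        new_state.append(f[idx])
    return new_state

# Example: N = 3 units, each reading the other two (k_i = 2, so z = 2).
N, p = 3, 0.6
inputs = [[1, 2], [0, 2], [0, 1]]
functions = [random_boolean_function(2, p) for _ in range(N)]
state = [random.randint(0, 1) for _ in range(N)]
state = step(state, inputs, functions)
```

Iterating `step` from a fixed initial condition with quenched `functions` traces out the trajectory that eventually falls onto an attractor.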

It is known that the RBN dynamics undergo a continuous phase transition from ordered to chaotic regimes in the thermodynamic limit N → ∞ when the underlying network is given randomly. For example, if the underlying network is generated by the configuration model [41] (random networks with a specified degree distribution), the condition for criticality can be obtained by a mean-field theory [28, 50]. The rewiring rule proposed in Section 2 does not induce a correlation between the indegree and the outdegree of each node, so it is reasonable to assume in our mean-field theory that the indegree and the outdegree of each node are independent. In this case, criticality is determined by the average indegree alone [9, 34]. The RBN dynamics are ordered when z < zc = 1/(2p(1 − p)), in the sense that any perturbation applied to an initial state eventually dies out. When z > zc, perturbations spread indefinitely and the RBN dynamics are called chaotic. The critical condition is given by z = zc. If this condition is satisfied, then a perturbation to a node propagates to exactly one succeeding node on average.

In the following, first we numerically investigate our model and show that it can self-organize toward near dynamical criticality. Second, by appealing to a mean-field approximation of random Boolean dynamics, we derive a master equation governing the evolution of the indegree distribution. We show that the average indegree semi-analytically calculated from the stationary indegree distribution has a value close to criticality in a broad range of the model parameter values. We also discuss the meaning of the rewiring rule of the proposed model and the difference from the previous work on self-organization of networks toward dynamical criticality.

2 Network Evolution Model

In our model, the change in the network topology occurs on a longer time scale than the one for the RBN dynamics on a fixed topology. The network is rewired based on a local measurement of the RBN dynamics [6]. Here, we propose a network rewiring algorithm such that the RBN dynamics are evaluated by the internalized local mutual information. Rewiring occurs so that information processing at each unit balances with respect to local information transfer. The time unit of network topology evolution is called the epoch [29].

Let X1 and X2 be stochastic variables. The mutual information between X1 and X2 is defined by [7]
I(X1 : X2) = ∑x1,x2 p(x1, x2) log2 [p(x1, x2)/(p1(x1)p2(x2))],    (2)
where p(x1, x2) is the joint probability that X1 = x1 and X2 = x2, and p1(x1) and p2(x2) are the marginal probabilities for the first and second variables, respectively. I(X1 : X2) measures the reduction of uncertainty about the value of X1 when the value of X2 is known.
The local mutual information retains only the logarithmic term on the right-hand side of Equation 2, evaluated at a particular realization rather than averaged [31]. Here, we make the local mutual information internal in the following sense. Let us consider a unit i in an RBN and its input neighbors i1, i2, …, iki. The internal local mutual information (ILMI) between units i and ij at time step t is given by
ILMI(ij → i, t) = log2 [p(xi(t + 1), xij(t))/(p(xi(t + 1)) p(xij(t)))],    (3)
where the joint probability on the right-hand side is calculated from the frequency of each pair of states among all the pairs (i, ij), j = 1, 2, …, ki, on the attractor reached from an initial condition. We emphasize that a value of ILMI is assigned to each pair of nodes connected by an arc at each time step. Following [31], the joint probability distribution could instead be calculated from the frequency of each pair of states over all pairs (i, j) such that i receives inputs from j, where i and j range over all nodes. In that work, the information on the frequency of occurrence of a pair of states associated with each interaction is integrated by an external observer watching the whole system and is then distributed to each instance. In contrast, here we calculate the joint probability distribution locally in order to define information transfer for a hypothesized internal observer sitting at unit i, who can access the other units only through its incoming arcs.
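The internal estimation just described, pooling state pairs over all incoming arcs of the unit along the attractor, can be sketched as follows. This is a minimal illustration, assuming binary time series for the unit and its inputs; the helper name ilmi_table is ours.

```python
from collections import Counter
from math import log2

def ilmi_table(out_series, in_series_list):
    # Pool state pairs (x_i(t+1), x_{ij}(t)) over ALL incoming arcs j of
    # unit i along the attractor, as an internal observer at unit i would,
    # then return the local mutual information for each observed pair.
    pairs = Counter()
    for t in range(len(out_series) - 1):
        for in_series in in_series_list:
            pairs[(out_series[t + 1], in_series[t])] += 1
    total = sum(pairs.values())
    joint = {pair: c / total for pair, c in pairs.items()}
    p_out = {w: sum(v for (a, b), v in joint.items() if a == w) for w in (0, 1)}
    p_in = {w: sum(v for (a, b), v in joint.items() if b == w) for w in (0, 1)}
    return {(a, b): log2(v / (p_out[a] * p_in[b]))
            for (a, b), v in joint.items()}
```

A rewiring decision then only needs the signs of the returned values at the pairs actually observed on the attractor.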

A remarkable feature of the local information-theoretic quantities (whether external or internal) is that they can take negative values. A negative value of the local mutual information can be interpreted as misleading information transfer [31]. Here, we speculate that it signals an excess of information transfer to unit i from its incoming arcs, because ILMI(ij → i, t) < 0 is equivalent to p(xi(t + 1)|xij(t)) < p(xi(t + 1)), which means that knowing the state of ij at time step t increases the uncertainty of the state of unit i at time step t + 1, a form of destabilization. Thus, in order to balance the stability of information processing at unit i, it is natural to introduce an adjustment process in which detection of a negative value of ILMI leads to elimination of one of the incoming arcs to unit i. One could eliminate multiple incoming arcs at once; however, here we focus on the simplest case, in which the number of arcs changes one by one, as in previous work [6, 29]. This makes theoretical analysis feasible and is also natural in terms of the physical means of achieving the rewiring process. For example, in gene regulatory networks, one major factor in the rewiring of regulatory relations is changes in cis-regulatory sequences [51], and thus one-by-one changes are the most probable. On the other hand, if no negative local information transfer is found, then unit i is supposed to have the capacity to increase the information to be processed. Thus, the counterpart of the balancing process is that unit i gets a new incoming arc in the absence of negative values of ILMI. However, if ILMI(ij → i, t) = 0 for all j and t, that is, if the inputs to unit i and its outputs are independent, then the measurement of local mutual information is not informative for unit i. In this case, it is natural to choose addition or elimination of an incoming arc to unit i with equal probability.
However, to see the effect of this neutral drift, we parametrize the probability of addition or elimination in this case (see below).

Formally, our network rewiring algorithm is given as follows:

  (i) We generate an initial RBN with uniform indegree ki = k0 for all units i = 1, 2, …, N.

  (ii) An initial state (x1(0), x2(0), …, xN(0)) of the RBN is specified randomly. The RBN dynamics are iterated until a dynamical attractor is reached or no attractor is found within a preassigned number of time steps [29]. If no attractor is reached within the first 2Tmax + T′ time steps, the last Tmax steps are regarded as an attractor. In the following, we set Tmax = 1000 and T′ = 100 for efficient numerical simulation. The attractor length is denoted by Γ.

  (iii) A unit i is chosen randomly. ILMI(ij → i, t) is calculated for all j = 1, 2, …, ki on the attractor.

  (iv) If ILMI(ij → i, t) < 0 for at least one j and one time step t on the attractor, then we eliminate one randomly chosen incoming arc to node i. The case when ILMI(ij → i, t) ≥ 0 for all j = 1, 2, …, ki and all time steps t is divided into two subcases: if ILMI(ij → i, t) > 0 for some j and t, then we create a new arc pointing to node i; if ILMI(ij → i, t) = 0 for all j and t (inputs and outputs are independent, the neutral case), then node i gets a new incoming arc with probability r and loses a randomly chosen existing incoming arc with probability 1 − r. The source node of a new arc is chosen randomly.

  (v) A new Boolean function is reassigned randomly to the chosen unit i.

  (vi) Go back to step (ii).

We impose a reflecting boundary condition at ki = 1: if the chosen unit i has ki = 1, then it always gets a new incoming arc. An alternative procedure at step (v) would be to reassign Boolean functions to all units [29]. We have checked that this alternative update rule produces essentially the same numerical results as those presented in Section 3.
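The decision logic of steps (iii)-(iv), including the reflecting boundary, can be condensed into a single function. This is a sketch under our own naming (rewire_decision), with +1/−1 encoding addition/elimination of an incoming arc.

```python
import random

def rewire_decision(ilmi_values, k_i, r=0.5):
    # Step (iv) of the rewiring algorithm: any negative ILMI value found on
    # the attractor removes an incoming arc; some positive value (and none
    # negative) adds one; the all-zero, neutral case adds with probability r.
    # Reflecting boundary at k_i = 1: such a unit always gains an arc.
    if k_i <= 1:
        return +1
    if any(v < 0 for v in ilmi_values):
        return -1
    if any(v > 0 for v in ilmi_values):
        return +1
    return +1 if random.random() < r else -1
```

Here ilmi_values collects ILMI(ij → i, t) over all incoming arcs j and all time steps t on the attractor.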

3 Numerical Results

In this section, we numerically investigate the behavior of our network evolution model and show that it self-organizes toward near dynamical criticality. We performed numerical simulation of the model for p = 0.5, 0.6, and 0.7. The parameter r is set to the most natural choice, r = 0.5.

Figure 1 shows the time evolution of the average indegree from five different initial uniform indegree values, averaged over 100 simulation runs, for N = 100, 200, and 400. The stationary average indegree z = zth predicted from the mean-field theory in Section 4 and the critical average indegree z = zc are also shown as lines (zth ≈ 1.98, 2.11, and 2.47 and zc = 2.0, 2.08…, and 2.38… for p = 0.5, 0.6, and 0.7, respectively). For all three values of p, the average indegree approaches a constant value slightly above both zth and zc.

Figure 1. 

The time evolution of the average indegree from five different initial uniform indegree values. (a) p = 0.5. (b) p = 0.6. (c) p = 0.7. See the main text for details.


In Figure 2, we plot the deviation of the average indegree z from zth. Here, z was calculated in the following way: for each run, the first 4000 epochs were discarded and the average indegrees in the succeeding 1000 epochs were averaged; this value was then averaged over 100 runs of the model. We found that the deviation slowly decreases as N becomes large for each value of p (z − zth ∼ N^−0.093, N^−0.093, and N^−0.073 for p = 0.5, 0.6, and 0.7, respectively).

Figure 2. 

The dependence on system size N of the deviation of the average indegree z from the stationary average indegree zth predicted from the mean-field theory in Section 4. (a) p = 0.5. (b) p = 0.6. (c) p = 0.7. The values of other parameters are k0 = 3 and r = 0.5.


In Figure 3, the numerically obtained stationary in- and outdegree distributions are compared with the theoretical predictions. The numerical degree distributions were averaged over 100 simulation runs at epoch 4000, starting from k0 = 3. The system size is N = 400. The value of the average indegree is z ≈ 2.26, 2.36, and 2.67 for p = 0.5, 0.6, and 0.7, respectively. The numerical stationary outdegree distribution almost perfectly matches the Poisson distribution with the same average outdegree for each p. This is due to the uniform random choice of the source node of a newly created arc and can be explained by the mean-field analysis of the stationary outdegree distribution [17]. On the other hand, the numerical stationary indegree distribution deviates quantitatively from the prediction of the mean-field theory in Section 4. One likely reason for the deviation is a finite-size effect. However, the qualitative feature that both are narrower than the Poisson distribution with the same average is common.

Figure 3. 

Numerical and theoretical stationary in- and outdegree distributions for (a) p = 0.5, (b) p = 0.6, and (c) p = 0.7. Numerical simulation was performed with parameter values N = 400, k0 = 3, and r = 0.5. The numerical distributions were averaged over 100 evolved RBNs. The theoretical indegree distribution was obtained from the mean-field theory presented in Section 4.


To check that our network evolution algorithm drives finite-size RBNs close to dynamical criticality, we evaluated the slope of the so-called Derrida curve at the origin for evolved RBNs. The Derrida curve examines how the size of a perturbation evolves [9, 22]. For critical RBNs, the slope of the Derrida curve at the origin is 1. If the slope is less than 1, then perturbations eventually die out; this is a defining characteristic of the ordered phase. On the other hand, if it is greater than 1, then the Derrida curve intersects the diagonal line at a nonzero value. This value equals the average size of the perturbations that persist indefinitely and thus indicates the chaotic phase.

Here, the Derrida curve was produced by the following procedure. 100 RBNs with N = 400 were evolved over 4000 epochs from k0 = 3. For each evolved RBN and each perturbation size d(t) (the fraction of nodes whose state is flipped), 200 random initial conditions were prepared. For each initial condition, a random perturbation of the given size was applied. Both the original and the perturbed states were evolved by one time step. The relative size of the difference between the two resulting states, d(t + 1), was averaged and plotted against d(t). In Figure 4, the average of 100 curves obtained by this procedure is plotted for each value of p. We also show theoretical curves obtained from the conventional mean-field theory of RBNs [23]: d(t + 1) = 2p(1 − p) ∑k Pin(k)[1 − (1 − d(t))^k], where Pin(k) is the numerical stationary indegree distribution shown in Figure 3. We can see that the numerically constructed Derrida curves and the mean-field-theoretic curves agree well. The average slope at the origin of the numerically constructed Derrida curve is 1.13 ± 0.08, 1.13 ± 0.09, and 1.12 ± 0.09 for p = 0.5, 0.6, and 0.7, respectively. The mean-field-theoretic curves have the same slopes at the origin as the averages of the numerically constructed Derrida curves. This result suggests that the evolved RBNs reside close to dynamical criticality for the chosen parameter values.
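The mean-field map used for the theoretical curves can be written down directly from the formula above. A minimal sketch (the name derrida_map is ours; the indegree distribution is passed as a dict mapping k to Pin(k)):

```python
def derrida_map(d, p, indegree_dist):
    # Mean-field Derrida map [23]:
    # d(t+1) = 2 p (1 - p) * sum_k P_in(k) * (1 - (1 - d(t))**k).
    return 2 * p * (1 - p) * sum(P * (1 - (1 - d) ** k)
                                 for k, P in indegree_dist.items())
```

Its slope at the origin is 2p(1 − p) ∑k k Pin(k) = 2p(1 − p)z, the average sensitivity, so criticality (slope 1) corresponds to z = 1/(2p(1 − p)).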

Figure 4. 

Averaged Derrida curves for evolved RBNs for p = 0.5, 0.6, and 0.7. The corresponding theoretical curves are also shown. The values of other parameters are N = 400, k0 = 3, and r = 0.5.


4 Mean-Field Theory

In this section, we employ a mean-field approximation of RBN dynamics in each epoch and derive a master equation describing the time evolution of the indegree distribution of the underlying network. We numerically evaluate the stationary indegree distribution and show that the average indegree is close to the critical value for a broad range of the model parameters.

We consider the following mean-field approximation of RBN dynamics. For a chosen unit i in an epoch, we fix a randomly assigned Boolean function fi with bias p over the epoch. However, Boolean functions at the other units are chosen at random with bias p at each time step t in the epoch. In particular, Boolean functions at input units to unit i are randomly changing over all time steps. Since we cannot speak of attractors in this mean-field theory, the joint probability distribution is calculated from the ensemble of all input patterns such that the state of each input is 1 with probability p independently, whereas the decision whether the target unit gets a new incoming arc or loses an existing arc is made on a specific input pattern sampled from the ensemble.

Let Pin(k, e) be the probability that a randomly chosen unit has indegree k in epoch e. The master equation describing the time evolution of Pin(k, e) is
Pin(k, e + 1) = (1 − 1/N)Pin(k, e) + (1/N)pk−1 Pin(k − 1, e) + (1/N)(1 − pk+1)Pin(k + 1, e)    (4)
for k ≥ 1, and
Pin(0, e + 1) = (1 − 1/N)Pin(0, e) + (1/N)(1 − p0)Pin(0, e) + (1/N)(1 − p1)Pin(1, e)    (5)
for k = 0, where pk is the probability that the chosen unit with indegree k gets a new incoming arc when it is selected in step (iii) of the network rewiring algorithm in Section 2. Equation 4 can be interpreted as follows: The left-hand side represents the probability that the chosen unit has indegree k in epoch e + 1. This probability has three contributions from what occurs at epoch e, represented in the right-hand side. The first term of the right-hand side means that the unit is of indegree k at epoch e and is not selected in the network rewiring algorithm (probability 1 − 1/N), and thus it remains of indegree k at epoch e + 1. The second term comprises the following sequence of events: The unit has indegree k − 1 at epoch e; it is selected in step (iii) of the network rewiring algorithm (probability 1/N) and gets a new incoming arc according to step (iv) of the network rewiring algorithm (probability pk−1), and consequently is of indegree k at epoch e + 1. Finally, in the third term, the unit with indegree k + 1 at epoch e is selected (probability 1/N), loses one of its incoming arcs (probability 1 − pk+1), and thus is of indegree k at epoch e + 1.
The stationary indegree distribution Pins(k) is obtained by putting Pin(k, e + 1) = Pin(k, e) = Pins(k) in Equations 4 and 5. It follows that Pins(k) satisfies
pk Pins(k) = (1 − pk+1)Pins(k + 1)    (6)
for k = 0, 1, 2, ….
We assume that p0 = 1. Since p1 = 1 by the boundary condition introduced in Section 2, Pins(0) = 0 follows from Equation 6. For k ≥ 1, we obtain
Pins(k) = Pins(1) ∏j=1…k−1 [pj/(1 − pj+1)],    (7)

where the empty product for k = 1 is understood to be 1 and Pins(1) is determined by the normalization ∑k Pins(k) = 1.
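Given the gain probabilities pk, the stationary indegree distribution can be computed by iterating the balance relation in Equation 6 and normalizing. A minimal Python sketch (the function name stationary_indegree and the truncation k_max are ours):

```python
def stationary_indegree(p_gain, k_max):
    # Iterate the detailed-balance relation p_k P(k) = (1 - p_{k+1}) P(k+1)
    # (Equation 6) up to k_max and normalize.  p_gain[k] is p_k, the
    # probability that a selected unit of indegree k gains an arc; the
    # boundary conditions p_0 = p_1 = 1 force P(0) = 0.
    P = [0.0, 1.0]          # P(0) = 0; set P(1) = 1 and normalize at the end
    for k in range(1, k_max):
        P.append(P[k] * p_gain[k] / (1 - p_gain[k + 1]))
    Z = sum(P)
    return [x / Z for x in P]
```

In practice the truncation k_max must be chosen large enough that the tail probability is negligible (the distribution is narrower than Poisson, so the tail decays quickly).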

The problem is now to compute pk. Suppose that the chosen unit i has indegree ki = k in an epoch. We denote the probability that xi(t + 1) = ω by p(ω), and the conditional probability that xi(t + 1) = ω given xij(t) = ω′ by p(ω|ω′), for 1 ≤ j ≤ k and ω, ω′ ∈ {0, 1}. In the following, we assume for simplicity that the input units to unit i are indexed from 1 to k. A random Boolean function assigned to unit i is denoted by f : {0, 1}^k → {0, 1}. For a k-ary binary vector α = (α1, …, αk) ∈ {0, 1}^k, we define nα = ∑j=1…k αj; that is, nα is the number of entries of α that are equal to 1. The correspondence α ↦ nα is a function from {0, 1}^k to the natural numbers, which we denote by n.

By the mean-field assumption, we have
p(ω) = ∑α: f(α)=ω μ(α)    (8)
and
p(ω|ω′) = [(1/k) ∑j=1…k ∑α: f(α)=ω, αj=ω′ μ(α)] / [p^ω′ (1 − p)^(1−ω′)],    (9)

where, as in the estimation of ILMI in Section 2, the pairs are pooled over all inputs j; this makes p(ω|ω′) independent of j.
After some algebra, we get
p(ω|ω′) − p(ω) = (−1)^(ω+ω′) Cov(n, f) / [k p^ω′ (1 − p)^(1−ω′)],    (10)
where Cov(n, f) = 〈nf〉 − 〈n〉〈f〉 is the covariance between n and f. The average 〈⋯〉 is taken over all random inputs with bias p. For example, we have 〈n〉 = kp. Here we regard n and f as random variables on the discrete probability space ({0, 1}^k, μ), where the probability measure μ is given by μ(α) = p^nα (1 − p)^(k−nα) for α ∈ {0, 1}^k.

From Equation 10, we have the following observations. If Cov(n, f) > 0, then p(ω|ω′) > p(ω) holds if and only if ω = ω′. If Cov(n, f) < 0, then p(ω|ω′) > p(ω) holds if and only if ω ≠ ω′. Inputs and outputs are independent if and only if Cov(n, f) = 0.

Let us consider the following condition for a pair of a Boolean function f : {0, 1}^k → {0, 1} and a Boolean vector α ∈ {0, 1}^k:
(−1)^(f(α)+αj) Cov(n, f) > 0  for all j = 1, 2, …, k.    (11)
When Cov(n, f) ≠ 0, the pairs (f, α) contributing to pk are exactly those satisfying condition (11). Indeed, by the mean-field assumption mentioned at the beginning of this section, we have ILMI(j → i, t) = log2(p(f(α)|αj)/p(f(α))). According to step (iv) of the network rewiring algorithm in Section 2, it is necessary that ILMI(j → i, t) ≥ 0 for all j = 1, …, k in order for unit i to get a new incoming arc. However, since Cov(n, f) ≠ 0, we have p(ω|ω′) ≠ p(ω) for all ω, ω′ ∈ {0, 1} by Equation 10. Thus, when Cov(n, f) ≠ 0, ILMI(j → i, t) ≥ 0 for all j = 1, …, k implies condition (11). The converse is obvious.

We observe that condition (11) cannot hold when 0 < nα < k, regardless of the sign of Cov(n, f). Hence, writing 0 = (0, …, 0) and 1 = (1, …, 1), for a Boolean function f : {0, 1}^k → {0, 1} with Cov(n, f) > 0, the probability that condition (11) holds is (i) (1 − p)^k if f(0) = 0 and f(1) = 0, (ii) p^k + (1 − p)^k if f(0) = 0 and f(1) = 1, (iii) 0 if f(0) = 1 and f(1) = 0, and (iv) p^k if f(0) = 1 and f(1) = 1. This is because α satisfies condition (11) only if f(α) = αj for all j = 1, …, k, according to the observation on Equation 10 above. For example, in case (i), α = 0 is the sole input satisfying condition (11), while in case (ii), only α = 0 and α = 1 fulfill it.

Similarly, when Cov(n, f) < 0, the probability that condition (11) holds is (v) p^k if f(0) = 0 and f(1) = 0, (vi) 0 if f(0) = 0 and f(1) = 1, (vii) p^k + (1 − p)^k if f(0) = 1 and f(1) = 0, and (viii) (1 − p)^k if f(0) = 1 and f(1) = 1. We note that this calculation reveals a meaning of a negative value of ILMI: it signals a discrepancy between an individual input-output behavior of f and the average tendency of f (namely, its positive or negative correlation with n).

pk can be computed by summing the probabilities of the pairs (f, α) with Cov(n, f) ≠ 0 that satisfy condition (11), plus the probability that Cov(n, f) = 0 multiplied by r. This can be done by brute force, but checking the sign of Cov(n, f) for all 2^(2^k) functions is cumbersome even for small k such as k = 5, 6. However, one can reduce the computational cost by obtaining an appropriate expression for Cov(n, f). In the Appendix, we derive an expression for pk that is computationally tractable at least for k ≤ 6. By substituting pk into Equation 7, we obtain the stationary indegree distribution Pins(k).
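For very small k, the brute-force computation described above can be sketched directly. This is an illustrative sketch (the name p_gain is ours): it is practical only up to about k = 4, which is why the Appendix derives a cheaper expression; the boundary condition p1 = 1 is imposed separately; and the floating-point test Cov(n, f) = 0 is reliable only away from cancellations at non-generic p.

```python
from itertools import product

def p_gain(k, p, r):
    # Brute-force p_k: enumerate all 2**(2**k) Boolean functions f, weight
    # each by its probability under bias p, and add the probability that a
    # sampled input pattern alpha satisfies condition (11) when
    # Cov(n, f) != 0, or r when Cov(n, f) = 0 (the neutral case).
    patterns = list(product((0, 1), repeat=k))        # patterns[0] = 00...0
    mu = [p ** sum(a) * (1 - p) ** (k - sum(a)) for a in patterns]
    total = 0.0
    for f in product((0, 1), repeat=2 ** k):
        ones = sum(f)
        pr_f = p ** ones * (1 - p) ** (2 ** k - ones)
        mean_f = sum(m * v for m, v in zip(mu, f))
        mean_nf = sum(m * sum(a) * v for m, a, v in zip(mu, patterns, f))
        cov = mean_nf - k * p * mean_f                # Cov(n, f); <n> = k p
        if cov > 0:    # condition (11) needs f(alpha) = alpha_j for all j
            gain = (1 - p) ** k * (1 - f[0]) + p ** k * f[-1]
        elif cov < 0:  # condition (11) needs f(alpha) != alpha_j for all j
            gain = (1 - p) ** k * f[0] + p ** k * (1 - f[-1])
        else:
            gain = r
        total += pr_f * gain
    return total
```

Here f[0] and f[-1] are the outputs at the all-zero and all-one inputs, the only input patterns for which condition (11) can hold.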

In Figure 5a, the average indegree calculated from Equation 7 for generic values of p and three values of r (r = 0.0, 0.5, and 1.0) is compared with the known critical curve z = zc = 1/(2p(1 − p)) [9, 34]. Here, the values of pk up to k = 12 are used to obtain the average indegree from Equation 7. For k ≤ 6, pk is computed by Equation 20 derived in the Appendix; for 7 ≤ k ≤ 12, it is calculated approximately from 10^6 randomly sampled Boolean functions.

Figure 5. 

The average indegree (a) and the average sensitivity (b) predicted from our mean-field theory. The average indegree is calculated from Equation 7. The known mean-field critical curve z = zc = 1/(2p(1 − p)) is shown in (a).

Figure 5. 

The average indegree (a) and the average sensitivity (b) predicted from our mean-field theory. The average indegree is calculated from Equation 7. The known mean-field critical curve z = zc = 1/(2p(1 − p)) is shown in (a).

By the term “generic” here we mean a value of p chosen uniformly at random from an interval [a, b] with a < b. In Figure 5, we randomly chose one value of p from each non-overlapping interval of length 5.0 × 10^−3. For generic values of p, the equality Cov(n, f) = 0 holds if and only if every coefficient is zero when the right-hand side of Equation 16 is regarded as a polynomial in p. For a non-generic value of p (for example, p = 0.5), Cov(n, f) can vanish even when some of the coefficients are nonzero, namely, when terms of different degrees in p cancel each other. This can result in a large deviation of the average indegree calculated from Equation 7 from zc = 1/(2p(1 − p)) (data not shown).

In the range displayed (0.5 < p < 0.8), the critical curve is contained in the region between r = 0.0 and r = 1.0. In particular, the result with r = 0.5 remains close to the critical curve for small values of p. In Figure 5b, the same data are represented in terms of the average sensitivity s = 2p(1 − p)z [49], which is the average expansion rate of perturbations and whose value is s = 1 at criticality. The deviation from s = 1 becomes larger as p increases. However, it is rather small (within 10 percent) for r = 0.5 for 0.5 < p < 0.8. Thus, we expect that choosing a value of r close to r = 0.5 will result in self-organization of RBNs close to dynamical criticality for a broad range of p.

Small jumps are visible in Figure 5 at some values of p, such as p = 2/3 and p = 3/4. These are due to sign changes in individual terms of the right-hand side of Equation 16 as p varies. Thus, the deviation from the known critical curve for r = 0.5 arises neither from numerical error nor from truncation error; it is systematic.

5 Discussion

In the network evolution algorithm proposed in this article, the criterion for whether an arc is eliminated or created is internally generated according to the local information transfer pattern measured by ILMI. This is in contrast to the previous work based on the activity-based rewiring, where the criterion is given externally [6, 29].

As we noted in Section 4, the criterion is related to the average behavior of the Boolean function at each individual unit. If every input-output relationship on the attractor reached is in accord with the average behavior, then the output unit gets a new incoming arc. Otherwise, it loses an incoming arc. This bootstrapping way of rewiring is reminiscent of the autonomous nature of living systems [37].

So far, many theoretical models have been proposed, based on a variety of ideas on network evolution toward dynamical criticality, from simpler to more realistic ones: activity-dependent rewiring [6, 29], Hebbian learning [5, 8], spike-timing dependent plasticity [38, 47], homeostatic plasticity [11], stability-related control of feedback loops [35], and so on. It has been suggested that one general mechanism common to some of these models is self-organized criticality (SOC) [1, 19] achieved by the time-scale separation between activation and inactivation processes [18, 36]. However, the mechanism of self-organization toward near dynamical criticality in our model seems not to fit the SOC picture proposed in previous work [18, 36], because the time-scale separation between creation and elimination processes of arcs does not hold. Rather, near criticality seems to be achieved through a stochastic balancing process around criticality. Indeed, the probabilities of arc elimination and creation are positive in the subcritical phase and in the supercritical phase, respectively. Further investigation of this issue is left as future work.

As we have seen, the stationary outdegree distribution in our model is Poisson, due to the uniform random choice of the source node of a new arc. However, it can be arbitrarily controlled by incorporating the algorithm proposed in [17]. This will not change the result in this article, because the average indegree determines the phase of random Boolean networks when the indegree and the outdegree of a node are uncorrelated as in our model.

In conclusion, we have proposed an adaptive network model based on internal adjustment of the local information transfer, aimed at stabilizing information processing at the level of each unit. We showed numerically that our model can drive random Boolean networks close to dynamical criticality. By introducing a mean-field theory of random Boolean dynamics, we showed semi-analytically that the theoretical stationary average indegree is close to the critical value in a broad range of model parameter values. Our results highlight the significance of internal adjustment of local information processing in networks for the formation of near-critical dynamics, which are relevant to the functioning of living systems as a whole.

The mean-field theory proposed in this article can be applied to other Boolean dynamics, such as those involving threshold functions [26, 46] or canalizing functions [16, 20, 40], with slight modification. This development is now under investigation and will be presented elsewhere.

Appendix: Deriving an Expression for $p_k$

In this Appendix, we obtain an expression for $p_k$ that can be computed in a reasonable time, at least for small values of $k$ such as $k \le 6$.

As in Section 4, let $f : \{0,1\}^k \to \{0,1\}$ be a Boolean function. The value of $\mathrm{Cov}(n, f)$ is determined by a $(k+1)$-tuple of numbers $(m_0, m_1, \dots, m_k)$. For $l = 0, 1, \dots, k$, $m_l$ is the number of $\alpha \in \{0,1\}^k$ such that $f(\alpha) = 1$ and the number of 1's in its components is $l$. That is,
$$m_l = \#\left\{\alpha \in \{0,1\}^k : f(\alpha) = 1 \text{ and } |\alpha| = l\right\},$$
where $|\alpha|$ denotes the number of components of $\alpha$ equal to 1. Note that $0 \le m_l \le \binom{k}{l}$ for each $l$. For such a Boolean function $f : \{0,1\}^k \to \{0,1\}$, we say that it has type $(m_0, m_1, \dots, m_k)$. Now, the $2^k$ inputs $\alpha$ can be divided into $k + 1$ classes depending on the number of 1's in their components, and the number of $\alpha$'s contained in the class $C_l$, whose members have exactly $l$ components equal to 1, is $\binom{k}{l}$. For each $l = 0, 1, \dots, k$, a Boolean function of type $(m_0, m_1, \dots, m_k)$ must output 1 (probability $p$) for $m_l$ inputs $\alpha$ among the members of $C_l$ and output 0 (probability $1 - p$) for the other inputs in $C_l$. Thus, the probability that a Boolean function of type $(m_0, m_1, \dots, m_k)$ is chosen is
$$P(m_0, \dots, m_k) = \prod_{l=0}^{k} \binom{\binom{k}{l}}{m_l}\, p^{m_l} (1-p)^{\binom{k}{l} - m_l}.$$
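The enumeration of types and of their probabilities can be sketched in a few lines of code (a minimal illustration; the function names `type_probability` and `all_types` are ours, not from the article):

```python
from itertools import product
from math import comb

def type_probability(ms, k, p):
    """Probability that a random Boolean function (each output set to 1
    independently with probability p) has type (m_0, ..., m_k)."""
    prob = 1.0
    for l, m in enumerate(ms):
        c = comb(k, l)  # size of the input class C_l (inputs with l ones)
        prob *= comb(c, m) * p**m * (1 - p)**(c - m)
    return prob

def all_types(k):
    """All (k+1)-tuples (m_0, ..., m_k) with 0 <= m_l <= C(k, l)."""
    return product(*(range(comb(k, l) + 1) for l in range(k + 1)))

# Sanity check: the type probabilities form a distribution.
k, p = 3, 0.5
total = sum(type_probability(ms, k, p) for ms in all_types(k))
print(total)  # sums to 1.0
```

The check works for any bias p, since each class C_l contributes an independent binomial distribution over m_l.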
For a Boolean function $f$ of type $(m_0, m_1, \dots, m_k)$, with $n$ the number of 1's in the input, taken uniformly distributed over $\{0,1\}^k$, we have
$$\mathrm{E}[nf] = \frac{1}{2^k} \sum_{l=0}^{k} l\, m_l$$
and
$$\mathrm{E}[n]\,\mathrm{E}[f] = \frac{k}{2} \cdot \frac{1}{2^k} \sum_{l=0}^{k} m_l.$$
Thus,
$$\mathrm{Cov}(n, f) = \mathrm{E}[nf] - \mathrm{E}[n]\,\mathrm{E}[f] = \frac{1}{2^k} \sum_{l=0}^{k} \left( l - \frac{k}{2} \right) m_l.$$
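The type-based covariance can be cross-checked against a brute-force enumeration over all 2^k inputs, assuming n is the number of 1's in a uniformly distributed input (a sketch; the function names are ours):

```python
from itertools import product

def cov_from_type(ms, k):
    """Cov(n, f) for a Boolean function of type (m_0, ..., m_k), where n is
    the number of 1's in an input drawn uniformly from {0,1}^k."""
    e_nf = sum(l * m for l, m in enumerate(ms)) / 2**k
    e_f = sum(ms) / 2**k
    e_n = k / 2
    return e_nf - e_n * e_f

def cov_direct(f, k):
    """Brute-force cross-check: enumerate all 2^k inputs of f (a dict
    mapping input tuples to 0/1)."""
    inputs = list(product((0, 1), repeat=k))
    e_n = sum(sum(a) for a in inputs) / 2**k
    e_f = sum(f[a] for a in inputs) / 2**k
    e_nf = sum(sum(a) * f[a] for a in inputs) / 2**k
    return e_nf - e_n * e_f

# Example: the majority function on k = 3 inputs has type (0, 0, 3, 1).
k = 3
maj = {a: int(sum(a) >= 2) for a in product((0, 1), repeat=k)}
print(cov_from_type((0, 0, 3, 1), k), cov_direct(maj, k))  # both 0.375
```

The agreement illustrates why only the type, not the full truth table, is needed to evaluate the covariance.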
Denoting $\mathrm{Cov}(n, f)$ by $c_{m_0, \dots, m_k}$, we define
formula
formula
and
formula
Note that $m_0 = \omega$ is equivalent to $f(\mathbf{0}) = \omega$, and $m_k = \omega$ is equivalent to $f(\mathbf{1}) = \omega$, for $\omega = 0, 1$, where $\mathbf{0}$ and $\mathbf{1}$ denote the all-0 and all-1 inputs, respectively. Thus, Equation 17 and Equation 18 correspond to cases (i) to (iv) and cases (v) to (viii) in Section 4, respectively.
Finally, we get an expression for the probability $p_k$ that the chosen unit with indegree $k$ acquires a new incoming arc when it is selected in step (iii) of the network rewiring algorithm in Section 2:
formula

Thus, generating the $2^{2^k}$ Boolean functions $f : \{0,1\}^k \to \{0,1\}$ in computing $p_k$ by brute force is reduced to generating the $\prod_{l=0}^{k} \left( \binom{k}{l} + 1 \right)$ tuples $(m_0, m_1, \dots, m_k)$ in computing $p_k$ via Equation 20. The two numbers $2^{2^k}$ and $\prod_{l=0}^{k} \left( \binom{k}{l} + 1 \right)$ are compared up to $k = 7$ in Table 1.
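The two counts compared in Table 1 can be reproduced directly (a small script; the function names are ours):

```python
from math import comb, prod

def n_functions(k):
    """Number of Boolean functions {0,1}^k -> {0,1}: 2^(2^k)."""
    return 2 ** (2 ** k)

def n_types(k):
    """Number of types (m_0, ..., m_k) with 0 <= m_l <= C(k, l):
    the product of (C(k, l) + 1) over l = 0, ..., k."""
    return prod(comb(k, l) + 1 for l in range(k + 1))

for k in range(1, 8):
    print(k, n_functions(k), n_types(k))
```

For k = 7, this replaces an enumeration of about 3.4 × 10^38 functions by one of about 1.6 × 10^8 types.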

Table 1.

Comparison between $2^{2^k}$ and $\prod_{l=0}^{k} \left( \binom{k}{l} + 1 \right)$ for $1 \le k \le 7$.

k    $2^{2^k}$             $\prod_{l=0}^{k} \left( \binom{k}{l} + 1 \right)$
1    4                     4
2    16                    12
3    256                   64
4    65536                 700
5    4294967296            17424
6    1.84 ⋯ × 10^19        1053696
7    3.40 ⋯ × 10^38        160579584

Acknowledgments

This work was partially supported by JSPS KAKENHI Grant 25280091. The author expresses his gratitude to the two anonymous reviewers for their comments, which greatly improved the manuscript.

References

1. Bak, P. (1996). How nature works. New York: Copernicus.
2. Balleza, E., Alvarez-Buylla, E. R., Chaos, A., Kauffman, S., Shmulevich, I., & Aldana, M. (2008). Critical dynamics in genetic regulatory networks: Examples from four kingdoms. PLoS ONE, 3, e2456.
3. Beggs, J. M., & Plenz, D. (2003). Neuronal avalanches in neocortical circuits. The Journal of Neuroscience, 23, 11167–11177.
4. Bertschinger, N., & Natschläger, T. (2004). Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation, 16, 1413–1436.
5. Bornholdt, S., & Röhl, T. (2003). Self-organized critical neural networks. Physical Review E, 67, 066118.
6. Bornholdt, S., & Rohlf, T. (2000). Topological evolution of dynamical networks: Global criticality from local dynamics. Physical Review Letters, 84, 6114–6117.
7. Cover, T. M., & Thomas, J. A. (1991). Elements of information theory. Hoboken, NJ: Wiley.
8. de Arcangelis, L., Perrone-Capano, C., & Herrmann, H. J. (2006). Self-organized criticality model for brain plasticity. Physical Review Letters, 96, 028107.
9. Derrida, B., & Pomeau, Y. (1986). Random networks of automata: A simple annealed approximation. Europhysics Letters, 1, 45–49.
10. Drossel, B. (2008). Random Boolean networks. In H. G. Schuster (Ed.), Reviews of Nonlinear Dynamics and Complexity. Weinheim, Germany: Wiley-VCH.
11. Droste, F., Do, A.-L., & Gross, T. (2013). Analytical investigation of self-organized criticality in neural networks. Journal of the Royal Society Interface, 10, 20120558.
12. Goudarzi, A., Teuscher, C., Gulbahce, N., & Rohlf, T. (2012). Emergent criticality through adaptive information processing in Boolean networks. Physical Review Letters, 108, 128702.
13. Gross, T., & Blasius, B. (2008). Adaptive coevolutionary networks: A review. Journal of the Royal Society Interface, 5, 259–271.
14. Gross, T., & Sayama, H. (Eds.) (2009). Adaptive networks: Theory, models and applications. Heidelberg: Springer Verlag.
15. Haldeman, C., & Beggs, J. M. (2005). Critical branching captures activity in living neural networks and maximizes the number of metastable states. Physical Review Letters, 94, 058101.
16. Harris, S. E., Sawhill, B. K., Wuensche, A., & Kauffman, S. (2002). A model of transcriptional regulatory networks based on biases in the observed regulation rules. Complexity, 7, 23–40.
17. Haruna, T., & Tanaka, S. (2014). On the relationship between local rewiring rules and stationary out-degree distributions in adaptive random Boolean network models. In H. Sayama, J. Rieffel, S. Risi, R. Doursat, & H. Lipson (Eds.), Artificial Life 14: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems (pp. 419–426). Cambridge, MA: MIT Press.
18. Hesse, J., & Gross, T. (2014). Self-organized criticality as a fundamental property of neural systems. Frontiers in Systems Neuroscience, 8, 1–14.
19. Jensen, H. J. (1998). Self-organized criticality: Emergent complex behavior in physical and biological systems. Cambridge, UK: Cambridge University Press.
20. Kauffman, S., Peterson, C., Samuelsson, B., & Troein, C. (2004). Genetic networks with canalyzing Boolean rules are always stable. Proceedings of the National Academy of Sciences of the United States of America, 101, 17102–17107.
21. Kauffman, S. A. (1969). Metabolic stability and epigenesis in randomly constructed genetic nets. Journal of Theoretical Biology, 22, 437–467.
22. Kauffman, S. A. (1993). The origins of order: Self-organization and selection in evolution. New York: Oxford University Press.
23. Kesseli, J., Rämö, P., & Yli-Harja, O. (2006). Iterated maps for annealed Boolean networks. Physical Review E, 74, 046104.
24. Kinouchi, O., & Copelli, A. M. (2006). Optimal dynamical range of excitable networks at criticality. Nature Physics, 2, 348–352.
25. Kitzbichler, M. G., Smith, M. L., Christensen, S. R., & Bullmore, E. (2009). Broadband criticality of human brain network synchronization. PLoS Computational Biology, 5, e1000314.
26. Kürten, K. E. (1988). Critical phenomena in model neural networks. Physics Letters A, 129, 157–160.
27. Langton, C. G. (1990). Computation at the edge of chaos: Phase transitions and emergent computation. Physica D, 42, 12–37.
28. Lee, D.-S., & Rieger, H. (2007). Comparative study of the transcriptional regulatory networks of E. coli and yeast: Structural characteristics leading to marginal dynamic stability. Journal of Theoretical Biology, 248, 618–626.
29. Liu, M., & Bassler, K. E. (2006). Emergent criticality from coevolution in random Boolean networks. Physical Review E, 74, 041910.
30. Lizier, J. T., Prokopenko, M., & Zomaya, A. Y. (2008). The information dynamics of phase transitions in random Boolean networks. In S. Bullock, J. Noble, R. Watson, & M. A. Bedau (Eds.), Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems (pp. 374–381). Cambridge, MA: MIT Press.
31. Lizier, J. T., Prokopenko, M., & Zomaya, A. Y. (2008). Local information transfer as a spatiotemporal filter for complex systems. Physical Review E, 77, 026110.
32. Lizier, J. T., Prokopenko, M., & Zomaya, A. Y. (2010). Information modification and particle collisions in distributed computation. Chaos, 20, 037109.
33. Lizier, J. T., Prokopenko, M., & Zomaya, A. Y. (2012). Local measures of information storage in complex distributed computation. Information Sciences, 208, 39–54.
34. Luque, B., & Solé, R. V. (1997). Phase transitions in random networks: Simple analytic determination of critical points. Physical Review E, 55, 257–260.
35. MacArthur, B. D., Sánchez-García, R. J., & Ma'ayan, A. (2010). Microdynamics and criticality of adaptive regulatory networks. Physical Review Letters, 104, 168701.
36. Marković, D., & Gros, C. (2014). Power laws and self-organized criticality in theory and nature. Physics Reports, 536, 41–74.
37. Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition: The realization of the living. Dordrecht: Reidel.
38. Meisel, C., & Gross, T. (2009). Adaptive self-organization in a realistic neural network model. Physical Review E, 80, 061917.
39. Mitchell, M., Hraber, P. T., & Crutchfield, J. P. (1993). Revisiting the edge of chaos: Evolving cellular automata to perform computations. Complex Systems, 7, 89–130.
40. Moreira, A. A., & Amaral, L. A. N. (2005). Canalizing Kauffman networks: Nonergodicity and its effect on their critical behavior. Physical Review Letters, 94, 218702.
41. Newman, M. E. J., Strogatz, S. H., & Watts, D. J. (2001). Random graphs with arbitrary degree distributions and their applications. Physical Review E, 64, 026118.
42. Nykter, M., Price, N., Aldana, M., Ramsey, S. A., Kauffman, S. A., Hood, L. E., Yli-Harja, O., & Shmulevich, I. (2008). Gene expression dynamics in the macrophage exhibit criticality. Proceedings of the National Academy of Sciences of the United States of America, 105, 1897–1900.
43. Petermann, T., Thiagarajan, T. C., Lebedev, M. A., Nicolelis, M. A. L., Chialvo, D. R., & Plenz, D. (2009). Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proceedings of the National Academy of Sciences of the United States of America, 106, 15921–15926.
44. Rämö, P., Kesseli, J., & Yli-Harja, O. (2006). Perturbation avalanches and criticality in gene regulatory networks. Journal of Theoretical Biology, 242, 164–170.
45. Ribeiro, A. S., Kauffman, S. A., Lloyd-Price, J., Samuelsson, B., & Socolar, J. E. S. (2008). Mutual information in random Boolean models of regulatory networks. Physical Review E, 77, 011901.
46. Rohlf, T., & Bornholdt, S. (2002). Criticality in random threshold networks: Annealed approximation and beyond. Physica A, 310, 245–259.
47. Rubinov, M., Sporns, O., Thivierge, J.-P., & Breakspear, M. (2011). Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons. PLoS Computational Biology, 7, e1002038.
48. Schreiber, T. (2000). Measuring information transfer. Physical Review Letters, 85, 461–464.
49. Shmulevich, I., & Kauffman, S. A. (2004). Activities and sensitivities in Boolean network models. Physical Review Letters, 93, 048701.
50. Squires, S., Ott, E., & Girvan, M. (2012). Dynamical instability in Boolean networks as a percolation problem. Physical Review Letters, 109, 085701.
51. Tirosh, I., Reikhav, S., Levy, A. A., & Barkai, N. (2009). A yeast hybrid provides insight into the evolution of gene expression regulation. Science, 324, 659–662.

Author notes

Department of Planetology, Graduate School of Science, Kobe University, 1-1 Rokkodaicho, Nada, Kobe 657-8501, Japan. E-mail: tharuna@penguin.kobe-u.ac.jp