Abstract

Given two subsets A and B of nodes in a directed graph, the conduciveness of the graph from A to B is the fraction of the edges outgoing from nodes in A that are incoming to nodes in B. When the graph's nodes stand for the possible solutions to certain problems of combinatorial optimization, choosing its edges appropriately has been shown to lead to conduciveness properties that provide useful insight into the performance of algorithms to solve those problems. Here we study the conduciveness of CA-rule graphs, that is, graphs whose node set is the set of all CA rules given a cell's number of possible states and neighborhood size. We consider several different edge sets interconnecting these nodes, both deterministic and random ones, and derive analytical expressions for the resulting graph's conduciveness toward rules having a fixed number of non-quiescent entries. We demonstrate that one of the random edge sets, characterized by allowing nodes to be sparsely interconnected across any Hamming distance between the corresponding rules, has the potential to provide reasonable conduciveness toward the desired rules. We conjecture that this may underlie the best strategies known to date for discovering complex rules to solve specific problems, all of an evolutionary nature.

1 Introduction

Ever since Wolfram first introduced his four-class qualitative categorization of elementary cellular automata (CA) [23], the problem of distinguishing CA update rules in quantitative terms within both his classification scheme and others (e.g., [16, 17]), with the special aim of identifying the so-called complex rules, has been a central one [12, 22, 24, 26]. Some of the notable approaches have been Langton's edge-of-chaos parameterization of the rule space (through the fraction, denoted by λ, of “non-quiescent” entries in a rule) [15, 17] and Wuensche's input entropy (through estimates, along traces of CA evolution, of the rate at which the various rule entries are used) [2, 25]. Despite criticism (e.g., [18]), these two approaches have remained emblematic, because they have brought important insight into the problem while occupying fundamentally different niches: While the former attempts quantification by focusing on static properties of the rule in question, the latter focuses on the rule's dynamic response over time.

The larger issue, of course, is the identification of complex rules that display specific patterns of behavior or solve specific problems, and in this regard none of the classification-related quantifications seems to have had unequivocal impact. At bottom, what really is behind the search for specific complex rules is an intricate problem of combinatorial optimization that can easily become unmanageable as the cells' possible states go beyond the binary case or their neighborhoods get larger (either with the addition of extra dimensions or otherwise). Not surprisingly, then, so far the success cases have all harnessed nature-inspired stochastic methods, particularly those of evolutionary inspiration [3–7, 10, 19, 21], to navigate the rule space.

The use of “navigate” here is very appropriate, because it evokes with great clarity what combinatorial-optimization methods do, which is precisely to move in a seemingly unstructured solution space seeking its optima. There is structure, however, at least insofar as the method's optimization strategy can be said to establish a relationship among the possible solutions as it moves from one to another. There is also a more elemental type of structure connecting the various solutions together, generally related to transforming one solution into another by means of some simple alteration. Although this latter structure need not be related to any given algorithm's navigation of the solution space, for some problems it has been shown to provide the solution space with certain “conduciveness” characteristics that do nevertheless affect that algorithm's performance [1].

The problems in question are those of coloring an undirected graph's nodes optimally and of finding one of the graph's largest subsets of nodes that only contain non-neighbors (a so-called maximum independent set), both computationally difficult in the sense of NP-hardness. For these two problems, an underlying structure unrelated to the best existing heuristics has been shown to account for intriguing performance transitions that are known to occur as the graph's size changes. Specifically, right before such a transition it is significantly harder to solve the problem than right past it. What happens at the transition is that the aforementioned underlying structure suddenly becomes much more conducive from nonoptimal to optimal solutions.

The notion of conduciveness we refer to is precise and can be formalized as follows [1]. Let D be a directed graph whose nodes stand for solutions to the optimization problem at hand and whose edges reflect the said underlying structure. Given two node subsets, call them A and B, the conduciveness of D from A to B is the fraction of edges that, out of all those that are outgoing from a node of A, are incoming to a node of B. Put differently, if m(A) is the number of edges whose tail nodes are in A, and m(A, B) is the number of edges with tail nodes in A and head nodes in B, then the conduciveness of D from A to B is m(A, B)/m(A). Conduciveness, then, is necessarily a number in the interval [0, 1], since every edge counted in m(A, B) is also counted in m(A). In the two examples mentioned above, A and B partition the node set of D and stand, respectively, for nonoptimal and optimal solutions to the optimization problem being considered.
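
For concreteness, this definition translates directly into code. The following is a minimal sketch in C (the language used later in this article for data generation) that computes m(A, B)/m(A) over an explicitly stored edge list; the toy graph in main and the function name conduciveness are ours, for illustration only.

    #include <stdio.h>

    /* Conduciveness from A to B when A and B partition the node set:
     * m(A,B)/m(A), where m(A) counts edges with tail in A and m(A,B)
     * counts those edges whose head is additionally in B. */
    double conduciveness(int n_edges, const int tail[], const int head[],
                         const int in_B[]) {
        long mA = 0, mAB = 0;
        for (int e = 0; e < n_edges; e++)
            if (!in_B[tail[e]]) {          /* tail node belongs to A */
                mA++;
                if (in_B[head[e]])         /* head node belongs to B */
                    mAB++;
            }
        return mA > 0 ? (double) mAB / mA : 0.0;
    }

    int main(void) {
        /* Toy 4-node digraph with B = {3} and A = {0, 1, 2}. */
        int tail[] = {0, 0, 1, 2, 3};
        int head[] = {1, 3, 2, 3, 0};
        int in_B[] = {0, 0, 0, 1};
        /* Four edges leave A; two of them enter B, so 0.5 is printed. */
        printf("conduciveness = %g\n", conduciveness(5, tail, head, in_B));
        return 0;
    }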

Here we examine the rule space of CA from the standpoint of some directed graphs that can be viewed as providing an underlying structure interconnecting all possible rules. As in the case of the graph problems mentioned above, such structures need not have anything to do with possible algorithms to find specific rules. Instead, we study their conduciveness properties in search of some hint as to why evolutionary approaches to discover specific complex rules have succeeded while others have barely been attempted. Our conclusions will point at certain random structures whose expected conduciveness foreshadows the existence of deterministic structures with the potential of being at least reasonably conducive.

Of course, analyzing any graph's conduciveness requires a precise definition of the sets A and B. In the case of CA this can be really tricky. Say, for example, that we are looking for a complex rule to solve a specific problem. The sets A and B might then be defined in terms of some quantitative description of how well each possible rule solves that problem. This would amount to simply carrying over, to the context of CA rule space, the very same simulation-based approach that was used in the graph-coloring and independent-set problems mentioned above. While we had success in those cases, mainly because scaled-down versions of the problems still exhibit the same transition phenomena we wished to explain, nothing of the sort is expected to happen in the case of CA. In other words, we would be left with impossibly large rule spaces and would never be able to characterize conduciveness properly.

The alternative we adopt in this article is to settle for some characterization of the rule space that, while retaining the ability to relate to a rule's “complexity” to some extent, is also amenable to an analytical portrayal of conduciveness that can be used in lieu of computer simulations. The advantage, clearly, is that entire rule spaces can be examined, at least in some nontrivial cases. Our choice has been to use Langton's parameter λ, so rules in set B are characterized by having the same number of non-quiescent entries. The set A is then the complement of B with respect to the entire rule space. The disadvantage we have to cope with is, naturally, the loss in power to describe complexity, which has also been a problem in Langton's approach.

One curious aspect of this new use of graph conduciveness is that, unlike the original use in [1], the nonoptimal–optimal dichotomy between sets A and B is hardly meaningful. In fact, part of the aforementioned criticism of Langton's criterion has been the nonexistence of a necessary or sufficient condition for any fixed number of non-quiescent entries to qualify a rule as complex (i.e., to be optimal at the classification task). So, throughout our study, we vary this number widely and track its effect on conduciveness, particularly around the value that, had Langton's criterion been indisputably correct, would characterize complexity.

We proceed in the following manner. After a brief review of the CA rule space in Section 2, we introduce, in Section 3, the CA-rule graphs to be studied. We then derive analytical expressions for their conduciveness in Section 4 and study them with the aid of selected plots in Section 5. We discuss the most relevant properties and findings in Section 6 and conclude in Section 7.

2 The CA-Rule Space

We initially consider a one-dimensional CA of binary cells. We assume, moreover, that a cell's input originates from its radius-r neighborhood for some r > 0. In the most general case, the cell's next state is a function of its own current state and those of its 2r neighbors. This function reflects the CA rule in use, which is essentially a table with $2^{2r+1}$ entries, one for each of the possible inputs. Readily, the number of possible rules is $2^{2^{2r+1}}$, depending on how the CA is set to respond to each possible input.

In terms of Wolfram's categorization [23], and assuming r > 1, most of this rapidly growing number of rules lead to chaotic spatiotemporal patterns of evolution or to “uninteresting” patterns that simply repeat themselves over and over indefinitely. The remaining sliver of the rule set, however, leads to nontrivial, often “long-lived” patterns that evoke the presence of some lifelike principle giving rise to complex behavior. The study of rules of the latter type has lain at the heart of artificial life research since the field's inception.

But how does one identify, and at a later stage synthesize, such complex rules? If already for the simplest of CA models (binary, one-dimensional) the number of possible rules grows so rapidly with the size of a cell's neighborhood, how does one go about coping with the myriad problems caused by such combinatorial explosion? An early approach to the identification problem that still echoes strongly is the one offered by Langton [15, 17]. This approach looks for occurrences of the non-quiescent state in the length-$2^{2r+1}$ string of bits that defines a rule, and attempts to characterize complexity on the basis of the fraction of the $2^{2r+1}$ rule entries that such occurrences represent. In broad brushstrokes, at the extremes of the possible range of values (i.e., near 0 or 1) we expect to find uninteresting, periodic behavior, while around 0.5 chaotic patterns dominate. Complexity, Langton suggested, is to be found somewhere near the transition from the former to the latter regime, the so-called edge of chaos.

Another, fundamentally different approach to identifying complex rules was given by Wuensche in terms of his input entropy [2, 25]. This approach considers a sliding window on the spatiotemporal pattern formed by the CA's evolution. As the window is moved forward in time, the frequencies with which the various inputs to the cells occur are registered and used to compute the Shannon entropy for that position of the window. Wuensche's suggestion was to use the mean and the variance of the various entropy figures thus obtained to characterize complexity. In his scheme, low means and variances characterize those rules that merely lead to dull periodicity, while high means and low variances indicate the presence of chaotic behavior. Complex rules, in turn, are to be found amid the rules of medium-valued means and high variances.

As for the problem of synthesizing complex rules, neither Langton's approach nor Wuensche's offers a direct solution. Both provide some degree of inspiration, however, and as indicated in Section 1, it soon became apparent that evolutionary search methods loosely guided by the notions they introduced, as well as by others, were to dominate the scene. But while such methods are motivated by well-founded Darwinian principles, can the rule space be said to possess some underlying structure that explains their success (and theirs alone)? This article is about answering this question, aided by the notion of a graph's conduciveness. We do so in a context more general than that of this section, specifically by dispensing with the need for cells to be binary or arranged one-dimensionally.

Moreover, as anticipated in Section 1, we use Langton's fraction of non-quiescent entries in a rule to partition the rule space into two sets, one of them containing all rules whose fraction has a certain fixed value. Turning rules into the nodes of a graph allows us to study the graph's conduciveness toward that specific set of rules for which Langton's fraction is fixed. What remains to be specified for such a graph, of course, is its edge set. We continue by first turning to this issue.

3 CA-Rule Graphs

We consider CA in which a cell's state is one of the integers in {0, 1,…, s − 1} for some s ≥ 2. We assume that the cell's neighborhood, including the cell itself, has size δ for some δ ≥ 2. It follows that the rule governing the behavior of the CA can be regarded as an L-entry table for $L = s^{\delta}$ and that the number of possible rules is $s^L$. Cells may be arranged with respect to one another one-dimensionally or otherwise, as this is of no concern for what follows. The same holds true of how a cell's δ − 1 neighbors are spatially arranged about it.
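
For illustration, a minimal C sketch of this bookkeeping; representing a rule as a plain int array indexed by input, with 0 as the quiescent state, is our own choice, not anything mandated by the text.

    #include <stdio.h>
    #include <math.h>

    /* A rule is a table of L = s^delta entries, each in {0, ..., s-1};
     * state 0 is taken to be the quiescent one. */
    int count_nonquiescent(const int rule[], int L) {
        int ell = 0;
        for (int i = 0; i < L; i++)
            if (rule[i] != 0)
                ell++;
        return ell;
    }

    int main(void) {
        int s = 2, delta = 3, L = 8;              /* L = s^delta */
        int rule[8] = {0, 1, 1, 0, 1, 0, 0, 1};   /* one of the s^L rules */
        int ell = count_nonquiescent(rule, L);
        printf("ell = %d, lambda = %.3f\n", ell, (double) ell / L);
        /* Combinatorial explosion: for s = 2, delta = 7 (L = 128), the
         * rule space already has 2^128, or about 10^38.5, members. */
        printf("log10 of rule count: %.1f\n", 128 * log10(2.0));
        return 0;
    }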

We focus on the directed graph having one node for each possible rule and edges that join nodes according to one of three criteria. Two of them are deterministic and result in an edge existing from one node to another if and only if that edge's antiparallel counterpart also exists. Using an undirected graph instead would then be entirely acceptable, but we refrain from doing so to adhere to the definition of conduciveness and to maintain compatibility with the third, probabilistic criterion.

The first criterion joins two nodes if and only if the corresponding rules differ in exactly one entry (i.e., if the Hamming distance between them is exactly 1). This is the case of the traditional hypercube, which we denote by H. In H every node has exactly L(s − 1) out-neighbors. The second criterion generalizes the first one by allowing two nodes to be joined if and only if the Hamming distance between the corresponding rules is exactly h for some h ≥ 1. The resulting graph is a generalized hypercube, here denoted by H+. In H+ every node has $\binom{L}{h}(s-1)^h$ out-neighbors, since this is the number of ways in which its rule can be modified by altering exactly h entries.

The third criterion to define the graph's edge set is to allow any two nodes to be joined probabilistically to each other as a function of the Hamming distance between their rules. This is done independently for each of the two possible directions, so two nodes need no longer be joined by an antiparallel edge pair. The result is a random-graph model of the interconnections among the rules. The random graph is denoted by Hr and depends on a probability parameter, call it p. In Hr an edge exists from one node to another with probability $p^h$, where h is the Hamming distance between the nodes' rules. That is, although any Hamming distance is allowed between the rules of two nodes joined by an edge, higher Hamming distances make it exponentially less likely that the edge indeed exists. For fixed h ≥ 1 we expect a node to have $\binom{L}{h}(s-1)^h p^h$ out-neighbors separated from it by a Hamming distance of h, so overall the expected number of a node's out-neighbors is
$$\sum_{h=1}^{L} \binom{L}{h}(s-1)^h p^h = [p(s-1)+1]^L - 1. \qquad (1)$$
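
The closed form above is the binomial theorem applied to $[p(s-1)+1]^L$, minus the h = 0 term. As a quick numerical check, here is a sketch in C99 that accumulates the per-distance terms through lgammal-based logarithms of the binomials (which would otherwise overflow) and compares the total against the closed form.

    #include <stdio.h>
    #include <math.h>

    /* log of binomial(n, k); -infinity when the coefficient is zero */
    static long double log_binom(int n, int k) {
        if (k < 0 || k > n) return -INFINITY;
        return lgammal(n + 1.0L) - lgammal(k + 1.0L) - lgammal(n - k + 1.0L);
    }

    int main(void) {
        int L = 128, s = 2;                 /* delta = 7, binary cells */
        long double p = 0.0001L, sum = 0.0L;
        for (int h = 1; h <= L; h++)        /* term for Hamming distance h */
            sum += expl(log_binom(L, h) + h * logl((s - 1) * p));
        long double closed = powl(p * (s - 1) + 1.0L, (long double) L) - 1.0L;
        printf("sum = %.10Lg, closed form = %.10Lg\n", sum, closed);
        return 0;
    }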

For each of H, H+, and Hr, and for each ℓ such that 0 ≤ ℓ ≤ L, we partition the graph's node set into the two sets A and B, the latter containing all (and only) nodes whose rules have exactly ℓ non-quiescent entries. It follows that B comprises $\binom{L}{\ell}(s-1)^{\ell}$ nodes. We then calculate each graph's conduciveness from set A to set B, denoted respectively by C, C+, and Cr. Owing to the random nature of Hr, Cr is the expected conduciveness from A to B.

4 Conduciveness Formulas

We begin with the hypercube H. In this case the total number of edges outgoing from nodes in set A is the product of the set's cardinality and the number of out-neighbors of each of its nodes, that is, $\left[s^L - \binom{L}{\ell}(s-1)^{\ell}\right] L(s-1)$. Some of these edges are incoming to nodes in set B, belonging to one of two categories.

Edges in the first category go out from nodes of A whose rules have exactly one non-quiescent entry too few when compared to those of B, provided ℓ > 0. The number of such nodes is $\binom{L}{\ell-1}(s-1)^{\ell-1}$, each one accounting for (L − ℓ + 1)(s − 1) B-bound edges, since s − 1 is the number of possibilities for turning each of the L − ℓ + 1 quiescent entries into a non-quiescent one. The second category of B-bound edges comprises edges outgoing from nodes in A that have exactly one non-quiescent entry too many with respect to B, provided ℓ < L. There are $\binom{L}{\ell+1}(s-1)^{\ell+1}$ such nodes, each one contributing ℓ + 1 to the total of B-bound edges, this being the number of non-quiescent entries, each affording one single possibility of being turned into a quiescent one. It then follows that C is given by
$$C = \frac{\delta_{\ell>0}\,\binom{L}{\ell-1}(s-1)^{\ell-1}(L-\ell+1)(s-1) \;+\; \delta_{\ell<L}\,\binom{L}{\ell+1}(s-1)^{\ell+1}(\ell+1)}{\left[s^L - \binom{L}{\ell}(s-1)^{\ell}\right] L(s-1)}, \qquad (2)$$
where each of $\delta_{\ell>0}$ and $\delta_{\ell<L}$ equals 1 if the corresponding inequality holds, and 0 otherwise.
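
Equation 2 can be evaluated directly in long double, the precision used for the plots of Section 5, once the binomials are handled in logarithmic form. A sketch follows, in which the function name C_hyper is ours.

    #include <stdio.h>
    #include <math.h>

    static long double log_binom(int n, int k) {
        if (k < 0 || k > n) return -INFINITY;
        return lgammal(n + 1.0L) - lgammal(k + 1.0L) - lgammal(n - k + 1.0L);
    }

    /* Conduciveness C of the hypercube H, per Equation 2. */
    long double C_hyper(int s, int L, int ell) {
        long double lsm1 = logl((long double) (s - 1));
        long double num = 0.0L;
        if (ell > 0)                   /* A nodes one entry short of B */
            num += expl(log_binom(L, ell - 1) + (ell - 1) * lsm1)
                   * (L - ell + 1) * (s - 1);
        if (ell < L)                   /* A nodes one entry past B */
            num += expl(log_binom(L, ell + 1) + (ell + 1) * lsm1)
                   * (ell + 1);
        long double den = (powl((long double) s, (long double) L)
                           - expl(log_binom(L, ell) + ell * lsm1))
                          * L * (s - 1);
        return num / den;
    }

    int main(void) {
        int s = 2, L = 128;            /* delta = 7 */
        for (int ell = 16; ell < L; ell += 16)
            printf("lambda = %.3Lf, C = %.3Le\n",
                   (long double) ell / L, C_hyper(s, L, ell));
        return 0;
    }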

When we move to the generalized hypercube H+, the number of edges outgoing from nodes in A becomes $\left[s^L - \binom{L}{\ell}(s-1)^{\ell}\right]\binom{L}{h}(s-1)^h$, and we are left with the task of calculating how many of them are incoming to nodes in B. Again we categorize these edges as a function of their end nodes on the A side, but now we require a nonnegative integer parameter, call it k, to proceed.

Each value of k corresponds to nodes in A whose rules have exactly ℓ − h + 2k non-quiescent entries, and consequently L − ℓ + h − 2k quiescent entries (provided k ≠ h/2, since otherwise we would have a B node, not an A node). Simultaneously altering h entries, k of them from non-quiescent to quiescent and the remaining h − k from quiescent to non-quiescent, clearly leads to a node in B, since the number of non-quiescent entries is thus changed to ℓ by the subtraction of k − (h − k) from the original value, ℓ − h + 2k. We denote the number of such nodes in A by f(k); therefore
$$f(k) = \binom{L}{\ell-h+2k}(s-1)^{\ell-h+2k}. \qquad (3)$$
Each of these nodes allows for $\binom{\ell-h+2k}{k}\binom{L-\ell+h-2k}{h-k}$ possibilities to effect the said alterations, each possibility accounting for $(s-1)^{h-k}$ B-bound edges. Denoting by g(k) the overall number of B-bound edges outgoing from a given node in A yields
$$g(k) = \binom{\ell-h+2k}{k}\binom{L-\ell+h-2k}{h-k}(s-1)^{h-k}. \qquad (4)$$
We then have
$$C^{+} = \frac{\displaystyle\sum_{k=0,\,k\neq h/2}^{h} f(k)\,g(k)}{\left[s^L - \binom{L}{\ell}(s-1)^{\ell}\right]\binom{L}{h}(s-1)^h}, \qquad (5)$$
where the possible values of k are carefully controlled to take account of the forbidden cases of k ∉ [0, h] and k = h/2. Note, incidentally, that letting h = 1 causes the numerator of Equation 5 to have at most two summands, one for k = 0 and one for k = 1, in such a way that f(0)g(0) and f(1)g(1) are precisely the summands in the numerator of Equation 2, respectively the leftmost one and the rightmost.
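
In the same spirit, Equations 3 through 5 admit the following C sketch; f(k) and g(k) are kept in logarithmic form and exponentiated only when combined, which long double accommodates for the parameter ranges used in this article (the name C_plus is ours).

    #include <stdio.h>
    #include <math.h>

    static long double log_binom(int n, int k) {
        if (k < 0 || k > n) return -INFINITY;
        return lgammal(n + 1.0L) - lgammal(k + 1.0L) - lgammal(n - k + 1.0L);
    }

    /* Conduciveness C+ of the generalized hypercube H+, per Equations 3-5. */
    long double C_plus(int s, int L, int ell, int h) {
        long double lsm1 = logl((long double) (s - 1));
        long double num = 0.0L;
        for (int k = 0; k <= h; k++) {
            int j = ell - h + 2 * k;   /* non-quiescent entries of an A node */
            if (2 * k == h || j < 0 || j > L)
                continue;              /* k = h/2 would yield a B node */
            long double lf = log_binom(L, j) + j * lsm1;           /* Eq. 3 */
            long double lg = log_binom(j, k) + log_binom(L - j, h - k)
                             + (h - k) * lsm1;                     /* Eq. 4 */
            num += expl(lf + lg);
        }
        long double den = (powl((long double) s, (long double) L)
                           - expl(log_binom(L, ell) + ell * lsm1))
                          * expl(log_binom(L, h) + h * lsm1);
        return num / den;
    }

    int main(void) {
        /* With h = 1 this reproduces the hypercube conduciveness C. */
        printf("C+ = %.3Le\n", C_plus(2, 128, 64, 2));
        return 0;
    }
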
In the case of the random graph Hr, the expected number of edges outgoing from nodes in A is $\left[s^L - \binom{L}{\ell}(s-1)^{\ell}\right]\left\{[p(s-1)+1]^L - 1\right\}$. We calculate how many of these edges are expected to be B-bound by simply summing, on h, the corresponding number we found in the case of the generalized hypercube H+ (i.e., for the fixed Hamming distance h). In this sum every edge is weighted by the probability $p^h$ that defines its existence. We obtain
$$C^{\mathrm{r}} = \frac{\displaystyle\sum_{h=1}^{L} p^h \sum_{k=0,\,k\neq h/2}^{h} f(k)\,g(k)}{\left[s^L - \binom{L}{\ell}(s-1)^{\ell}\right]\left\{[p(s-1)+1]^L - 1\right\}}. \qquad (6)$$
Note that, in the limit as p → 0, Cr tends to C+ for h = 1, that is, the conduciveness C of the hypercube H. To see this, first note that, as the limit is approached, the only value of h still contributing to the numerator of Equation 6 is h = 1. The resulting simplification leads to Equations 2 through 5, once we realize that
$$\lim_{p\to 0} \frac{[p(s-1)+1]^L - 1}{p} = L(s-1). \qquad (7)$$
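
A direct C sketch of Equation 6 follows (quadratic in L, which is harmless at the sizes considered here; the name C_rand is ours). The p → 0 behavior just described can be observed by feeding it ever smaller values of p.

    #include <stdio.h>
    #include <math.h>

    static long double log_binom(int n, int k) {
        if (k < 0 || k > n) return -INFINITY;
        return lgammal(n + 1.0L) - lgammal(k + 1.0L) - lgammal(n - k + 1.0L);
    }

    /* Expected conduciveness Cr of the random graph Hr, per Equation 6. */
    long double C_rand(int s, int L, int ell, long double p) {
        long double lsm1 = logl((long double) (s - 1)), lp = logl(p);
        long double num = 0.0L;
        for (int h = 1; h <= L; h++)
            for (int k = 0; k <= h; k++) {
                int j = ell - h + 2 * k;
                if (2 * k == h || j < 0 || j > L)
                    continue;
                num += expl(h * lp                         /* weight p^h */
                            + log_binom(L, j) + j * lsm1        /* f(k) */
                            + log_binom(j, k) + log_binom(L - j, h - k)
                            + (h - k) * lsm1);                  /* g(k) */
            }
        long double den = (powl((long double) s, (long double) L)
                           - expl(log_binom(L, ell) + ell * lsm1))
                          * (powl(p * (s - 1) + 1.0L, (long double) L) - 1.0L);
        return num / den;
    }

    int main(void) {
        /* As p -> 0 this approaches the hypercube conduciveness C. */
        printf("Cr = %.3Le\n", C_rand(2, 128, 64, 0.0001L));
        return 0;
    }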

5 Conduciveness Plots

In this section we present plots of the hypercube conduciveness C, the conduciveness C+ of the generalized hypercube, and the random-graph conduciveness Cr, as per Equations 2, 5, and 6, respectively. In all plots we normalize the abscissas to lie in the [0, 1] interval by plotting the conduciveness values against λ = ℓ/L, the Langton parameter.

Conduciveness values can be extremely low, depending on the parameters involved, which requires some care both in generating the data to be plotted and in the plotting itself, and even so constrains the parameter values that can be used. We have used a C program to generate the data as long double numbers (96-bit numbers for gcc-4.4.6-3) and gnuplot-4.2.6-2 to do the actual plotting. As gnuplot-4.2.6-2 does not appear to handle numbers of the same precision as those we generated via gcc-4.4.6-3, and also to avoid the use of an automatic logarithmic scale while plotting (we think this facilitates reading figures off the plots), a conduciveness value c is output as LL(c) = log10(−log10 c) for plotting. That is, reading an ordinate LL(c) = y off a plot implies a conduciveness value $c = 10^{-10^y}$.
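
The transform and its inverse, as a small sketch:

    #include <stdio.h>
    #include <math.h>

    /* LL(c) = log10(-log10(c)); reading y off a plot means c = 10^(-10^y). */
    long double LL(long double c)   { return log10l(-log10l(c)); }
    long double unLL(long double y) { return powl(10.0L, -powl(10.0L, y)); }

    int main(void) {
        printf("LL(1e-100) = %Lg\n", LL(1e-100L));   /* prints 2 */
        printf("unLL(2)    = %Lg\n", unLL(2.0L));    /* prints 1e-100 */
        return 0;
    }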

Plots for C are shown in Figures 1 and 2 for s = 2 and s = 3, respectively, and a variety of δ values. Plots for C+ are given in Figures 3 and 4, respectively for s = 2 and s = 3 as well, now for δ fixed at δ = 7 with a variety of h values. Plots for Cr appear in Figures 5 and 6, once again for s = 2 and s = 3, respectively, again for δ = 7 but now varying p. All three figures corresponding to the same value of s have one plot in common: the C plot for δ = 7, which is the same as the C+ plot for δ = 7 with h = 1, which in turn is visually indistinguishable from the Cr plot for δ = 7 with p = 0.0001 (by virtue of the limit given in Equation 7). For ease of reference, note that the integer ordinates 0, 1, 2, and 3 appearing in all figures correspond to conduciveness values of $10^{-1}$, $10^{-10}$, $10^{-100}$, and $10^{-1000}$, respectively. Lower ordinates, therefore, indicate higher conduciveness values.

Figure 1. 

Conduciveness C of the hypercube H for s = 2. Data are given against λ = ℓ/L.

Figure 2. 

Conduciveness C of the hypercube H for s = 3. Data are given against λ = ℓ/L.

Figure 3. 

Conduciveness C+ of the generalized hypercube H+ for s = 2 and δ = 7. Data are given against λ = ℓ/L.

Figure 4. 

Conduciveness C+ of the generalized hypercube H+ for s = 3 and δ = 7. Data are given against λ = ℓ/L.

Figure 5. 

Expected conduciveness Cr of the random graph Hr for s = 2 and δ = 7. Data are given against λ = ℓ/L.

Figure 6. 

Expected conduciveness Cr of the random graph Hr for s = 3 and δ = 7. Data are given against λ = ℓ/L.

6 Discussion

One common term in Equations 2, 5, and 6 is the number of nodes whose rules contain exactly ℓ non-quiescent entries, given by $\binom{L}{\ell}(s-1)^{\ell}$. It is easy to prove that this number is maximized by choosing ℓ = ℓ∗, where
$$\ell^{*} = \left(1 - \frac{1}{s}\right) L, \qquad (8)$$
so that λ∗ = ℓ∗/L = 1 − 1/s, which is precisely the probability of randomly picking a non-quiescent entry in a rule where all s values are equally represented. In his analysis of elementary CA [15, 26], Langton associated the resulting λ∗ with the occurrence of chaotic behavior. Moreover, deviating from the optimal value to either side might first lead to complex rules and eventually to trivial fixed points and limit cycles.
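
This maximization is easy to confirm numerically; a sketch, using the same logarithmic treatment of the binomials as before:

    #include <stdio.h>
    #include <math.h>

    static long double log_binom(int n, int k) {
        if (k < 0 || k > n) return -INFINITY;
        return lgammal(n + 1.0L) - lgammal(k + 1.0L) - lgammal(n - k + 1.0L);
    }

    int main(void) {
        int s = 3, L = 2187;                   /* delta = 7 */
        int best = 0;
        long double bestv = -INFINITY;
        /* argmax over ell of binomial(L, ell) * (s-1)^ell, in log form */
        for (int ell = 0; ell <= L; ell++) {
            long double v = log_binom(L, ell)
                            + ell * logl((long double) (s - 1));
            if (v > bestv) { bestv = v; best = ell; }
        }
        /* prints ell* = 1458 against L(1 - 1/s) = 1458.0 */
        printf("ell* = %d, L(1 - 1/s) = %.1f\n", best, L * (1.0 - 1.0 / s));
        return 0;
    }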

As it happens, it can also be proven that setting ℓ = ℓ∗ maximizes C as well. This is illustrated clearly in Figures 1 and 2, where λ∗ = 0.5 in the former case and λ∗ = 2/3 in the latter, regardless of the value of δ. Thus, if Langton's scheme were to hold as originally proposed, the hypercube H would be much more conducive to chaotic-rule nodes than to those of rules leading to fixed points or limit cycles, with the conduciveness to complex-rule nodes lying somewhere in between.

Figures 1 and 2 also reveal that, for fixed δ, the value of C falls quickly as ℓ is moved to either side of its optimal value, ℓ∗. In fact, this fall eventually leads to staggeringly low conduciveness values for the higher values of δ. Curiously, though, for ℓ = ℓ∗ the decrease in C for increasing δ seems headed toward a limiting value. However, this can be seen to be illusory by examining the case of s = 2 (thus ℓ∗ = L/2 = $2^{\delta-1}$). In this case, we can rewrite $C_{\ell^*}$, the value of C at ℓ = ℓ∗, as
$$C_{\ell^{*}} = \left[\frac{2^L}{\binom{L}{L/2}} - 1\right]^{-1}, \qquad (9)$$
in which the bracketed quantity grows without bound as δ → ∞ (by Stirling's approximation it behaves as $\sqrt{\pi L/2}$, with $L = 2^{\delta}$), so $C_{\ell^*}$ does in fact tend to 0, however slowly.

The generalized hypercube H+, to which Figures 3 and 4 refer, represents an attempt to increase a node's number of out-neighbors in the graph from the L(s − 1) out-neighbors that it has in the hypercube H to $\binom{L}{h}(s-1)^h$ for h > 1. This increase is not steady with h, though: As in the characterization of ℓ∗ above, this number of out-neighbors peaks at h = L(1 − 1/s) and then decreases as h continues to grow toward h = L.

In any event, Figures 3 and 4 indicate that C+ is not improved with respect to C by simply increasing the Hamming distance between the rules of two interconnected nodes. On the contrary, already as s is increased from 2 to 3, we see that conduciveness values worsen dramatically as h is increased, in a clear indication that h = 1 remains the best choice. We also remark that, although for s = 3 the lowering of C+ values occurs monotonically with increasing h, the case of s = 2 is altogether different. Specifically, all C+ values are confined between those for h = 1 and h = 2, with those for odd h coinciding with those of h = 1 and those for even h increasing steadily toward those of h = 1 as well (this can be seen more clearly in the inset to Figure 3).

Similar observations apply to the random graph Hr, as shown in Figure 5. Note initially that here too there has been an attempt to increase a node's number of out-neighbors in the graph, though in the sense of probabilistic expectation and allowing a random mixture of Hamming distances between a node's rule and those of its out-neighbors. In fact, this expected number of out-neighbors, given by $[p(s-1)+1]^L - 1$, can be seen to increase steadily with increasing p. However, increasing the expected number of out-neighbors of a node does not contribute to improving the behavior of Cr, whose values are seen to fall precipitously as p is increased for s = 3 (see Figure 6). The case of s = 2, shown in Figure 5, is sort of an oddity, with all conduciveness values confined between those for a very low value of p and those for about p = 0.02. We show no plots other than those for these two constraining values of p, to avoid cluttering the figure, but remark that Cr first decreases as p is increased from p = 0.0001, then increases back toward its initial value after p = 0.02 is reached.

It might then seem that the best conduciveness is provided by graph H, the hypercube, since C ≥ C+ for any value of h and C ≥ Cr for any value of p. The caveat, of course, is that the latter inequality requires careful interpretation, since Cr is the expected conduciveness over all graphs modeled by the random graph Hr, not the conduciveness of a specific graph. The graphs to which the expected value refers include any graph one may come up with, because Hr allows edges to exist between any two nodes, in either of the two possible directions, regardless of the Hamming distance between their rules. This means that the conduciveness distribution to which the expected value refers, although unknown, spreads very widely toward lower conduciveness values, as shown in Figure 6 for s = 3. The inescapable conclusion is that Hr also models graphs whose conduciveness is higher than C. All we know about these graphs, though, is that they allow mixed Hamming distances between interconnected nodes' rules to coexist and that the best improvements in conduciveness should occur for low values of p.

Allowing diverse Hamming distances to occur in the same graph is more of a key property of Hr than it may at first seem. To see that this is so, let us consider another random-graph model, viz., a directed variation of the Erdős-Rényi model [11, 13], henceforth referred to as DER. In this model, an edge exists between any two distinct nodes, in each of the two possible directions, independently with probability p. In our setting this leads to an expected number of out-neighbors of $p(s^L - 1)$. The expected conduciveness of the DER model can be obtained from that of Hr in Equation 6 by substituting $p$ for $p^h$ in the numerator and $p(s^L - 1)$ for $[p(s-1)+1]^L - 1$ in the denominator. The resulting expression is independent of p, being in fact identical to Cr for p = 1. The latter, of course, is precisely the special case of Hr that is no longer a random graph but the complete graph instead, that is, the graph in which every node has every other node as an out-neighbor. So, although the DER graph also allows for conduciveness values that spread around the expected value and in fact encompass the conduciveness of any other graph, this expected value is as bad as the conduciveness of Hr for p = 1. Therefore the two random-graph models, Hr for low values of p and DER, have expected conduciveness values corresponding to the upper and lower conduciveness extremes of Figure 6, respectively.

7 Conclusions

Applying the notion of a graph's conduciveness when the graph's node set is the solution space of some combinatorial problem and its edge set reflects some elemental relationship among the various solutions is a technique for discovering whether the graph possesses some inherent property that explains the behavior of algorithms to search for specific nodes in it. The idea is very new, dating from its first use in [1], so it is no surprise that we have little more than a phenomenological understanding of how conduciveness relates to search algorithms that in general use totally different sets of edges while seeking nodes belonging to a particular set, say B. One tantalizing interpretation is that, as such an algorithm traverses the node set, occasionally the two edge sets will coincide, and, if the graph is conducive toward B from outside B, then the possibility of reaching B presents itself.

The study contained in [1] seems to support this interpretation, and so does the present one, which has been about traversing the rule space of CA searching for some degree of complexity that, for the sake of permitting an analytical formulation of conduciveness in all graph types investigated, we assumed to be related to the rules' density of non-quiescent entries. Our main conclusion has been that a sparse random-graph topology allowing nodes to be interconnected regardless of the Hamming distance separating the rules they stand for has the potential of providing reasonable conduciveness toward the desired rules, particularly if these rules' number of non-quiescent entries is located not too far from L(1 − 1/s) in the sequence 0, 1,…, L. We think this may be well in line with the success of some evolutionary approaches in locating complex rules to solve specific problems: Although the recombine-and-mutate procedure of such approaches leads them to follow routes of their own through rule space, its stochastic character is bound to allow for successful jumps into the set B whenever the expected conduciveness is sufficiently high.

We find it important to note that, while this study has focused on the most general of cases as far as the rule space is concerned—namely, that in which every configuration of a cell's input constitutes a different rule entry—nothing prevents the same approach from being used, say, in the case of the so-called totalistic rules. In these rules, it is not the configuration of a cell's inputs (i.e., the individual state of each and every cell in the cell's neighborhood) that matters, but rather the sum of all δ states in the neighborhood. In the case of totalistic rules, a rule's number of entries is given by L = (s − 1)δ + 1, since we need an entry for each and every possible value of the sum of the δ states (0 through (s − 1)δ). But given this alternative value of L for the totalistic case, all conduciveness formulas given in Section 4 continue to hold, and one may use them to investigate the conduciveness properties of CA-rule graphs in this case as well.

We conclude by noting that conduciveness studies like this one also constitute a link between the study of CA and that of the so-called complex networks, which over the past decade have been applied so successfully to such a wide range of domains as reported in [8, 9, 20]. As demonstrated by the recent study in [14], the field of artificial life has much to gain from the broadly applicable, essentially stochastic tools that researchers on complex networks have amassed for the analysis of very large ensembles of interconnected elements. Our study of the conduciveness of CA-rule graphs constitutes another example.

Acknowledgments

We acknowledge partial support from CNPq, CAPES, and a FAPERJ BBP grant.

References

1. Barbosa, V. C. (2010). Network conduciveness with application to the graph-coloring and independent-set optimization transitions. PLoS ONE, 5, e11232.
2. Barbosa, V. C., Miranda, F. M. N., & Agostini, M. C. M. (2006). Cell-centric heuristics for the classification of cellular automata. Parallel Computing, 32, 44–66.
3. Bilotta, E., & Pantano, P. (2010). Cellular automata and complex systems. Hershey, PA: Medical Information Science Reference.
4. Bilotta, E., & Pantano, P. (2011). Artificial micro-worlds. Part II: Cellular automata growth dynamics. International Journal of Bifurcation and Chaos, 21, 619–645.
5. Bilotta, E., & Pantano, P. (2011). Artificial micro-worlds. Part III: A taxonomy of self-reproducing 2D CA species. International Journal of Bifurcation and Chaos, 21, 1233–1263.
6. Bilotta, E., & Pantano, P. (2011). Artificial micro-worlds. Part IV: Models of complex self-reproducers. International Journal of Bifurcation and Chaos, 21, 1501–1521.
7. Bilotta, E., Pantano, P., & Vena, S. (2011). Artificial micro-worlds. Part I: A new approach for studying life-like phenomena. International Journal of Bifurcation and Chaos, 21, 373–398.
8. Bollobás, B., Kozma, R., & Miklós, D. (Eds.). (2009). Handbook of large-scale random networks. Berlin: Springer.
9. Bornholdt, S., & Schuster, H. G. (Eds.). (2003). Handbook of graphs and networks. Weinheim, Germany: Wiley-VCH.
10. Crutchfield, J. P., Mitchell, M., & Das, R. (2003). Evolutionary design of collective computation in cellular automata. In J. P. Crutchfield & P. Schuster (Eds.), Evolutionary dynamics (pp. 361–411). Oxford, UK: Oxford University Press.
11. Erdős, P., & Rényi, A. (1959). On random graphs. Publicationes Mathematicae (Debrecen), 6, 290–297.
12. Ilachinski, A. (2001). Cellular automata. Singapore: World Scientific.
13. Karp, R. M. (1990). The transitive closure of a random digraph. Random Structures and Algorithms, 1, 73–93.
14. Khor, S. (2010). Concurrency and network disassortativity. Artificial Life, 16, 225–232.
15. Langton, C. G. (1990). Computation at the edge of chaos: Phase transitions and emergent computation. Physica D, 42, 12–37.
16. Li, W., & Packard, N. (1990). The structure of the elementary cellular automata rule space. Complex Systems, 4, 281–297.
17. Li, W., Packard, N., & Langton, C. G. (1990). Transition phenomena in CA rule space. Physica D, 45, 77–94.
18. Mitchell, M., Crutchfield, J. P., & Hraber, P. T. (1994). Dynamics, computation, and the “edge of chaos”: A re-examination. In G. Cowan, D. Pines, & D. Meltzer (Eds.), Complexity (pp. 497–513). Reading, MA: Addison-Wesley.
19. Mitchell, M., Hraber, P. T., & Crutchfield, J. P. (1993). Revisiting the edge of chaos: Evolving cellular automata to perform computations. Complex Systems, 7, 89–130.
20. Newman, M., Barabási, A.-L., & Watts, D. J. (Eds.). (2006). The structure and dynamics of networks. Princeton, NJ: Princeton University Press.
21. Rocha, L. M., & Hordijk, W. (2005). Material representations: From the genetic code to the evolution of cellular automata. Artificial Life, 11, 189–214.
22. Sutner, K. (2009). Classification of cellular automata. In R. A. Meyers (Ed.), Encyclopedia of complexity and systems science (pp. 755–768). Berlin: Springer.
23. Wolfram, S. (1984). Universality and complexity in cellular automata. Physica D, 10, 1–35.
24. Wolfram, S. (2002). A new kind of science. Champaign, IL: Wolfram Media.
25. Wuensche, A. (1999). Classifying cellular automata automatically: Finding gliders, filtering, and relating space-time patterns, attractor basins, and the Z parameter. Complexity, 4, 47–66.
26. Wuensche, A., & Lesser, M. (1992). The global dynamics of cellular automata. Reading, MA: Addison-Wesley.

Author notes

Universidade Federal do Rio de Janeiro, Programa de Engenharia de Sistemas e Computação, COPPE, Caixa Postal 68511, 21941-972 Rio de Janeiro–RJ, Brazil. E-mail: valmir@cos.ufrj.br