Abstract

In the realm of cellular-automata-based artificial life, configurations that self-reproduce employing signals are a more advanced form than those that reproduce holistically by simple fission. One might view those signals as a very rudimentary genetic code, since they guide the formation of the “child” from its “parent.” In principle, the signals could mutate to deliver a child better suited to reproduction in this artificial world. But even the simplest signal-based replicator discovered so far requires 58 specific CA transition rules that have been carefully hand-crafted to exactly meet the requirement of self-replication. Could such a system emerge without human design? This article considers how that might occur. Specifically, it demonstrates that the application of two heuristics can increase the probability that self-replication will emerge when needed transition rules are completed at random. The heuristics are using minimum total resources (parsimony) and maintaining structural continuity. Finally, the article suggests why parsimony is effective in catalyzing the emergence of self-replication.

1 Introduction

The emergence of a genetic code—that is, the machinery to encode, store, and employ the information needed to synthesize a self-replicating system—was a pivotal step in the development of life. Prior to such a code there was no efficient method to preserve random changes in system components and enable beneficial variations to accumulate and gradually become the norm. Progress was still possible; for how else would the genetic code develop? But it must have been very slow. This article is concerned with how such an unlikely event as the development of a rudimentary genetic code might have been possible, given that the current code uses many large and complex parts that jointly cooperate to achieve the needed goal, but individually are not sufficient. This is a version of a classical problem, arising long before there were chickens or eggs.

Since the early stages leading to a self-replicating system do not seem to have been preserved, insights must come from indirect means, with computer simulation a fruitful technique. One approach is to set up a simple, abstract model of a system that is capable of self-replication, but does not initially have that property, to see what conditions favor the emergence of self-replication. The system studied in this article makes no attempt to represent the chemistry and physics of the prebiotic world; it is purely an investigation into how a certain abstract system is able to improve the odds of finding self-replication.

The current study uses the well-known technique of cellular automata (CA) developed originally by John von Neumann in the 1940s [13] and refined by others [2, 6, 7]. In brief, a rectangular space is subdivided into a lattice of cells arranged in rows and columns, each cell containing either an inactive symbol (often represented by a space ‘ ’ or by ‘.’) or an active symbol drawn from a set of graphical symbols, such as {‘O’, ‘>’, ‘V’, ‘<’, ‘^’, ‘L’, ‘#’}.

Time is divided into discrete clock ticks. Starting with a given arrangement of symbols in the lattice at time T, the content of a successor lattice at time T + 1 is computed point by point, using a set of rules of the form:

IF a cell currently has symbol CT and its near neighbors have symbols {NT1, NT2, NT3,…}, then that center cell will have symbol CT+1 at time T+1.

Researchers have employed different ways to define “neighboring cells”; von Neumann used the four nondiagonal cells (North, East, South, West) with respect to the Center. This study uses the von Neumann neighborhood:
Neighborhood(C) = {C, N, E, S, W}, where N, E, S, and W are the cells immediately North, East, South, and West of the center cell C.
For example, one such rule (drawn from the rule set in the Appendix) can be represented as
‘....>’ → ‘^’
where the five symbols before the arrow specify the center and its four neighbors, and the symbol after the arrow is the new center. In the above example, the symbols have specific values. A metasymbol, such as ‘_’, can be used to mean “match any symbol.”
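The update scheme just described can be sketched in a few lines of Python. This is a minimal illustration only: the lattice encoding, the toy rule, and the key ordering (center, North, East, South, West) are assumptions for the sketch, not the study's actual implementation.

```python
# A minimal sketch of one synchronous CA update step over a von Neumann
# neighborhood. The lattice encoding, symbol set, and default behavior
# are illustrative assumptions, not the implementation used in the study.

INACTIVE = '.'

def step(lattice, rules):
    """Return the successor lattice; cells outside the grid read as inactive."""
    rows, cols = len(lattice), len(lattice[0])

    def at(r, c):
        return lattice[r][c] if 0 <= r < rows and 0 <= c < cols else INACTIVE

    successor = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Key order: center, North, East, South, West.
            key = (at(r, c), at(r - 1, c), at(r, c + 1), at(r + 1, c), at(r, c - 1))
            # Default: preserve the center when no specific rule applies.
            row.append(rules.get(key, at(r, c)))
        successor.append(row)
    return successor

# Toy rule: an inactive cell with an 'O' to its north becomes 'O'.
rules = {('.', 'O', '.', '.', '.'): 'O'}
lattice = [['O', '.'],
           ['.', '.']]
successor = step(lattice, rules)   # the 'O' is copied one cell south
```

Note that all cells are updated simultaneously from the time-T lattice, which is what makes the update synchronous.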

The transition rules are invariant to one or more quarter turns of the neighbors. As a result, one primary rule can give rise to three additional rules unless the quarter turns do not change the neighborhood due to symmetry.
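The expansion of one primary rule into its rotated variants can be sketched as follows. Only the neighbor positions are permuted here; whether directional symbols such as ‘>’ themselves rotate is a detail this hypothetical illustration deliberately leaves aside.

```python
# Sketch of expanding one primary rule into its quarter-turn variants.
# Rules are keyed (C, N, E, S, W); only neighbor positions are permuted,
# an illustrative simplification.

def rotations(rule):
    """Return the distinct quarter-turn variants of a primary rule."""
    (c, n, e, s, w), nxt = rule
    variants = set()
    for _ in range(4):
        variants.add(((c, n, e, s, w), nxt))
        n, e, s, w = w, n, e, s   # one quarter turn clockwise
    return variants

# A fully symmetric neighborhood collapses to a single rule...
symmetric = (('.', 'O', 'O', 'O', 'O'), '^')
# ...while an asymmetric one yields four distinct rules.
asymmetric = (('.', '>', '.', '.', '.'), '^')
```

Applying `rotations` to `symmetric` returns one rule; applying it to `asymmetric` returns four, matching the text's observation about symmetric neighborhoods.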

We will use the term configuration to mean a set of active cells within a CA lattice that have at least one active neighbor in the same set. There may be a single configuration in a lattice (as in Figure 1) or multiple configurations (as in Figure 2). These configurations were taken from Reggia et al. [10, Figure 3].

Figure 1. 

Lattice with one configuration.

Figure 2. 

Lattice with two configurations.

We define a rule set as the set of rules that govern the transition from the content of the lattice at one instant of time to the next. All cells share the same rule set at a given time tick. In most CA applications, the rule set is complete in the sense that it covers every possible arrangement of symbols that could be placed within the lattice. Often this is accomplished with a default rule, such as

IF no specific rule applies, the center is preserved.

In this study, however, completeness is not a requirement; it is sufficient that the rule set handle only those arrangements of active cells that actually arise when an initial arrangement goes through a sequence of time steps. Thus, for the current work the rule set may change over time (the rule set expands). The number and identity of the rules required within a rule set depend only upon the initial arrangement and the number of subsequent time steps.
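The idea of an incomplete, expanding rule set can be sketched as a lookup that records any neighborhood with no matching rule, in order of first occurrence, rather than requiring completeness up front. The encoding below is an illustrative assumption, not the study's code.

```python
# Sketch of an incomplete, expanding rule set: unknown neighborhoods are
# recorded (in order of first occurrence) instead of being required up front.

def lookup(rules, key, missing):
    """Return the next center symbol, or None while noting a needed rule."""
    if key in rules:
        return rules[key]
    if key not in missing:
        missing.append(key)   # identity retained; next symbol still unknown
    return None

rules = {('.', '.', '.', '.', '.'): '.'}   # the one trivial initial rule
missing = []
lookup(rules, ('.', '.', '.', '.', '.'), missing)   # covered by the rule set
lookup(rules, ('.', 'O', '.', '.', '.'), missing)   # a new rule is needed
lookup(rules, ('.', 'O', '.', '.', '.'), missing)   # not recorded twice
```

After these calls, `missing` holds exactly one entry: the identity (center and neighbors) of the one new rule required.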

Over the years various configurations (and corresponding rule sets) have been designed or discovered that self-replicate, that is, produce copies of themselves. “Produce copies of themselves” means that an initial configuration goes through a series of time steps (a life cycle) resulting in a lattice containing more than one copy of that initial configuration. The copies may have a different orientation from the original. The original configuration may survive to become one of the copies, or may be lost in the process of producing two or more new copies. Since the original and copies continue to go through their life cycle, it may be that the two copies are not simultaneously at the same stage of that cycle, and at any moment may not be able to be superimposed upon each other (the usual meaning of “copies”).

We will call a combination of a configuration and rule set an object.

In these terms, self-replicating objects fall into three general classes. The first reproduces by fission: The reproduction is simple (can be achieved with a small configuration and few rules) and is fast (has a short life cycle). It is also holistic in the sense that all of the cells participate in the reproduction simultaneously. An example discovered by the author is given in Figure 3.

Figure 3. 

An example of reproduction by fission.

Self-replication by fission is easy to achieve, but does not seem to be subject to evolution by mutation and selection. It lacks a genetic code—a means to encode, store, and employ the information needed to drive its own replication. Even if we identify the rule set itself as functionally equivalent to a genetic code, it seems unlikely that a simultaneous mutation to just one or a few rules would generate a new object that would then continue to replicate.

In stark contrast, there are objects, such as certain implementations of von Neumann's universal computer-constructor (UCC) introduced by Codd [2] and the various simplifications described by Hutton [6] and others, that can replicate under the control of a string of genelike symbols (tape). In these objects, there is a set of signals generated from the tape that move within a sheathed pathway, instructing the end of the pathway to build the new configuration. General gene-based replicators are universal constructors capable of producing any configuration, with self-replication being just a special case. They are the highest level of artificial self-replication.

Intermediate between fission-based replicators and general gene-based replicators are those objects that can reproduce themselves via genelike information, but are not capable of general manufacture. Among the best known of these are Langton's loops [7] and various alternative versions described by Byl [1], Reggia et al. [10], and Tempesti [12]. This article uses Reggia's minimal configuration UL06W8V to typify the class. (The initial configuration is shown above in Figure 1; the final configuration is in Figure 2.)

Reproduction in UL06W8V is not holistic; it is controlled by a series of moving signals that direct the construction of the copy. To be more precise, reproduction occurs in two distinct phases. The first phase utilizes signals circulating through the configuration's four-symbol loop to supply the information that directs the construction of a copy of the object at the end of a two-symbol construction arm protruding from one corner of the loop. Once a new child loop is completed in this way, signals travel through the structure and immediately adjacent to it so that new construction arms are formed and activated in both the original and nascent loops.

We can refer to this class as signal-based replicators. For UL06W8V and similar objects the signals play the dual role of genome and implementer. Compared with fission-based replicators, signal-based ones are larger, require many more rules, are considerably slower, and are harder to create. For example, UL06W8V needs 58 specific rules and takes 9 cycles to form and activate the copy. Compared with general genome-based replicators, signal-based ones are smaller, considerably faster, and easier to create. But, in principle, signal-based replicators start to have the ability to evolve by mutation. (For UL06W8V, however, the ability to evolve may be an illusion; the system is so tightly integrated that even the slightest change in the object forfeits the ability to replicate.) Nevertheless, understanding signal-based replicators is essential if we are to comprehend the early stages of life.

2 Research Hypothesis and Goals

This article asks: How might a signal-based replicator—in particular, UL06W8V—arise starting from nonviable origins using random processes? More specifically, we ask: Are there any guiding or heuristic metaprinciples that might be applied to improve the odds that random selection will lead to self-replication?

In a previous article [11] the author demonstrated that completing needed CA rules by invoking the principle of parsimony could be used to spontaneously create objects that would glide through the lattice, much like the glider of Conway's Game of Life [5]. Parsimony was implemented by forcing the new rules at each time step to minimize the need for further rules in subsequent steps. In other words, next rule symbols were selected so that the resources needed to run the system (the total number of rules) were minimal.

In the current study, the principle of parsimony (make choices that minimize the need for further additional rules) coupled with continuity (make choices that tend to preserve the current configuration) have been found to catalyze the emergence of self-replication.

Note that in the current study we do not directly encourage self-replication to emerge by making that a specific goal, as in the work of Pan and Reggia [9]. Instead we encourage the developing object to use minimum resources, to see if that helps lead to self-replication.

While the simulation results demonstrate that applying parsimony (with or without continuity) significantly increases the speed at which self-replication emerges in the particular case studied, it was not clear when the research started why this would be so. Parsimony is not a basic law of chemistry or physics; it is a guiding principle that suggests what information should be added to an ongoing dynamic process as it requires additional information. (You can choose to ignore parsimony; you cannot choose to suspend the law of conservation of mass-energy.) We took as a subsidiary goal of this study to try to understand how parsimony works.

Section 3 describes the computational methodology used within this study; Section 4 summarizes the results of running the computations. Finally, in Section 5 we address the question of how parsimony and continuity work. The Appendix shows what was found to be the minimal rule set for UL06W8V.

3 Methodology

The study proceeded through three sets of simulations.

3.1 Parsimony Alone

In the first, a single copy of UL06W8V (Figure 1) was placed within an otherwise empty lattice. Initially, there was only one trivial rule: An inactive center surrounded by inactive neighbors remains inactive (“.....” → ‘.’). The CA was run for one time tick to determine NR0, the number of new rules needed to complete this initial time step. The CA system retained the identity (center and neighbors) of each new rule in the order needed; the missing information was only the next center symbol associated with each rule. These symbols formed a string, Stest, of length NR0. If NR0 was less than 6 (an arbitrary value), all possible strings with that many symbols were evaluated for fitness; otherwise the system evaluated a parametric number of random test strings. (The parameter was fixed at 100,000, a number found experimentally to be large enough that the final results were not significantly altered by going even higher.)

To evaluate the fitness of a string, an initial time step using the symbols in Stest and then three subsequent lookahead time steps were run, making random choices for any new symbols needed for the lookahead steps. The total number of new rules needed to complete all four generations, NR3(Stest), was converted to a fitness value; the smaller the value of NR3(Stest), the larger the fitness of Stest:
formula
The specific numerical values, 1000, 2, and 5, were chosen to make the fitness a smooth, inverse function of NR3.

We must dwell for a moment on how rules were counted in arriving at NR3. Even though the preservation of the center as a default was not explicitly invoked in this study, we assumed that after all the rules needed to achieve self-replication of a single UL06W8V had been discovered and placed in the rule set, we would add that default. (The default was found necessary to be able to build colonies from UL06W8V). With this in mind, and in harmony with [10], only those rules that change the center were considered to contribute information to the rule set and should count toward NR3. Furthermore, if two or more specific rules could be condensed into a single rule by using the “match any” metasymbol, such a condensation counted as a single rule. This seems to be the fairest way to interpret “using minimum resources or adding a minimum of new information.”
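The condensation counted as a single rule above can be sketched as follows: two rules with the same outcome that differ at exactly one position merge into one generalized rule containing the “match any” metasymbol. The example rules are hypothetical.

```python
# Sketch of condensing specific rules with the "match any" metasymbol '_':
# two same-outcome rules differing at exactly one position merge into one
# generalized rule. The example rules are hypothetical.

ANY = '_'

def condense(rule_a, rule_b):
    """Merge two same-outcome rules differing at one position, else None."""
    key_a, out_a = rule_a
    key_b, out_b = rule_b
    if out_a != out_b:
        return None
    diffs = [i for i, (x, y) in enumerate(zip(key_a, key_b)) if x != y]
    if len(diffs) != 1:
        return None
    merged = list(key_a)
    merged[diffs[0]] = ANY
    return (tuple(merged), out_a)

r1 = (('.', 'O', '.', '>', '.'), 'O')
r2 = (('.', 'O', '.', '^', '.'), 'O')
merged = condense(r1, r2)   # the pair counts as one generalized rule
```

Under the counting convention of the text, `r1` and `r2` together contribute one rule, not two, toward NR3.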

This procedure of running four time steps to arrive at a fitness value was repeated a parametric number of times (also 100,000), after which the highest value was taken as an estimate of the fitness of the test string. Evaluation of fitness based on lookahead generations (rather than just the next one) mirrors the procedure used in chess to gauge the quality of a potential move.

As each test string was evaluated for fitness, Stest and Fitness(Stest) were placed in an array (pool). The pool was sorted in decreasing order of fitness. Since the pool was of finite size, the elements of lowest fitness were discarded to accommodate larger values.

Once all the test strings were evaluated, the system was ready to select the NR0 new rule symbols needed to complete the initial time step. We used a procedure known as “roulette-wheel” or “wheel of fortune” selection [3, Section 6.3.4]: The pool elements were divided into groups having the same fitness. Denote the common fitness for group i as FitnessGi, and the number of members in the group as NGi. The pool was then truncated to the 10 groups of highest fitness (the 10 was arbitrary). The pool was then conceptually mapped onto a wheel whose 10 segments were each proportional in size to the total fitness of a group (NGi × FitnessGi). The probability of selecting group i (p-Fitness Gi) was then
p-FitnessGi = (NGi × FitnessGi) / TotalFitness
where
TotalFitness = Σ(i = 1 to 10) NGi × FitnessGi
Since all NGi string elements within group i have exactly the same probability of being selected, the probability of a particular string in that group, Si, being selected was
p-Fitness(Si) = p-FitnessGi / NGi
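The roulette-wheel selection over fitness groups can be sketched directly from these definitions: each group's segment is proportional to NGi × FitnessGi, and every string within a group shares the group's probability equally. The group data below are illustrative, not values from the study.

```python
# Sketch of roulette-wheel selection over fitness groups: each group's
# probability is proportional to NG_i * Fitness_i; strings within a group
# split the group's probability equally. Group data are illustrative.

def group_probabilities(groups):
    """groups: list of (NG_i, Fitness_i) pairs; returns p-Fitness per group."""
    weights = [ng * fitness for ng, fitness in groups]
    total = sum(weights)
    return [w / total for w in weights]

def string_probability(p_group, ng):
    """Probability of selecting one particular string within a group."""
    return p_group / ng

groups = [(4, 10.0), (6, 5.0), (10, 1.0)]   # (group size, common fitness)
probs = group_probabilities(groups)          # weights 40:30:10
```

With these toy numbers the group probabilities are 0.5, 0.375, and 0.125, and a single string in the first group is selected with probability 0.5 / 4 = 0.125.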
In the tabulation of computational results shown in Table 1 and Table 2, this formula was specifically applied to SD, the string whose new rule symbols correspond to the desired rules within the known published rule set for UL06W8V:
p-Fitness(SD) = p-FitnessG(D) / NG(D), where G(D) denotes the group containing SD
Table 1. 

Improvement ratio achieved by using parsimony alone.

Time step | Number of new rules | Number of groups | p-Random SD | p-Fitness SD | Improvement ratio
0 | 10 | 10 | 9.31e−10 | 8.08e−05 | 8.67e+04
1 | 6 | — | 3.81e−06 | 3.63e−03 | 9.51e+02
2 | 8 | 10 | 5.96e−08 | 3.36e−04 | 5.64e+03
3 | 7 | 10 | 4.77e−07 | 1.08e−03 | 2.26e+03
4 | 5 | 10 | 3.05e−05 | 1.42e−04 | 4.64e+00
5 | 7 | 10 | 4.77e−07 | 2.03e−03 | 4.27e+03
6 | 9 | 10 | 7.45e−09 | 2.01e−03 | 2.70e+05
7 | 5 | 10 | 3.05e−05 | 1.15e−03 | 3.76e+01
8 | 1 | — | 1.25e−01 | 3.63e−01 | 2.90e+00
Table 2. 

Improvement ratio achieved by using parsimony and continuity.

Time step | Number of groups | p-Random SD | p-Fitness SD | Improvement ratio
0 | 10 | 9.31e−10 | 9.28e−04 | 9.97e+05
1 | 10 | 3.81e−06 | 1.07e−03 | 2.82e+02
2 | 10 | 5.96e−08 | 4.26e−04 | 7.15e+03
3 | 10 | 4.77e−07 | 1.47e−03 | 3.08e+03
4 | 10 | 3.05e−05 | 4.09e−03 | 1.34e+02
5 | 10 | 4.77e−07 | 4.60e−03 | 9.64e+03
6 | 10 | 7.45e−09 | 2.86e−03 | 3.84e+05
7 | 10 | 3.05e−05 | 2.52e−02 | 8.27e+02
8 | — | 1.25e−01 | 1.64e−01 | 1.31e+00
Now suppose, instead of being selected by the above parsimony-based procedure, the NR0 possible new symbol strings were just selected purely at random. In that case, the probability of selection would be 1 divided by the number of possible strings of length NR0:
p-Random(SD) = 1 / NS^NR0
where NS is the number of active and inactive symbols, 8. The improvement in selecting the new rules' string symbols by parsimony-based fitness rather than pure chance is then
Improvement ratio = p-Fitness(SD) / p-Random(SD)
This is the information we sought to tabulate.
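These two formulas are easy to check against the tabulated values. A small sketch, using the first row of Table 1:

```python
# The random-selection baseline and improvement ratio described above:
# p-Random = 1 / NS**NR0 with NS = 8 symbols. For NR0 = 10 this gives
# roughly 9.31e-10, matching the first row of Table 1.

NS = 8   # number of active and inactive symbols

def p_random(nr0):
    """Probability of guessing a string of nr0 new rule symbols at random."""
    return 1.0 / NS ** nr0

def improvement_ratio(p_fitness, nr0):
    """How much parsimony-based selection beats pure chance."""
    return p_fitness / p_random(nr0)

ratio = improvement_ratio(8.08e-05, 10)   # first row of Table 1, ~8.7e+04
```

The computed ratio agrees with the tabulated 8.67e+04 to within rounding.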

Once the improvement ratio was computed for the initial time tick, 0, the rules corresponding to SD were added to the rule set, and time tick 0 was rerun to produce a new configuration. To continue, the entire process was then repeated with the now expanded rule set and new configuration in the lattice as the basis for tick 1. Table 1 shows the results for all time ticks. For some ticks, fewer than 10 groups could be formed.

3.2 Parsimony Augmented by Continuity

In the second set of simulation runs, strings of new symbols were excluded from the pool if they produced a new configuration at step N + 1 that differed from the parent configuration at step N at more than a given number of locations. “Differed” had a precise meaning: Trading one active symbol for another (at a certain point) was not counted as a difference, while going from inactive to active or the reverse was counted. The maximum number of such differences was arbitrarily set at 3.
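The continuity test can be sketched as a difference count that ignores active-to-active symbol swaps. The lattice encoding here is an illustrative assumption.

```python
# Sketch of the continuity test: count only active <-> inactive changes
# between the parent configuration and its successor; trading one active
# symbol for another at the same cell is not counted as a difference.

INACTIVE = '.'

def continuity_differences(old, new):
    """Number of cells that switch between active and inactive."""
    count = 0
    for old_row, new_row in zip(old, new):
        for a, b in zip(old_row, new_row):
            if (a == INACTIVE) != (b == INACTIVE):
                count += 1
    return count

old = ['O>.',
       '.V.']
new = ['O^.',          # '>' -> '^' is an active-to-active swap: not counted
       'OV.']          # '.' -> 'O' goes inactive to active: counted
diffs = continuity_differences(old, new)
accepted = diffs <= 3   # the study's arbitrary maximum of 3 differences
```

Here `diffs` is 1, so the successor configuration would pass the continuity filter.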

The reason for forcing this level of step-by-step continuity will be explained in Section 4.

3.3 Simulations with Neither Parsimony nor Continuity Imposed

In the final set of simulation runs, each trial started with the base case (UL06W8V with only the one trivial rule) and continued step after step, adding new rules as needed by making a random choice of the next symbol. Each trial was halted when no further new rules were needed for a parametric number (40) of successive steps. (There were times when several consecutive steps needed no new rules, and then later—due to an expansion or motion within some part in the lattice—a collision caused the need to handle a novel center-neighbor combination.)

After the steps were finished, the content of the lattice was automatically classified as either:

  • (0) 

    Empty—having no active points.

  • (1) 

    Stable—having a constant number of rules and an essentially constant number of active points. However, sometimes the lattice included an object that proceeded through a periodic cycle, eventually returning to its original configuration, but having a different number of active points in some phases of the life cycle. This caused minor fluctuations around a fixed total number of active points in an otherwise stable configuration.

  • (2) 

    Growing—having an increasing number of active points, but requiring no additional rules. The colony of UL06W8Vs shown in Figure 3 of [10] is of this class.

Of course, there was no guarantee that steadiness would ever be achieved; for the vast majority of cases, both the number of active points and the total number of rules continued to increase. When the total number of rules reached an arbitrary parameter value (500), the program stopped running new steps, declaring the lattice:

  • (3) 

    Expanding—having an ever increasing number of active points and rules.
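The four-way classification above can be sketched from a trial's history of active-point counts and rule counts. The field names and order of checks are illustrative assumptions, not the study's code.

```python
# Sketch of the automatic end-of-trial classification of Section 3.3.
# Thresholds and field names are illustrative assumptions.

MAX_RULES = 500   # the study's cutoff for declaring a lattice "expanding"

def classify(active_points, rule_counts):
    """Classify a finished trial into the four classes described above."""
    if active_points[-1] == 0:
        return 'empty'
    if rule_counts[-1] >= MAX_RULES:
        return 'expanding'
    if active_points[-1] > active_points[0]:
        return 'growing'       # more active points, but no new rules needed
    return 'stable'

label = classify([10, 10, 10], [58, 58, 58])   # steady history: 'stable'
```

In this sketch a stable trial is the residual case; the real classifier would also have to tolerate the periodic fluctuations in active-point counts noted under class (1).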

Once the classification was made, the program recorded the class, the step number at which the last new rule was needed, the minimum number of rules required to express the final rule set, and some internal data. Table 3 shows the tally of each class for a set of 100,000 trials. Only 337 final lattices were not expanding.

Table 3. 

Number of final configurations in class.

Class | Number of final configurations in class
Empty | 178
Stable | 133
Growing | 26
Expanding | 99,663
Total | 100,000

4 Discussion and Analysis of Results

As seen in Table 1, at each time step there was a significant improvement in finding the rules needed for self-replication by invoking just parsimony. The emergence of self-replication may be orders of magnitude faster than what would happen by pure random chance, but is still a potentially slow process. The acceleration occurred even though the search for self-replication was not directly guided by that eventual property, which would not emerge until the self-replication had been achieved. The search cannot see very far into the future, in the same way as opening moves of a chess game can only dimly anticipate the end game.

As shown in Table 2, bolstering parsimony with continuity further increased the improvement ratio. Applying continuity provided more immediate guidance than parsimony. Continuity was suggested by the observation that primordial organic compounds tended to adhere to inorganic substrates to form structures that corresponded to the shape of those structures [4]. To apply this, we assumed that UL06W8V had been formed above a substrate that had the same general layout of surface elements. Furthermore, we assumed that the growth of UL06W8V would be constrained at least somewhat by that substrate, so that there would not be radical changes in the configuration from one time step to the next.

The simulation results demonstrate that applying parsimony (with or without continuity) does significantly increase the speed at which self-replication can emerge in the particular case studied. But parsimony is not a basic principle of chemistry or physics; it is a guiding principle that suggests what information should be added to an ongoing dynamic process as it requires additional information.

How does parsimony work? We can begin to understand by interpreting the third set of simulations (unfettered random selection of next rule symbols) in terms of physical analogues. A lattice with expanding content can be the analogue of insoluble flocs that form when mixtures of dissolved molecules continue to react, forming insoluble particles that aggregate until they can no longer be suspended in their surrounding liquid [8]. These flocs are essentially lost from further reaction. Alternatively, if the lattices with expanding content correspond to molecules that are adhering to a substrate, the expansion will continue until the entire substrate is covered and then stop reacting.

Lattices with stable or slowly growing content would correspond to systems that can remain dissolved or suspended, or, alternatively, adhere to a portion of the substrate. If they happen to form some type of semipermeable boundary (a proto cell wall), they could remain active and slowly develop further.

If we could examine a large number of samples of pre-biotic “soup,” the analogy suggests that most samples would contain mixtures that had continued to consume resources (new rules, reactive molecules), becoming very large and complex. However, it is likely that the earliest self-replicating assemblages of molecules were relatively small, since the probability of forming just the right combination of molecules that reproduce decreases exponentially with the number of molecules involved. Thus, for most samples the content is already too large and complex to develop into a self-reproductive system. In contrast, samples with stable (or even growing) content consume only limited resources, remain agile, and might be amenable to finding a combination of molecules capable of reproduction.

Figure 4 shows the percentage of lattices with non-expanding content that reach their terminal state, as a function of the minimum number of final rules. Those lattices whose contents need the fewest resources arrive earliest at their final state. This suggests that using fewer resources is advantageous—consistent with parsimony. There is a simple explanation for this behavior. At each time tick, a variable but generally small number of new rules must be added, so the total number of rules continues to increase with each successive tick. However, if the lattice contains stable or growing configurations, or is empty, this increase stops. We can assume that as the remaining configurations become larger, the chances that new rules will be needed somewhere in the lattice increase—there are so many opportunities for novel local arrangements. Thus, Figure 4 must have the asymptotic shape found by the simulation.

Figure 4. 

Percentages of trials having the same or fewer rules for classes empty, stable, and growing.

The application of parsimony forces a dynamic process into the space of low resources. As noted above, self-replication uses relatively low resources (although there are very many stable configurations with far lower resource needs). We see then that by narrowing and limiting where a process can operate, parsimony is just accelerating what would eventually occur as a consequence of the blind, unguided random selections.

In general, one would expect parsimony to be effective in any dynamic system in which the chances of reaching a desired goal decrease as more resources are added. This may seem counterintuitive; parsimony, after all, seeks to achieve a goal using minimum resources. But for parsimony to be effective in driving the system into regions of low resource consumption, there must be a penalty, with respect to reaching the goal, as more resources are consumed. If instead the chances of reaching the goal were to increase with resource consumption, one would want “anti-parsimony”; the system would favor high, not low, resource consumption.

Acknowledgment

The author wishes to thank Professor James A. Reggia, Department of Computer Science, University of Maryland, who reviewed a draft of this article and made critical suggestions on how the text could be strengthened. The author is also indebted to an anonymous reviewer who sent detailed and specific comments on remaining problem areas so they could be corrected.

References

1. Byl, J. (1989). Self-reproduction in small cellular automata. Physica D, 34, 295–299.
2. Codd, E. (1968). Cellular automata. New York: Academic Press.
3. DeJong, K. (2006). Evolutionary computation. Cambridge, MA: MIT Press.
4. Fox, S., & Dose, K. (1972). Molecular evolution and the origin of life. San Francisco: W. H. Freeman.
5. Gardner, M. (1970). Mathematical games—The fantastic combinations of John Conway's new solitaire game “life”. Scientific American, 223, 120–123.
6. Hutton, T. (2010). Codd's self-replicating computer. Artificial Life, 16(2), 99–118.
7. Langton, C. G. (1984). Self-reproduction in cellular automata. Physica D, 10, 135–144.
8. McCabe, W., & Smith, J. (1956). Unit operations of chemical engineering (p. 371). New York: McGraw-Hill.
9. Pan, Z., & Reggia, J. (2010). Computational discovery of instructionless self-replicating structures. Artificial Life, 16(1), 39–63.
10. Reggia, J. A., Armentrout, S. L., Chou, H. H., & Peng, Y. (1993). Simple systems that exhibit self-directed replication. Science, 259, 1282–1287.
11. Ripps, D. (2010). Using economy of means to evolve transition rules within 2D cellular automata. Artificial Life, 16(2), 119–126.
12. Tempesti, G. (1995). A new self-reproducing cellular automaton capable of construction and computation. In Proceedings of the Third European Conference on Artificial Life (pp. 555–563). Berlin: Springer Verlag.
13. von Neumann, J. (1966). Theory of self-reproducing automata (edited and completed by A. W. Burks). Urbana: University of Illinois Press.

Appendix. Minimal Rule Set for UL06W8V

This study required an algorithm to reduce a set of specific rules to its minimal representation as specific and generalized rules. To test this algorithm, it was applied to the rule set published by Reggia et al. [10, Table 2]. (Although three additional rules were found necessary after the end of the life cycle of the parent in order to complete the child, these were not included in what follows. For each extra rule, the center is preserved.)

The results show that the minimal rule set can be expressed by seven generalized rules plus 11 specific rules for a total of 18 rules (see Table 4).

This is slightly shorter than the 20 rules given by Reggia et al. in their Table 2.

Table 4. 

Minimal rule set.

....> → ^
..OOO → ^
...^O → <
...>. → O
...#. → O
..<<. → #

<...# → .
<..L → L
<.L → L

>O. → .

O^O.> → V
OVO. → >
OOO.> → >
O.<O^ → V

O<O. → >
O.<. → <

L.O → O

#.<L. → O

Author notes

*

David Ripps passed away in February 2016. We are pleased to have accepted his article for inclusion in the Artificial Life Journal before he passed and wish that he could have seen it in print after devoting himself to this work for so many years.