Abstract

We propose an approach to open-ended evolution via the simulation of swarm dynamics. In nature, swarms possess remarkable properties, which allow many organisms, from swarming bacteria to ants and flocking birds, to form higher-order structures that enhance their behavior as a group. Swarm simulations highlight three important factors to create novelty and diversity: (a) communication generates combinatorial cooperative dynamics, (b) concurrency allows for separation of time scales, and (c) complexity and size increases push the system towards transitions in innovation. We illustrate these three components in a model computing the continuous evolution of a swarm of agents. The results, divided into three distinct applications, show how emergent structures are capable of filtering information through the bottleneck of their memory, to produce meaningful novelty and diversity within their simulated environment.

1 Open-Ended Evolution

Life has been evolving on our planet over billions of years, undergoing several major transitions that transformed the way information is stored, processed, and transmitted. All these transitions, from multicellularity to the formation of eusocial systems and the development of complex brains, point to the idea that the evolution of living systems is open-ended. In other words, life appears to be capable of increasing its complexity indefinitely. Another formulation, echoed by Standish [42] and Soros [41], is that open-endedness depends fundamentally on the continual production of novelty. Life keeps uncovering new inventions, in a process that never seems to stop.

Open-ended evolution (OEE) has been a central topic of research for artificial life approaches to the fundamental principles of life since the 1950s, when John von Neumann [51] contributed to the issue with his early model of self-reproducing automata. Since 2015, a series of workshops has been taking place at artificial life conferences [45], the most recent of which was a launchpad for the present special issue. In general, an evolutionary system is considered to be open-ended when it is able to endlessly generate diverse novel entities of growing complexity. Engineering open-ended systems in the lab is not easy; the main obstacle is that designed evolutionary systems are subject to a thermodynamic drift that makes them collapse into equilibrium states. Once local optima are reached, they no longer produce novelty, so their complexity and diversity are bounded.

Innovation seems to emerge from collective intelligence, a phenomenon that occurs in groups or networks of agents that together develop the ability to enhance the group's cognitive capacity or creativity. This is reminiscent of the ongoing innovative process of science, which has no fixed objective other than the production of new knowledge, but makes discoveries mostly through accidents. Stuart Kauffman advocated the idea of the adjacent possible, claiming that a biosphere, as a secular or long-term trend, tends to maximize the rate of exploration of the adjacent possible of its existing organization [19, 20]. Ikegami et al. [2017; 18] build on that idea to explain how, in terms of evolutionary transitions [28], a new stage of evolution (e.g., a multicellular organism) may be produced without any information being passed on from the previous stage (e.g., from single cells). Rather, structural properties are assembled, producing a stepping stone to the next level of innovation.

These structural properties of a collective group can be compared to a bottleneck that acts as a filter at several levels of the system, implementing computation that is not present in any of its parts. Part of this idea is analogous to the information bottleneck of Tishby and Polani [1999; 46], where information is squeezed through a bottleneck to extract relevant or meaningful information from an environment. The resulting information “filtered” through the bottleneck retains only the features most relevant to general concepts.

In this article, we present three “C factors” that we deem important for novelty and diversity. We then introduce a swarm model to study these factors, applied in three different studies. We conclude with a discussion on open-endedness at large, framing the three factors in terms of the emergence of collective intelligence in swarm simulations.

2 Conditions for OEE

The OEE literature has proposed various conditions that are supposed to lead to the successful production of OEE in a system. A number of studies have attempted to formalize necessary conditions for OEE [6, 16]. As a recent example, Soros and Stanley [2014; 41] suggest four conditions at the scale of single reproducing individuals in the system: each should fulfill some nontrivial minimal criterion, be able to create novelty, act autonomously, and have access to unbounded memory.

Such articles have typically proposed their own models to demonstrate the importance of the hypothesized conditions for the emergence of OEE. However, most evolutionary algorithms seem either to converge very quickly to a solution, or to get stuck in a confined area of the search space. Either way, they do not seem able to intrinsically generate the amounts of complexity and novelty we find in nature, even at a single scale.

Although this failure of simulated evolution to match the open-ended properties found in natural evolution may be explained by shorter time scales, in principle one would have expected the decades of effort, and the increasingly large resources poured into research in evolutionary computation, to have unlocked more of its potential to create novelty. Yet even the latest technologies do not seem to sustain their inventiveness. In general, promising models [12, 13, 23, 24, 40] that manage to demonstrate at least a few phase transitions or creative leaps (not necessarily with evolutionary computation) seem to share one common feature: several structural bottlenecks that filter relevant information, acting as a catalyst of creativity.

What seems to be missing to achieve general OEE? We choose to emphasize three “C” candidates that we see as worth pursuing—communication, concurrency, and complexity:

  • (a) 

    Communication: Constricted information flows between parts of the systems allow for synergy and cooperation effects.

  • (b) 

    Concurrency: The creation of separate space and time scales requires concurrent, nondeterministic, asynchronous models.

  • (c) 

    Complexity: Mere system growth can boost novelty and diversity.

These are the three factors on which we choose to concentrate in this article. We will expand on each of them a little further, before proposing how to apply them in concrete models (Studies 1, 2, and 3).

2.1 Communication

This first point addresses synergies and coordination between components of the system, using information transfers. One well-known example of OEE is combinatorial creativity in human language, where syntactic rules are capable of producing infinitely many well-formed structures using recursion, thus making the number of potential sentences unbounded [15]. Although these remarks may seem slightly dated in view of language studies based on artificial life systems [22], it is promising to focus on the cultural layer of dynamics, which lives on top of the main layer of entities. For example, in the case of web services (social tagging networks), we can analyze how combinatorial complexity is effective in evoking OEE. In cellular automata, one may want to study the interactions between gliders or other patterns. In artificial chemistry, one may want to look at information flows between types of molecules or replicators. In agent-based modeling, the establishment of protocols between agents or groups of agents can become a factor to focus on.

Communication naturally adds relevant computing filters operating on unexploited information flows, effectively increasing the bandwidth of useful information within the system per clock cycle. Communication offers a layer for metadynamics at a time scale different from that of the first-order dynamics. This induces a separation of time scales, thus doubling the system's capacity to implement learning mechanisms. Designing information to circulate between subentities of the system forces the creation of more structural bottlenecks.

We propose information exchanges as a central mechanism promoting open-ended evolution. From information flows in groups of individuals, a system can boost its own production of creativity to achieve indefinite complexity. Examples are detailed in Studies 1 and 2 in Section 4.

2.2 Concurrency

In many situations, a system cannot scale up to larger space-time scales as it is; some ingredient must be added to make it work at the larger scales. One such remedy is asynchronous updating. Removing the global clock is needed to make larger systems function consistently without constantly checking local consistency. On the other hand, we know that cellular automata (CA) tend to lose their complexity when adopting asynchronous dynamics. Asynchrony is a natural phenomenon that remains difficult to bring to artificial systems.

According to Ackley and Ackley [2015; 1], models should be indefinitely scalable, ruling out deterministic, synchronous models (such as simple cellular automata) in favor of nondeterministic, asynchronous ones. Bersini et al. [1994; 4] proposed that asynchronous rather than synchronous updating may be the key factor in inducing stability in simulations. Although they were examining a variant of cellular automata, their results, based on an analysis of the Lyapunov exponent, pointed to asynchrony as the factor responsible for the sensitivity of the update function.
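The contrast between the two update disciplines can be sketched on a one-dimensional cellular automaton; the rule, ring size, and update order below are illustrative, not taken from the cited works:

```python
import random

def step_sync(cells, rule):
    """Synchronous update: every cell reads the same previous state."""
    n = len(cells)
    return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

def step_async(cells, rule, rng=random):
    """Asynchronous update: cells update one at a time in random order,
    each seeing the partially updated states of its neighbors."""
    n = len(cells)
    cells = list(cells)
    for i in rng.sample(range(n), n):
        cells[i] = rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
    return cells

# Elementary rule 90 (XOR of the two neighbors) as an example local rule.
rule90 = lambda left, center, right: left ^ right

state = [0] * 16
state[8] = 1
synchronous = step_sync(state, rule90)
asynchronous = step_async(state, rule90)  # generally differs from run to run
```

With the global clock removed, the trajectory becomes nondeterministic: the same initial state can yield different successors depending on the update order, which is exactly the sensitivity the Lyapunov-based analysis probes.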

Concurrency is also closely related to a system's ability to evolve separate time scales. Although highly contingent on other properties of a simulation, the incapacity to develop heterogeneous time scales often constitutes a barrier to producing intrinsic novelty. Researchers used to separate lifetime learning and evolutionary learning as two distinct mechanisms [31]. However, the effects of accumulating and filtering information into and out of a system's memory occur on a much more continuous range of time scales. In nature, from phenotypic plasticity to maternal effects, sexual selection, and gene flow, many events have their time scales intricately interlaced. We will address this in particular in Studies 1 and 2.

2.3 Complexity

We have no grounds to argue that nature is the only possible realization of open-endedness, since there would be no satisfactory explanation for that claim. One important feature of nature, though, is its complexity, which translates into both system size and landscape complexity. In the simplest case, complexity can be reached merely through large population sizes. Ikegami et al. [2017; 18] proposed that large groups of individuals, given the right set of structural characteristics, may be the main driver of emergence. They discussed this hypothesis in relation to large-scale boid swarm simulations [37], in which the nucleation, organization, and collapse dynamics were found to be more diverse in larger flocks than in smaller ones.

Collective behaviors can be qualitatively changed by increasing the number of agents, that is, the colony or group size. In field observations, for example, individual bees change their behaviors depending on the colony size, and fish change their performance in sensing an environmental gradient depending on the school size. In previous work [30], we simulated a half-million-bird flock using a boids model [37, 49] and found that qualitatively different behavior emerges when the total number of individuals exceeds a few thousand. Flocks of different sizes and shapes interact, diminishing some flocks but generating new ones. Different types of fluctuation become dominant in flocks of different sizes: the correlation of the local density fluctuation dominates in larger flocks, and that of the velocity fluctuation in smaller ones. An example is offered in Study 3 (Section 4.3).

Extending the argument on size, environmental complexity is, to a certain extent, necessary to create complex behaviors. Only with richer environments, encompassing complex distributions of energy resources and ways for systems to survive, can emerging individuals explore a rich set of strategies and increasingly complex solutions. As mentioned earlier, evolutionary landscapes have become an important concept in biology for analyzing the dynamics at play in an ecosystem.

The picture to have in mind is that of a unit of selection (e.g., a gene, among many other options) represented by a point in a multidimensional search space. That space is typically given as many dimensions as there are degrees of freedom for the entity (e.g., a combination of nucleotide sequences) to vary and evolve in. The search space is mapped onto an additional dimension, usually reproductive success, or fitness. The shape of fitness across all degrees of freedom of the system has a strong effect on the dynamics the system can achieve. Malan and Engelbrecht [2013; 26] identify eleven characteristics of fitness landscapes that make them more or less difficult to search: the degree of the variables' interdependence, noise, fitness distribution, fitness distribution in search space, modality, information to guide search and deception, global landscape structure, ruggedness, neutrality, symmetry, and searchability.
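One of these characteristics, ruggedness, is commonly estimated as the lag-1 autocorrelation of fitness values sampled along a random walk over the landscape. The sketch below illustrates that idea on a toy bit-string landscape; the landscape, walk length, and mutation operator are illustrative choices, not Malan and Engelbrecht's exact measures:

```python
import random

def walk_autocorrelation(fitness, neighbor, start, steps=1000, rng=random):
    """Lag-1 autocorrelation of fitness along a random walk:
    values near 1 indicate a smooth landscape, near 0 a rugged one."""
    samples, x = [], start
    for _ in range(steps):
        samples.append(fitness(x))
        x = neighbor(x, rng)
    mean = sum(samples) / len(samples)
    var = sum((v - mean) ** 2 for v in samples) / len(samples)
    if var == 0.0:
        return 1.0
    cov = sum((samples[i] - mean) * (samples[i + 1] - mean)
              for i in range(len(samples) - 1)) / (len(samples) - 1)
    return cov / var

# Toy landscape: onemax (count of ones) is smooth under single bit flips.
onemax = lambda bits: sum(bits)

def flip_one(bits, rng):
    i = rng.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

start = tuple(0 for _ in range(32))
r = walk_autocorrelation(onemax, flip_one, start, rng=random.Random(1))
```

On a smooth landscape like onemax, neighboring points have similar fitness and the correlation stays high; replacing `onemax` with a random lookup table would drive it toward zero.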

In evolutionary systems, richer environments, benefiting from a complex distribution of energy sources and ways for systems to survive, give rise to richer sets of pathways. The larger the search space, the more complex the fitness functions that can potentially evolve. Another approach is to make the environment a more complicated function of time, which the agents will need to learn in order to extract more energy from it. In Studies 1 and 2, we present results suggesting that simulations must provide sufficient system complexity with respect to the agents' environment.

Simulation time and memory, though not mentioned yet, are important components to consider. Computationally, the whole course of evolution on Earth is like a single run of a single algorithm that invented all of nature, and seems like it will never end. One obvious difference is the size of the systems, which might be the missing requirement to get ever-greater emergent complexity and novelty over a very long time. However, we do not insist on that requirement in this article, as we consider it trivial that a system with too low computational power will not be able to achieve OEE to any extent.

Similarly, there is a distinction to be made between endo-OEE, producing novelty from within, and exo-OEE, which makes use of input from outside the system. Picbreeder [39], for example, explicitly requires external human input to function, which makes it a debatable generator of OEE. Nevertheless, OEE is not about new information, but rather inventions achieved by the system. In that respect, swarms are a promising model: Without increasing the ensemble size, they let us focus on how coordination patterns self-organize, generating intrinsic novelty. To give another example, even increasing the number of neurons in a neural network still requires neurons to differentiate themselves and create coordinated networks before they get to foster innovative ideas.

We exemplify the importance of size and complexity in Studies 1 and 3, while discussing how to make simulations parallelizable, to save considerable amounts of time and memory by distributing them over many machines.

3 Concurrent Evolutionary Neural Boids Model

The model we choose to present puts together the above-mentioned series of features as a means to promote the system's open-endedness. We give some details here, and will go over the details of several applications of it in the next section. The evolutionary system is an agent-based simulation, based on Reynolds' boids model [37].

The boids model was based on simple rules computed locally, allowing one to simulate flocks of agents moving through artificial environments. As with most artificial life simulations, boids showed emergent behavior; that is, the complexity of boids arises from the interaction of individual agents adhering to a set of simple rules: separation (steer to avoid crowding local flockmates), alignment (steer towards the average heading of local flockmates), and cohesion (steer to move toward the center of mass of local flockmates).
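The three rules can be sketched as follows; the neighborhood radius and rule weights are illustrative values, not Reynolds' originals:

```python
import numpy as np

def boids_step(pos, vel, radius=5.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One step of Reynolds' three rules for n agents in 3D.
    pos, vel: arrays of shape (n, 3). Weights are illustrative."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        d = pos - pos[i]                          # vectors to all agents
        dist = np.linalg.norm(d, axis=1)
        mask = (dist > 0) & (dist < radius)       # local flockmates only
        if not mask.any():
            continue
        sep = -d[mask].sum(axis=0)                # steer away from crowding
        ali = vel[mask].mean(axis=0) - vel[i]     # match neighbors' heading
        coh = pos[mask].mean(axis=0) - pos[i]     # move toward local center
        new_vel[i] += w_sep * sep + w_ali * ali + w_coh * coh
    return pos + new_vel, new_vel
```

With these weights, separation dominates at very short range while cohesion takes over at larger distances, which is what produces the characteristic flocking equilibrium.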

In our model, as in Reynolds' model, the population of agents moves around in a continuous three-dimensional space with periodic boundary conditions (Figure 1). However, instead of using fixed rules to control the boids' motion, we allow agents to evolve their own controllers through concurrent evolutionary computation. Each agent, instead of responding to simple rules, is controlled by its own neural network. The parameters of the neural network are encoded in a genotype vector, which determines the individual's sensorimotor choices at each moment in time. This corresponds to standard evolutionary robotics methodology [32], except that we introduce the following variant: the genotype is evolved through the course of the simulation via a continuous variant of an evolutionary algorithm [54]; that is, agents with a high level of fitness are allowed, at any point in the running simulation, to replicate with mutation.
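A minimal sketch of this replication step, with a real-valued genotype encoding the network weights; the Gaussian mutation operator and its scale are assumptions, not necessarily the operators used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def flatten_params(w_in, w_out):
    """Encode the controller's weight matrices as one genotype vector."""
    return np.concatenate([w_in.ravel(), w_out.ravel()])

def replicate(genotype, sigma=0.1):
    """Offspring genotype: the parent's vector plus Gaussian mutation
    (sigma is an illustrative choice, not the paper's value)."""
    return genotype + rng.normal(0.0, sigma, size=genotype.shape)

# Hypothetical shapes matching a 12-input, 10-hidden, 5-output controller.
parent = flatten_params(rng.normal(size=(12, 10)), rng.normal(size=(10, 5)))
child = replicate(parent)
```

Because replication can happen at any simulation step, there are no discrete generations: the population is a steady-state mixture of lineages of different ages.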

Figure 1. 

Graphical representation of the world in a neural boids simulation. Each agent is represented as an arrow indicating its current direction. The color of an agent indicates the average value of its internal nodes. The green spheres represent the centers for energy sources. Although variants presented later in this article display slightly different graphics, the backbone is the same.

This model also builds upon prior work on the effect of self-organized interagent communication and cooperative behavior on the agents' performance of tasks [34, 35]. Previous research has shown the difficulty of using communication channels [29, 36] but also shown the cooperative value of information transfers [55]. This will be complemented by the results from previous information- theoretic analyses of learning systems, which managed to shed light on the role of information flows in learning [25, 47, 48].

Agents are given a certain amount of energy, which also acts as their fitness; its dynamics are specific to each study case. Each agent comes with a set of 12 different sensors. The neural network (represented in Figure 2) takes the information from those sensors as inputs, in order to decide the agent's actions at every time step. The possible actions amount to the agent's motion and, in the specific variant shown here, a prisoner's-dilemma action (cooperate or defect), as well as two output signals. The architecture is composed of 12 input, 10 hidden, and 5 output neurons, plus 10 context neurons connected to the hidden layer (see Figure 2).
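This architecture can be sketched as an Elman-style recurrent network; the tanh activations and weight scale are assumptions, and the raw outputs would still need rescaling to each actuator's range:

```python
import numpy as np

class AgentController:
    """Sketch of the controller: 12 inputs, 10 hidden neurons, 10 context
    neurons fed back into the hidden layer, and 5 outputs."""

    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(scale=0.5, size=(12, 10))   # sensors -> hidden
        self.w_ctx = rng.normal(scale=0.5, size=(10, 10))  # context -> hidden
        self.w_out = rng.normal(scale=0.5, size=(10, 5))   # hidden -> outputs
        self.context = np.zeros(10)

    def step(self, sensors):
        """One control step: outputs cover motion (psi, theta), the
        prisoner's-dilemma action, and the two emitted signals."""
        assert sensors.shape == (12,)
        hidden = np.tanh(sensors @ self.w_in + self.context @ self.w_ctx)
        self.context = hidden          # context holds the previous hidden state
        return np.tanh(hidden @ self.w_out)
```

The context layer is what gives the agent a memory across time steps, the "bottleneck of their memory" mentioned in the abstract.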

Figure 2. 

Architecture of the agent's controller. The network is composed of 12 input neurons, 10 hidden neurons, 10 context neurons, and 5 output neurons.

The agents' motion is controlled by M1 and M2, outputting two Euler rotation angles: ψ for pitch (i.e., elevation) and θ for yaw (i.e., heading), with floating-point values between 0 and π. Even though the agents' speed is fixed, the rotation angles still allow the agent to control its average speed (for example, if ψ is constant and θ equals zero, the agents will continuously loop on a circular trajectory, which results in an almost-zero average speed over 100 steps).
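One way to read this control scheme is to treat the two outputs as per-step increments to the agent's yaw and pitch; the axis conventions, speed, and world size below are assumptions:

```python
import numpy as np

def step_motion(pos, yaw, pitch, theta, psi, speed=1.0, world=100.0):
    """Advance one step: theta increments yaw (heading), psi increments
    pitch (elevation); motion is at fixed speed with periodic boundaries."""
    yaw = (yaw + theta) % (2 * np.pi)
    pitch = (pitch + psi) % (2 * np.pi)
    heading = np.array([np.cos(pitch) * np.cos(yaw),
                        np.cos(pitch) * np.sin(yaw),
                        np.sin(pitch)])
    return (pos + speed * heading) % world, yaw, pitch

# With constant psi and theta = 0, the agent loops on a circle, so its
# average displacement (hence average speed) over many steps is near zero.
pos, yaw, pitch = np.zeros(3), 0.0, 0.0
for _ in range(4):
    pos, yaw, pitch = step_motion(pos, yaw, pitch, theta=0.0, psi=np.pi / 2)
```

After four quarter-turns the agent is back near its starting point, illustrating how a fixed-speed agent can still realize an almost-zero average speed.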

The outputs Sout(1) and Sout(2) control the signals emitted into two distinct channels, which are propagated through the environment to the agents within a neighboring radius set to 50. The choice of two channels was made to allow for signals of higher complexity and possibly more interesting dynamics than in greenbeard studies [11].

The received signals are summed separately for each direction (front, back, right, left, up, down), and weighted by the inverse square of the emitters' distance. This way, agents further away have much less influence on the sensors than closer ones do. Every agent is able to receive signals on the two emission channels, from 6 different directions, totaling 12 different values sensed per time step. For example, the input Sin(6,1) corresponds to the signals reaching the agent from the neighbors below.
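A sketch of this sensing step; binning each neighbor into one of the six directions by its dominant relative axis is a simplifying assumption, as the paper does not specify the exact binning:

```python
import numpy as np

def sector(d):
    """Map a relative position to one of 6 directions
    (+x, -x, +y, -y, +z, -z) by its dominant axis."""
    axis = int(np.argmax(np.abs(d)))
    return 2 * axis + (0 if d[axis] >= 0 else 1)

def sense_signals(my_pos, neighbors, radius=50.0):
    """Accumulate neighbors' two signal channels into 6 directions x 2
    channels = 12 sensors, weighting each emitter by 1/distance^2."""
    sensors = np.zeros((6, 2))
    for pos, signals in neighbors:  # signals = (channel 1, channel 2)
        d = np.asarray(pos, dtype=float) - np.asarray(my_pos, dtype=float)
        dist = np.linalg.norm(d)
        if dist == 0.0 or dist > radius:
            continue                         # out of the sensing radius
        sensors[sector(d)] += np.asarray(signals) / dist**2
    return sensors.ravel()  # the 12 values fed to the neural network
```

The inverse-square weighting means a neighbor at twice the distance contributes a quarter of the signal, so nearby emitters dominate each directional sensor.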

The evolution is performed continuously over the population. Agents with negative or zero energy are removed, while agents with energy above a threshold are forced to reproduce, within the limit of one infant per time step. The reproduction cost is low enough, considering the threshold, not to put the life of the agent at risk.
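The per-step population update can be sketched as follows; the threshold, cost, and the infant's starting energy are illustrative assumptions:

```python
import random

THRESHOLD, COST = 10.0, 3.0  # illustrative values, not the paper's

def mutate(genotype, sigma=0.1, rng=random):
    """Gaussian mutation of a real-valued genotype."""
    return [g + rng.gauss(0.0, sigma) for g in genotype]

def evolution_step(agents, rng=random):
    """One concurrent-evolution step: agents at zero or negative energy die;
    agents above the threshold produce at most one mutated infant this step,
    paying a cost low enough not to put their own life at risk."""
    survivors, newborn = [], []
    for agent in agents:
        if agent["energy"] <= 0.0:
            continue  # death: removed from the population
        if agent["energy"] > THRESHOLD:
            agent["energy"] -= COST
            newborn.append({"genotype": mutate(agent["genotype"], rng=rng),
                            "energy": COST})  # infant's start energy: assumption
        survivors.append(agent)
    return survivors + newborn
```

Note that selection here is entirely implicit: there is no global ranking step, only local energy bookkeeping, which is what makes the scheme concurrent.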

4 Study Cases

We go over the application of this model in three selected studies. Each of them highlights a specific property of OEE. Study 1 shows how agents can form patterns to accelerate their search for energy, distributed over an n-dimensional space, collaborating via local signaling with their neighbors. Study 2 shows the invention of dynamical group strategies in a spatial prisoner's dilemma, allowing for specific cooperation effects. Study 3 shows the effect of growth on the emergence of noise-canceling effects.

4.1 Study 1: OEE via Collective Search Based on Communication

Since Reynolds' boids, coordinated motion has often been reproduced in artificial models, but the conditions leading to its emergence are still subject to research, with candidates ranging from obstacle avoidance to virtual leaders. The relation of spatial coordination and group cooperation has long been studied in game theory and evolutionary biology.

We here apply our model of agents exchanging signals and moving in a three-dimensional environment to a task of dynamical search for free energy in space [54, 55]. Each agent's movements are controlled by an artificial neural network, evolved through generations of an asynchronous selection algorithm. During the evolution, the agents are able to communicate to produce cooperative, coordinated behavior.

Individuals develop swarming using only their ability to listen to each other's signals. The agents are selected based on their performance in finding invisible resources in space, which give them fitness. The agents are shown to use the information exchanged via signaling to form temporary leader-follower relations, allowing them to flock together. The swarmers outperform the non-swarmers in finding the resource, thus reaching a neutral evolutionary space, which leads to genetic drift.

This work constructs an adaptive system that evolves swarming based only on individual sensory information and local communication with close neighbors. It directly addresses the problem of group coordination without central control, without awareness of neighbors' positions, and without any use of the substrate to deposit information (stigmergy) [14]. The approach also has the advantage of yielding original and efficient swarming strategies. A detailed behavioral analysis is then performed on the fittest swarm to gain insight into the behavior of the individual agents.

The results show that agents progressively evolve the ability to flock through communication to perform a foraging task. We observe a dynamical swarming behavior, including coupling/decoupling phases between agents, allowed by the only interaction at their disposal, namely signaling. Eventually, agents come to react to their neighbors' signals, which are the only information they can use to improve their foraging. This can lead them either to move towards or to move away from each other. While moving away from each other has no special effect, moving towards each other leads to swarming. Flocking with each other may lead agents to slow down their pace, which for some of them may keep them closer to a food resource. This creates a beneficial feedback loop, since the fitness brought to the agents allows them to reproduce faster, eventually multiplying this type of behavior within the total population.

The algorithm converges to build a heterogeneous population, as shown in Figure 3. The phylogeny is represented horizontally in order to compare it with the average number of neighbors throughout the simulation. The neighborhood becomes denser around iteration 400k, showing a higher proportion of swarming agents. This leads at first to a strong selection of the agents able to swarm together over the other individuals. That selection is soon relaxed as the signaling pattern spreads widely, resulting in a heterogeneous population, as can be seen in the upper plot, with numerous branches towards the end of the simulation.

Figure 3. 

Top: average number of neighbors during a single run. Bottom: agents' phylogeny for the same run. The roots are on the left, and each bifurcation represents a newborn agent.

In this scenario, agents do not need extremely complex learning to swarm and eventually get more easily to the resource, but rather rely on dynamics emerging from their communication system to create inertia and remain close to goal areas.

The simulated population displays strong heterogeneity due to the asynchronous reproduction scheme, which can be seen in the phylogenetic tree (Figure 3). The principal-component analysis (PCA) plotted in Figure 4 shows a large cluster (left side) in addition to a series of smaller ones (right side). The genotypes in the early stages of the simulation belong to the right clusters, but move to the left cluster later on, reaching a higher number of neighbors. The plot shows a diverse set of late clusters, which translates into numerous distinct behaviors in the late stage of the simulation.
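The kind of projection behind Figure 4 can be reproduced with a plain SVD; this is a generic sketch of PCA on genotype vectors, not the authors' exact pipeline, and the data below are synthetic:

```python
import numpy as np

def pca_2d(genotypes):
    """Project genotype vectors onto their first two principal components."""
    X = genotypes - genotypes.mean(axis=0)          # center the data
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                             # scores on PC1 and PC2

# Synthetic stand-in: 50 agents, 170 weights, one dominant direction of
# variation (mimicking a lineage drifting along a genetic axis).
rng = np.random.default_rng(0)
genotypes = rng.normal(scale=0.1, size=(50, 170))
genotypes[:, 0] += np.linspace(0.0, 10.0, 50)
scores = pca_2d(genotypes)
```

Clusters in such a projection correspond to lineages whose genotypes, and hence behaviors, have diverged.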

Figure 4. 

Two principal components of a PCA on the genotypes of all agents of a typical run, over one million iterations. Each circle represents one agent's genotype, its diameter representing the average number of neighbors around the agent over its lifetime, and its color showing its time of death, ranging from bright green (at time step 0, early in the simulation) to red (at time step 10^6, when the simulation approaches one million iterations).

Such heterogeneity may suppress swarming, but the evolved signaling helps the population form and maintain swarms. The simulations do not exhibit strong selection pressures towards specific behavior apart from the use of signaling. Even without high homogeneity in the population, signaling alone allows for interaction dynamics sufficient to form swarms, which in turn proves beneficial in improving fitness.

These results represent an improvement on previous models using hard-coded rules to simulate swarming behavior, as the behaviors here are evolved from very simple conditions. Our model also does not rely on any explicit information from leaders, as previously used in parts of the literature [8, 44]: it does not impose any leader-follower relationship beforehand, simply letting the leader-follower dynamics emerge and self-organize. In spite of being theoretical, the swarming model presented in this article offers a simple, general approach to the emergence of swarming behavior previously approached via the boids rules. This simulation improves on previous work in that agents naturally switch between leadership and followership by exchanging information over a very limited channel of communication. Finally, our results also show the advantage of swarming for resource finding: it is only through swarming, enabled by signaling behavior, that agents are able to reach and remain around the goal areas.

In terms of cooperation, this model exemplifies a case of multilevel selection theory [50, 52], which models the layers of competition and evolution within an ecological system. Our system shows the emergence of different levels that function cohesively to maximize reproductive success. The fitness value of the group-level dynamics outweighs the competitive costs, resulting in individuals constantly and nontrivially innovating in their ways of cooperating, creating behaviors that are not centrally coded for.

This study shows swarming dynamics emerging from a communication system between agents, immersed in a simulated environment with a spatial distribution of energy resources. The concurrent evolution scheme, running at the same time as the simulation itself, led to decentralized leader-follower interactions, allowing for collective motion patterns that in turn significantly improved the groups' fitnesses.

This model encodes the stochastic evolution of a controller that maps sensory input onto motor output, to optimize performance on a task, as framed broadly by Nolfi and Floreano [2000; 32]. We confront a well-known difficulty [38]: simulations typically hit a so-called “wall of complexity,” solving trivial problems easily but failing to make the jump to difficult ones. If we accept the no-free-lunch argument of Wolpert and Macready [1997; 56], that no optimization algorithm is superior to all others at all times, it is natural that the more specific the algorithm, the more likely it is to fail on new problems.

Our results suggest that novelty can be produced by the asynchronous evolution of a heterogeneous community of agents, which through their mixture of strategies may achieve open-ended, uninformed learning. The heterogeneity present in the model also offers an extension to the advantages of particle swarm optimization (PSO) [10]: while PSO optimizes a single objective function, each agent in our swarm effectively runs its own function, and these are combined into a swarm behavior. Although these results suggest open-endedness, it is worth noting that we do not prove the phenomenon is truly open-ended, which may require the emergence of ever-complexifying communication, or an uninterrupted sequence of evolutionary innovations.

The information flows were a focus of the original work [55]. From these flows, one can notice three main bottlenecks. The evolutionary computation contains a bottleneck effect, as a result of the selection based on the agents' performance on the task. Another bottleneck can be found between the sensory inputs of each agent and its motor outputs, as the neural controller acts as a filter for the information. The agents' signaling also naturally produces a bottleneck effect, as the information transmitted from agent to agent is constrained by the physical communication bandwidth. The combination of these three bottlenecks allows for relevant information to be filtered into the swarm, which is able to learn certain behaviors (see also next subsection).

4.2 Study 2: OEE via Cooperative Flocking

The evolution of cooperation is studied in game theory, and extensions have been made to include spatial dimensions. This problem is often tackled by using simple models, such as considering interactions to be a game of prisoner's dilemma (PD).

As in the previous study, this one presents a 3D simulation of agents with asynchronous neuroevolution of controllers (see Figure 5). In a separate study, we examined a variation of the model with a distinct fitness function, based this time on the agents playing a spatial version of the prisoner's dilemma [53]. We study the effect of movement control on optimal strategies, and show that cooperators rapidly join into static clusters, creating favorable niches for fast replication. It is also noted that, while remaining inside those clusters, cooperators keep moving faster than defectors. The system dynamics are analyzed further to explain the stability of this behavior.

Figure 5. 

Graphical representation of the world in a simulation. Each agent is represented as an arrow indicating its current direction. The color of an agent indicates its current action, either cooperation (blue) or defection (red). Note the cluster of cooperators being invaded by defectors.


This work presents, in an even more explicit fashion than the previous study, a model aimed at showing emergent levels of selection for cooperative behavior [52]. At every time step, agents play an N-player version of the prisoner's dilemma with their surroundings, meaning that they make a single decision that affects all agents around them. They receive reward and/or punishment based on the number of cooperators around them. Their decision is one of the outputs of their neural network. Effectively, the payoff matrix we used is an extension of Chiong and Kirley's [2012; 5], to which we added distance in order to take spatial continuity into account.
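A minimal sketch of such an N-player payoff follows; the payoff values R, S, T, P and the 1/(1 + d) distance weighting are illustrative assumptions, not the exact extension of Chiong and Kirley's matrix used in the study.

```python
def pd_payoff(my_action, neighbors, R=3.0, S=0.0, T=5.0, P=1.0):
    """N-player prisoner's dilemma: one decision played against all neighbors.

    my_action: 1 = cooperate, 0 = defect
    neighbors: list of (action, distance) pairs
    Each interaction is discounted by 1/(1 + distance) for spatial continuity.
    """
    total = 0.0
    for action, dist in neighbors:
        weight = 1.0 / (1.0 + dist)
        if my_action == 1:
            total += (R if action == 1 else S) * weight   # cooperator's row
        else:
            total += (T if action == 1 else P) * weight   # defector's row
    return total

# Cooperating pays off inside a cluster of cooperators...
among_cooperators = pd_payoff(1, [(1, 0.5), (1, 1.0)])
# ...but not when surrounded by defectors.
among_defectors = pd_payoff(1, [(0, 0.5), (0, 1.0)])
```

The single decision affecting all neighbors at once is what makes the game N-player rather than a set of pairwise matches.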

Based on the outcome of the match, agents can choose a new direction, which is similar to leaving the group in the walkaway strategy [2], the main difference being that, in our case, it is also possible for groups to split. It is also similar in another aspect: There is a cost to leaving a group, as a lone agent may need time to meet others.

At the beginning of each run, the environment is seeded with random agents. Since all weights in their neural networks are set at random, roughly half of the agents initially choose to cooperate while the other half choose to defect. This leads to a fast extinction of cooperators during the first approximately 50,000 time steps, until a group emerges that is strong enough to survive. A second phase follows, in which cooperators quickly increase in number due to the autocatalytic nature of this strategy. A third phase eventually occurs, in which defectors invade the cluster, followed either by the survival of the cluster, thanks to cooperators running away, or by a reboot of the cycle. In the case of survival, oscillations in the proportion of cooperators can be observed. However, this phenomenon is averaged away over multiple runs, since the period and phase of the oscillations are not correlated from one experiment to the other. Were a defector to appear near a cluster of cooperators, the cluster would react by “reproducing away.” However, the chance of being overtaken by the defectors is much higher than in the dynamic case.

From this three-dimensional model of agents playing the prisoner's dilemma, the first result is that cooperators, when present, quickly evolve to form clusters, as clustering represents a favorable pattern. The clustering behavior can be interpreted as a degenerate version of the simulations presented above, since the cooperating agents present the same capacities of information exchange as in that model. We note that this solution evolves over a longer time scale, as it is not always viable locally, depending on the distribution and behavioral thresholds of defectors. While the clustering itself can be expected, it is interesting to observe that the cooperators' overall movement rate is still higher than that of defectors. This is all the more surprising in that those clusters do not seem to move fast. Instead, analysis shows that cooperators are moving quickly inside the cluster, which may be a way to adapt to an aggressive environment.

In addition, comparison with the static case showed that movement made the emergence of cooperators harder, but more stable in the long run. Since it is harder for defectors to overtake a cluster of cooperators, our systems often show a soft bistability, meaning that they will eventually switch from one state to the other. It is even possible to observe a sort of symbiosis, where cooperators are generating more energy than necessary, which is in turn used by peripheral defectors. In this case, replacement rates allow cooperators to stay ahead, keeping this small ecosystem stable. This cohesion among cooperators seems to be enhanced by signaling, even though signals might attract defectors. Additional investigation of the transfer entropy, for instance, could be a promising next step.

Another result is found in the choice of actions, generated by the neural networks without consideration of the past actions. We notice the emergence of a dynamical memory effect, which otherwise must be encoded in each agent, here emerging from the agents' motion in space.

The prisoner's dilemma has become a common model in evolutionary biology for studying how outcomes depend on the costs that characterize an ecosystem. Our model, with a fitness based on the results of such a game, showed the emergence of spatial coordination based on the exchange of signals between agents. The signals remained very simple, and the environment was fixed in time.

This model's evolutionary computation reached solutions composed of different parts, including soft bistable strategies, different radii of clusters, and the use of dynamical patterns to improve their fitness. The solutions were also distributed over different time scales. The communication between agents also allowed them to converge on these behaviors more quickly. These elements refer back to the three C's mentioned in Section 2, for the discovery of novel solutions to a simple PD game.

Lastly, we note that many different neural architectures may coexist, as only a part of the neural architecture is used to implement flocking. This neural heterogeneity is something we would like to emphasize in the context of OEE. Additionally, communication acts as a filter on which neural architectures can persist, which potentially explains the heterogeneity observed in a community (i.e., agents can stay in a community as long as they can communicate with each other). Communication may therefore indirectly help to preserve the heterogeneity.

4.3 Study 3: OEE via Large-Scale Swarms

Studying flocking models can also lead to the emergence of OEE, by focusing on emergent phenomena such as macroscopic layers of patterns and structures that appear as a result of cooperative phenomena between autonomously behaving elements. A group of elements creates a self-organizing structure, which in turn governs the individual micro rules and creates a new macro structure. This consecutive micro–macro recurrent self-organization is identified as an emergent phenomenon.

Here, we describe the contribution of the same swarming simulation, scaled up, and demonstrate the effect of size on the emergence of open-endedness [9]. We start by presenting a degenerate version of that model, which shows large-scale dynamics in the less computationally costly case of agents that don't preserve any internal state other than position and velocity [18].

Starting with this simpler stateless model, we observe a noticeable change when the total number of boids increases from 2,048 to 524,288 while the density is kept constant (Figure 6). In order to compute large swarming behavior, we parallelized the computational steps using the method of general-purpose computing on graphics processing units (GPGPU). The next step was to extend it to stateful agents.
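The stateless model follows Reynolds-style boids rules (separation, alignment, cohesion). The sketch below shows a 2D serial version for clarity (the study is 3D and GPGPU-parallelized); the speed bounds follow the values given in the Figure 6 caption, while the interaction radius and rule weights are illustrative assumptions.

```python
import math

def boids_step(pos, vel, r=1.0, w_sep=0.05, w_ali=0.05, w_coh=0.01,
               vmin=0.001, vmax=0.005):
    """One synchronous update of stateless boids (separation, alignment,
    cohesion), with speed clamped to [vmin, vmax]."""
    new_vel = []
    for i, (p, v) in enumerate(zip(pos, vel)):
        sep = [0.0, 0.0]; ali = [0.0, 0.0]; coh = [0.0, 0.0]; n = 0
        for j, (q, u) in enumerate(zip(pos, vel)):
            if i != j and math.dist(p, q) < r:
                n += 1
                sep[0] += p[0] - q[0]; sep[1] += p[1] - q[1]   # steer away
                ali[0] += u[0]; ali[1] += u[1]                 # match heading
                coh[0] += q[0]; coh[1] += q[1]                 # seek center
        vx, vy = v
        if n:
            vx += w_sep * sep[0] + w_ali * (ali[0] / n - v[0]) + w_coh * (coh[0] / n - p[0])
            vy += w_sep * sep[1] + w_ali * (ali[1] / n - v[1]) + w_coh * (coh[1] / n - p[1])
        speed = math.hypot(vx, vy) or vmin                     # avoid div by zero
        clamped = max(vmin, min(vmax, speed))
        new_vel.append((vx / speed * clamped, vy / speed * clamped))
    new_pos = [(p[0] + v[0], p[1] + v[1]) for p, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

Since every boid's update depends only on its current neighborhood, the inner loop parallelizes naturally over agents, which is what the GPGPU implementation exploits.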

Figure 6. 

Visualization of swarming behavior, simulated by a large scale stateless boids model [18]. The total number of boids in each panel is (a) 2,048, (b) 16,384, (c) 131,072 and (d) 524,288, respectively. Some flocks are composed of a very large number of boids with narrow filament patterns. The initial velocity of each boid is set at random, and the density of the total number of boids is kept constant at 16,384 (number per cubic unit). The minimum and the maximum speed are set at 0.001 and 0.005 unit per step, respectively.


We explore the effect of reaching a critical mass, in particular on the efficiency of the swarm's foraging behavior. We study the problem of maintaining the swarm's resilience to noisy signals from the environment. To do so, we look at stateful boids, that is, moving agents controlled by neural network controllers, which we evolve through time in order to explore further the emergence of swarming, as in the previous two subsections. However, we now ground our model in a more realistic setting where information about the resource location is made partly accessible to the agents, but only through a highly noisy channel. The swarming is shown to critically improve the efficiency of group foraging, allowing agents to reach resource areas much more easily, as the group dynamics correct individual mistakes. As high levels of noise may make the emergence of collective behavior depend on a critical mass of agents, it is crucial in simulation to attain sufficient computing power to allow for the evolution of the whole set of dynamics.

Because this type of simulation, based on neural controllers and information exchanges between agents, is computationally intensive, it is critical to optimize the implementation in order to be able to analyze critical masses of individuals. In this work, we address implementation challenges by showing how to apply techniques from astrophysics known as treecodes to compute the signal propagation, and how to parallelize the computation efficiently for multicore architectures. The results confirm that signal-driven swarming improves foraging performance. The agents overcome their noisy individual channels by forming dynamic swarms. The measured fitness is found to depend on the population size, which suggests that large-scale swarms may behave qualitatively differently.

The minimalist study presented in this article, together with crucial computational optimizations, opens the way to future research on the emergence of signal-based swarming as an efficient collective strategy for uninformed search. Future work will focus on further information analysis of the swarming phenomenon and how swarm sizes can affect foraging efficiency.

In this model, we specifically focus on the addition of noise to the food detection sense that the agents possess, and hypothesize that it can be overcome by the emergence of a collective behavior involving sufficiently large groups of agents.

Many systems, from atomic piles to swarms, seem to work towards preserving a precarious balance right at their critical point [3]. An atomic pile is said to be critical when a chain reaction of nuclear fission becomes self-sustaining. A minimal amount of fissionable material has to be compacted together to keep the dynamics from fading away. The notion of critical mass as a crucial factor in collective behavior has been studied in various areas of application [27, 33].

Similarly, the size of the formed groups of agents may be crucial in order to reach a critical mass in swarms, enough to overcome very noisy environments. Part of the focus will therefore be on the optimization of the computer simulation itself, as large-scale swarms may differ qualitatively in behavior from ordinary-sized ones.

The model extends the original setup described before, which proposed an asynchronous simulation evolving a swarming behavior based on signaling between individuals. However, unlike the original model, in which the individuals do not directly perceive either the food patches or the other agents around them, here we give every agent a sense of vision, allowing it to detect nearby resources. We then add a high level of noise to make this information highly imperfect.

We used an agent-based simulation to show how signal-driven swarming, emerging in an evolutionary simulation such as in Witkowski and Ikegami [2014; 54], allows agents to overcome noise in information channels and improve their performance in a resource-finding task. Our first contribution is the very introduction of noise, demonstrating that the algorithm performs well even when noise fills the agents' input channels almost to their full capacity. The individuals, by means of a swarming behavior aided by basic signaling, manage to collectively filter out the noise present in their sensory inputs, and so reach the food sites.
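The collective error-correction logic can be conveyed with a toy statistical illustration: if each agent's estimate carries independent noise, pooling estimates across the group shrinks the error roughly as 1/√N. The Gaussian channel and plain averaging below are assumptions for illustration, not the evolved mechanism itself.

```python
import random

def noisy_estimates(true_direction, n_agents, noise, rng):
    """Each agent senses the direction to a resource through a noisy channel."""
    return [true_direction + rng.gauss(0, noise) for _ in range(n_agents)]

def pooled_estimate(estimates):
    """Swarm-level estimate: share signals and follow the average, so the
    independent noise terms cancel roughly as 1/sqrt(N)."""
    return sum(estimates) / len(estimates)

rng = random.Random(42)
true_dir = 1.0
err_small = [abs(pooled_estimate(noisy_estimates(true_dir, 5, 2.0, rng)) - true_dir)
             for _ in range(200)]
err_large = [abs(pooled_estimate(noisy_estimates(true_dir, 500, 2.0, rng)) - true_dir)
             for _ in range(200)]
```

With 100 times more agents, the pooled error shrinks by roughly a factor of ten, hinting at why a critical mass of agents matters under heavy noise.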

We proposed a hierarchical method based on the Barnes-Hut simulation in computational physics and its parallel implementation. We achieved a performance improvement of a few orders of magnitude over the previous implementation [54]. This implementation is crucial to achieving the simulation of a sufficient number of agents to test for large-scale swarms (i.e., involving a very large number of individuals), which have been suggested to generate qualitatively different dynamics.
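The idea behind the treecode can be conveyed with a single-level grid (the full Barnes-Hut method uses a recursive octree); the inverse-square attenuation and cell size here are illustrative assumptions, not the published propagation model.

```python
import math
from collections import defaultdict

def exact_signal(receiver, sources):
    """Direct O(N) sum of distance-attenuated signals from every source."""
    return sum(s / (1.0 + math.dist(receiver, p) ** 2) for p, s in sources)

def grid_signal(receiver, sources, cell=2.0, near=2):
    """Treecode-style approximation: sum nearby sources exactly, and replace
    each distant cell by one aggregate source at its signal-weighted centroid."""
    cells = defaultdict(list)
    for p, s in sources:
        cells[(int(p[0] // cell), int(p[1] // cell))].append((p, s))
    rc = (int(receiver[0] // cell), int(receiver[1] // cell))
    total = 0.0
    for key, members in cells.items():
        if abs(key[0] - rc[0]) <= near and abs(key[1] - rc[1]) <= near:
            total += exact_signal(receiver, members)          # near field: exact
        else:                                                 # far field: lumped
            mass = sum(s for _, s in members)
            cx = sum(p[0] * s for p, s in members) / mass
            cy = sum(p[1] * s for p, s in members) / mass
            total += mass / (1.0 + math.dist(receiver, (cx, cy)) ** 2)
    return total

sources = [((float(x), float(y)), 1.0) for x in range(0, 12, 3) for y in range(0, 12, 3)]
exact = exact_signal((0.0, 0.0), sources)
approx = grid_signal((0.0, 0.0), sources)
```

Lumping far-field sources turns the per-receiver cost from O(N) toward O(log N) in the tree version, which is what makes simulations with very large agent counts tractable.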

The optimization of the fitness acquired by phenotypes uses efficient patterns of behavior (motion and signaling), which themselves are encoded in the weights of agents' neural networks. The real optimization therefore occurs at the higher level of the Darwinian-like process in the genotypic search space. Efficient genotypes are selected by the asynchronous genetic algorithm throughout a simulation run.
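This higher-level search can be sketched as an asynchronous replacement process; the tournament pairing, toy fitness function, and mutation scale below are illustrative assumptions, not the published operators.

```python
import random

def async_replacement(population, fitness, rng, sigma=0.05):
    """One asynchronous selection event: compare two random genotypes and
    overwrite the weaker with a mutated copy of the stronger, with no
    global generation barrier."""
    i, j = rng.sample(range(len(population)), 2)
    winner, loser = (i, j) if fitness(population[i]) >= fitness(population[j]) else (j, i)
    population[loser] = [w + rng.gauss(0, sigma) for w in population[winner]]

rng = random.Random(1)
pop = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
fit = lambda g: -sum(w * w for w in g)       # toy objective: favor small weights
start_best = max(fit(g) for g in pop)
for _ in range(500):
    async_replacement(pop, fit, rng)
end_best = max(fit(g) for g in pop)
```

Because replacement events happen one at a time rather than in synchronized generations, selection is distributed over space and time, as in the simulations described above.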

We observed that signaling improves the foraging of agents (see Figure 7 for plots from Drozd et al. [9] of efficiency, or fitness, against simulation time), using the average resource retrieved per agent per iteration as a measure of the population's fitness. Without noise, the agents using signaling are less efficient than their silent counterparts, which we found is due not to the cost of signaling, but rather to the excess noise brought by the signal inputs. The difference remains very small between signaling and non-signaling agents.

Figure 7. 

Agents' efficiency plot with and without signal, from a stateful (neural-network-controlled) boids model in the original work [9], with mean (central line) and standard deviation range (area plot) over 10 runs. The plots correspond to noiseless (top), constant noise 20 (middle), and constant noise 40 (bottom), respectively.


We find, however, that above a certain noise level, the cost of signaling is fully compensated by its benefits, as it helps the foraging of agents. The average fitness becomes even higher as we increase the noise level, which suggests that the signaling behavior increases in efficiency at high levels of noise, allowing the agents to overcome imperfect information by forming swarms.

We also observe scale effects in the influence of the signal propagation on the average fitness of the population. For a smaller population, only middle values of signal propagation seem to bring about fitter behaviors, whereas this is not the case for larger population sizes. On the contrary, larger populations are most efficient for lower levels of signal propagation. This may suggest a phase transition in the agents' behavior for large populations, involving the way the swarming itself helps foraging.

Understanding criticality seems strongly required for a broad, fundamental theory of the physics of life as it could be, which still lacks a clear description of how life can arise and maintain itself in complex systems. The effects of criticality have recently been investigated further by one of the authors, using a similar setup [21]. The results showed exploratory dynamics at criticality in the evolution of foraging swarms, and the tension between local- and global-scale adaptation.

Through this work, by increasing the number of simulated boids that maintain their own states, we may introduce more than mere numbers. By allowing for many information exchanges between computing agents, the simulation can effectively take leaps of creativity. In Stanley and Lehman's 2015 book [43], objective functions are presented as a distraction, as novelty and diversity might not be achieved by hard-coding the arrival point. Here, in contrast, we have many evolvable objective functions cooperating to reach a solution, as a stepping stone in the search for novelty.

By letting the swarm grow, we see the emergence of collective intelligence, which corresponds to the invention of signal-based error correction. By exchanging signals, the agents are able to correct the error induced by the noise we injected in the simulation. As in the large-scale boids simulation, the invention happens after a critical mass of agents is reached, suggesting similar dynamics with stateful agents.

5 Discussion

OEE comprises the pervasive innovative processes found in human technologies and biological evolution, and we barely observe open-endedness outside these examples. Yet some artificial systems demonstrate close-to-OEE phenomena, which we have discussed earlier in this article.

Achieving real OEE remains an open challenge, and at this point all works in the literature fall short of that objective. Although that may be the case with the swarm models presented above, it was one of our goals to emphasize the importance of maintaining the evolvability of a system. In an adversarial game-theoretical setup, for example, reaching an evolutionarily stable strategy (ESS) or a local attractor may keep the system from inventing new solutions. In this situation, explicitly stopping the system from learning too much may allow the system to avoid being stuck in such attractors, and possibly to keep innovating forever.

In this article, we propose collective intelligence as a driving force towards open-ended evolution, suggesting that collective groups can develop the ability to be more innovative. Instead of aiming at optimizing one fixed objective function, a collective swarm of agents works with as many competing objectives as there are agents in the swarm.

Through information exchanges between a certain number of agents, these objectives, embodied in the agent's behaviors, can collaborate to implement a search for novelty. All agents contribute to the search in behavioral space, as one whole organization, by exploring the adjacent possible. Each novel discovery in the system, or emergent level of organization, can be reached from an adjacent state where the system was previously. The way one moves from one point to the next, which should retain information accumulated in the past, is constrained by the structure of the swarm, in a bottleneck effect.

We discuss several instances of bottlenecks in this article. One is a task or environmental condition that each agent must overcome. In the case of a foraging environment, organizing swarming turned out to be a critical step, constituting a major transition from non-swarming to swarming agents. Swarming behavior was obtained by organizing a hillside function in the neural controller. After the swarming behavior was achieved, other properties (e.g., individual patterns) started to evolve. For this task, then, swarming was a necessary behavior to organize, and it acted as a bottleneck for the entire evolutionary process. In other words, OEE emerges by setting up the right environmental condition.

In the case of a game-theoretic situation, such as Study 2, the communication system among all agents constituted a bottleneck to achieving mutual cooperation. With the emergence of niche construction, the door opens for regulating mechanisms, such as cooperation, reciprocal altruism, or social punishment, to be implemented. In this example, OEE, in the form of the invention of cooperation mechanisms, can only evolve as a secondary structure once the swarming structure is already established.

In the case of large swarm models, the bottleneck is twofold. One aspect is the scale itself, and the other is the CPU resource. We discussed the evolution of a swarm through increasing its size, showing there is a critical size where the different kinds of fluctuation dominate in larger flocks (direction fluctuation leading to density fluctuation). If such a transition occurs in a larger-size simulation (we expect it can happen for each 3–4-order-of-magnitude difference in sizes), we say that OEE is caused by increasing the size.

In addition to this point, 3D swarm models require huge computational power, and we need elaborate programming for large-scale systems. In Study 3, each boid has a list of neighbors, which is updated periodically. This speeds up the calculation of the distances from one boid to its neighboring boids. In Study 1, each boid can listen to the sound sent unidirectionally from the other boids, so that we do not have to calculate the exact distances. Real birds never measure the distance to other individuals, so measuring distances is an unnecessary bottleneck introduced by the computational model. Here OEE depends on new computational techniques to overcome this computational bottleneck.
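The neighbor-list idea can be sketched as follows; the safety margin and the rebuild policy are illustrative assumptions about how such a cache stays valid between periodic rebuilds.

```python
def build_neighbor_lists(pos, radius, margin=0.5):
    """Periodically rebuilt cache: store every boid within radius + margin,
    so the list stays valid while no boid has moved farther than margin/2."""
    r2 = (radius + margin) ** 2
    return [[j for j, q in enumerate(pos)
             if j != i and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= r2]
            for i, p in enumerate(pos)]

def neighbors_within(i, pos, radius, lists):
    """Per-step query touches only the cached candidates, not all N boids."""
    r2 = radius ** 2
    p = pos[i]
    return [j for j in lists[i]
            if (p[0] - pos[j][0]) ** 2 + (p[1] - pos[j][1]) ** 2 <= r2]

pos = [(0.0, 0.0), (0.4, 0.0), (3.0, 3.0)]
lists = build_neighbor_lists(pos, radius=1.0)
```

Between rebuilds, each per-step query is proportional to the local neighborhood size rather than to the whole population, which is where the speedup comes from.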

The computation of a swarm displays a bottleneck effect, in that the emergent properties of the swarm and its embodiment in a simulated environment may constrain the way the information (communication, lineage information) flows within the system, and the way relevant information (strategies, motion patterns) is progressively retained2 through time in its structure. Nevertheless, the simplicity of such information flows may be limiting; more complex information transfer protocols may need to emerge from bottlenecks in order to bootstrap OEE.

For open-endedness, bottlenecks are crucial in order to (perhaps counterintuitively) act against learning. We observe examples of such bottlenecks in systems like Picbreeder [39], where one must find a way to prevent the system from assuming that the current apparent goal is the ultimate goal, as this would preclude further innovations. Picbreeder-like systems present similarities to our signal-based swarms, as they have communication between many agents that filters information to let innovations come about.

As suggested in the beginning of the article, bottlenecks can be caused by different components: an explicit communication system, a concurrent evolutionary system, and a greater complexity. These three components are highlighted in the studies described above, and we propose them as the principal ones to create novelty and heterogeneity in solutions.

First, the communication between agents is shown to catalyze swarming and cooperation strategies. In previous work on turn-taking interaction between two agents equipped with neural networks [17], we noticed that democratic turn-taking gives rise, through evolution, to novel styles of motion. Accordingly, here, the local interactions between agents in a flock allow the swarm to take particular shapes (Study 1), invent an explicit cooperative protocol (Study 2), and implement a noise-canceling policy (Study 3). To reach OEE, perhaps more than mere signaling, higher complexity levels of language may need to emerge.

Second, the concurrent evolution algorithm essentially selects for meaningful information in behavioral space, by squeezing noisy behavior through a selective bottleneck. However, instead of using one unique objective function, the selection is distributed asynchronously in space and time. Differential time scales also help accelerate the learning, which should happen as fast as possible, while retaining the way to generate the best patterns found in the past. Lastly, once past the selection bottleneck, heterogeneity seems to increase considerably in genotypic space.

Third, in terms of complexity, given that the population size is large enough, with a consequently large number of degrees of freedom, we notice the swarm dynamics significantly change in various ways. The flock's surface curvature may vary for large or small flocks, and so may the attraction and repulsion induced by the exchange of different signals. The motor responses may be amplified, since the input signals may significantly increase, given a higher density of neighboring agents, as seen in Study 1. Similarly, smaller flocks may display a more ordered behavior, with the tradeoff however of being more sensitive to noise, since the critical mass is not reached to implement noise-canceling effects, as demonstrated in Study 3. Larger flocks can also be a source of individual behavioral differentiation, when a higher order of organization emerges. The key is not the size or the amount of new information, but rather the system promoting the invention of new coordination patterns within itself.

We have shown how collective intelligence has the ability to augment the creation of new and diverse solutions in a swarm, when given limited channels of communication, a concurrent evolution bottleneck, and a large number of constrained degrees of freedom. This comes as an inspiration for scientists: A good way to build an open-ended system, able to indefinitely discover new inventions, seems not to reside in centralized computation, but rather in distributed systems, composed of large collectives of communicating agents.

Acknowledgments

The authors would like to thank their collaborators who contributed to this work: Nathanael Aubert-Kato, Aleksandr Drozd, Yasuhiro Hashimoto, Norihiro Maruyama, Yoh-ichi Mototake, and Mizuki Oka.

Notes

1. At ALIFE 2018, in Tokyo.

2. A swarm can be shown to act as a collective memory, either explicitly (statefully) [55] or dynamically (statelessly) [7].

References

References
1
Ackley
,
D. H.
, &
Ackley
,
E. S.
(
2015
).
Artificial life programming in the robust-first attractor
. In
P.
Andrews
,
L.
Caves
,
R.
Doursat
,
S.
Hickinbotham
,
F.
Polack
,
S.
Stepney
,
T.
Taylor
, &
J.
Timmis
(Eds.),
Artificial Life Conference Proceedings 13
(pp.
554
561
).
Cambridge, MA
:
MIT Press
.
2
Aktipis
,
C.
(
2004
).
Know when to walk away: Contingent movement and the evolution of cooperation
.
Journal of Theoretical Biology
,
231
(
2
),
249
260
.
3
Bak
,
P.
(
2013
).
How nature works: The science of self-organized criticality
.
Berlin
:
Springer Science & Business Media
.
4
Bersini
,
H.
, &
Detours
,
V.
(
1994
).
Asynchrony induces stability in cellular automata based models
. In
R. A.
Brooks
&
P.
Maes
(Eds.),
Artificial life IV
(pp.
382
387
).
Cambridge, MA
:
MIT Press
.
5
Chiong
,
R.
, &
Kirley
,
M.
(
2012
).
Random mobility and the evolution of cooperation in spatial n-player iterated prisoner's dilemma games
.
Physica A: Statistical Mechanics and its Applications
,
391
(
15
),
3915
3923
.
6
Conrad
,
M.
, &
Pattee
,
H.
(
1970
).
Evolution experiments with an artificial ecosystem
.
Journal of Theoretical Biology
,
28
(
3
),
393
409
.
7
Couzin
,
I. D.
,
Krause
,
J.
,
James
,
R.
,
Ruxton
,
G. D.
, &
Franks
,
N. R.
(
2002
).
Collective memory and spatial sorting in animal groups
.
Journal of Theoretical Biology
,
218
(
1
),
1
11
.
8
Cucker
,
F.
, &
Huepe
,
C.
(
2008
).
Flocking with informed agents
.
Mathematics in Action
,
1
(
1
),
1
25
.
9
Drozd
,
A.
,
Witkowski
,
O.
,
Matsuoka
,
S.
, &
Ikegami
,
T.
(
2016
).
Critical mass in the emergence of collective intelligence: A parallelized simulation of swarms in noisy environments
.
Artificial Life and Robotics
,
21
(
3
),
317
323
.
10
Eberhart
,
R. C.
, &
Kennedy
,
J.
(
1995
).
A new optimizer using particle swarm theory
. In
Proceedings of the Sixth International Symposium on Micro Machine and Human Science
,
Vol. 1
(pp.
39
43
).
New York
:
Institute of Electrical and Electronics Engineers
.
11
Gardner
,
A.
, &
West
,
S. A.
(
2010
).
Greenbeards
.
Evolution
,
64
(
1
),
25
38
.
12
Goodfellow
,
I.
,
Pouget-Abadie
,
J.
,
Mirza
,
M.
,
Xu
,
B.
,
Warde-Farley
,
D.
,
Ozair
,
S.
,
Courville
,
A.
, &
Bengio
,
Y.
(
2014
).
Generative adversarial nets
. In
Z.
Ghahramani
,
M.
Welling
,
C.
Cortes
,
N. D.
Lawrence
, &
K. Q.
Weinberger
(Eds.),
Advances in Neural Information Processing Systems
(pp.
2672
2680
).
New York
:
Curran Associates, Inc
.
13
Greenbaum
,
B.
, &
Pargellis
,
A.
(
2016
).
Digital replicators emerge from a self-organizing prebiotic world
. In
C.
Gershenson
,
T.
Froese
,
J. M.
Siqueiros
,
W.
Aguilar
,
E. J.
Izquierdo
, &
H.
Sayama
(Eds.),
Proceedings of the European Conference on Artificial Life 13
(pp.
60
67
).
Cambridge, MA
:
MIT Press
.
14
Hauert
,
S.
,
Zufferey
,
J.-C.
, &
Floreano
,
D.
(
2009
).
Evolved swarming without positioning information: An application in aerial communication relay
.
Autonomous Robots
,
26
(
1
),
21
32
.
15
Hauser
,
M. D.
,
Chomsky
,
N.
, &
Fitch
,
W. T.
(
2002
).
The faculty of language: What is it, who has it, and how did it evolve?
Science
,
298
(
5598
),
1569
1579
.
16
Holland
,
J. H.
(
1999
).
Echoing emergence: Objectives, rough definitions, and speculations for echo-class models
. In
G. A.
Cowan
,
D.
Pines
, &
D.
Meltzer
(Eds.),
Complexity
(pp.
309
342
).
Cambridge, MA
:
Perseus Books
.
17
Iizuka
,
H.
, &
Ikegami
,
T.
(
2003
).
Adaptive coupling and intersubjectivity in simulated turn-taking behaviour
. In
W.
Banzhaf
,
J.
Ziegler
,
T.
Christaller
,
P.
Dittrich
, &
J. T.
Kim
(Eds.),
Advances in artificial life
(pp.
336
345
).
Berlin, Heidelberg
:
Springer
.
18
Ikegami
,
T.
,
Mototake
,
Y.-i.
,
Kobori
,
S.
,
Oka
,
M.
, &
Hashimoto
,
Y.
(
2017
).
Life as an emergent phenomenon: Studies from a large-scale boid simulation and web data
.
Philosophical Transactions of the Royal Society A
,
375
(
2109
),
20160351
.
19
Kauffman
,
S.
(
2003
).
The adjacent possible: A talk with Stuart Kauffman
. .
Philosophical Transactions of the Royal Society A
.
20. Kauffman, S. A. (2000). Investigations. Oxford, UK: Oxford University Press.
21. Khajehabdollahi, S., & Witkowski, O. (2018). Critical learning vs. evolution: Evolutionary simulation of a population of Ising-embodied neural networks. In T. Ikegami, N. Virgo, O. Witkowski, M. Oka, R. Suzuki, & H. Iizuka (Eds.), Artificial Life Conference Proceedings (pp. 47–54). Cambridge, MA: MIT Press.
22. Kirby, S. (2002). Natural language from artificial life. Artificial Life, 8(2), 185–215.
23. Lehman, J., & Stanley, K. O. (2011). Evolving a diversity of virtual creatures through novelty search and local competition. In N. Krasnogor (Ed.), Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, GECCO '11 (pp. 211–218). New York: ACM.
24. Lenski, R. E., Ofria, C., Pennock, R. T., & Adami, C. (2003). The evolutionary origin of complex features. Nature, 423(6936), 139.
25. Lin, H. W., Tegmark, M., & Rolnick, D. (2017). Why does deep and cheap learning work so well? Journal of Statistical Physics, 168(6), 1223–1247.
26. Malan, K. M., & Engelbrecht, A. P. (2013). A survey of techniques for characterising fitness landscapes and some possible ways forward. Information Sciences, 241, 148–163.
27. Marwell, G., & Oliver, P. (1993). The critical mass in collective action. Cambridge, UK: Cambridge University Press.
28. Maynard-Smith, J., & Szathmáry, E. (1997). The major transitions in evolution. Oxford, UK: Oxford University Press.
29. Mitri, S., Wischmann, S., Floreano, D., & Keller, L. (2013). Using robots to understand social behaviour. Biological Reviews, 88(1), 31–39.
30. Mototake, Y., & Ikegami, T. (2015). A simulation study of large scale swarms. In K. Naruse (Ed.), Proceedings of the SWARM Conference 2015 (pp. 446–450). Japan: University of Aizu.
31. Nolfi, S., Parisi, D., & Elman, J. L. (1994). Learning and evolution in neural networks. Adaptive Behavior, 3(1), 5–28.
32. Nolfi, S., & Floreano, D. (2000). Evolutionary robotics: The biology, intelligence, and technology of self-organizing machines. Cambridge, MA: MIT Press.
33. Oliver, P. E., & Marwell, G. (2001). Whatever happened to critical mass theory? A retrospective and assessment. Sociological Theory, 19(3), 292–311.
34. Olson, R. S., Hintze, A., Dyer, F. C., Knoester, D. B., & Adami, C. (2013). Predator confusion is sufficient to evolve swarming behaviour. Journal of the Royal Society Interface, 10(85), 20130305.
35. Prokopenko, M., Gerasimov, V., & Tanev, I. (2006). Evolving spatiotemporal coordination in a modular robotic system. In International Conference on Simulation of Adaptive Behavior (pp. 558–569). New York: Springer.
36. Rasmusen, E. (2006). Games and information: An introduction to game theory (4th ed.). Oxford, UK: Basil Blackwell.
37. Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. In M. C. Stone (Ed.), ACM SIGGRAPH Computer Graphics, Vol. 21 (pp. 25–34). New York: ACM.
38. Schmickl, T., Zahadat, P., & Hamann, H. (2016). Sooner than expected: Hitting the wall of complexity in evolution. arXiv preprint arXiv:1609.07722.
39. Secretan, J., Beato, N., D'Ambrosio, D. B., Rodriguez, A., Campbell, A., & Stanley, K. O. (2008). Picbreeder: Evolving pictures collaboratively online. In M. Czerwinski, A. Lund, & D. Tan (Eds.), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08 (pp. 1759–1768). New York: ACM.
40. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354.
41. Soros, L. B., & Stanley, K. O. (2014). Identifying necessary conditions for open-ended evolution through the artificial life world of Chromaria. In H. Sayama, J. Rieffel, S. Risi, R. Doursat, & H. Lipson (Eds.), Proceedings of the Fourteenth International Conference on the Simulation and Synthesis of Living Systems (Artificial Life 14) (pp. 793–800). Citeseer.
42. Standish, R. K. (2003). Open-ended artificial evolution. International Journal of Computational Intelligence and Applications, 3(2), 167–175.
43. Stanley, K. O., & Lehman, J. (2015). Why greatness cannot be planned: The myth of the objective. London: Springer.
44. Su, H., Wang, X., & Lin, Z. (2009). Flocking of multi-agents with a virtual leader. IEEE Transactions on Automatic Control, 54(2), 293–307.
45. Taylor, T., Bedau, M., Channon, A., Ackley, D., Banzhaf, W., Beslon, G., Dolson, E., Froese, T., Hickinbotham, S., Ikegami, T., McMullin, B., Packard, N., Rasmussen, S., Virgo, N., Agmon, E., Clark, E., McGregor, S., Ofria, C., Ropella, G., Spector, L., Stanley, K. O., Stanton, A., Timperley, C., Vostinar, A., & Wiser, M. (2016). Open-ended evolution: Perspectives from the OEE workshop in York. Artificial Life, 22(3), 408–423.
46. Tishby, N., Pereira, F. C., & Bialek, W. (1999). The information bottleneck method. In B. Hajek & R. S. Sreenivas (Eds.), Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing (pp. 368–377). Urbana-Champaign, IL: University of Illinois.
47. Tishby, N., & Polani, D. (2011). Information theory of decisions and actions. In V. Cutsuridis, A. Hussain, & J. G. Taylor (Eds.), Perception-action cycle (pp. 601–636). London: Springer.
48. Tishby, N., & Zaslavsky, N. (2015). Deep learning and the information bottleneck principle. In 2015 IEEE Information Theory Workshop (ITW) (pp. 1–5). New York: Curran Associates, Inc.
49. Toner, J., & Tu, Y. (1998). Flocks, herds, and schools: A quantitative theory of flocking. Physical Review E, 58(4), 4828.
50. Traulsen, A., & Nowak, M. A. (2006). Evolution of cooperation by multilevel selection. Proceedings of the National Academy of Sciences of the U.S.A., 103(29), 10952–10955.
51
von Neumann
,
J.
,
Burks
,
A. W.
, et al
(
1966
).
Theory of self-reproducing automata
.
IEEE Transactions on Neural Networks
,
5
(
1
),
3
14
.
52
Wilson
,
D. S.
, &
Sober
,
E.
(
1994
).
Reintroducing group selection to the human behavioral sciences
.
Behavioral and Brain Sciences
,
17
(
4
),
585
608
.
53
Witkowski
,
O.
, &
Aubert
,
N.
(
2014
).
Pseudo-static cooperators: Moving isn't always about going somewhere
. In
H.
Sayama
,
J.
Rieffel
,
S.
Risi
,
R.
Doursat
, &
H.
Lipson
(Eds.),
Proceedings of the Fourteenth International Conference on the Simulation and Synthesis of Living Systems (Artificial Life 14)
,
Vol. 14
(pp.
392
397
).
Cambridge, MA
:
MIT Press
.
54
Witkowski
,
O.
, &
Ikegami
,
T.
(
2014
).
Asynchronous evolution: Emergence of signal-based swarming
. In
H.
Sayama
,
J.
Rieffel
,
S.
Risi
,
R.
Doursat
, &
H.
Lipson
(Eds.),
Proceedings of the Fourteenth International Conference on the Simulation and Synthesis of Living Systems (Artificial Life 14)
,
Vol. 14
(pp.
302
309
).
Cambridge, MA
:
MIT Press
.
55
Witkowski
,
O.
, &
Ikegami
,
T.
(
2016
).
Emergence of swarming behavior: Foraging agents evolve collective motion based on signaling
.
PloS ONE
,
11
(
4
),
e0152756
.
56
Wolpert
,
D. H.
, &
Macready
,
W. G.
(
1997
).
No free lunch theorems for optimization
.
IEEE Transactions on Evolutionary Computation
,
1
(
1
),
67
82
.