Abstract
Cooperation among individuals has been key to sustaining societies. However, natural selection favors defection over cooperation. Cooperation can be favored when the mobility of individuals allows cooperators to form a cluster (or group). Mobility patterns of animals sometimes follow a Lévy flight. A Lévy flight is a kind of random walk composed of many small movements interspersed with a few large ones. The role of Lévy flights in cooperation has been studied by Antonioni and Tomassini, who showed that Lévy flights promoted cooperation when combined with conditional movements triggered by neighboring defectors. However, the optimal condition regarding neighboring defectors, and how that condition changes with the intensity of Lévy flights, are still unclear. Here, we developed an agent-based model on a square lattice where agents perform Lévy flights depending on the fraction of neighboring defectors. We systematically studied the relationships among three factors for cooperation: sensitivity to defectors, the intensity of Lévy flights, and population density. Results of evolutionary simulations showed that moderate sensitivity promoted cooperation the most. We then found that the shortest movements were best for cooperation when the sensitivity to defectors was high. In contrast, when the sensitivity was low, longer movements were best for cooperation. Thus, Lévy flights, which balance short and long jumps, promoted cooperation at any sensitivity; this was confirmed by evolutionary simulations. Finally, as the population density became larger, higher sensitivity was more beneficial for cooperation to evolve. Our study highlights that Lévy flights are an optimal searching strategy not only for foraging but also for constructing cooperative relationships with others.
1 Introduction
Cooperative behavior is necessary to sustain human and animal societies (Clutton-Brock, 2009; Dugatkin, 1997; Rand & Nowak, 2013). However, previous studies of evolutionary games show that cooperation is not favored by natural selection compared to defection (Nowak, 2006a, b). Therefore, it has been suggested that special mechanisms are needed for cooperation to evolve (Nowak, 2006b). Among the proposed mechanisms, spatial (or network) reciprocity has often been studied (Nowak & May, 1992; Ohtsuki et al., 2006; Perc & Szolnoki, 2010; Perc et al., 2013; Roca et al., 2009; Santos & Pacheco, 2005; Szabó & Fáth, 2007). In those traditional models, individuals do not move in the spatial environment because all spaces are occupied; that is, static networks were used for interactions among individuals. In contrast, many biological organisms have the ability to move. Mobility is a fundamental trait of animals and humans: animals forage for food, and people often move when they interact. Recently, spatial reciprocity with mobility has attracted great attention, and various theoretical models have been developed. Earlier theoretical studies assumed random (unconditional) movements (Enquist & Leimar, 1993; Smaldino, 2013; Traulsen & Nowak, 2006; Vainstein et al., 2007). They revealed that the evolution of cooperation is generally hindered by mobility because it destroys cooperative clusters and leads the population to a well-mixed state. On the other hand, they also showed that cooperation is sustained when mobility is low because modest movement contributes to expanding the regions of cooperative clusters. Recent studies showed that cooperation is enhanced even under random movements at low mobility if the update rules are properly devised (Antonioni et al., 2014; Sicardi et al., 2009).
The situation drastically changes if conditional movements are assumed (Aktipis, 2004; Buesser et al., 2013; Helbing & Yu, 2009; Ichinose et al., 2013; Jiang et al., 2010; Roca & Helbing, 2011; Tomassini & Antonioni, 2015). In this case, each agent monitors the current conditions within its local neighborhood and moves to another location if those conditions are found to be undesirable. In many cases, “undesirable” means that there are many defectors within the local neighborhood. This contingent movement enhances the evolution of cooperation even when the mobility rate is high, because cooperative clusters tend to be created by keeping a distance from defectors.
Among those studies, Tomassini and Antonioni (2015) focused on a special mobility type called a Lévy flight. A Lévy flight is a kind of random walk characterized by many small movements interspersed with a few big ones; more formally, the movement distance follows a power-law distribution. It has been shown that some animal species use Lévy flights when foraging (Sims et al., 2012; Viswanathan et al., 1996, 1999). When resources are randomly distributed and there is no information on their locations, a Lévy-flight search pattern is optimal (Lomholt et al., 2008). Another study shows that humans also use Lévy flights (Brockmann et al., 2006).
Tomassini and Antonioni (2015) studied the evolution of cooperation in spatial games where agents perform Lévy flights. In their model, they assumed two types of conditions where Lévy flights are performed by agents: (1) Agents always perform Lévy flights, and (2) agents perform Lévy flights only when more than half of their neighbors are defectors. They showed that cooperation evolved only in the latter case.
Motivated by this study, we focus on the evolution of cooperation among mobile agents that perform Lévy flights in spatial games. Tomassini and Antonioni’s model was somewhat extreme in the sense that it considered only two conditions for Lévy flights. Here, we consider a continuous range of sensitivity to the presence of defectors to identify the optimal sensitivity for the evolution of cooperation in spatial games. From another perspective, it was unknown whether the power-law characteristic of movement itself promoted cooperation: big jumps that are not due to Lévy flights may lead to the evolution of cooperation, or short jumps may be better than such big jumps. To test this, we also make the intensity of Lévy flights adjustable in the model. As special cases, the model also realizes uniform movements, where agents move to any cell with equal probability regardless of distance, and constant movements, where agents always move a fixed distance. Through this extension, we study how the intensity of Lévy flights affects cooperation. Finally, we reveal how the sensitivity that yields the optimal cooperation changes depending on the population density.
2 Model
We previously developed an agent-based model of the evolution of cooperation on a square lattice where the sensitivity to neighboring defectors in Lévy flights is adjusted by step functions (Miyagawa et al., 2020). Here, we extend it so that the intensity of Lévy flights can also be adjusted. First, agents are randomly distributed on an L × L lattice. The density of agents is given by ρ; thus, the number of agents is N = L²ρ. At the beginning of a simulation, half of the agents are cooperators and the other half are defectors. Then, the following process (a code sketch of the whole procedure is given at the end of this section) is repeated until the specified number of time steps (t_end = 500) is reached.
1. One agent is randomly selected from the whole population. (This agent may be selected multiple times within one time step because we use an asynchronous update scheme.)
2. The agent (located at the center in Figure 1(b)) plays one of four games (Figure 1(a)) with its neighbors and obtains a payoff. The neighboring agents (located in the light gray area in Figure 1(b)) also play the game with their neighbors and obtain payoffs. The details of these games are described below.
3. The agent imitates the strategy of the neighbor that obtained the highest payoff within the neighborhood, including itself. If two or more agents share the highest payoff, the agent randomly picks one of them and imitates its strategy.
4. The agent is unsatisfied when its neighbors are defectors. If the fraction of neighboring defectors is equal to or greater than a threshold value (the sensitivity, s), the agent performs a Lévy flight to another cell, provided that the destination cell is empty. Otherwise, the agent does not move.
Steps 1-4 above are repeated N times, which is regarded as one time step (t).
In Step 2, the agent and its opponent play one of four common two-person, two-strategy, symmetric games (Figure 1(a)). The two strategies are cooperation (C) and defection (D). If both cooperate, they receive R; if one cooperates and the other defects, the former obtains S and the latter obtains T; if both defect, they receive P. The games are classified into the following four types depending on the payoff relationships: the Harmony Game (HG), R > T > P and R > S > P (Licht, 1999); the Stag Hunt (SH), R > T > P > S; the Prisoner’s Dilemma (PD), T > R > P > S; and the Hawk-Dove game (HD), T > R > S > P. In the PD, cooperation is the strategy that benefits others by paying a cost, while defection is the strategy that enjoys the benefits provided by cooperators without paying any cost.
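For reference, here is a minimal sketch of this classification, assuming the fixed (R, P) = (1, 0) used in this article and ignoring boundary cases; the function name classify is ours.

```python
# Classify a point (S, T) of the T-S plane into one of the four games,
# assuming (R, P) = (1, 0) and ignoring boundary cases.
def classify(S, T, R=1.0, P=0.0):
    if T > R and S < P:
        return 'PD'   # Prisoner's Dilemma: T > R > P > S
    if T > R and S > P:
        return 'HD'   # Hawk-Dove:          T > R > S > P
    if T < R and S < P:
        return 'SH'   # Stag Hunt:          R > T > P > S
    return 'HG'       # Harmony Game:       R > T and S > P

print(classify(-0.4, 1.4))   # -> 'PD', the setting examined in Section 3.1
```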
The jump distance of Lévy flights, x, is drawn from a power-law distribution P(x) = Cx^(−α), where C is a normalization constant such that the probabilities sum to one, i.e., Σ_{x=1}^{⌊L/2⌋} Cx^(−α) = 1. Note that, as we use periodic boundary conditions, if the jump of an agent crosses a boundary, the agent re-enters from the opposite boundary. Thus, the maximum jump length is ⌊L/2⌋ rather than L; if a jump length x is larger than ⌊L/2⌋, the effective length is L − x. The probability distribution P(x) is affected by this spatial periodicity, but only very slightly, because the probabilities of such long-range jumps are very small. We systematically varied α in the range 0 ≤ α ≤ 10 in the simulations. The case α = 0 can be considered a special case in which an agent moves to another cell with equal probability regardless of distance; in other words, the agent moves according to a uniform distribution. We therefore call this special case a uniform movement hereafter. Actual observations suggest that animals use 1 ≤ α ≤ 3. As control experiments, we also considered contingent jumps of fixed distances, with settings P(x) = 1 for x = 1, 2, 3, 5, or 10, where agents always jump that fixed distance.
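As a small numerical check of this normalization, the following sketch assumes jump distances 1 ≤ x ≤ ⌊L/2⌋ and neglects the slight periodicity correction mentioned above; the helper name jump_pmf is ours.

```python
# Normalized truncated power law P(x) = C * x**(-alpha) for x = 1..floor(L/2).
L = 50

def jump_pmf(alpha, max_x=L // 2):
    """Return [P(x=1), ..., P(x=max_x)]."""
    weights = [x ** -alpha for x in range(1, max_x + 1)]
    c = 1.0 / sum(weights)          # normalization constant C
    return [c * w for w in weights]

for alpha in (3.0, 5.0, 10.0):
    print(alpha, round(jump_pmf(alpha)[0], 4))
# -> about 0.832, 0.964, and 0.999 for alpha = 3, 5, 10: the probabilities of
#    the shortest jump (x = 1) quoted in Sections 3.1 and 3.2.
```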
We use L = 50 and ρ = 2/3 unless otherwise noted. For the game parameters, we fix (R, P) = (1, 0) while changing −1 ≤ S ≤ 1 and 0 ≤ T ≤ 2.
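Putting the model together, the following is a minimal, self-contained Python sketch of Steps 1-4 with the default parameters L = 50 and ρ = 2/3. It is an illustration under assumptions not fixed by the text rather than the authors' implementation: a Moore (8-cell) neighborhood, jumps along a randomly chosen lattice axis, a single move attempt per update, the PD payoffs (S, T) = (−0.4, 1.4) used later in Figure 2, and identifier names such as S_THRESH (the sensitivity s) chosen for this sketch.

```python
# Minimal sketch of the model (Steps 1-4); assumptions noted in the lead-in.
import random

L, RHO, ALPHA, S_THRESH, T_END = 50, 2 / 3, 3.0, 0.5, 500
R, S, T, P = 1.0, -0.4, 1.4, 0.0
PAYOFF = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

MAX_JUMP = L // 2                                         # periodic boundaries
WEIGHTS = [x ** -ALPHA for x in range(1, MAX_JUMP + 1)]   # P(x) proportional to x^(-alpha)

def neighbors(pos):
    """Moore neighborhood of a cell, with periodic boundaries."""
    i, j = pos
    return [((i + di) % L, (j + dj) % L)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def payoff(pos, grid):
    """Total payoff of the agent at pos against its occupied neighbors."""
    me = grid[pos]
    return sum(PAYOFF[me, grid[n]] for n in neighbors(pos) if n in grid)

# Initialization: N = L^2 * rho agents on random cells, half C and half D.
N = int(L * L * RHO)
cells = random.sample([(i, j) for i in range(L) for j in range(L)], N)
grid = {pos: ('C' if k < N // 2 else 'D') for k, pos in enumerate(cells)}

for t in range(T_END):
    for _ in range(N):                              # N updates = one time step
        pos = random.choice(list(grid))             # Step 1: random agent
        # Steps 2-3: play with neighbors and imitate the highest earner
        candidates = [pos] + [n for n in neighbors(pos) if n in grid]
        scores = [payoff(c, grid) for c in candidates]
        best = max(scores)
        grid[pos] = grid[random.choice(
            [c for c, sc in zip(candidates, scores) if sc == best])]
        # Step 4: Lévy flight if the defector fraction reaches the threshold
        occ = [n for n in neighbors(pos) if n in grid]
        if occ and sum(grid[n] == 'D' for n in occ) / len(occ) >= S_THRESH:
            dist = random.choices(range(1, MAX_JUMP + 1), weights=WEIGHTS)[0]
            di, dj = random.choice([(dist, 0), (-dist, 0), (0, dist), (0, -dist)])
            dest = ((pos[0] + di) % L, (pos[1] + dj) % L)
            if dest not in grid:                    # move only to an empty cell
                grid[dest] = grid.pop(pos)

print('final fraction of cooperators:',
      sum(v == 'C' for v in grid.values()) / N)
```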
3 Results
3.1 Lévy Flights Promote Cooperation
First, we focus on whether and how Lévy flights promote cooperation in spatial games. Figure 2 shows snapshots of the simulation for sensitivity s = 1/2, comparing three distinct cases: α = 3.0 (Lévy flight), α = 0 (uniform movement), and P(1) = 1 (fixed movement). Here, we set (S, T) = (−0.4, 1.4); thus, the game is the PD. In the figure, cooperators (defectors) are shown in blue (red). We also provide the whole simulation as a video.¹
Cooperators die out in this parameter setting when uniform movements (α = 0) are assumed, because cooperative clusters are not maintained: agents become well mixed by the uniform movements. In contrast, cooperators eventually spread in the form of clusters when Lévy flights (α = 3.0) or fixed movements (P(1) = 1) are assumed. Even in these two cases, cooperators almost go extinct in the early stages of the simulation, but a few clusters survive (from t = 0 to 50). If cooperators are clustered, they can obtain higher payoffs within those areas, so cooperative clusters can survive. Agents at the borderlines then tend to imitate the cooperative strategy because the payoff of an agent inside a cooperative cluster is high. Therefore, cooperative clusters gradually expand their regions.
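To illustrate why borderline agents imitate interior cooperators, here is a worked payoff comparison under the PD of Figure 2, with an assumed Moore (8-cell) neighborhood and illustrative neighbor compositions.

```python
# Worked comparison under the PD of Figure 2: (R, S, T, P) = (1, -0.4, 1.4, 0).
# Neighborhood size and neighbor compositions are illustrative assumptions.
R, S, T, P = 1.0, -0.4, 1.4, 0.0

interior_cooperator = 8 * R        # surrounded by 8 cooperators -> 8.0
border_defector = 3 * T + 5 * P    # exploits 3 cooperators      -> 4.2
print(interior_cooperator, border_defector)
# The interior cooperator outscores the border defector, so borderline agents
# tend to imitate cooperation and the cluster expands.
```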
We now focus on the difference between Lévy flights (α = 3.0) and fixed movements (P(1) = 1). The dynamics of strategy change with fixed movements (P(1) = 1) are a bit faster than with Lévy flights (α = 3.0). (See also the video to confirm this.) With fixed movements (P(1) = 1), agents only move to cells at a distance of one. In this case, the dynamics are faster because the conversion from cooperation to defection, or vice versa, settles down quickly. In contrast, with Lévy flights (α = 3.0), cooperators and defectors are sometimes mixed again by rare big jumps, which lengthens the time needed for the conversion. However, there is no difference between Lévy flights (α = 3.0) and fixed movements (P(1) = 1) in the final fraction of cooperators. With Lévy flights (α = 3.0), the probability that an agent moves to a cell at a distance of one is about 0.8320. Thus, we find that the shortest movements prominently contribute to maintaining cooperative clusters, and these movements help those clusters to gradually expand.
Next, we show how cooperation evolved in the whole TS plane when s = 0, 1/2, and 1. Figure 3 shows the average fraction of cooperators, denoted by C, at the final step of the simulations (tend = 500). Here, α = 3.0, α = 0, and P(1) = 1 are compared again.
For all cases, cooperation evolved when the games were the HG and the SH because mutual cooperation (R) is most beneficial. In contrast, it was hard for cooperation to evolve when the games were the PD and the HD. In those two games, unilateral defection (T) is most beneficial. Moreover, defection is the dominant strategy in the PD due to T > R and P > S. Thus, the PD was the worst case for cooperation to evolve. Comparing the results for the three s values, cooperation evolved best at the moderate sensitivity s = 1/2. At this point, we do not see any characteristic difference between the results for α = 3.0 and P(1) = 1. We discuss these results in detail and clarify the difference between α = 3.0 and P(1) = 1 in the next section by examining all α and s values.
3.2 Optimal Sensitivity for Cooperation
Here, we try to find out which s produces the optimal cooperation level when the intensity of Lévy flights α is varied. We changed s while aggregating the results over the whole range −1 ≤ S ≤ 1 and 0 ≤ T ≤ 2. Figure 4 shows the optimal sensitivities depending on α.
We first found that moderate sensitivity promoted cooperation the most in all cases (Figure 4). We explain the results by dividing them into two cases depending on the s values. When s ≥ 1/2, P(1) = 1, α = 5.0, and α = 10.0 were best for cooperation. Because the probability of a jump to a distance of one is 0.9644 for α = 5.0, 0.9990 for α = 10.0, and 1 for P(1) = 1, this implies that the shortest movements promote cooperation when s ≥ 1/2. When s is high, agents frequently move in the presence of neighboring defectors. In such cases, because big jumps destroy cooperative clusters, short movements are needed to maintain these clusters. On the other hand, when s ≤ 1/4, P(2) = 1, α = 2.5, and α = 3.0 were best for cooperation. These low s values mean that agents are patient with neighboring defectors. In this case, cooperative clusters are invaded by defectors until the clusters collapse. Thus, even if cooperators jump to close cells, it is difficult for them to re-form clusters because there are fewer cooperators nearby. In those situations, cooperative clusters may be reorganized in different places thanks to somewhat longer jumps (P(2) = 1) or rare big jumps (α = 2.5, 3.0), which therefore work better for cooperation.
Overall, these results suggest that Lévy flights are not particularly better than fixed movements. Is there then no advantage to rare big movements? From the results, Lévy flights appear to have one merit: Lévy flights with α = 2.5 and 3.0 sufficiently promote cooperation regardless of s. To investigate this conjecture, in the next section we conducted evolutionary simulations in which the intensity of Lévy flights is an evolvable trait, as is the strategy of agents.
3.3 Evolution of the Intensity of Lévy Flights
We conducted 30 evolutionary simulations in total. All of the results are provided as supplementary information (Figure S1). (Please find the supplemental material for this article online at https://doi.org/10.1162/artl_a_00382.) Here, we pick six results as examples, shown in Figure 5. Figure 5 shows the population averages of αi and βi, and the fraction of cooperators (C), over time under evolution. The results are classified into three cases. The first case was that Lévy flights evolved; we say that Lévy flights evolved when the average αi converged to a value lower than 5 and the average βi converged to a value lower than 5.5. This case happened in 9/30 = 30% of the runs. Panels (a), (b), and (c) in Figure 5 show that Lévy flights were achieved by evolution because α ≈ 2.87 and β ≈ 2.13 were obtained for (a), α ≈ 3.90 and β ≈ 1.77 for (b), and α ≈ 3.34 and β ≈ 1.59 for (c). The second case was that fixed movements evolved; we say that fixed movements evolved when the average αi converged to a value larger than 5 and the average βi converged to a value lower than 5.5. This case happened in 17/30 ≈ 57% of the runs. Panels (d), (e), and (f) show such cases: α ≈ 6.64 and β ≈ 2.32 in panel (d), α ≈ 6.77 and β ≈ 1.34 in panel (e), and α ≈ 9.13 and β ≈ 2.11 in panel (f). These results are close to the case of fixed movements P(1) = 1 or P(2) = 1 because, with such a large average αi, agents almost always jump to a cell at a distance of one or two. The last case was that evolution favored defection over cooperation, which happened in 2/30 ≈ 7% of the runs. In this case, all agents became defectors; because there is then no selection pressure on α and β, these values fluctuate by random drift (see Figure S1 in the supplementary information). In the remaining two simulations, evolution did not lead to convergence within 500 time steps.
In this way, when s is diverse, Lévy flights and cooperation often co-evolve. It is known that Lévy flights realize an optimal balance between exploitation and exploration (Murakami et al., 2019) and that they have functional advantages near a critical point (Abe, 2020). In our model, this optimal balance was effective with respect to the sensitivity to defectors, which is why Lévy flights evolved.
3.4 Change of Optimal Sensitivities Depending on Densities
Finally, we focus on how cooperation evolves depending on the density ρ. Lévy flights (α = 3.0), uniform movements (α = 0.0), and fixed movements (P(1) = 1) are compared. Figure 6 shows C when the sensitivity s and density ρ were changed. As seen in the figure, Lévy flights (α = 3.0) and fixed movements (P(1) = 1) promoted cooperation compared to uniform movements as a whole (the red regions in the first and third panels of Figure 6 are larger than those in the second panel). As explained above, this is because the shortest movements have the advantage of forming cooperative clusters.
Next, we examine the effect of density on cooperation. When the sensitivity was at its highest (s = 1), cooperation did not evolve at all in any of the cases. When the sensitivity was too low (s ≤ 1/4), cooperation did not evolve much. Thus, even when the density was changed, moderate sensitivities 1/4 ≤ s < 7/8 were best for cooperation to evolve.
Moreover, as the density became larger, higher sensitivity promoted more cooperation in all cases. In sparse situations (low densities), cooperative clusters tend to be maintained because they are surrounded by few defectors. In contrast, in dense situations, cooperative clusters tend to be destroyed by surrounding defectors. In that case, it is better for cooperators to escape from their current positions by moving to other cells. Thus, higher sensitivity can promote cooperation in dense situations.
4 Conclusion
We investigated how the sensitivity to defectors that triggers Lévy flights affects the evolution of cooperation. We constructed an agent-based model in which agents play games with their neighbors, update their strategies, and perform Lévy flights to move to other cells on a square lattice. Compared to the previous work, we tested various levels of sensitivity to defectors as the condition for Lévy flights and analyzed the relationship between sensitivity and density for cooperation. We also checked how the intensity of Lévy flights affected cooperation by changing α. The evolutionary simulations showed the following. First, cooperation was most promoted at moderate sensitivity. Second, the optimal movements differed depending on the sensitivity: when the sensitivity was high, the shortest movements, where agents jump to a cell at a distance of one, were best for cooperation; when the sensitivity was low, somewhat longer or rare big jumps were best. Our results implied that Lévy flights were not especially advantageous for promoting cooperation when the sensitivity was fixed. However, through evolutionary simulations of the intensity of Lévy flights, we found that agents evolved to use Lévy flights when the sensitivity was diverse. Finally, as the density increased, higher sensitivity to defectors was better for cooperation to evolve.
We previously suggested that big jumps promote cooperation in spatial games (Ichinose et al., 2013). However, in that study, agents tended to move to more distant cells as the fraction of neighboring defectors became higher; that is, jump distances were functions of the fraction of neighboring defectors. Therefore, a relatively high cognitive ability was required, because agents had to not only detect their neighboring environment but also keep their distance from defectors. In contrast to that previous work, the mobile agents in this study only require a simple cognitive ability, the detection of the neighboring environment, because jump distances do not depend on the fraction of defectors. Even in this case, we showed that cooperation sufficiently evolved. Thus, our study highlights the possibility of the evolution of cooperation in biological mobile organisms that have simple cognitive abilities. Including a tendency for agents to keep a distance from defectors in the current model is one direction for future work.
Lévy flights are known as an optimal search strategy when targets (e.g., food, mates, or habitats) are sparsely distributed in the environment. Here, we showed that Lévy flights are effective not only when targets are sparsely distributed but also, in the context of the evolution of cooperation, in the presence of sensitivity to defectors. To our knowledge, this is a new finding.
Acknowledgments
This work was supported by the HAYAO NAKAYAMA Foundation for Science & Technology and Culture.