Abstract

We describe the questions and discussions raised at the First Workshop on Social Learning and Cultural Evolution, held at the Artificial Life Conference 2016 in Cancún, Mexico, in July 2016. The purpose of the workshop was to assemble artificial life researchers interested in social learning and cultural evolution into one group so that we could focus on recent work and interesting open questions. Our discussion related both to the mechanisms of social learning and cultural evolution and to their consequences for and influence on living systems. We present the contributions of our workshop presenters and conclude with a discussion of some of the more important open questions in this area.

1 Introduction

The cultural systems of human beings and other social life forms are seen as evolutionary systems from the perspective of the artificial life community. The Artificial Life conference series and journal regularly include research on agent-based models of social systems, evolution of collective behavior, evolution of communication, evolution of cooperation, evolution of culture, evolution of cultural transmission, and evolution of language.

In July of 2016 we held the First Workshop on Social Learning and Cultural Evolution at the Artificial Life Conference 2016 in Cancún, Mexico. The purpose of the workshop was to bring together researchers interested in these topics in order to share recent research and current research directions. The workshop was a single afternoon session split into two parts. Each part consisted of a keynote presentation followed by a few short presentations. We concluded the workshop with an open discussion on various topics, including future research directions.

The topics discussed at the workshop covered a broad field, including evolutionary game theory, life history evolution, cross-species mimicry, gene-culture co-evolution, social learning in robots, and social learning strategies. More broadly, we can categorize the discussions into two groups. Many of the discussions concerned the effects of social learning on an evolving system, while others concerned the mechanisms that make social learning possible.

2 Effects of Social Learning and Cultural Evolution

One important question that precedes any discussion of social learning is what counts as social learning. There are phenomena that might appear social at first glance but that involve little or no social mechanism when studied further [16, 12]. For clarity we will call a phenomenon social if it involves the coordination of more than one individual. Collective behavior and behavioral or morphological mimicry are social phenomena, as they involve coordination between agents (sometimes of different species). We call a mechanism social if it involves the transfer of information from one individual to another. Some instances of collective behavior and mimicry are supported by social mechanisms, while others are not.

Islam and Grogono [5] presented a model of how mimicry of morphology can evolve in a population of predators and potential prey without the need for social mechanisms. The model consists of a predator that discerns potential prey by the pattern they display, and prey that can evolve that pattern. As an example, consider a bird that eats butterflies, some of which are edible and others not. The bird discerns prey by the colors or patterns on their wings, and so it can learn, or evolve, to identify which prey are safe to eat.

The prey, on the other hand, have the ability to evolve the colors and patterns they display. This creates a coupled evolutionary system in which both predators and prey evolve simultaneously. Since the predator has evolved to avoid eating the poisonous prey, an edible prey that evolves to resemble the poisonous prey can avoid being eaten (Batesian mimicry). This phenomenon is observed in butterflies, frogs, and many other species. Islam and Grogono [5] showed that this simple model could evolve Batesian mimicry without the need for social mechanisms.

A more complex mimicry phenomenon involves multiple inedible prey mimicking each other. The evolutionary pressure for inedible prey to mimic each other is the limited memory of the predator. If a predator can only remember a small number of patterns, then, despite being inedible, some prey might still get eaten. By evolving to mimic each other, the inedible prey can avoid being eaten (Müllerian mimicry). Müllerian mimicry also evolved in the model of [5].
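
To make the setup concrete, here is a minimal Python sketch of a coupled predator-prey pattern system of this kind. It is not the model of [5]: the bit-string encoding of patterns, the predator's memory size, the attack rate, and the mutation rate are all illustrative assumptions.

```python
import random

PATTERN_BITS = 4      # wing pattern encoded as a short bit string (assumption)
MEMORY_SLOTS = 2      # predator remembers only this many "toxic" patterns
ATTACK_RATE = 0.7     # chance the predator attacks any given prey (assumption)
MUTATION_RATE = 0.05

def random_pattern():
    return tuple(random.randint(0, 1) for _ in range(PATTERN_BITS))

def mutate(pattern):
    return tuple(1 - b if random.random() < MUTATION_RATE else b for b in pattern)

def generation(prey, memory):
    """One predation/reproduction cycle. prey is a list of (toxic, pattern) pairs."""
    survivors = []
    for toxic, pattern in prey:
        if random.random() > ATTACK_RATE or pattern in memory:
            survivors.append((toxic, pattern))   # not attacked, or avoided as "toxic"
        elif toxic:
            memory.append(pattern)               # predator eats it once, then remembers
            if len(memory) > MEMORY_SLOTS:
                memory.pop(0)                    # limited memory favours Müllerian mimicry
        # edible, unrecognised, attacked prey are simply eaten
    # survivors reproduce with mutation back up to the original population size
    return [(t, mutate(p)) for t, p in random.choices(survivors, k=len(prey))], memory

prey = [(i % 2 == 0, random_pattern()) for i in range(200)]   # half toxic, half edible
memory = []
for _ in range(500):
    prey, memory = generation(prey, memory)

# Over many generations edible prey tend to converge on remembered toxic patterns
# (Batesian mimicry), and toxic prey converge on a few shared patterns (Müllerian mimicry).
```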

This type of evolved mimicry is a good example of social phenomena that evolve through non-social means. Of course, mimicry can also involve social mechanisms, as it does in cases of behavioral imitation or emulation in humans and apes [17]. Indeed, other social phenomena, such as herding, assortative mating, philopatry, and territorial fidelity, can also be evolved without any social mechanisms [9].

Collective behavior is any event where agents coordinate their behavior with each other. This includes cases of mimicry, where one agent's behavior matches another's. However, it also includes cases of coordination, as in sexual reproduction or collective hunting, where agents take on different roles in the collective behavior. As with mimicry, collective behavior can arise from non-social and social forces.

The evolution of collective behavior has been a central topic in artificial life. Many of the major evolutionary transitions in biology have involved individual life forms engaging in collective behavior for the greater good of all [15]. One problem of focus in the evolution of collective behavior is how cooperative coalitions can evolve in situations where the coalition consists of unrelated individuals and is susceptible to cheaters. These situations are most often modeled as game theoretic social dilemmas like the prisoner's dilemma.

The viability of cooperators in these scenarios depends on the social network that determines game partners. The simplest model is a complete network in which any individual can be the partner of any other individual. In this situation a population of cooperators is susceptible to invasion by a single defector [1]. One of our keynote speakers, Francisco Santos, reviewed recent results on the evolution of collective behavior [13, 14].
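
For readers unfamiliar with the baseline result, the following minimal sketch shows why a complete network is hostile to cooperation. The prisoner's dilemma payoffs and the imitate-the-best update rule are illustrative assumptions, not taken from [1, 13, 14].

```python
# Prisoner's dilemma payoffs (T > R > P > S); the exact values are illustrative.
R, S, T, P = 3, 0, 5, 1

def payoff(me, other):
    if me == "C":
        return R if other == "C" else S
    return T if other == "C" else P

def play_round(strategies):
    """Everyone plays everyone (complete network); return accumulated payoffs."""
    totals = [0.0] * len(strategies)
    for i, si in enumerate(strategies):
        for j, sj in enumerate(strategies):
            if i != j:
                totals[i] += payoff(si, sj)
    return totals

def imitate_best(strategies, totals):
    """Simple imitation dynamics: everyone copies the highest-scoring individual."""
    best = max(range(len(strategies)), key=lambda i: totals[i])
    return [strategies[best]] * len(strategies)

population = ["C"] * 49 + ["D"]          # a single defector among cooperators
for _ in range(5):
    population = imitate_best(population, play_round(population))
print(population.count("D"), "defectors")  # the defector's strategy takes over
```

The lone defector earns T against every cooperator and so outscores them all, and the imitation step spreads defection through the whole population.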

When the social network of the population is scale-free, we have an environment where cooperation can evolve [13]. Highly connected cooperators in a scale-free network are robust to small numbers of defectors in their neighborhood. Highly connected defectors become victims of their own success after exploiting all cooperators in their neighborhood.

Natural social networks are commonly scale-free networks, though they are also dynamic. When individuals can revise their connections based on choice, the social networks that are generated are scale-free in nature [14]. That is, when individuals (both cooperators and defectors) adjust their network to avoid defectors and attract cooperators, the networks that emerge are the kinds that support cooperation.
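
A rough sketch of this kind of partner choice is given below. The rewiring rule, probabilities, and population sizes are assumptions for illustration and do not reproduce the model of [14]; strategy updating is omitted so the sketch isolates how link adjustment alone reshapes the network.

```python
import random

REWIRE_PROB = 0.3    # chance of dropping a link to a defector each step (assumption)

def rewire(strategy, neighbours):
    """Individuals drop links to defecting partners and reattach to cooperators.
    Strategy updating (imitation of successful neighbours) is omitted for brevity."""
    n = len(strategy)
    cooperators = [i for i in range(n) if strategy[i] == "C"]
    for i in range(n):
        for j in list(neighbours[i]):
            if strategy[j] == "D" and random.random() < REWIRE_PROB:
                neighbours[i].discard(j)
                neighbours[j].discard(i)
                k = random.choice(cooperators)
                if k != i:
                    neighbours[i].add(k)
                    neighbours[k].add(i)

# small random network: 30 cooperators and 10 defectors (sizes are arbitrary)
n, strategy = 40, ["C"] * 30 + ["D"] * 10
neighbours = [set() for _ in range(n)]
for _ in range(2 * n):
    a, b = random.sample(range(n), 2)
    neighbours[a].add(b); neighbours[b].add(a)

for _ in range(50):
    rewire(strategy, neighbours)

# Cooperators accumulate links while defectors end up isolated: partner choice
# alone reshapes the network into one that favours cooperators.
print(sorted(len(nb) for nb in neighbours))
```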

While this research focuses on cooperation in a particular context, it suggests a broader impact on the study of social learning. The structure of a social network in which social learning is successful must also be one that is robust to misinformation or deceit. Thus, we might expect such networks to also display scale-free structure. Further, if social learners choose to learn only from those that have been helpful in the past, we might expect scale-free social learning networks to emerge.

Using both individual and social learning to improve fitness requires time. During this time the fitness of an individual is commonly less than that of others in its population. In many species a period of protection evolves in which individuals are protected from selection while they are learning. Bullinaria [3, 4] investigates the effect of individual and social learning on life history evolution, in particular, the evolution of protection periods.

One important result from [4] regards the conditions in which social learning will evolve relative to individual learning. That article considers different levels of learning fidelity in both individual and social learning while allowing the propensity to learn to evolve over time. When both learning individually and learning socially are highly reliable (though not perfect), individuals evolve to learn (both individually and socially) as much as possible. If individual learning fidelity is high enough while social learning is not as reliable, social learning does not evolve. This confirms results from mathematical models that suggest social learning can only evolve in systems where the fidelity of copying is very high.

In addition, [4] considers the effect of individual and social learning on protection period evolution. When social learning fidelity is low, then protection periods evolve that are slightly longer than the time needed to individually learn to be a high performer. When social learning fidelity is very high, then protection periods evolve that are slightly longer than the time needed to socially learn to be a high performer. Interestingly, when individual and social learning both have high (but not perfect) fidelity, much shorter protection periods evolve. This might be because the interaction between individual and social learning in these cases allows learning of high-performing behavior in a much shorter time. Further investigation is needed to confirm this result.
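
As a loose illustration of the quantities involved, the sketch below accumulates lifetime fitness for an agent that mixes individual and social learning, where performance only counts toward fitness once the protection period ends. The fidelity values, learning rates, and fitness function are assumptions and do not reproduce Bullinaria's model [3, 4].

```python
import random

IND_FIDELITY = 0.95     # reliability of individual learning (assumption)
SOC_FIDELITY = 0.80     # reliability of social learning (assumption)
LIFESPAN = 100

def lifetime_fitness(protection, social_rate, tutor_skill=1.0):
    """Accumulate fitness over a lifetime; selection does not apply during the
    protection period, so low early performance is not penalised."""
    skill, fitness = 0.0, 0.0
    for t in range(LIFESPAN):
        if random.random() < social_rate:
            # copy the tutor, degraded by imperfect social learning fidelity
            skill = max(skill, tutor_skill * SOC_FIDELITY * random.random())
        else:
            # slow incremental individual learning towards maximal skill
            skill = min(1.0, skill + 0.02 * IND_FIDELITY * random.random())
        if t >= protection:
            fitness += skill        # performance only counts after protection ends
    return fitness

# Compare a long and a short protection period for a mostly-social learner.
print(lifetime_fitness(protection=30, social_rate=0.7))
print(lifetime_fitness(protection=5, social_rate=0.7))
```

In the full model the protection period and the propensity to learn are themselves evolved rather than fixed by hand as they are here.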

3 Mechanisms of Social Learning and Cultural Evolution

Mechanisms of social learning and cultural evolution are central to the artificial life study of these phenomena. Without models of social learning mechanisms we cannot simulate them. Chris Marriott's presentation reviewed several different evolutionary and cultural models that are popular in the artificial life community. These included the simple evolutionary model, models with horizontal transfer of genetic material, agent-based models of social behavior, and evolutionary development models. The presentation concluded with a model of a coupled genetic-cultural system called the dual inheritance model [8, 10, 11].

The dual inheritance model incorporates elements of the other models listed. In the dual inheritance model each agent has two kinds of information: static genetic information that is only active during reproduction and growth, and dynamic memetic information that is causal in behavior selection. Agents inherit their genetic information from their parents. Agents develop their initial memetic information through the interaction of their genes with the environment (development). Memetic information interacts with the environment to create behavior (production of phenotype). Memetic information changes over time through individual learning (development) and through social learning (horizontal social transfer).
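
A minimal sketch of an agent in this style of model might look as follows. The field names, learning rules, and mutation operator are assumptions used only to make the separation of the genetic and memetic channels concrete; they are not the implementation of [8, 10, 11].

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    genes: list                                  # static: consulted only at reproduction/development
    memes: list = field(default_factory=list)    # dynamic: drives behaviour selection

    def develop(self):
        """Initial memes are derived from the genes (development)."""
        self.memes = list(self.genes)

    def act(self):
        """Behaviour is produced from memes, not genes (production of phenotype)."""
        return max(self.memes)

    def learn_individually(self, feedback, rate=0.1):
        """Memes change through interaction with the environment (individual learning)."""
        self.memes = [m + rate * (feedback - m) for m in self.memes]

    def learn_socially(self, other):
        """A meme is copied horizontally from another agent (social transfer)."""
        self.memes[random.randrange(len(self.memes))] = random.choice(other.memes)

def reproduce(parent, mutation=0.05):
    """Offspring inherit genes vertically (with mutation), then develop their own memes."""
    child = Agent(genes=[g + random.gauss(0, mutation) for g in parent.genes])
    child.develop()
    return child

parent = Agent(genes=[random.random() for _ in range(5)])
parent.develop()
child = reproduce(parent)
child.learn_socially(parent)    # cultural (horizontal) transmission
```

Because the memetic channel changes within a lifetime while the genetic channel changes only between generations, the two can follow different trajectories.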

One benefit of the dual inheritance model is that it can support divergent genetic and cultural evolutionary trajectories [10, 11]. Such divergence is needed to explain how cultural evolution can occur so rapidly (over generational time) and how culture can evolve that is contrary to biological survival. Chris Marriott presented the dual inheritance model as a model of genetic-cultural coevolution at an abstract level. The nature of the mechanisms involved was not described.

Social learning mechanisms implemented in simulation are highly parameterized. Decisions need to be made about how the mechanisms operate, when the mechanisms are used, whom to learn from in the population, what aspects of culture are transmitted, and whether to trust the agent you have learned from or what you have learned. As there are not many solid answers to these questions at this time, many social learning simulations have to make these decisions blindly or by intuition. In order to better understand social learning mechanisms, it is common for social learning experiments to focus on one of these dimensions at a time.
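
One way to make these decisions explicit rather than implicit is to gather them into a single parameter structure, as in the hypothetical sketch below. All field names and default values are illustrative assumptions, not drawn from any particular simulation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SocialLearningConfig:
    """The main decision dimensions a social learning simulation must fix,
    expressed as explicit parameters (names and defaults are illustrative)."""
    when_to_learn: Callable[[object], bool]         # when the mechanism is used
    choose_model: Callable[[object, list], object]  # whom to learn from
    what_to_copy: str                               # which aspect of culture is transmitted
    copy_fidelity: float                            # probability the copy is accurate
    trust_threshold: float                          # minimum confidence in the model/content

default_config = SocialLearningConfig(
    when_to_learn=lambda agent: agent.payoff < agent.aspiration,
    choose_model=lambda agent, others: max(others, key=lambda o: o.payoff),
    what_to_copy="behaviour",
    copy_fidelity=0.95,
    trust_threshold=0.5,
)
```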

James Borg presented research on one of these questions, namely, whom should an agent learn from [2, 6]? In a population there are a number of different ways we can select a model for our learning. Ultimately the goal would be to select the best model, that is, the agent with the best behavior to learn from. In simulation this might be as simple as selecting the agent with the highest fitness (learn from the best). This strategy is not always available, since agents may not be able to easily evaluate each other's success, and even when fitness information is interpretable, the conformist nature of fitness-based copying mechanisms does not necessarily make them adaptive. In such scenarios there are alternatives.

An agent could learn from a random individual, the closest individual, its parents, its grandparents, its siblings, other members of its generation, any elders, or the eldest. Each of these rules might be beneficial in different environments or under different circumstances, as evidenced by the large variety of social learning mechanisms seen in nature [7]. James Borg et al. posed the question of whether these rules for learning can affect what can be learned in the population. In particular, they considered a task that was inaccessible to incremental genetic evolution [2]. The results of this simulation suggest that learning from others is robust to the choice of learning partners: at least in the experimental conditions evaluated, social learning using a variety of strategies was capable of achieving success at the task. Interesting variations between differing social learning mechanisms were nonetheless observed, which challenged the learn-from-the-fittest principle often applied in artificial life simulations of social systems. Further research is needed to confirm this result in other simulations.
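
The following sketch expresses a few such partner-selection rules as interchangeable strategies. The agent attributes it relies on (fitness, position, parent, birth_time, behaviour) are hypothetical and are not drawn from [2, 6].

```python
import random

# Each strategy maps (learner, candidate population) -> the agent to copy from.
# The attribute names used here are assumptions for illustration only.
STRATEGIES = {
    "best":    lambda me, pop: max(pop, key=lambda a: a.fitness),
    "random":  lambda me, pop: random.choice(pop),
    "closest": lambda me, pop: min(pop, key=lambda a: abs(a.position - me.position)),
    "parent":  lambda me, pop: me.parent,
    "eldest":  lambda me, pop: min(pop, key=lambda a: a.birth_time),
    "peer":    lambda me, pop: random.choice(
        [a for a in pop if a.birth_time == me.birth_time]),
}

def social_learning_step(me, population, strategy="best"):
    """Pick a model according to the chosen strategy and copy its behaviour."""
    candidates = [a for a in population if a is not me]
    model = STRATEGIES[strategy](me, candidates)
    me.behaviour = model.behaviour
```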

The final presentation and discussion was led by Evert Haasdijk, who gave us an introduction to some of the work being undertaken by the Horizon 2020-funded Deferred Restructuring of Experience in Autonomous Machines (DREAM) project. Differing from the other presentations, Evert posed an important open question regarding social learning in robots. The ease of transferring learned behavior from one robot to another depends largely on the control architectures of the robots. In homogeneous robots with explicit knowledge representation, transfer between robots can be direct and error-free. While this might be ideal for some purposes, it is hardly representative of social learning in nature and is not possible in most modern robots.

A common control architecture for evolved robots is the neural network, which relies on implicit knowledge representation. Isolating discrete units of knowledge in a neural network is difficult if not impossible. In addition we may wish for two robots with different architectures altogether to share knowledge. At this time there is no method available for this type of social learning in robots. Nonetheless, human social learning occurs in myriad ways, including shared experience through cohabitation, imitation, direct instruction, guided exploration, study of written works, and reverse engineering of artefacts.
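
To illustrate the contrast, the sketch below places direct parameter copying between identical controllers next to a simple behavioural-cloning workaround, in which a differently shaped network is trained on the demonstrator's observed input-output pairs. This is a generic imitation-learning technique, not a method proposed at the workshop or by the DREAM project, and the network sizes, learning rate, and toy controller are assumptions.

```python
import numpy as np

def copy_weights(source, target):
    """Homogeneous robots with identical controllers: transfer is direct and error-free."""
    target["W"] = source["W"].copy()

def clone_behaviour(demonstrator, observations, hidden=16, epochs=500, lr=0.1):
    """Heterogeneous robots: train a differently shaped network to reproduce the
    demonstrator's observed input -> output mapping (a form of imitation learning)."""
    X = np.array(observations)
    targets = np.array([demonstrator(x) for x in observations])
    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.5, (hidden, targets.shape[1]))
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        err = H @ W2 - targets
        # gradient descent on the mean squared imitation error
        W2 -= lr * H.T @ err / len(X)
        W1 -= lr * X.T @ ((err @ W2.T) * (1 - H**2)) / len(X)
    return lambda x: np.tanh(np.array(x) @ W1) @ W2

# toy demonstrator controller: maps 2 sensor values to 1 motor command
demonstrator = lambda x: np.array([0.8 * x[0] - 0.3 * x[1]])
student = clone_behaviour(demonstrator, [np.random.uniform(-1, 1, 2) for _ in range(200)])
print(student([0.5, -0.2]), demonstrator([0.5, -0.2]))
```

The student only approximates the demonstrator, which highlights why implicit knowledge transfer between differently structured controllers remains an open problem.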

4 Conclusion

The First Workshop on Social Learning and Cultural Evolution in Cancún, Mexico was a great opportunity for discussion on these topics. We plan to follow up with a second workshop at the European Conference on Artificial Life in Lyon, France in September, 2017. We conclude with a few important open questions in the area of social learning and cultural evolution:

  • How can the lessons and methodologies of evolutionary game theory be built into other artificial life experiments?

  • How can we characterize the parameter space of social learning mechanisms?

  • How can heterogeneous robots share learned information?

  • What effect do social learning and cultural evolution have on genetic evolution?

References

1. Axelrod, R. M. (1984). The evolution of cooperation. New York: Basic Books.
2. Borg, J. M., Channon, A., Day, C., et al. (2011). Discovering and maintaining behaviours inaccessible to incremental genetic evolution through transcription errors and cultural transmission. In T. Lenaerts, M. Giacobini, H. Bersini, P. Bourgine, M. Dorigo, & R. Doursat (Eds.), Advances in Artificial Life, ECAL 2011: Proceedings of the Eleventh European Conference on the Synthesis and Simulation of Living Systems (pp. 101-108). Cambridge, MA: MIT Press.
3. Bullinaria, J. A. (2009). Lifetime learning as a factor in life history evolution. Artificial Life, 15(4), 389-409.
4. Bullinaria, J. A. (2017). Imitative and direct learning as interacting factors in life history evolution. Artificial Life, 23(3), 374-405.
5. Islam, M., & Grogono, P. (2016). Modelling the evolution of mimicry. In C. Gershenson, T. Froese, J. Siqueiros, W. Aguilar, E. J. Izquierdo, & H. Sayama (Eds.), Proceedings of the Artificial Life Conference 2016 (pp. 442-449). Cambridge, MA: MIT Press.
6. Jolley, B. P., Borg, J. M., & Channon, A. (2016). Analysis of social learning strategies when discovering and maintaining behaviours inaccessible to incremental genetic evolution. In E. Tuci, A. Giagkos, M. Wilson, & J. Hallam (Eds.), From Animals to Animats 14: Proceedings of the 14th International Conference on Simulation of Adaptive Behavior (pp. 293-304). Cham, Switzerland: Springer.
7. Laland, K. N. (2004). Social learning strategies. Animal Learning & Behavior, 32(1), 4-14.
8. Marriott, C., & Chebib, J. (2014). The effect of social learning on individual learning and evolution. In H. Sayama, J. Rieffel, S. Risi, R. Doursat, & H. Lipson (Eds.), Artificial Life 14: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems (pp. 736-741). Cambridge, MA: MIT Press.
9. Marriott, C., & Chebib, J. (2015). Finding a mate with no social skills. In S. Silva & A. I. Esparcia-Alcázar (Eds.), GECCO '15: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation (pp. 185-192). New York: ACM.
10. Marriott, C., & Chebib, J. (2016). Divergent cumulative cultural evolution. In C. Gershenson, T. Froese, J. Siqueiros, W. Aguilar, E. J. Izquierdo, & H. Sayama (Eds.), Proceedings of the Artificial Life Conference 2016 (pp. 508-515). Cambridge, MA: MIT Press.
11. Marriott, C., & Chebib, J. (2016). Modeling the evolution of gene-culture divergence. In C. Gershenson, T. Froese, J. Siqueiros, W. Aguilar, E. J. Izquierdo, & H. Sayama (Eds.), Proceedings of the Artificial Life Conference 2016 (pp. 500-507). Cambridge, MA: MIT Press.
12. Noble, J., & Todd, P. M. (2002). Imitation or something simpler? Modeling simple mechanisms for social information processing. In K. Dautenhahn & C. L. Nehaniv (Eds.), Imitation in animals and artifacts (Chapter 16, pp. 423-439). Cambridge, MA: MIT Press.
13. Santos, F. C., & Pacheco, J. M. (2006). Scale-free networks provide a unifying framework for the emergence of cooperation. Physical Review Letters, 95(9), 98-104.
14. Santos, F. C., Pinheiro, F. L., Lenaerts, T., & Pacheco, J. M. (2012). The role of diversity in the evolution of cooperation. Journal of Theoretical Biology, 299, 88-96.
15. Smith, J. M., & Szathmary, E. (1997). The major transitions in evolution. Oxford, UK: Oxford University Press.
16. Whiten, A., & Ham, R. (1992). On the nature and evolution of imitation in the animal kingdom: Reappraisal of a century of research. Advances in the Study of Behavior, 21, 239-283.
17. Whiten, A., McGuigan, N., Marshall-Pescini, S., & Hopper, L. M. (2009). Emulation, imitation, over-imitation and the scope of culture for child and chimpanzee. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 364(1528), 2417-2428.