Abstract
On this 30th anniversary of the founding of the Artificial Life journal, I share some personal reflections on my own history of engagement with the field, my own particular assessment of its current status, and my vision for its future development. At the very least, I hope to stimulate some necessary critical conversations about the field of Artificial Life and where it is going.
1 Introduction
The year 2024 marks the 30th anniversary of the founding of the Artificial Life journal by Chris Langton in 1993. It is also the 37th anniversary of the very first Artificial Life conference, organized by Langton at Los Alamos National Laboratory in 1987 (Langton, 1989a). Anniversaries are appropriate times for reflection, both on where we have been and on where we are going. Accordingly, this essay is divided into three parts: a brief sketch of my own personal history with the field of Artificial Life, a somewhat critical assessment of its current status, and some highly opinionated suggestions about its future trajectory.
2 Looking Back
I arrived at university as an undergraduate computer engineering and science student in 1980 with a keen interest in artificial intelligence (AI), and I immediately immersed myself in all things AI, which at the time meant what is now often called “classical” or “symbolic” approaches to AI. I started reading every AI book and conference proceedings I could get my hands on, learned Lisp (and later Prolog), and started building natural language understanding systems and expert systems. However, by the time I began my graduate studies in 1985, I had already become convinced that this approach to AI was on the wrong track and began searching about for alternatives, primarily through a deep dive into the philosophy of mind literature.
In 1985, a philosophy professor, David Helman, gave me a preprint of a penetrating critique of classical AI by Terry Winograd and Fernando Flores (1986). Chapter 4 of that book introduced the ideas of the Chilean biologists Humberto Maturana and Francisco Varela, which provided a road map for an entirely different way of thinking about cognition that was inextricably grounded in its biological underpinnings (Maturana & Varela, 1980, 1987; Varela, 1979). This chapter was to have a profound influence on the subsequent direction of my research. I immediately set about reading every book and paper of Maturana and Varela’s that I could find and organizing a small reading group in which I tried to come to grips with the dense, unfamiliar ideas that went against almost everything I had been taught in computer science and AI.
A number of other very important things were also happening in the influential years 1985–1987. The roboticist Rodney Brooks (1986) began a line of work that emphasized the fundamental importance of physical embodiment to intelligent behavior. In parallel, and from the unlikely direction of anthropology and human–computer interaction, came a line of work emphasizing the fundamental situatedness of intelligent systems and the importance of that environmental context to their actions (Agre & Chapman, 1987; Suchman, 1987; Winograd & Flores, 1986). Additionally, these years saw the beginning of the second neural network revolution (Rumelhart et al., 1986), following the original excitement around perceptrons in the 1950s and preceding the current enthusiasm for deep learning. Decades of advances in the mathematical theory of dynamical systems also began to reach a much wider audience (Gleick, 1987). On a personal level, I began to learn a great deal more neuroscience through my discussions with a colleague, Hillel Chiel, which only deepened my growing interest in biology and biological perspectives on cognition. I also read a series of inspiring popular books, including Braitenberg’s (1984) Vehicles, Poundstone’s (1985) book on Conway’s Game of Life cellular automaton, and Dewdney’s (1984) Planiverse novel describing a 2-D simulated world with a complex ecosystem of organisms. Finally, I encountered Dennett’s (1983) suggestion that AI should perhaps switch from “the modeling of human microcompetencies … to the whole competences of much simpler animals” (p. 350), either by inventing fictional creatures to analyze (Dennett, 1978) or by modeling simpler real animals (Dennett, 1983). I began to formulate a vague plan to build a computer model of a simple but complete organism.
Thus it was into a rich cauldron of revolutionary ideas that the announcement of a workshop on “Artificial Life” dropped in 1987. Even though I had nothing to present, I managed to scrape together enough funding to attend. The experience was eye-opening. Here was an entire burgeoning field that shared some of my misgivings about classical AI, my increasingly biological perspective on cognition, and my intuition that these ideas could be explored through computer simulation. I interacted with and/or heard stimulating talks from people such as Alexander Cairns-Smith, William Calvin, Michael Conrad, James Crutchfield, Richard Dawkins, A. K. Dewdney, Eric Drexler, James Gleick, David Goldberg, Paulien Hogeweg, John Holland, Gerald Joyce, Stuart Kauffman, Richard Laing, Chris Langton, Aristid Lindenmayer, Hans Moravec, Karl Niklas, Alan Perelson, Howard Pattee, Przemysław Prusinkiewicz, Mitchell Resnick, Craig Reynolds, Otto Rössler, Rudy Rucker, Arthur Winfree, and many others. I returned from Los Alamos invigorated, determined to transform my philosophical musings and vague research plans into a concrete thesis project.
But to move forward, I had to make a decision. Maturana and Varela’s framework can be divided into two components: autopoiesis (concerned with what organisms are and how they are constituted) and the biology of cognition (concerned with the behavior of organisms in interaction with their environments). I had no idea at the time about how to model autopoiesis, but, perhaps naively, I was convinced that some combination of invertebrate neuroethology, neural networks, and dynamical systems theory would allow me to model the neural basis of behavior in a complete simple animal. Thus I put aside autopoietic concerns about the constitution of organisms and set about implementing an artificial insect whose behavior, morphology, and nervous system were drawn from the invertebrate neuroethology literature. This model ultimately became the basis for my 1989 dissertation on computational neuroethology (Beer, 1990) and subsequently led to a long line of work on biorobotics, neuromechanical modeling of the mechanisms of animal behavior, and the evolution and analysis of model brain–body–environment systems that continues to this day, a story that has been told elsewhere (Beer, 2021). Interestingly, and for reasons that will become clear by the end of the next section, most of that work was not published in Artificial Life venues, but rather in the adaptive behavior, neuroscience, and robotics communities.
However, the other, autopoietic, half of Maturana and Varela’s framework was never far from my mind. Eventually, in a conversation on December 6, 1995, during a sabbatical at the Santa Fe Institute, Barry McMullin and I hit on the idea of using a glider in John Conway’s Game of Life cellular automaton as a simple model of an emergent individual that could be used to explore and more rigorously develop the notion of autopoiesis and its many implications. Over the next several days, I sketched out some thoughts about how to proceed, which unfortunately then languished in one of my research notebooks for many years as other projects took priority. Thus it wasn’t until nearly a decade later that I finally wrote these ideas up for a special issue of the Artificial Life journal commemorating Francisco Varela’s untimely death (Beer, 2004). My hope at the time was that others would pick up these ideas and carry them forward, but, when no one did, I decided in early 2012 to pursue them myself in a series of papers that marked my official return to the field of Artificial Life (Beer, 2014, 2015, 2020a, 2020b, 2020c).
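For readers who have never played with this example, a minimal sketch in Python (purely illustrative, and not the formal machinery developed in the papers just cited) shows what makes the glider such an appealing toy individual: under Conway’s strictly local update rule, the same five-cell pattern recurs every four steps, displaced one cell diagonally, persisting as a coherent unit even though the rule only ever acts on single cells and their neighbors.

from collections import Counter

def life_step(live):
    """Apply Conway's rule once to a sparse set of live (row, col) cells:
    a cell is live next step if it has exactly 3 live neighbors,
    or has 2 live neighbors and is currently live."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}  # the standard five-cell glider

state = glider
for _ in range(4):
    state = life_step(state)

# After four updates the same shape reappears, shifted one cell down and one right.
assert state == {(r + 1, c + 1) for (r, c) in glider}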
3 Where We Stand
After almost 40 years of development (and a much longer prehistory), Artificial Life can no longer claim to be a new endeavor. Four decades is more than enough time for a field of research to have matured into a shared understanding of its history and a cohesive set of goals, methods, and standards of research. So what is the common vision that unifies the field of Artificial Life? Caveat lector: This section adopts a rather critical tone. However, it is intended not to outrage or offend anyone but simply to stimulate some much-needed critical reflection.
Unfortunately, an outsider could be forgiven for their confusion regarding what the field of Artificial Life actually is and what it is trying to do. Indeed, they might be reminded of this crucial exchange between Martin Sheen’s and Marlon Brando’s characters in Francis Ford Coppola’s Apocalypse Now:
Willard: They told me … that you had gone … totally insane … and that your methods … were unsound.
Kurtz: Are my methods unsound?
Willard: I don’t see … any method … at all, sir.
Within a typical ALife conference proceedings or volume of Artificial Life, one might find work on oil droplets, large language models, chemical and DNA computing, neural network architectures and learning algorithms, evolutionary algorithms, education, robotics, economics, affective computing, cellular automata, reinforcement learning, ethics, reservoir computing, ecology, evolvable hardware, cooperative behavior, artificial chemistry, neuroscience, sociology, computer music and art, cognitive science, information theory, and culture. The point is not that some aspects of these many topics do not belong in Artificial Life. But Artificial Life cannot just be about neural networks, or evolutionary algorithms, or reinforcement learning, or dynamical systems theory, or information theory. These are merely methods, many of which already have their own communities, conferences, and journals. Likewise, Artificial Life cannot be AI, or robotics, or neuroscience, or chemistry, or biology, or ecology, or sociology, or economics; these already exist as independent scientific fields, with histories much longer than Artificial Life’s. And while the intersection of all these activities threatens to be empty, their union is nothing less than the totality of science and engineering.
It may be more fun to think of oneself as a “gonzo” scientist exploring the wild frontier, unencumbered by staid disciplinary boundaries, more Burning Man than Royal Society. But a scientific field can only coast on its adolescent rebellion for so long. Eventually, it has to grow up and demonstrably contribute something to the larger scientific enterprise. Of course, diversity can be a wonderful thing. But taken to an extreme, it becomes a lack of commitment that hampers the progress of Artificial Life in very pragmatic ways.
The first problem is the state of peer review in the field. Peer review in general is at best a very imperfect instrument in any discipline, with a noise term that threatens to swamp any signal. But I have been reviewing papers for 35 years, in fields ranging from neuroscience to software engineering, from robotics to cognitive science, from philosophy to mathematics, and the inconsistency of reviews in Artificial Life is among the worst I have seen. Having served on the senior program committee for the Artificial Life conference four times now, I am alarmed by the number of reviews I have been asked to reconcile that nearly (and sometimes completely) span the range from “definitely accept” to “definitely reject” for the same paper. For the most part, I do not blame the individual reviewers for this. Indeed, I have no doubt that my own ALife reviews have sometimes contributed to this inconsistency. Rather, the blame lies squarely at the feet of the field itself. If we collectively have no shared sense of what Artificial Life is and what does or does not constitute a contribution to it, then how can we expect any consistency in evaluation of a given piece of research?
The second problem is a lack of historical awareness about what has already been accomplished in the field. This is most obviously reflected in the often inadequate related work sections of many papers. But a much more insidious consequence of historical ignorance is the way it hinders progress. As the Spanish American philosopher George Santayana (1905) wrote, “those who cannot remember the past are condemned to repeat it” (p. 284). So much work in Artificial Life is a duplication of previous work with minor variations whose significance is often left unclear. Not only must we be honest about all of the direct influences on our work, but we should also make every effort to describe the totality of existing work that forms the backdrop against which it has been carried out and against which its contributions should be evaluated. If individual authors are unwilling or unable to conduct adequate scholarship, then it is up to reviewers to enforce this. Unfortunately, this leads us right back to the problem described in the previous paragraph.
The third and final problem is a lack of consistent progress toward any identifiable goal. In part, this is a natural consequence of historical ignorance. If you don’t know where you’ve been, then how can you decide where you should be going? But its roots go much deeper than that. The vast majority of research projects in ALife are one-offs. They tackle a particular problem of interest to the authors, with an idiosyncratic combination of methods and at best a shallow analysis of results, and then they are dropped. This has the consequence that the literature is filled with many minor but incommensurable variations on a theme. To make progress, a field needs not just research projects but research programs. Such long-term, concerted efforts at tackling an important problem in an incremental fashion by a group of researchers who share clearly articulated goals, approaches, and methods are the way that science normally progresses. Of course, a certain number of one-offs are always helpful for probing the frontier, but they should be the exception, not the rule. If everyone is leaping into the unknown frontier, then no one is keeping the home fires burning. Individual projects are generally most useful if they build on existing work by varying one thing, whose significance can then be understood, not by varying everything. Unfortunately, this demands a shared understanding of the history and goals of the field, which leads us right back to the other problems identified earlier.
It was for all these reasons that I largely avoided Artificial Life venues for the computational neuroethology branch of my research earlier in my career. At the time, the adjacent field of adaptive behavior struck me as more scientifically serious, more intellectually and methodologically grounded, and more connected to the better-established fields with which it made contact (e.g., ethology, neuroscience, robotics) than Artificial Life was. Put bluntly, I decided that it provided a better foundation on which a budding young researcher could build a career.
4 Looking Forward
Criticism is relatively easy. I’ve always believed that any serious critic has a responsibility to at least suggest ways in which the problems they identify might be solved. Thus this final section sketches some ideas about how Artificial Life might move forward in a way that addresses the concerns I have raised. I have no desire to impose my own vision on others. It is only offered here as one example of how we might proceed. Others certainly can and should offer their own visions so that the field as a whole can engage in the critical self-examination necessary to move forward.
First and foremost, we need to decide what we as a field want to be when we grow up. To me, the answer to this question is straightforward. As Chris Langton (1989b) said when he defined the field, Artificial Life is the “biology of possible life,” the study of “life as it could be,” whose ultimate goal is to extract the “logical form” of living systems. While biology focuses almost exclusively on “life as we know it” (Langton, 1989b), Artificial Life is interested in the structure of the space of all possible life and how the actual instances of life that we have so far encountered are situated within that space. This makes Artificial Life a superset of biology and, in a sense, the true home of theoretical biology. And biology is sorely in need of such a broader theoretical framework. As the late Nobel laureate Sydney Brenner (2012) wrote a little over a decade ago,
biological research is in crisis. … Technology gives us the tools to analyze organisms at all scales, but we are drowning in a sea of data and thirsting for some theoretical framework with which to understand it. Although many believe that “more is better,” history tells us that “least is best.” We need theory and a firm grasp on the nature of the objects we study to predict the rest. (p. 461)
More than any other biological endeavor, I think that the perspective of Artificial Life has the potential to deliver on this demand for an all-encompassing understanding of biology writ large—but only if we seize this problem of formulating a generalized theory of biology and explicitly make it our own, only if we commit to systematically exploring this space of possibilities in a way that builds toward the kind of first principles understanding that is required, only if we make every effort to engage with the rest of biology in doing so. Note that I am focusing only on the scientific component of Artificial Life. A parallel vision for the engineering component should also be formulated, a task which I will leave to others whose work lies more in that direction. However, even projects focused on engineering Artificial Life in some medium must engage with the question of what life is in the first place.
What form might such an endeavor take? The fundamental unit of biology is the organism. Indeed, organisms are the center around which the science of biology turns. Developmental biology studies the development of organisms. Ethology is the study of the behavior of organisms. Ecology is the study of interactions between organisms and their environments. Population biology is the study of large groups of organisms. And so on.
Thus it is the study of the space of possible organisms and the phenomena that they generate that should be our focus. Note, however, that many other aspects of biological (and Artificial Life) research flow quite naturally from this focus. A fundamental theory of biological individuals must also address their constitution; their origin; their behavior in interaction with their environments and with one another; their reproduction; and their transformation in evolution, development, and learning. Interestingly, this pattern repeats on the cellular, multicellular, and population scales in biology. For example, a multicellular organism is constituted by appropriately organized interactions among populations of individual cells, and it in turn has its own constitutive, origin, interactive, and transformative characteristics to be understood. This suggests that an agenda focused on understanding the constitution, origin, interaction, and transformation of biological individuals in general might provide the missing unifying organization that Artificial Life requires.
Imagine an Artificial Life conference organized along these lines, with one day each devoted to the constitution, origin, interaction, and transformation of biological individuals and the remaining day devoted to, say, historical, philosophical, and artistic aspects or promising new directions of work that do not yet fit cleanly within the other days. Or one could imagine parallel tracks devoted to each theme instead of separate days. The individual days or tracks could be further subdivided into sessions focused on each of the major problems or approaches being pursued within that theme.
Given my own history of engagement with Artificial Life, this formulation of its goals should come as no great surprise. It is very much aligned with the agenda that Maturana and Varela put forward 50 years ago. Indeed, it is exactly the research program that I have been pursuing with the constitution, origin, transformation, and interaction of gliders in the Game of Life. However, one need not accept Maturana and Varela’s autopoietic/biology of cognition/enactive proposals, nor my own particular attempts at formalizing these ideas, to participate in this overall agenda. There are alternate dynamical, autocatalytic, thermodynamic, informational, and so on formulations of biological individuality and the phenomenology that it generates that are being and should continue to be explored in parallel.
How might this vision of Artificial Life begin to address the specific problems with current ALife research that I raised earlier? In most ways, the actual range of research projects would not change. However, each project would need to be explicit about which notion of biological individual it is assuming (rather than simply calling it “lifelike”), which aspect of living phenomena it is targeting, and what particular research program within that branch of investigation it is pursuing. By explicitly situating each project within an overall agenda, it would be easier to identify reviewers from the same branch of research, easier to settle on consistent standards of evaluation for each branch, and easier to come to agreement on assessing the contribution of a given piece of work within that branch. With respect to historical awareness, structuring the scientific agenda of Artificial Life along the lines I have suggested would allow for high-quality review articles and textbooks that, instead of merely supplying a laundry list of past projects, synthesize the field’s history into a shared, unifying vision of what we are trying to accomplish and the progress that has or has not been made. This is a history that students could be taught, that every researcher in Artificial Life could be expected to know, and within which every new project could be positioned. Finally, a field pursuing a common agenda is a field that can make steady progress on that agenda, with individual projects moving from incommensurable one-offs to links in a chain of progress and research programs steadily building toward shared goals.
To be clear, I am not arguing that Artificial Life should entirely set aside its diversity. Many Artificial Life researchers greatly value and rightfully champion its freewheeling nature, seeing it as a major driver of cross-disciplinary innovation, and I have no wish to stifle this. What I am suggesting is that this exploratory component of the field should be better balanced with a collection of long-term research programs organized around shared questions and approaches, so that we can harness all of this innovation to make steady progress on the scientific questions that drive us.
Whose responsibility is it to make such a transformation happen? I have repeatedly spoken about the “field” of Artificial Life as having certain problems and needing to make certain changes. But a scientific field is not itself a responsible party. Like a corporation, it is a social construction of the people whose collective activities constitute it. Thus it is ultimately up to us, the editors, conference organizers, reviewers, and researchers in Artificial Life, to decide what kind of field we want it to be and then to take the actions necessary to make it happen.
ALife as it could be, indeed.
Acknowledgments
I thank Eden Forbes, Eduardo Izquierdo, Connor McShaffrey, and Gabe Severino; the coeditors of Artificial Life, Susan Stepney and Alan Dorin; and two anonymous reviewers for their feedback on an earlier draft of this essay. The opinions expressed herein remain, of course, my own.