Abstract
Nowadays, interdisciplinary fields between Artificial Life, artificial intelligence, computational biology, and synthetic biology are increasingly emerging into public view. It is necessary to reconsider the relations between the material body, identity, the natural world, and the concept of life. Art is known to pave the way for exploring and conveying new possibilities. This survey provides a literature review on recent works of Artificial Life in visual art during the past 40 years, specifically in the computational and software domain. Having proposed a set of criteria and a taxonomy, we briefly analyze representative artworks of different categories. We aim to provide a systematic overview of how artists are understanding nature and creating new life with modern technology.
1 Introduction
As a theoretical biologist, Christopher Langton coined the term Artificial Life (ALife) and elaborated on it at Ars Electronica in 1993 (Gerbel et al., 1993). Art, life, and nature have long driven important research topics in many interdisciplinary studies (Bender et al., 2021).
Langton’s original definition was questioned by many (Aguilar et al., 2014), including himself. In 1998, he redefined “Artificial Life” as “the study of natural life, where nature is understood to include, rather than to exclude, human beings and their artifacts” (Langton, 1998). Langton further stated that human beings and all they do are part of nature. ALife is also highly related to cybernetics; as Donna J. Haraway (1991) stated in Simians, Cyborgs, and Women, “human beings, like any other component or subsystem, must be localized in a system architecture whose basic modes of operation are probabilistic” (p. 212).
ALife investigates “life-as-we-know-it” as well as “life-as-it-might-be” (Langton, 1993). The topic constantly attracts scientists and artists who seek the fundamental principles of life and attempt to rethink new forms of life in artificial systems. Nowadays, interdisciplinary fields between ALife, artificial intelligence (AI), computational biology, and synthetic biology are increasingly emerging into public view. It is necessary to reconsider the relations between the material body, identity, the natural world, and the concept of life (Wu & Huang, 2021). Art paves the way for exploring new possibilities.
Rock art of the Chauvet caves in France, created some 30,000 years ago in the Upper Paleolithic period, shows a hybrid image mixing a human’s arm and a bison’s head (Valladas et al., 2001). It has been interpreted as a technological mimicry of living things (Dorin, 2015). In fourth-century BC China, the Classic of Mountains and Seas gave detailed descriptions of weird animals alongside the geographic locations of mountains and rivers, forming an ancient national geographic record (L. Chen, 2010). In 1739, Jacques de Vaucanson (1742) created an automaton named Digesting Duck, which has had a significant influence on Artificial Life research (Dorin, 2004). This automaton duck imitated the actions of eating, drinking, and digestion. In the 19th century, mermen and mermaids were manufactured by artisans and shipped to Europe for display in private scientific and ethnographic collections of curiosities or to the paying public (Dorin, 2015). One famous and attractive freak creature is the “Feejee Mermaid.” This hybrid organism was recreated by The Mermaid De-Extinction Project (Pell, 2021–) and was found to contain DNA of both the Pongo (orangutan) and Salmo (salmon) genera (Rogers et al., 2022). It has been shown that, due to biological limitations, its lifespan would be less than 20 minutes, so it could not exist in reality.
This survey focuses on how artists, via synthesis or simulation using hardware, software, and wetware technologies, understand nature and represent the characteristics of the natural living system through their practices in the visual art domain.
1.1 Motivation
Several surveys in ALife focus primarily on technical development (Aguilar et al., 2014; Lehman et al., 2020; Taylor et al., 2016), including a recent one on methods of procedural generation of virtual creatures’ morphology (Lai et al., 2021). Almost all surveys confirm ALife art, or art informed by ALife (Antunes, 2013), as one of the critical parts of ALife research. There have also been numerous surveys on ALife art. For example, Penny (2010, 2015) writes about hardware ALife art. Dorin (2003, 2004, 2015) provides interesting views on ALife over the years, especially in the humanities and social studies.
Since the publication of Mitchell Whitelaw’s (2004) Metacreation, ALife art has gained wide acceptance by the public. However, no survey in the past 10 years has systematically focused on ALife in visual art, specifically on its software and computational aspects. With the rapid development of AI mixed with synthetic biology and computational biology, there is an urgent need for such a state-of-the-art review.
From the theme Genetic Art—Artificial Life in 1993 (Gerbel et al., 1993) to Artificial Intelligence: The Other I in 2017 (Stocker, 2017) to the newly established category of “Artificial Intelligence and Life Art” of the Ars Electronica in 2019 (Ogawa & Stocker, 2019), the focus has shifted over the past 30 years. The consideration and practice of ALife are, however, much older.
In the 1960s, creative melding of biology and technology in artifacts gained much attention, exploring the principles and practices of cybernetics and computer programming (Dorin, 2015). One of the most representative examples has been the popular 1968 exhibition Cybernetic Serendipity (Usselmann, 2003), where much of the artwork was explored under the banner of cybernetics and generative art. This laid the foundation for the emergence of “Artificial Life” as a new scientific discipline. VIDA Art and Artificial Life (Tenhaaf, 2008), from 1999 to 2012, was a series of key international exhibitions and competitions that witnessed the flourishing development of Artificial Life as a new discipline. The exhibitions demonstrated the creative and unique nature of Artificial Life.
How to understand nature (“life-as-we-know-it”) and how one can create new life (“life-as-it-might-be”) (Langton, 1993) have motivated ALife research. Different types of ALife have also reflected the principal technology of their period in history (Langton, 1993).
According to Demos et al. (2021), the visual arts play a vital role in exploring key issues of representation, affect, and societal involvement (p. 2). This view also applies to the development of the early ALife discipline. The controversial book Perceptrons (Minsky & Papert, 1969/2017) is often thought to have undermined confidence in bottom-up research; although Minsky later withdrew his flawed analysis and apologized, the book contributed to a decline in neural net research in the 1970s (Olazaran, 1996). Artists, however, were unaffected by this event and continued their exploration. People in the ALife science community recognize the arts as helping to engender a new scientific discipline, probably because ALife and AI follow two different paradigms: “a biologically based paradigm of growth and adaptation, and a mathematico-logically based system of propositional reasoning on explicit representations” (Penny, 2009, sec 3.3).
How can life be reimagined artistically? More importantly, how does technology influence art, society, and culture, and vice versa? What is the artist’s perspective on these sensitive subjects, which involve ethical and moral issues? How do we define aesthetics in ALife? These questions have not been adequately answered and thus motivate us to provide a systematic and up-to-date survey on the topic. Owing to space limitations, we do not cover much of the work prior to 2000, reviewing only a few representative pieces. We focus primarily on the most recent 40 years of practice in Artificial Life. By summarizing previous surveys closely related to ALife, ALife art (including computational generative art), AI art, and bio art, this survey explores how artists and scientists understand nature and create new life using modern technology in the 21st century. Specifically, we survey works that attempt to answer the following questions: How has technology been invented through bio-inspiration? How can the properties of life be understood through technology? How do these works present our times, echoing Oxman (2015): “Here is to an age of design, a new age of creation, that takes us from a nature-inspired design to a design-inspired nature, and that demands of us, for the first time, that we mother nature”?
2 Material Collection
First, we searched for highly cited papers on Google Scholar using keywords such as “Artificial Life,” “Artificial Life art,” “Generative art,” “AI art” (Cetinic & She, 2022; Hertzmann, 2020; Manovich, 2019), and “Bio art” (Rogers et al., 2022; Vaage, 2016; Yetisen et al., 2015). We found several review papers (see Figure 1), many in journals such as Digital Creativity (Dorin et al., 2012; Penny, 2010), Technoetic Arts (R. Greenfield & Cao, 2021), and AI and Society (Melkozernov & Sorensen, 2021; Todorovic, 2021). Additionally, we used the advanced search function on the MIT Press website, specifically in the journal Artificial Life, to search for the keyword “art” (Antunes et al., 2015; Dorin, 2003; Kim & Cho, 2006; Lehman et al., 2020; Penny, 2015; Taylor et al., 2016). We also searched for the keyword “Artificial Life art” in Leonardo, from 2018 to 2022 (Gruber, 2020; Lowenberg, 2020; Rowland, 2021). The advanced search returned hundreds of results, most of which were irrelevant to ALife art. This is similar to the findings of a paper that analyzed 20 years of Artificial Life publications, which stated that “some themes are poorly represented, such as art, because artists usually choose different venues to publicize their work” (Aguilar et al., 2014).
Second, we gained much inspiration from several doctoral theses, particularly the following:
Mehmet Selim Akten graduated in 2021 with the thesis titled “Deep Visual Instruments: Realtime Continuous, Meaningful Human Control Over Deep Neural Networks for Creative Expression” (Akten, 2021)
Rui Filipe Nicolau Lima Antunes graduated in 2013 with the thesis titled “On Computational Ecosystems in Media Arts” (Antunes, 2013)
Haru (Hyunkyung) Ji graduated in 2012 with the thesis titled “Artificial Natures: Creating Nature-Like Aesthetic Experiences Through Immersive Artificial Life Worlds” (Ji, 2012).
We also searched for the definitions and opinions referenced in their theses.
Third, we learned from media art theorists, such as Haraway (1991, 2013) and Lev Manovich (http://manovich.net/).
Finally, and most importantly, we obtained many resources from individual artists and their artworks via their websites and art reviews. As one of the top-level exhibitions and festivals in media art, Ars Electronica has a helpful archive starting from 1979, featuring many artworks. Another international, well-known competition, VIDA Art and Artificial Life, has listed many award-winning artworks from 1999 to 2012. Additionally, we retrieved academic papers from the ACM SIGGRAPH art paper and art gallery programs.
3 Concepts and Scope
Unlike the precise definitions of many concepts and terms in science and engineering, concepts in art are usually vague, sometimes even controversial. We explain a few essential concepts in Artificial Life art to be used to define the scope of this survey.
3.1 Concepts
3.1.1 Artificial Life Art in Contemporary Media Art
Artificial Life art (or ALife art) arose as the aesthetic arm of the ALife movement (Penny, 2010). It is a complex and interdisciplinary study among the various genres of contemporary media arts. Compared to other scientific research, ALife emphasizes the topic of “life-as-it-could-be,” which brings more possibility to the creativity of art (Ji, 2012).
Moreover, “Artificial Life has a more holistic character due to its emancipation from reductionist, object-oriented views and focuses instead on system-oriented views of relations of processes” (Ji, 2012, p. 17). ALife therefore shares the character of the avant-garde in contemporary art, which focuses on the innovative, introducing or exploring new forms or subject matter.
In the early stages of ALife art, artists focused on a single critical process known as artificial evolution (Whitelaw, 2004). In the following years, artworks on different themes emerged in different media. For example, artists explored ecosystem simulations, cellular automata (CAs), and behavioral robotics through the dry world of virtuality (Ascott, 2000) (digital images, animation, interactive installations, and on- and off-line virtual environments). Other artists use moist media (Ascott, 2000), synthesizing living systems (comprising bits, atoms, neurons, and genes) to challenge perceptions regarding the utilization of new biological knowledge (Catts & Zurr, 2002).
Researchers divide ALife media into three subfields (Sinapayen, 2019). According to Bedau (2003), “three broad and intertwining branches of artificial life correspond to three different synthetic methods” (p. 505): hardware (Adamatzky & Komosinski, 2009), software (Adamatzky & Komosinski, 2005), and wetware (Doyle, 2003):
- Hardware ALife art.
This comprises artworks made primarily by hardware, including various approaches to animating mechanical structures and electronic devices (Nakayasu, 2020) in robotic arts. Exhibited in the famous exhibition Cybernetic Serendipity in 1968 (Usselmann, 2003), Edward Ihnatowicz (http://www.senster.com/) was an early proponent of embodiment in both robotic art (Kac, 1997) and AI, and also a pioneer of the discipline now known as ALife (Brown et al., 2009). Ihnatowicz initially created the artwork SAM and later the high-budget artworks The Senster and The Bandit. Many recent works also represent the diversity of hardware ALife art, for instance, the series Beach Animal, powered by the wind, by Theo Jansen (http://theojansen.net/); the fantasy Anima Machine, made mainly with metallic material, by Choe U-Ram (http://www.uram.net/eng_new/intro_en.html); and giant, mechanized animal sculptures with movement by La Machine (https://www.lamachine.fr/). The artworks The Flock, Autopoiesis, and Augmented Fish Reality created by Ken Rinaldo (https://www.kenrinaldo.com/) combine hardware with software.
- Software ALife art.
Artists and scientists generate artworks via computer simulation, algorithms, artificial intelligence tools, and other software.
- Wetware ALife art.
Artworks in wetware are made in life science laboratories and enable the audience to learn, understand, critically engage with, and comment on the future of life and the living (Zurr & Catts, 2004). The narrow definition of bio art is “art that literally works in the continuum of biomaterial, from DNA, proteins, and cells to full organisms. Bio Art manipulates, modifies or creates life and living processes” (Kac et al., 2017). Many artists continue to explore these areas, notably Joe Davis, Oron Catts and Ionat Zurr, Eduardo Kac, and Anna Dumitriu.
This survey essentially focuses on software ALife art.
3.1.2 Computational Art and Design (CAAD)
Computational art and design (CAAD), that is, the use of computation and algorithms in producing artistic and creative works (Akten, 2021), has become a widely accepted term. “CAAD was introduced in the 1950s to assist designers in assessing the ‘goodness’ of their creations” (Kalay, 1999, p. 2), but it soon proved of little use in that role. Nowadays, CAAD is shifting its focus from computer-aided design to computer-aided collaboration.
Regarding art versus design, Oxman (2016) states, “The role of Design is to produce embodiments of solutions that maximize function and augment human experience; it ‘converts’ utility into behavior. The role of Art is to question human behavior and create awareness of the world around us; it ‘converts’ behavior into new perceptions of information, representing the data that initiated the KCC in Science” (p. 5). Design and art work differently in their functions and sometimes are entangled with each other. The relations between science, engineering, design, and art are like a clock, with repetition, continuity, and change (Oxman, 2016). CAAD covers all the concepts in the following subsections that are more specific to art.
3.1.3 Computational Generative Art and Evolutionary Art
Not all generative art uses computers. Boden and Edmonds (2009) identified the art historian Jack Burnham as the coiner of “process art”: “In such handling of materials the idea of the process takes precedence over end results” (Burnham, 1968, p. 3).
Computation is a relatively new approach to creative expression (Bolter & Grusin, 2000). Computer simulation allows one to build a “model world” that re-creates the world. Generative computer art can evoke the computational sublime (McCormack & Dorin, 2001), producing emergent properties that have never existed in any natural system (McCormack et al., 2014).
Computational generative art (CG-art) is defined as the art “produced by leaving a computer program to run by itself, with minimal or zero interference from a human being” (Boden & Edmonds, 2009, p. 37). Nowadays, artists and designers do not directly produce final artifacts using the computer during their creative practices. Instead, they design the process to collaborate with computers and then produce the final artifact (Akten, 2021).
Under CG-art is another category of art, called evolutionary art (evo-art), that has influenced ALife art profoundly. As early as 1953, mathematician Nils Aall Barricelli, who was later recognized as a pioneer of Artificial Life and evolutionary computation (EC), simulated evolving organisms on two-dimensional playing cards using numbers and converted them into images with computers (Fogel, 2006). “Evo-art is evolved by processes of random variation and selective reproduction that affect the art-generating program itself” (Boden & Edmonds, 2009, p. 37).
Australian researcher, artist, and educator Jon McCormack (2005) uses the term evolutionary music and art (EMA) to describe the generation of music and art using creative evolutionary systems. He believes that EMA is about not only the generative process but also the evaluation process.
3.1.4 Interactive Evolutionary Computation
Interactive evolutionary computation (IEC) is a type of EC that optimizes systems based on subjective human evaluation. The EC fitness function is replaced by a human user, typically through human–computer interfaces (Takagi, 2001).
An early example of IEC, the artificial selection program Watchmaker, was introduced in Richard Dawkins’s (1986/1996) book The Blind Watchmaker. Watchmaker’s interface lets users select one option from many to simulate and evolve 2D shapes and patterns called “Biomorphs.” The chosen form becomes the basis for the next generation of forms, from which the user selects again (Lai et al., 2021).
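This selection loop can be sketched in a few lines of Python. The following is a hypothetical minimal IEC of our own, not Dawkins’s actual program: the user’s pick (simulated here by an automatic preference) stands in for the fitness function.

```python
import random

def mutate(genome, rate=0.2):
    """Copy the genome, randomly perturbing some of its values."""
    return [g + random.uniform(-1, 1) if random.random() < rate else g
            for g in genome]

def iec_step(parent, choose, n_offspring=8):
    """One IEC generation: breed variants, let the human pick the next parent."""
    offspring = [mutate(parent) for _ in range(n_offspring)]
    # In a real system, each offspring would be rendered and shown to the user.
    return choose(offspring)

random.seed(0)
parent = [0.0] * 5  # an abstract "genome"; in Biomorphs it encodes a drawing
for _ in range(10):
    # Stand in for the human with an automatic preference (largest sum).
    parent = iec_step(parent, choose=lambda pop: max(pop, key=sum))
```

The structural point is that no fitness function is coded anywhere: the population is steered entirely by whichever variant the selector prefers, generation after generation.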
IEC has also been used in art education to develop artistic sense rather than artistic skill (Takagi, 2001), and in art and Artificial Life research for image creation; for example, humans can select virtual insect shapes and plant morphological lines in CG-art with artistic applications (Takagi, 1998).
ALife artists are keen on using IEC to create and evolve artificial creatures based on their aesthetic preferences. Many artworks combine the concept and process of how and why artists make choices, to be introduced in the following sections.
3.1.5 AI Art, AI, and ALife
Until now, there has been no definitive answer as to what AI art is. We first summarize the three definitions of Manovich (2019):
1. AI refers to computers being able to perform many human-like cognitive tasks. Under this definition, art created by AI is something that professionals recognize as valid historical or contemporary art. As one cannot even give art a clear definition, how can we evaluate AI art?
2. What defines whether something is “AI” is not a method but the amount and type of control we exercise over an algorithmic process. At a minimum of three points in this process, a human author makes explicit choices and controls what the computer does. First, a human designs the network architecture and the algorithm used to train the network, or selects from existing ones. Second, the human creates the training set. Third, the human selects what, in their view, are the most desirable artifacts generated by the network.
3. AI can create unique, systematic art forms by interpreting and extending human cultural patterns. Thus, AI art is a type of art we humans cannot create because of the limitations of our bodies and brains and other constraints.
As an example following Manovich’s first definition of AI art, a computer program designed to produce art autonomously, called AARON (H. Cohen, 1995), was created by British-born artist Harold Cohen. Even though the artist died in 2016, AARON can still create art pieces by itself (P. Cohen, 2016). Owing to the rapid evolution of various large-scale AI models and the lack of a clear definition of autonomous art, our survey does not delve deeply into autonomous AI art.
The third definition of AI art is still far from reality: How can we detect an artwork that could not have been created by us? Life science studies the physical question of “what is life?”; ALife is about “life-as-we-know-it” and “life-as-it-might-be.” We believe that the third definition concerns alien life (Boden, 2003) and so do not include it in this survey.
This survey therefore adopts the second definition of AI art, that is, AI as a collaborator with humans to create ALife art.
There are many different views on the relationship between AI and ALife. Let us start from the original definitions of AI and ALife. In 1955, McCarthy (as cited in Hamet & Tremblay, 2017) coined the term AI by defining it as “the science and engineering of making intelligent machines” (p. S37). In 1987, Langton (as cited in Aguilar et al., 2014) coined the term ALife by defining it as “life made by man rather than by nature” (p. 2). The common denominator between AI and ALife points to the opposition between the natural and the artificial, the born and the made (Mambrol, 2018).
In science fiction, the difference between AI and ALife is about the relationship between intelligence and body. For example, in Frankenstein (Shelley, 1818), the creature’s intelligence naturally comes from the body. However, with the development of electronic and digital computers, intelligence no longer needs a body, as in Ghost in the Shell (Silvio, 1999). Besides, the essential background in fictional themes regarding AI and ALife is not merely computer science but the immense transformation of biology and the life sciences by cybernetics, information theory, and modern genetics (Mambrol, 2018) (see Figure 3).
Returning to our previous discussion of the second definition of AI art, AI is the collaborator with humans. This survey, therefore, treats AI as a creative art tool in the theme of ALife art.
3.2 Scope
This survey follows the preceding concepts and covers only software ALife art, without considering hardware ALife art and wetware ALife art. In particular, we focus on the representative artworks of CG-art (including evo-art), AI art, bio data, and other software ALife art that has questioned human behavior and created awareness of the world around us over the past 40 years. We will not cover autonomous AI art or the hot topic of generative AI with large models.
The survey covers many topics of software ALife art from different perspectives. The boundaries of different categories of certain artworks may not be crystal clear. We therefore review such artworks in different subcategories (see Figure 4).
4 Classification Criteria and Taxonomy
Having presented the survey’s background and scope, with many definitions and concepts of ALife art, we discuss our classification criteria, that is, materials, themes, and technologies, followed by our taxonomy. The survey is structured primarily by the materials of artworks, which we call “vertical classification” (see Figure 5). Horizontally, we label the artworks in different colors according to their themes and the technology used to create them.
4.1 Material
Our taxonomy is built vertically with a tree structure to represent the materials of various artworks, according to the materials employed in their production (Dorin et al., 2012). This survey focuses on software ALife art and divides it into two main subcategories: “bio as concept” and “bio as data,” inspired by Jo Wei (2022).
“Bio as concept” is further divided into two subcategories: “individual creatures” and “swarm and computational ecosystems.” “Bio as data” is further divided into two subcategories: “image data” and “signal data.”
4.2 Theme
ALife art always presents the theme of life and nature. Examples include the properties of life, environmental problems, species extinction, food issues, the conscious and the unconscious, and the human and the nonhuman. Most surveys categorize the art by theme. Having vertically divided categories according to materials, our survey further categorizes artworks horizontally according to theme. Following Langton, ALife can be divided into “life-as-we-know-it” and “life-as-it-might-be.” We interpret “life-as-we-know-it” to mean “understanding nature” and “life-as-it-might-be” to mean “creating new life.” We visualize the themes in different colors in Figure 5.
4.3 Technology
Many technologies are inspired by nature. The survey “A Comprehensive Overview of the Applications of Artificial Life” (Kim & Cho, 2006) by computer scientists mentions many methodologies of ALife. Not coincidentally, computational artists use various ALife algorithms to enrich their artworks. The basic technologies of ALife include, but are not limited to, the following methodologies: (a) CAs, (b) EC, (c) genetic algorithms (GAs), (d) neural networks (NNs), (e) Lindenmayer systems (L-systems), (f) ant colony optimization (ACO), and (g) bird-oid objects (boids).
A CA consists of a regular grid of cells; CAs are discrete dynamical systems whose behavior is completely specified in terms of a local relation (Toffoli & Margolus, 1987). Each cell can be seen as an automaton, and with appropriately structured rules, CAs can model complex behaviors such as ecological systems. Inspired by Darwin’s (1859) book On the Origin of Species, EC refers to computer models derived from the process of evolution in nature (Fogel, 2000). Among the several types of EC, GAs are one of the most popular methods (Kim & Cho, 2006).
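As a concrete illustration of a local CA rule (our own sketch, not tied to any surveyed artwork), Conway’s Game of Life fits in a few lines of Python; the rule consults only each cell’s eight neighbors, yet the global behavior is famously complex.

```python
from collections import Counter

def life_step(live):
    """One Game of Life update; `live` is a set of (x, y) cell coordinates."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2 under this purely local rule.
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker
```

Representing the grid as a sparse set of live cells lets the world be unbounded, which suits the open-ended simulations ALife artists favor.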
GAs (Holland, 1992) search for optimal solutions by relying on biologically inspired operators, such as mutation, crossover, and selection (Mitchell, 1998). Inspired by the biological NNs of animal brains, artificial NNs, or simply NNs, are one type of machine learning model (Abiodun et al., 2018); they learn to map inputs to outputs through layers of interconnected units.
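The three GA operators can be made concrete with a textbook sketch in Python. The “OneMax” objective used here (maximize the number of 1s in a bitstring) is a standard toy problem of our choosing, purely for illustration.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60,
           mutation_rate=0.02):
    """Minimal GA over bitstrings: selection, one-point crossover, mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                   # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

random.seed(1)
best = evolve(fitness=sum)  # "OneMax": maximize the number of 1s
```

In IEC, as discussed earlier, the `fitness` argument would be replaced by a human’s choice rather than a coded function.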
L-systems are grammar rules created by the theoretical biologist and botanist Aristid Lindenmayer (1968). They can be used to create virtual plants, such as trees, flowers, and fungi (Ammeraal & Zhang, 2017), which often exhibit fractal structure, as well as the morphologies of a variety of organisms (Rozenberg & Salomaa, 1980).
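The rewriting core of an L-system is tiny. The sketch below applies Lindenmayer’s original algae rules; in graphics applications, the resulting strings would then be interpreted as turtle-drawing commands (conventionally, F = draw forward, + and − = turn, [ and ] = branch).

```python
def lsystem(axiom, rules, steps):
    """Rewrite every symbol in parallel, `steps` times."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae model: A -> AB, B -> A.
print(lsystem("A", {"A": "AB", "B": "A"}, 4))  # ABAABABA

# A simple branching "plant" grammar (rendered by a turtle in practice).
plant = lsystem("F", {"F": "F[+F]F[-F]F"}, 2)
```

Because every symbol is rewritten in parallel each step, the string length grows geometrically, which is what produces the self-similar, fractal-like forms.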
ACO finds solutions to difficult combinatorial optimization problems (Kennedy, 2006), inspired by the behavior of real ants, especially their pheromone-based communication. Boids are Artificial Life programs that simulate the flocking behavior of birds (Reynolds, 1987).
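Reynolds’s boids combine three local steering rules: cohesion (move toward nearby flockmates), alignment (match their heading), and separation (avoid crowding). A minimal sketch follows; the weights and neighborhood radius are illustrative choices of ours, not Reynolds’s original parameters.

```python
import math

def boid_step(boids, dt=0.1, radius=2.0, w_sep=1.5, w_align=1.0, w_coh=1.0):
    """One flocking update; each boid is ((x, y) position, (vx, vy) velocity)."""
    new = []
    for i, ((px, py), (vx, vy)) in enumerate(boids):
        neighbors = [(q, v) for j, (q, v) in enumerate(boids)
                     if j != i and math.dist((px, py), q) < radius]
        if neighbors:
            n = len(neighbors)
            cx = sum(q[0] for q, _ in neighbors) / n   # flock center
            cy = sum(q[1] for q, _ in neighbors) / n
            ax = sum(v[0] for _, v in neighbors) / n   # mean velocity
            ay = sum(v[1] for _, v in neighbors) / n
            vx += w_coh * (cx - px) * dt + w_align * (ax - vx) * dt
            vy += w_coh * (cy - py) * dt + w_align * (ay - vy) * dt
            for q, _ in neighbors:                     # separation: push apart
                vx += w_sep * (px - q[0]) * dt
                vy += w_sep * (py - q[1]) * dt
        new.append(((px + vx * dt, py + vy * dt), (vx, vy)))
    return new

flock = [((0.0, 0.0), (1.0, 0.0)), ((1.0, 0.0), (0.0, 1.0))]
flock = boid_step(flock)
```

No boid knows about the flock as a whole; coherent group motion emerges from these purely local interactions, which is why boids became an emblem of ALife.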
Because artists do not usually reveal many technical details of their projects, we do not classify the surveyed artworks by a “technology” criterion but instead mention their implementation techniques within their own categories. Figure 5 uses small icons next to the artworks to represent the different types of technologies and methodologies, synthesized under the categories of “themes” and “materials.”
5 ALife Art in Software: Bio as Concept
From the macro perspective of Darwin’s (1859) On the Origin of Species to the micro perspective of ant behavior, artists and scientists have always drawn inspiration from nature. We choose several important artworks and divide them into the two subcategories of “bio as concept,” that is, creating individual creatures, and swarm and computational ecosystems.
5.1 Individual Creatures
One can create computational art within the theme of Artificial Life using many different techniques. In the 1980s, many computer scientists and artists started to make virtual creatures with algorithms. Yoichiro Kawaguchi (1982) observed and studied nature, turned the morphological shell forms of nature into mathematics, and represented them in computer graphics at SIGGRAPH 1982. Later, Kawaguchi (n.d.) explored other shapes, such as plants, snails, horns, claws, and tusks, and created numerous fantasy artworks over the years. As one of the pioneers, William Latham explores Artificial Life art starting from painting (Lambert et al., 2013). From 1983 to 1985, Latham designed the artwork FormSynth, which transforms hand-drawn shapes on paper. He then collaborated with mathematician and programmer Stephen Todd in 1987 to build a computer system called FormGrow, based on FormSynth, to construct simple shapes, such as bulging and hollowing objects. Inspired by Richard Dawkins’s (1986/1996) artificial selection program Watchmaker, which carries the concept of IEC, Todd and Latham worked together at the end of 1988 to develop a new system called Mutator, which managed the data from FormGrow and began to cross-breed forms with it. Furthermore, Mutator can treat basic components as genes and let the genes recombine and mutate to produce large evolutionary trees of computer-generated 3D forms (Lambert et al., 2013). Latham mentioned that the method of Mutator came from the processes of nature, inspiring a simulation of natural selection; as he asserted, the artist, akin to a gardener, chooses and nurtures the samples that should be preserved and allowed to evolve (Todd & Latham, 1992, p. 104). This artwork is representative of how artists make aesthetic choices to influence the evolution of Artificial Life under the concept of IEC and has had a significant impact on many later artworks.
For example, inspired by Dawkins and Latham, Form (1999), created by Andrew Rowbottom (1999), illustrates the characteristics of IEC through five versions of 3D works and an animated function.
Around the same time, inspired by Darwinian evolution, Karl Sims (1991) created the artwork Genetic Images (1993), allowing the audience to interact with the evolution of still images. In 1994, he made a system for creating virtual creatures in three dimensions, called Evolved Virtual Creatures (Sims, 1994), generated automatically using GAs. The virtual creatures can also evolve toward behaviors like swimming, walking, and jumping. Building on this 3D technology, Sims (1997) continued with the artwork Galápagos (1997). The audience can participate in this work across 12 screens, observing how the 3D virtual organisms survive, mate, mutate, and reproduce. This artwork also carries the characteristics of IEC: artificial creatures are bred not by the artist but by the audience via multiple screens.
El Ball del Fanalet or Lightpools (1999) (Parés & Parés, 2001) is an art piece that is also interactive with the audience in both the virtual reality and real worlds. The audience holds a fanalet that illuminates the surface of a pond and searches for a fish. Once a virtual creature is found, it can be trained to dance with the audience, just like in the Catalan popular dance Ball del Fanalet, until the virtual creature starts dancing on its own.
Artists explore evolutionary concepts in Artificial Life art with other technologies and algorithms. Hornby and Pollack (2001) used L-systems as the encoding for an evolutionary algorithm (EA) to create virtual creatures. Using reaction-diffusion algorithms, Healing Series (2004) by Brian Knep (2004) simulates the formation and fading of scars by generating 2D patterns that respond to audience interaction. Around 1997, Christa Sommerer and Laurent Mignonneau (2015) created the artwork Life Spacies. In this work, the audience sends an e-mail, and an Artificial Life creature is created that lives in the virtual environment in the museum. The virtual Artificial Life is linked with the written text, which is translated into a genetic code. As the audience continues to interact with the species, they influence the exchange of genetic code, the creation of child creatures, and the time of a creature's death. The artist group then created a series of related artworks, including Life Spacies II (1999), in which the virtual creatures feed on the text provided by the user, and Life Writer (2006), which projects virtual creatures onto paper. These three artworks present the IEC characteristic through text.
Andy Lomas (2014) created a series of Artificial Life forms with self-developed algorithms and software using a simplified biological model of morphogenesis and cell division, such as Aggregation (2005) (Lomas, 2005) and Cellular Forms (2014) (Lomas, 2014).
Australian researcher and artist Jon McCormack has produced many research publications and artworks in Artificial Life art. For example, he discusses the emergent properties of the generative process and their capacity for creating complex, surprising, and novel artworks (McCormack & Dorin, 2001). His artworks Fifty Sisters (2012–2016) (McCormack, 2013) and The Unknowable (2015–2017) (McCormack, 2017) present his modern art perspective of combining generative art with social topics in the generative process. Fifty Sisters is a series of 50 still images of computer-synthesized plant forms. Each virtual plant was grown algorithmically from computer code via artificial evolution and a generative grammar. The visual components of the plants are oil company logos, referencing the original “Seven Sisters,” a cartel of seven oil companies that dominated the global petrochemical industry. This work aims to remind the viewer that the huge success of these oil companies rests on the long natural processes that produced plants: “We are expending this non-renewable resource in the relative blink of an eye” (McCormack, 2013, p. 74). The algorithm’s rules are somewhat similar to cell division in biology but more abstract and simplified. The creation process also involves evolving and “gene splicing” genetic codes when defining and developing each unique plant form (see Figure 6). Using a similar generative system, the artist chose endangered Australian plant species instead of oil company logos to create The Unknowable (2015–2017), presented on three screens in a darker, cleaner graphic style.
In his artwork Evolving Alien Corals (2018), Joel Simon (2018) uses a GA to simulate the evolution of virtual corals with morphogens, signaling, memory, and other biological capacities. Simon frames the work as a multipurpose biomimetic optimization engine (see Figure 14).
Most recently, Ziwei Wu and Lingdong Huang (2021) used a GA to create an artwork called Mimicry (2021). Mimicry explores a pseudo-environment loop system spanning nature and artificial mechanical organisms, combining living flowers with projectors, webcams, and computer monitors. It reveals the possibility of using a real-time loop system to create an altered nature in which virtual and real natures influence each other, following a long exploration of the evolving, self-regenerating environment of nature and technology (see Figure 7).
Virtual insects resemble those in the environment captured in real time by a webcam. The insects’ textures are iteratively “evolved” with GAs over time so that they blend visually into the background. A GA simulates the process of selective breeding to gradually reach an optimal solution in the following fashion: the operations of “mutation,” “crossover,” and “selection” (Holland, 1992) modify, generate, and filter a large set of candidate solutions over time until one satisfactorily meets the desired criteria.
By concatenating the bits encoding every pixel's color, the “gene” of a solution can be represented as a binary string that can then be operated on. The process can be viewed as various textures continuously mixing and mashing against each other until a handful of the best ones remain (see Figure 8).
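As a minimal sketch, the GA loop just described might look like the following. The actual Mimicry implementation is not published, so the gene length, population size, mutation rate, and fitness function here are illustrative assumptions; fitness simply counts how many bits of a candidate “texture” match a target “background” string.

```python
import random

GENE_LEN = 64       # bits encoding one candidate texture (assumed)
POP_SIZE = 30       # candidate solutions per generation (assumed)
MUTATION_RATE = 0.01

def fitness(gene, target):
    """Toy fitness: number of bits matching the 'background' target."""
    return sum(a == b for a, b in zip(gene, target))

def mutate(gene):
    """Flip each bit with a small probability."""
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in gene]

def crossover(a, b):
    """Single-point crossover of two parent genes."""
    cut = random.randrange(1, GENE_LEN)
    return a[:cut] + b[cut:]

def evolve(target, generations=200):
    pop = [[random.randint(0, 1) for _ in range(GENE_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        # selection: keep the better-matching half of the population
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        survivors = pop[: POP_SIZE // 2]
        # crossover + mutation refill the population
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return max(pop, key=lambda g: fitness(g, target))

target = [random.randint(0, 1) for _ in range(GENE_LEN)]
best = evolve(target)
```

After a few hundred generations the best gene closely matches the target, mirroring how the evolved textures converge toward the webcam background.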
Ant- and ant-colony-inspired ALife visual art (Greenfield & Machado, 2015) differs from the aforementioned ALife art in that it easily shifts the simulation of nature from individual creatures to swarms. One may simulate ant pheromones visually by overlaying a series of networks; with this approach, artists have obtained abstract pieces such as Transport Network Overlay (2011), with interesting depth and color interactions (G. R. Greenfield, 2011). Inspired by ants' nest construction, T. Albipennis (Urbano, 2011), a two-dimensional computer builder by Paulo Urbano, can generate patterns via simulated swarms.
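A toy sketch of the pheromone dynamics behind such work, under assumed parameters (deposit amount, evaporation rate, random-walk movement standing in for the pheromone-following behavior of real models): agents deposit pheromone on a grid as they move, the field decays each step, and the accumulated values can then be rendered as an image.

```python
import random

W, H = 40, 40        # grid size (assumed)
EVAPORATION = 0.95   # fraction of pheromone surviving each step (assumed)
DEPOSIT = 1.0        # pheromone laid down per ant per step (assumed)

grid = [[0.0] * W for _ in range(H)]
ants = [[random.randrange(W), random.randrange(H)] for _ in range(20)]

for _ in range(200):
    for ant in ants:
        # random walk on a toroidal grid; real ant models bias movement
        # toward higher pheromone concentrations
        ant[0] = (ant[0] + random.choice((-1, 0, 1))) % W
        ant[1] = (ant[1] + random.choice((-1, 0, 1))) % H
        grid[ant[1]][ant[0]] += DEPOSIT
    # evaporation: the whole field decays each step
    for row in grid:
        for x in range(W):
            row[x] *= EVAPORATION
```

The balance between deposit and evaporation keeps the field bounded, so trails fade unless they are continually reinforced, which is what gives these pieces their layered, network-like textures.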
5.2 Swarm and Computational Ecosystems
Compared to computer-generated individual creatures, there is also a more complex category, computational ecosystems (CEs) (Antunes et al., 2015). CEs can represent swarm characteristics from biology via computer simulation. Two primary bottom-up, individual-based modeling approaches are used: CAs and agent-based models (ABMs). CAs depict the state and dynamics of cells in a grid (Clarke, 2014), whereas ABMs allow individual components to have their own characteristics and rule sets (Crooks & Heppenstall, 2011). More specifically, an individual ALife creature is often treated as a computational agent, with creators assigning different behavioral patterns. Multiple such creatures form a swarm, which is generally composed of individuals with similar characteristics and little evolutionary or developmental history (Whitelaw, 2004). Computational ecosystems are multiagent systems in which multiple different creatures interact with each other in a virtual space (Antunes, 2013) (see Figure 9).
The use of CAs here dates back to the 1970s, when British mathematician John Horton Conway created the Game of Life, in which an initial configuration is set and its evolution observed.1 A CA is a discrete model widely used in computational ALife art. For example, Paul Brown used CAs to generate and evolve a series of 2D artworks, Swimming Pool (1997) (Brown, 1997) and My Gasket (1998) (Brown, 1998). Quorum Sensing (2004) by Chu-Yin Chen (2004) creates a virtual living world on the principles of Artificial Life, including evolution through GAs and nurturing from a substrate based on CAs. The artist translates the communication phenomenon in bacterial colonies into collective actions by a group of spectators on a sensitive, interactive carpet. In 2001, Jon McCormack used CAs to create Eden: An Evolutionary Sonic Ecosystem (McCormack, 2001), an Artificial Life environment synthesized both visually and audibly. Most recently, Bert Wang-Chak Chan (2020) created Lenia (2020), a family of continuous CAs capable of producing lifelike, self-organizing, autonomous 2D patterns with dynamic movements. These life-forms resemble real-world microscopic organisms and have their own biodiversity (Chan, 2018).
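A minimal sketch of one Game of Life step, the rule set underlying these CA works: a dead cell with exactly three live neighbors is born, and a live cell with two or three live neighbors survives. Representing the grid as a set of live coordinates keeps the sketch short.

```python
from collections import Counter

def step(live):
    """Advance a Game of Life generation; `live` is a set of (x, y) cells."""
    # count live neighbors of every cell adjacent to at least one live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbors; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" is a classic period-2 oscillator:
blinker = {(0, 1), (1, 1), (2, 1)}
```

Running `step` twice on the blinker returns the original configuration, illustrating how rich temporal behavior emerges from purely local rules; continuous CAs such as Lenia generalize exactly this scheme to real-valued states and smooth neighborhood kernels.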
Sometimes, artworks are driven not by nature-inspired algorithms but by the rules of the system itself. In 1990, Tom Ray (1991) created Tierra, a computer program in which digital organisms compete for resources. It creates a simulated ecosystem inside the computer: virtual creatures breed, hybridize, and compete for CPU cycles and memory space (Penny, 2010). The Central City (1996) is an interactive net artwork created by Stanza (2002). This work is a visual collage reflecting on themes of urban consciousness. Using generative procedures with prerecorded materials, such as text, computer graphics, stills, video, and sampled sounds, the audience can interact to create a constantly transmuting environment. Electric Sheep (1999) (https://electricsheep.org/) was first created by Scott Draves and is still accessible via an app. It is an IEC crowdsourced evolving artwork that may be considered a distributed system, with all participating computers working together as a supercomputer that renders animations called “sheep.” This distributed system has attracted 450,000 participants from all over the internet. The audience can vote for their favorite sheep, which then live longer and reproduce according to a GA with mutation and crossover. Other Artificial Life programs, like “boids” (Reynolds, 1987), which simulate the flocking behavior of birds, are also used to create artworks. You Pretty Little Flocker (2015) (Eldridge, 2015) explores the aesthetic state space of creative ecosystems. Different flocks are placed on a 2D canvas and constrained within a circular arena to create environmental constraints.
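A minimal 2D boids sketch following Reynolds's (1987) three steering rules (cohesion, alignment, separation); the neighborhood radius and rule weights below are illustrative assumptions, not values from any of the artworks above.

```python
import math
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def update(boids, radius=20.0):
    """One simulation step: steer each boid by its local neighborhood."""
    for b in boids:
        near = [o for o in boids if o is not b
                and math.hypot(o.x - b.x, o.y - b.y) < radius]
        if not near:
            continue
        n = len(near)
        cx = sum(o.x for o in near) / n - b.x    # cohesion: toward center
        cy = sum(o.y for o in near) / n - b.y
        ax = sum(o.vx for o in near) / n - b.vx  # alignment: match velocity
        ay = sum(o.vy for o in near) / n - b.vy
        sx = sum(b.x - o.x for o in near)        # separation: avoid crowding
        sy = sum(b.y - o.y for o in near)
        # weights chosen for illustration only
        b.vx += 0.01 * cx + 0.05 * ax + 0.002 * sx
        b.vy += 0.01 * cy + 0.05 * ay + 0.002 * sy
    for b in boids:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    update(flock)
```

Note that no rule mentions “flocking”: the coordinated motion that artists exploit in works like You Pretty Little Flocker is an emergent property of these three local rules.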
Many ALife artworks using ABMs are multiagent systems that demonstrate how creators understand nature and distill natural life into behavioral rules assigned to agents, thereby creating new life and ecosystems. TechnoSphere (Prophet, 1996) is an interactive computer-based virtual world created in 1995. It is a digital ecology in which the landscape and Artificial Life-forms evolve in a 3D environment. The audience can create Artificial Life creatures by selecting component parts like heads, bodies, eyes, and wheels, which determine attributes such as speed, visual perception, and rate of digestion. They can also interact with the digital creatures over the internet via the World Wide Web. By 2001, it had attracted more than 650,000 users, who had created more than a million creatures (Prophet, 2001). TechnoSphere 2.0 was later released as an AR-based UbiComp app that allows the audience to explore Hong Kong while interacting with virtual creatures (Prophet & Pritchard, 2015). Rui Filipe Antunes used virtual computational ecosystems in 3D software, with storytelling, to address the social impact of historical colonial narratives in the artworks Senhora da Graça (2010) (Antunes & Leymarie, 2010) and Where Is Lourenço Marques? (2013) (Antunes & Leymarie, 2013). In AfterGlow (2017) (Isley & Smith, 2017), the U.K. artist group boredomresearch presented an artistic interpretation of nature and malaria transmission between humans, macaques, and mosquitoes.
Ian Cheng (http://iancheng.com/) simulates life in 3D “virtual ecosystems” using AI technology. Cheng (as cited in Comer & Cheng, 2019) claimed that his work Emissaries (2015–2017) was deeply influenced by Will Wright's earlier open-ended Artificial Life simulators, such as SimCity, The Sims, and Spore. In Emissaries, Cheng (2015–2017) introduces a narrative agent, “the emissary,” whose motivation to enact a story conflicts with the open-ended chaos of the simulation. AI in these artworks aims to resemble and replace cognition for the “posthuman” economy (Cheng, 2019) (see Figure 10).
6 ALife Art in Software: Bio as Data
ALife visual artworks are often in the form of “bio as data,” with many data types, including device-generated bio-signal data, such as EEG and EMG data. Data may also be macroscopic data from nature, such as tides and epidemic data. Artists choose different types of data to present various topics and creative ideas in their artworks.
We divide the types of data into two categories, visual and signal, based on the existing number and types of artworks.
6.1 Visual Data
Visual raw materials are typically used for the creation of art. When a large data set is available, AI art, or more specifically machine learning, comes into play. According to Manovich and Arielli (2021) in their book Artificial Aesthetics, “Machine Learning is used both to extract patterns from data and to generate patterns after training with said data” (p. 12).
Inspired by this idea, we propose a new methodology for categorizing artworks using visual data. We divide the visual artworks in ALife into two main categories involved in an iterative generation process (see Figure 11): (a) understanding nature and extracting its features and (b) generating new ALife based on visual data from nature. Following this methodology, many artists use ALife as visual data to further generate ALife, which we call ALife2 (squared). We believe that it is possible to learn artists' intentions when selecting generated artworks and to further generate ALife3 (cubed), which would be much more artificial and abstract. Such iterations could possibly continue into higher orders; the generation of ALife could in theory be fully automated in the future.
Pretrained AI models have recently become trendy, especially for text-to-image generation, as in Disco Diffusion (Crowson, n.d.), Midjourney (https://www.midjourney.com/), and DALL·E 2 (OpenAI, n.d.). With these models, anyone without an art background can generate impressive images in a few minutes. Others train deep-learning black boxes themselves to create artworks. This leads back to the classic questions of what art is and who owns the creativity, and, quite literally, to “The Death of the Author” (Barthes, 2001). It remains controversial whether these generated images are “art,” and the debate continues; nevertheless, we believe that these tools will change our lives in the future.
Many well-known artists focus on training their own data sets, including Refik Anadol, Memo Akten, Robbie Barrat, Sofia Crespo, Mario Klingemann, Trevor Paglen, Jason Salavon, Helena Sarin, and Mike Tyka (Hertzmann, 2020). For example, Hananona (STAIRLab, 2017) shows how nature can be understood in the first step of an iterative ALife process, studying objects using deep learning models for multiclass classification, especially fine-grained recognition (Takeuchi, 2017). It is a flower classification system able to recognize flowers at different levels, trained by deep learning on 300,000 flower pictures (Takeuchi, 2017). The flower images are collected from ImageNet (https://image-net.org/) and labeled in 406 classes, each with at least 700 images. Its Flower Map provides a visualization in which each image is transformed into a 1,000-dimensional vector by the deep convolutional NN. The visualization system allows the audience to better understand how the AI classifies flowers from the uploaded images. The author presents his understanding of nature through this classification system (see Figure 12).
With the same subject matter of plants, Anna Ridler chooses tulips to connect the historical Tulip Bubble with the speculative mania of Bitcoin. The artist manually categorized thousands of tulip images in Myriad Tulips (2018) (Ridler, 2018), which represent her understanding of nature. She then uses these images as the training set for Mosaic Virus (2019) (Ridler, 2019) to create new life in the second step of the iterative ALife process. In the machine learning model created by the artist, Bitcoin behaves like a virus, controlling different aspects of the flowers, and the moving flower shows how the Bitcoin market fluctuates (see Figure 13). Ridler uses the same machine learning method to create various artworks and themes with different natural images. For instance, Circadian Bloom (2021) (Ridler, 2021) tells the time via flowers, connecting the understanding of nature with the creation of new life.
By mimicking nature in Wavefront of Life (2018), Yoichi Ochiai (2018) trains NNs on personalized data sets to explore the external-subjective process. This also exposes the limits of human imagination, because colorization is a subjective process in the human mind. With the same concern, Sofia Crespo creates many new Artificial Lives via machine learning, starting from her childhood interest in jellyfish. Her work This Jellyfish Does Not Exist (2020–2021) (Crespo & McCormick, 2020–2021) attempts to imagine artificial creatures that humans have never seen or imagined, for example, because of the limits our photoreceptors place on our perception of color.
Sofia Crespo and Feileacan McCormick work together at Entangled Others Studio. They created the 2D image Critically Extant (2022) (Crespo & McCormick, 2022) to bring critically endangered species to attention. Their 3D artwork Artificial Remnants (2019–2022) (Crespo & McCormick, 2019–2022) creates a nonhuman understanding of creatures by training machine learning algorithms on existing insect data sets.
Almost all of the aforementioned artworks involve selecting real images from nature and combining them with machine learning to generate new images and videos, essentially demonstrating the first layer of iterative ALife: understanding nature and creating new life. The following artworks represent the second layer of iterative ALife, generating ALife2. The project Beneath the Neural Waves (2020–2022) (Crespo & McCormick, 2020–2022) explores the aquatic ecosystem to observe biodiversity, specifically coral reefs. Lacking a sufficient number of real coral models for the training data set, Entangled Others invited Joel Simon to generate virtual corals (described earlier) as their machine learning data set. The system then generates more corals, producing second-order artificial creatures by learning from the virtual Evolving Alien Corals (2019) (see Figure 14).
This iterative ALife generation process also occurs in other creations. Andy Lomas (2016) developed a program called Species Explorer, combining evolutionary and machine learning approaches to assist artists in creating artworks, which systematically visualizes how artists create the next generation of ALife. Species Explorer represents a modern version of IEC: artificial creatures no longer evolve through EAs alone but develop using machine learning techniques that incorporate the artist's aesthetics. The Vase Forms (2019) series (Lomas, 2019), created with Species Explorer, embodies Lomas's aesthetics and can also be 3D printed (see Figure 15).
Jon McCormack and Andy Lomas (2020) state in “Understanding Aesthetic Evaluation Using Deep Learning” that generative artwork combined with machine learning or deep learning technology is common in art creation. As discussed earlier (see Figure 11), it is possible to learn artists’ intentions when selecting generated artworks and so to generate a new generation of ALife automatically. It is interesting to imagine what ALife looks like after many iterations of analyzing the previous Artificial Life and generating a new generation.
Many of the artworks also view biology as visual data and utilize computer vision techniques. They essentially demonstrate how artists understand nature rather than creating new life. For example, the artwork Institute for Inconspicuous Languages: Reading Lips (2018) by Špela Petrič (2018) explores whether a linguist can “establish basic communication signs with Ficus's ‘tiny mouths’ (stomata).” The installation flyAI (2016) by David Bowen (2016) captures and classifies images of live houseflies. Based on a ranking of the image-recognition confidence, it triggers a pump to deliver water and nutrients to the colony.
Other artworks present social issues. The Relative Velocity Inscription Device (2007) by Paul Vanouse (2007), first exhibited in 2002, addresses race crossing and eugenics through the genes of the author's own Jamaican-descended, mixed-ethnicity family. Vanouse uses previously extracted and amplified DNA fragments from each of his parents, his sister, and himself to drive running-figure avatars in a real-time performance captured by computer vision algorithms. Each race is stored in a database that viewers can access via a touchscreen monitor (see Figure 16). Most recently, Gender Shades (2018) (Buolamwini & Gebru, 2018) and Exposing.ai (2021) (https://exposing.ai/) question, respectively, discrimination based on gender and skin color and the expansion of biometric surveillance technologies.
6.2 Signal Data
Many artists use bio-signal data to create their artworks. A significant number of artists convert bio-signal data into sound. These include Atau Tanaka, who transforms EMG data with the self-developed electronic device Myo (2018) (Di Donato et al., 2018) to drive live sound performance. Daito Manabe (2009) twitches human facial expressions through myoelectric sensors in the artwork Electric Stimulus to Face (2009).
The approaches to creating visual artworks with bio-signal data can be divided into three subcategories according to their input: EEG (electroencephalogram), ECG (electrocardiogram), and EMG (electromyography) data. It is then up to the artist how to use the bio-signal data to express their creative concepts and ideas. The output visual artworks can take the form of 2D pictures, 3D animation and sculpture, and even installation prototypes with different materials (see Figure 17).
The highly entertaining artwork Unconscious Flow (2000) by Naoko Tosa (2000) combines the participants' emotions with CG-animated mermaids. The heart rates of two participants, detected by electrodes, are used to calculate their levels of relaxation and strain, which are then mapped onto a synchronicity interaction model. This reveals communication codes in the hidden dimension that do not appear in usual, superficial communication, to see whether the two participants are interested in each other. If both are highly relaxed and interested, the CG animations of the two mermaids on the screen join hands in brotherhood or perform friendly actions; if they are highly strained and less interested, the CG animations quarrel with each other. There are also other modes, such as shy and new communication. Haruki Nishijima's artwork Remain in Light (2001) (Gaetano et al., 2008) allows the audience to collect electric sound waves with a butterfly net and release them onto the screen, which transforms the waves into bodies of light that can be seen as “electronic insects.” The insects emit sounds and swarm around, revealing the invisible swarm behavior of sound data in urban space. Will.0.w1sp (2007) by Kirk Woolford (2007) blends real-time particle Artificial Life systems, which exhibit autonomous drifting movement, with motion capture sequences of the audience. This work creates characters with sound that can be recognized through their connection to human “biological motion” but are also able to explode and scatter in a nonhuman manner. Tango Virus (2005) (Causa et al., 2005) is an interactive installation in which couples dance to a tango theme and their motion data feed into and deform the music. The motion data create a virus that attacks the musical composition, and the vivid pattern generated by the dance is projected onto screens surrounding the dance floor.
Tango Virus draws on Artificial Life concepts to enrich viewers’ readings of the choreography, creating a counterpoint between human and computer-created living systems.
Owing to the limitations of devices other than EEG in capturing biological data, most artists pursue vigorous conscious and unconscious expressions using bio-signal data from EEG. This has led to a new genre of art: “neuro art” (Gruber, 2020). Under this category, artists and scientists usually create visual artworks to express the relationship between body and mind, the emotional state of dynamic behavior, and the privacy of individuals under surveillance technology (Kelomees et al., 2020). These neuro artworks essentially demonstrate how artists understand nature rather than creating new life.
For example, boredomresearch collaborated with Vladyslav Vyazovskiy, a neuroscientist at the University of Oxford, to create the artwork Dreams of Mice (2015) (Isley & Smith, 2015). They use the impulses of a recorded dream as the input signal to create the visual and acoustic expression of dream activities using custom-built software in the Blender Game Engine. It is a complex and beautiful visual artwork addressing the importance of the nonproductive third of human lives (see Figure 18).
Maurice Benayoun made a series of artworks visualizing EEG data, such as Brain Factory (2018) (Benayoun, Klein et al., 2018) and Value of Values (2018) (Benayoun, Mendoza et al., 2018). Varvara Guljajeva and Mar Canet created the artworks NeuroKnitting (2013) (Guljajeva et al., 2013) and NeuroKnitting Beethoven (2020) (Guljajeva & Canet, 2020), which use EEG data and a knitting machine.
7 Conclusions and Future Work
Artificial Life has always been an essential research topic because it represents people's understanding of “what is life?” in different eras. With cutting-edge technology and tools, expressing novel ideas in innovative ways becomes crucial in art.
7.1 Conclusions
This survey has presented a review of Artificial Life art with a systematic classification. We first introduce the historical background of Artificial Life art and the motivation of how to understand nature (“life-as-we-know-it”) and how to create new life (“life-as-it-might-be”). We then present the key concepts of Artificial Life art, essentially in contemporary computational art, including CG-art, evo-art, and AI art. Having limited our scope to software ALife art, we provide an overview of ALife art combined with different art themes and technology by first analyzing existing survey papers and relevant art papers. We review the categories of “bio as concept” (further divided into “individual creatures” and “swarm and computational ecosystems”) and “bio as data” (further divided into “visual data” and “signal data”).
EC was common in early software ALife art, especially in the “bio as concept–individual creature” category. The themes of IEC artworks focused mostly on creating new life, because the main focus of these works is not imitating nature but allowing humans to guide the evolution of virtual organisms. Additionally, technological limitations are sometimes closely related to the theme of the artwork. In the “bio as data” category, many more works fall under the theme of understanding nature than in “bio as concept.” For example, artworks using computer vision techniques and EEG technology (neuro art) are almost all about understanding nature. The artworks in the “visual data” category primarily use NNs, and with the rapid development of large AI models, this type of artwork is emerging rapidly. Across the themes of ALife art from past to present, works related to understanding nature and creating new life are relatively evenly distributed over time, but the majority of artworks tend to focus more on creating new life.
This survey analyzes many fundamental concepts and definitions in ALife art and proposes a taxonomy with three different classification criteria. To our knowledge, the survey is the first for Artificial Life in visual art in the last 10 years. Our taxonomy with its visual representation as a tree provides a clear road map for anyone new to this area. New artworks can be included within our existing framework. The survey, however, does have its limitations. First, we restrict our scope to software ALife art because of space limitations, even though there has been a growing number of artworks in hardware and wetware ALife arts. Second, the nature of the domain area points to many outstanding and representative artworks in exhibitions, online, and on various art platforms, rather than in published papers. The survey reviews primarily academic papers and other forms of published works. We, however, believe that the survey serves well the audience interested or potentially interested in software ALife art, with the most up-to-date representative works and with a systematic classification.
7.2 Future Work
Our research aims at gaining insight into understanding nature and finding new ways to create life in Artificial Life art.
Having reviewed and analyzed the existing artworks with cutting-edge technology in software ALife art, we would like to share our view on the research gaps in three directions. More specifically, these areas have neither many artworks nor in-depth research so far: (a) Artificial Life combined with real environments to alter nature by CG-art and evo-art creation, (b) CG-art or evo-art as input data sets into AI art, and (c) software ALife art combined with wetware ALife art using cutting-edge technology (see Figure 19).
First, many works in CG-art and evo-art feature audience interaction. Most artworks, however, do not discuss the possibility of mixing Artificial Life with a natural environment. Our previous experimental artwork Mimicry (Wu & Huang, 2021) has made the first attempt. It presents an experimental loop system between a natural environment with flowers and a pseudo-environment with evolutionary virtual insects. We will continue along this line to explore further how Artificial Life art alters nature, and vice versa. More specifically, how would these fields entangle with nature and present themselves in another form of nature?
Second, AI art is becoming increasingly popular owing to the rapid development of deep learning techniques. One of our future research questions is how artists create AI artworks with real-world data sets and what their potential social impact is. One important direction is to equip computer generative art with machine learning to create new iterative Artificial Lives. Several artists are already exploring what ALife2 looks like. We are deeply interested in what ALife3 to ALifen will look like in the future. With iterative machine learning, will Artificial Life become more and more artificial and abstract? Could one objectively measure humans' aesthetic preferences during the iterative process using modern technologies, such as fMRI?
Third, AI applied to computational biology and synthetic biology creates new cutting-edge fields and has recently made breakthroughs, such as AlphaFold2 (Cramer, 2021). Interdisciplinary research combining software and wetware Artificial Life has become promising. Yet, in reality, art lags far behind technology. How to use rapidly developing technologies in art practices to address contemporary social and economic topics will be a continuous challenge.
Acknowledgments
We express our gratitude to the anonymous reviewers and co-editors for their comments and suggestions, which have helped us to improve the quality of this survey. Thanks to the first author’s PhD Qualifying Exam Committee members Ionot Zurr, Varvara Guljajeva, and Wei Zeng for their feedback on the first draft of this survey. Thanks to Liwenhan Xie, Ze Gao, Yifang Wang, and Xiaofu Jin for their generous help with search and research methods. Finally, we thank Xinyu Ma, Rem RunGu Lin, and You Wang for their helpful comments and suggestions on improving the content. Additionally, we acknowledge the Aiiiii Art Center and curator Xi Li for their generous funding and support in organizing a series of academic activities known as Artificial Life, AI, Art and Altered Nature. These activities provided opportunities to engage in discussions with numerous artists and thus inspired our survey with diverse perspectives.
Notes
1. For more, see https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life.