Abstract
Although concepts similar to emergence have been used since antiquity, we lack an agreed definition. However, emergence has been identified as one of the main features of complex systems. Most would agree with the statement “life is complex.” Thus understanding emergence and complexity should benefit the study of living systems. It can be said that life emerges from the interactions of complex molecules. But how useful is this for understanding living systems? Artificial Life (ALife) has been developed in recent decades to study life using a synthetic approach: Build it to understand it. ALife systems are less complex than biological ones, be they soft (simulations), hard (robots), or wet (protocells). Thus, we can aim first at understanding emergence in ALife and then use this knowledge in biology. I argue that to understand emergence and life, it becomes useful to use information as a framework. In a general sense, I define emergence as information that is not present at one scale but is present at another. This perspective avoids problems of studying emergence from a materialist framework and can also be useful in the study of self-organization and complexity.
1 Emergence
The idea of emergence is far from new (Wimsatt, 1986). It has ancient analogies: Aristotle already referred to the whole as being more than the sum of its parts. The idea was developed further in the nineteenth century, starting with John Stuart Mill, in what is known as British emergentism (McLaughlin, 1992; Mengal, 2006). However, given the success of reductionist approaches, interest in emergence diminished in the early twentieth century.
Only in recent decades has it been possible to study emergence systematically, because we lacked the proper tools to explore models of emergent phenomena before digital computers were developed (Pagels, 1989).
In parallel, the limits of reductionism have surfaced (Gershenson, 2013a; Heylighen et al., 2007; Morin, 2007). While successful in describing phenomena in isolation, reductionism has been inadequate for studying emergence and complexity. As Murray Gell-Mann argues, reductionism is correct but incomplete (Gell-Mann, 1994, pp. 118–119).
This was already noted by Anderson (1972) and others: Phenomena at different scales exhibit properties and functionalities that cannot be reduced to lower scales (Gu et al., 2009). Thus the reductionist attempt to base all phenomena on the “lowest” scale (or level), declaring only that scale as reality and everything else as epiphenomena, has failed miserably. Nevertheless, it still has many followers, as a coherent alternative has yet to emerge (pun intended).
I blame the failure of reductionism on complexity (Bar-Yam, 1997; De Domenico et al., 2019; Ladyman & Wiesner, 2020; Mitchell, 2009). Complexity is characterized by relevant interactions (Gershenson, 2013b). These interactions generate novel information that is not present in initial or boundary conditions. Thus predictability is inherently limited, owing to computational irreducibility (Chaitin, 2013; Wolfram, 2002): There are no shortcuts to the future, as the information and computation produced by the dynamics of a system must go through all intermediate states to reach a final state. The concept of computational irreducibility was already suggested by Leibniz in 1686 (Chaitin, 2013), but its implications have been explored only recently (Wolfram, 2002). An integrated theory of complexity is lacking, but its advances have been enough to prompt the abandonment of the reductionist enterprise. I do not see the goal of complexity as fulfilling the expectations of a Laplacian worldview, where everything would be predictable if only we had enough information and computing power. On the contrary, complexity is shifting our worldview (Heylighen et al., 2007; Morin, 2007) so that we come to understand the limits of science and seek not only prediction but also adaptation (Gershenson, 2013a). Instead of attempting to dominate Nature for our purposes, we are learning to take our place in it.
Thus we are slowly accepting emergence as real, in the sense that emergent properties have causal influence in the physical world (see later concerning downward causation). Nevertheless, we still lack an agreed-on definition of emergence (Bedau & Humphreys, 2008; Feltz et al., 2006). This might seem problematic, but we also lack agreed-on definitions of complexity, life, intelligence, and consciousness. However, this has not prevented advances in complex systems, biology, and cognitive sciences.
In a general sense, we can understand emergence as information that is not present at one scale but present at another. For example, life is not present at the molecular scale, but it is at the cellular and organism scales. When scales are spatial or organizational, emergence can be said to be synchronic, whereas emergence can be diachronic for temporal scales (Rueger, 2000).
It could be argued that the preceding definition of emergence is not sharp. I do not believe that emergence, like life, simply is or is not. We should speak about different degrees of emergence, and this can be useful for comparing “more” or “less” emergence across different conditions and systems. Another argument against the definition could be that of vagueness. Still, sharp definitions tend to be useful only in particular contexts. This definition is general enough to be applicable in a broad variety of contexts, and particular, sharper notions of emergence can be derived from it for specific situations (e.g., Abrahão & Zenil, 2022; Cooper, 2009; Neuman & Vilenchik, 2019).
Different types of emergence have been proposed, but we can distinguish mainly weak emergence and strong emergence. Weak emergence (Bedau, 1997) requires only computational irreducibility, so it is easier for most people to accept. The “problem” with strong emergence (Bar-Yam, 2004a) is that it implies downward causation (Bitbol, 2012; Campbell, 1974; Farnsworth et al., 2017; Flack, 2017).
Usually, emergent properties are considered to occur at higher/slower scales, arising from the interactions occurring at lower/faster scales. Still, I argue that emergence can also arise in lower/faster scales from interactions at higher/slower scales, as exemplified by downward causation. This is also related to “causal emergence” (Hoel, 2017; Hoel et al., 2013). Taking again the example of life, the organization of a cell restricts the possibilities of its molecules (Kauffman, 2000). Most biological molecules would not exist without life to produce them. Molecules in cells do not violate the laws of physics or chemistry, but these are not enough to describe the existence of all molecules, as information (and emergence) may flow across scales in either direction.
Note that not all properties are emergent: only those that require more than one scale to be described (hence the novel information). For example, in a crowd, there might be some emergent properties (e.g., a coherent “Mexican wave” in a stadium), but not all of the crowd’s properties are usefully described as emergent (e.g., traffic flow at low densities, as it can be described fully from the behavior of drivers). In the same sense, a society can produce emergent properties in its individuals (e.g., social norms that guide or restrict individual behavior), but not all properties of individuals are necessarily described as emergent in this (downward) way (e.g., performance during a workout). Complexity occurs when novel information is produced through interactions between the components of a system. Emergence occurs when novel information is produced across scales.
One might wonder, then, whether all macroscopic properties are emergent. They are not. If they can be fully derived from a microscopic description (information), then we may give them different names for convenience, but they can in practice be reduced to the information of the microscale. If novel information is produced at the macroscale, then that information can be said to be emergent.
Note that it is difficult to decide on the ontological status of emergence, that is, whether it “really exists” (independently of an observer). I am aiming “just” for an epistemology of emergence, which can be understood as answering the question, When is it useful to describe something as emergent? The answer to this question can certainly change with context, so something might be usefully considered emergent in one context but not in another.
Because emergence is closely related to information, I explain more about their relationship in section 3, but before doing so, I discuss the role of emergence in Artificial Life.
2 (Artificial) Life
Life has several definitions, but none with which everyone agrees (Bedau, 2008; Zimmer, 2021). We could say simply that “life is emergent,” but this does not explain much. Still, we can abstract the substrate of living systems and focus on the organization and properties of life. This was already done in cybernetics (Gershenson et al., 2014; Heylighen & Joslyn, 2001) but became central in Artificial Life (Adami, 1998; Langton, 1997). Beginning in the mid-1980s, ALife has studied living systems using a synthetic approach: building life to understand it better (Aguilar et al., 2014). By having more precise control of ALife systems than of biological systems, we can study emergence in ALife, increasing our general understanding of emergence. With this knowledge, we can go back to biology. Then, emergence might actually become useful for understanding life.
ALife and its methods have had a considerable influence on the cognitive sciences (Beer, 2014b; Froese & Stewart, 2010; Harvey, 2019). They have yet to have an explicit impact on biology, perhaps because biological life is more complex than ALife and biologists tend to study living systems directly. Still, computational models in biology are becoming increasingly commonplace (Noble, 2002), so it could be said that the methods developed in Artificial Life have been absorbed into biology (Kamm et al., 2018) and other disciplines (Barbrook-Johnson & Penn, 2021; Lazer et al., 2009; Rahwan et al., 2019; Seth, 2021; Trantopoulos et al., 2011).
Because one of the central goals of ALife is to understand the properties of living systems, it does not matter whether these are software simulations, robots, or protocells (representatives of “soft,” “hard,” and “wet” ALife, respectively) (Gershenson et al., 2020). These approaches allow us to explore the principles of a “general biology” (Kauffman, 2000) that is not restricted to the only life we know, based on carbon, DNA, and cells.
2.1 Soft ALife
Mathematical and computational models of living systems have the advantage and disadvantage of simplicity: One can abstract physical and chemical details and focus on general features of life.
A classical example is Conway’s Game of Life (Berlekamp et al., 1982). In this cellular automaton, cells on a grid interact with their neighbors to decide on the life or death of each cell. Even though the rules are very simple, different patterns emerge, including some that exhibit locomotion, predation, and, one could even say, cognition (Beer, 2014a). Cells interact to produce higher-order emergent structures that can be used to build logic gates and even to achieve universal computation.
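To make the update rule concrete, the following is a minimal Python sketch (not taken from the original article; the grid size, the use of NumPy, and the glider placement are illustrative choices). It applies the standard birth/survival rules and seeds a glider, a higher-order pattern that travels across the grid even though the rules themselves are purely local.

```python
import numpy as np

def step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid."""
    # Count the eight neighbors of every cell (periodic boundaries via np.roll).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A dead cell with exactly 3 live neighbors is born;
    # a live cell with 2 or 3 live neighbors survives; all other cells die.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# Seed a glider: five live cells that, under the local rules, "move" diagonally.
grid = np.zeros((20, 20), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for _ in range(4):  # after four steps the glider has shifted one cell diagonally
    grid = step(grid)
print(int(grid.sum()))  # still 5 live cells: the pattern persists while moving
```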
Another popular example is “boids” (Reynolds, 1987): Particles follow simple rules, depending on their neighbors (try not to crash, try to keep average velocity, try to keep close). The interactions lead to the emergence of patterns similar to flocking, schooling, herding, and swarming. There have been several other models of collective dynamics of self-propelled agents (Sayama, 2009; Vicsek & Zafeiris, 2012), but the general idea is the same: Local interactions lead to the emergence of global patterns.
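A minimal Python sketch of those three rules follows (again not Reynolds’s original implementation; the weights, neighborhood radius, box size, and number of boids are arbitrary illustrative values, and NumPy is assumed to be available). Only local rules are applied, yet flock-like clusters emerge at the global level.

```python
import numpy as np

rng = np.random.default_rng(0)
N, BOX = 50, 100.0
pos = rng.uniform(0, BOX, size=(N, 2))   # positions in a square box
vel = rng.uniform(-1, 1, size=(N, 2))    # velocities

def step(pos, vel, radius=10.0, w_sep=0.05, w_ali=0.05, w_coh=0.01, v_max=2.0):
    """One update of the three boids rules: separation, alignment, cohesion."""
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        near = (dist > 0) & (dist < radius)          # neighbors of boid i
        if near.any():
            sep = (pos[i] - pos[near]).sum(axis=0)   # try not to crash
            ali = vel[near].mean(axis=0) - vel[i]    # try to match average velocity
            coh = pos[near].mean(axis=0) - pos[i]    # try to stay close
            new_vel[i] += w_sep * sep + w_ali * ali + w_coh * coh
        speed = np.linalg.norm(new_vel[i])
        if speed > v_max:
            new_vel[i] *= v_max / speed              # limit speed
    return (pos + new_vel) % BOX, new_vel            # periodic boundaries

for _ in range(200):   # repeated local interactions produce global flocking patterns
    pos, vel = step(pos, vel)
```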
Soft ALife has also been used to study open-ended evolution (OEE) (Adams et al., 2017; Pattee & Sayama, 2019; Standish, 2003; Taylor et al., 2016). For example, Hernández-Orozco et al. (2018) showed that undecidability and irreducibility are conditions for OEE. I would argue that OEE is an example of emergence, but not vice versa. Under the broader notion of emergence used in this article, undecidability and irreducibility are not conditions for emergence.
Emergence in soft ALife is perhaps the easiest to observe, precisely because of its abstract nature. Even when most examples deal with “upward emergence,” there are also cases of “downward emergence” (e.g., Escobar et al., 2019; Hoel et al., 2013), where information at a higher scale leads to novel properties at a lower scale.
2.2 Hard ALife
One of the advantages and disadvantages of robots is that they are embedded and situated in a physical environment. The positive side is that they are realistic and thus can be considered closer to biology than soft ALife. The negative side is that they are more difficult to build and explore.
Emergence can be observed at the individual robot level, where different components interact to produce behavior that is not present in the parts (e.g., Braitenberg, 1986; Walter, 1950, 1951), or at the collective level, where several robots interact to achieve goals that individuals are unable to fulfill (Dorigo et al., 2004; Halloy et al., 2007; Rubenstein et al., 2014; Vásárhelyi et al., 2018; Zykov et al., 2005).
Understanding emergent properties of robots and their collectives is giving us insight into the emergent properties of organisms and societies. And as we better understand organisms and societies, we will be able to build robots and other artificial systems that exhibit more properties of living systems (Bedau et al., 2009; Bedau et al., 2013; Gershenson, 2013c).
2.3 Wet ALife
The advantage and disadvantage of wet ALife is that it deals directly with chemical systems to explore the properties of living systems. By using chemistry, we are closer to biological life with wet ALife than we are with soft or hard ALife. However, the potential space of chemical reactions is so great, and its exploration is so slow, that it seems amazing that there have been any advances at all using this approach.
One research avenue in wet ALife is to attempt to build “protocells” (Hanczyc et al., 2003; Rasmussen et al., 2003; Rasmussen et al., 2004; Rasmussen et al., 2008): chemical systems with some of the features of living cells, such as membranes, metabolism, information codification, locomotion, and reproduction. Dynamic formation and maintenance of micelles and vesicles (Bachmann et al., 1990; Bachmann et al., 1992; Luisi & Varela, 1989; Walde et al., 1994) predate the protocell approach, while more recently, the properties of active droplets or “liquid robots” (Čejková et al., 2017) have been an intense area of study. These include the emergence of locomotion (Čejková et al., 2014; Hanczyc et al., 2007) and complex morphology (Čejková et al., 2018).
3 Information
Our species has lived through three major revolutions: agricultural, industrial, and informational. We can say that the first dealt mainly with the control of matter, the second with the control of energy, and the third and current one with the control of information. This does not mean that we did not manipulate information beforehand (Gleick, 2011), only that we lacked the tools to do so at the scales that have become possible since the development of electronic computers.
Shannon (1948) proposed a measure of information in the context of telecommunications. This measure has been useful, and many refinements have been derived from it. Shannon’s information can be seen as a measure of a “just so” arrangement, so it can also be used to measure organization. Still, it is “simply” a probabilistic measure that assumes that the meaning of a message is shared by sender and receiver. But of course, the same message can acquire different meanings, depending on the encoding used (Haken & Portugali, 2015).
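For reference, Shannon’s measure in its standard textbook form (the formula is not reproduced from the original article): for a source emitting symbols with probabilities $p_1, \ldots, p_n$,

$$ H = -\sum_{i=1}^{n} p_i \log_2 p_i, $$

which is measured in bits when the logarithm is taken in base 2 and is maximal when all symbols are equally probable.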
Living systems process information (Farnsworth et al., 2013; Hopfield, 1994). Thus understanding information might improve our understanding of life (Kim et al., 2021). It has been challenging to describe information in terms of physics (matter and energy) (Kauffman, 2000), especially when we are interested in the meaning of information (Haken & Portugali, 2015; Neuman, 2008; Scharf, 2021).
One alternative is to describe the world in terms of information, including matter and energy (Gershenson, 2012). Everything we perceive can be described in terms of information—particles, fields, atoms, molecules, cells, organisms, viruses, societies, ecosystems, biospheres, galaxies, and so on—simply because we can name them. If we can name them, then they can be described as information. If we could not name them, then we should not be able to even speak about them (Wittgenstein, 2013). All of these have physical components. Nevertheless, other nonphysical phenomena can also be described in terms of information, such as interactions, ideas, values, concepts, and money. This gives information the potential to bridge physical and nonphysical phenomena, avoiding dualisms. This does not mean that other descriptions of the world are “wrong.” One can have different, complementary descriptions of the same phenomenon, and this does not affect the phenomenon. The question is how useful these descriptions are for a particular purpose. I claim that information is useful for describing general principles of our universe, as an information-based formalism can be applied easily across scales. Thus general “laws” of information can be explored, generalizing principles from physics, biology, cybernetics, complexity, psychology, and philosophy (Gershenson, 2012). These laws can be used to describe and generalize known phenomena within a common framework. Moreover, as von Baeyer (2005) suggested, information can be used as a language to bridge disciplines.
One important aspect of information is that it is not necessarily conserved (as matter and energy are). Information can be created, destroyed, or transformed. We can call this computation (Denning, 2010; Gershenson, 2010). I argue that some of the “problems” of emergence arise because of conservation assumptions, which dissolve when we describe phenomena in terms of information. For example, meaning can change passively (Gershenson, 2012), that is, independently of its substrate. One instance of this is the devaluation of money: Prices might change, while the molecules of a bill or the atoms of a coin remain unaffected.
Several measures of emergence have been proposed (e.g., Bersini, 2006; Fuentes, 2014; Prokopenko et al., 2009). In the context posed in this article, it becomes useful to explore the notion of emergence in terms of information, because it can be applied to everything we perceive.
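The measure referred to in the following paragraph appears to have been lost in this version of the text; presumably it is the normalized Shannon information used in the author’s earlier work (Gershenson & Fernández, 2012; Fernández et al., 2014), which under that assumption reads

$$ E = I = -\frac{1}{\log_2 b} \sum_{i=1}^{b} p_i \log_2 p_i, $$

where $b$ is the number of possible symbols (the alphabet size) and $p_i$ their probabilities, so that $E$ always lies between 0 and 1.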
This normalization allows us to use the same measure and explore how different information is produced at different scales. If E = 0, then there is no new information produced: We know the future from the past, one scale from another. If E = 1, then we have maximum information produced: We have no way of knowing the future from the past, or one scale from another; we have to observe them.
In this context, “new information” does not imply something that has never been produced before but a pattern that deviates from the probability distribution of previous patterns: If previous information tells you nothing about future information (as in the case of fair coin tosses), then each symbol will bring maximum information; if future symbols can be predicted from the past (which occurs when only one symbol has maximum probability of occurring and all the others never occur), then these “new” symbols carry no information at all.
If we already have information at one scale but observe “new” information at another scale, that is, information that cannot be derived from the information at the first scale, then we can call this information emergent. This measure of emergence has been successfully applied in different contexts (Amoretti & Gershenson, 2016; Correa, 2020; Febres et al., 2015; Fernández et al., 2017; Morales et al., 2018; Ponce-Flores et al., 2020; Ramírez-Carrillo et al., 2020; Zapata et al., 2020; Zubillaga et al., 2014). Still, this does not imply that other measures of emergence are “wrong” or not useful, as usefulness depends on the contexts in which a measure is applied (Gershenson, 2002). Moreover, the goal of this article is not to defend a measure of emergence but to explore the concept of emergence.
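As an illustration of how such a measure can be applied in practice, here is a minimal Python sketch (an illustrative implementation under the assumptions above, not code from the cited studies) that estimates E from a sequence of discrete symbols:

```python
from collections import Counter
from math import log2

def emergence(symbols, alphabet_size=None):
    """E in [0, 1]: 0 for a constant sequence, 1 for a uniform distribution of symbols."""
    counts = Counter(symbols)
    b = alphabet_size or len(counts)
    if b < 2:
        return 0.0  # a single possible symbol carries no new information
    n = len(symbols)
    probs = [c / n for c in counts.values()]
    return -sum(p * log2(p) for p in probs) / log2(b)

print(emergence("AAAAAAAA"))               # 0.0: fully predictable, no new information
print(emergence("ABABABAB"))               # 1.0: both symbols equally likely
print(emergence("AAAB", alphabet_size=2))  # ~0.81: biased, partially predictable
```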
A better understanding of emergence has been and can be useful for soft, hard, and wet ALife. In all of them, we are interested in how properties of the living emerge from simpler components. In fewer cases, we are also interested in how systems constrain and promote behaviors and properties of their components (downward emergence).
Emergence is problematic only in a physicalist, reductionist worldview. In an informational, complex worldview, emergence is natural to accept. Interactions are not necessarily described by physics, but they are “real” in the sense that they have causal influence on the world. We can again use the example of money. The value of money is not physical but informational. The physical properties of shells, seeds, coins, bills, or bits do not determine their value. This is very clear with art, the value of which comes from the interactions among humans who agree on it. The transformation of a mountain excavated for open-pit mining does not violate the laws of physics; however, using only the laws of physics, one cannot predict whether humans might decide to give value to some mineral under the mountain and transform matter and energy to extract it. In this sense, information (interactions, money) has a (downward) causal effect on matter and energy.
Using an informationist perspective, one also avoids problems with downward causation, as this can be seen as simply the effect of a change in scale (Bar-Yam, 2004b). In the Game of Life, one can describe gliders (higher scale) as emerging from cell rules (lower scale) but also describe cell states as emerging from the movement of the glider in its environment. In a biological cell (higher scale), one can describe life as emerging from the interactions of molecules (lower scale) but also as molecule states and behavior emerging from the cell’s constraints. In many cases, biological molecules would simply degrade if they were not inside an organism that produces and provides the conditions for sustaining them. In a society (higher scale), one can describe culture and values as emerging from the interactions of people (lower scale) but also describe individual properties and behaviors as emerging from social norms.
As with special and general relativity in physics, one cannot define one “real” scale of observation (frame of reference). Scales are relative to an observer, as is the information perceived at each scale. Information is relative to the agent perceiving it. In other words, as mentioned, the same messages can imply different meanings.
4 Self-Organization
Emergence is one of the main features of complex systems (De Domenico et al., 2019). Another is self-organization (Ashby, 1947, 1962; Atlan & Cohen, 1998; Gershenson & Heylighen, 2003; Heylighen, 2003), and it has also had great influence in ALife (Gershenson et al., 2020). A system can be usefully described as self-organizing when global patterns or behaviors are a product of the interactions of its components (Gershenson, 2007).
One might think that self-organization requires emergence, and vice versa, or at least, that they go hand in hand. However, self-organization and emergence are better understood as opposites (Fernández et al., 2014; Lopez-Ruiz et al., 1995).
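The equation discussed in the next paragraph is likewise missing from this version of the text; presumably it defines self-organization as the complement of emergence, as in the author’s earlier work (Fernández et al., 2014):

$$ S = 1 - E = 1 - I. $$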
This equation assumes that the dynamics are internal so that the organization is self-produced. Otherwise, we might be measuring “exo-organization.” Minimum S = 0 occurs for maximal entropy: There is only change. In a system where all of its states have the same probability, there is no organization. Maximum S = 1 occurs when order is maximum: There is no change. Only one state is possible, and we can call this state “organized” (Ashby, 1962). Note that this measure is not useful for deciding whether a system is self-organizing (Gershenson & Heylighen, 2003); rather, the purpose of the measure is to compare different levels of organization in a specific context.
Also, it should be noted that information (and thus emergence and self-organization as considered here) can have different dynamics at different scales. For example, there can be more self-organization at one scale (spatial or temporal) and more emergence at another scale.
5 Complexity
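The measure discussed below is presumably the complexity defined in the author’s earlier work (Gershenson & Fernández, 2012) as a balance between emergence and self-organization:

$$ C = 4 \, E \, S = 4 \, E \, (1 - E), $$

where the factor 4 normalizes C to the interval [0, 1], so that C is maximal when E = S = 1/2 and vanishes at either extreme.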
This measure of complexity, C, is maximal at phase transitions in random Boolean networks (Gershenson & Fernández, 2012), the Ising model, and other dynamical systems characterized by criticality (Amoretti & Gershenson, 2016; Febres et al., 2015; Franco et al., 2021; Pineda et al., 2019; Ramírez-Carrillo et al., 2018; Zubillaga et al., 2014). Recently, we have found that different types of heterogeneity increase the parameter regions of high complexity for a variety of models (López-Díaz et al., 2022; Sánchez-Puig et al., 2022).
Interestingly, typical examples of emergence and self-organization are not extreme cases of either. Perhaps this is the case because if we had only emergence or only self-organization, then these would not be distinguishable from the full system. It is easier to provide examples in contrast to another property. So, for example, a flock of birds is a good example of emergence, self-organization, and complexity because the flock produces novel information, self-organizes, and has interactions at the same time. If it had E = 1, S = 0, C = 0, then only information would be produced constantly (no complexity or organization). If the flock had E = 0, S = 1, C = 0, then it would be static and fully organized, without change (no complexity or emergence).
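Assuming the formula sketched above, the extreme and balanced cases work out as follows:

$$ C = 4 \cdot 1 \cdot 0 = 0, \qquad C = 4 \cdot 0 \cdot 1 = 0, \qquad C = 4 \cdot \tfrac{1}{2} \cdot \tfrac{1}{2} = 1, $$

so complexity is highest when the production of new information and the regularity that organizes it are balanced, as in the flock.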
6 Discussion
Emergence is partially subjective, in the sense that the “emergentness” of a phenomenon can change depending on the frame of reference of the observer. This is also the case with self-organization (Gershenson & Heylighen, 2003) and complexity (Bar-Yam, 2004b). Actually, anything we perceive, all information, might change with the context (Gershenson, 2002) in which it is used. Of course, this does not mean that one cannot be objective about emergence, self-organization, complexity, life, cognition (Gershenson, 2004), and so on. We just have to agree on the context (frame of reference) first.
Therefore the question is not whether something is emergent. The question becomes, In which contexts is it useful to describe something as emergent? If the context focuses on only one scale, it does not make sense to speak of emergence. But if the context involves more than one scale, and how phenomena/information at one scale affect phenomena/information at another scale, then emergence becomes relevant.
Thus, is emergence an essential aspect of (artificial) life? It depends. If we are interested in life at a single scale, we can do without emergence. But if we are interested in the relationships across scales in living systems, then emergence becomes a necessary condition for life: Life has to be emergent if we are interested in explaining living systems from nonliving components. Without emergence, we would fall into dualisms. A similar argument can be made for the study of cognition.
Moreover, information has been proposed to measure how “living” a system might be (Farnsworth et al., 2013; Fernández et al., 2014; Kim et al., 2021). This view also suggests that there is no sharp transition between the nonliving and the living; rather, there is a gradual increase in how much information is produced by an organism compared to how much of its information is produced by its environment (Gershenson, 2012).
One implication is that materialism becomes insufficient to study life, artificial or biological. Better said, materialism was always insufficient to study life. Only now are we developing an alternative. It remains to be seen whether it is a better one.
There are inherent limitations to formal systems (Gödel, 1931; Turing, 1936). These limitations also apply to artificial intelligence (Mitchell, 2019) and to soft and hard ALife. Simply described, a system cannot change its own axioms. One can always define a metasystem in which change will be possible (in a predefined way), but there will be new axioms that cannot be changed. Because scientific theories are also formal, they are limited in this way as well. This is one of the reasons why emergence and downward causation are difficult for some to accept: There is no hope of a grand unified theory, as the emergent future cannot be prestated (Kauffman, 2008) and downward causation can change the axioms of our theories. Thus the traditional approach has been to deny emergence and downward causation. I believe that this is untenable (Wolpert, 2022) and that we have to develop a scientific understanding of phenomena that, even when we know it cannot be complete, can always keep evolving.
Acknowledgments
I am grateful to Edoardo Arroyo, Manlio De Domenico, Nelson Fernández, Bernardo Fuentes Herrera, Hiroki Sayama, Vito Trianni, Justin Werfel, and David Wolpert for useful discussions. Anonymous referees provided comments that improved the article. I acknowledge support from UNAM-PAPIIT (IN107919, IV100120, IN105122) and from the PASPA program of UNAM-DGAPA.