We survey the general trajectory of artificial intelligence (AI) over the last century, in the context of influences from Artificial Life. With a broad brush, we can divide technical approaches to solving AI problems into two camps: GOFAIstic (or computationally inspired) and cybernetic (or ALife inspired). The latter approach has enabled deep learning and the astonishing AI advances we see today—bringing immense benefits but also societal risks. There is a similar divide, regrettably unrecognized, over the very way that such AI problems have been framed. To date, this has been overwhelmingly GOFAIstic, meaning that tools for humans to use have been developed; they have no agency or motivations of their own. We explore the implications of this for concerns about existential risk for humans of the “robots taking over.” The risks may be blamed exclusively on human users—the robots could not care less.

This year marks 30 years of publication of the journal Artificial Life. It also marks a year in which the potential impact of recent advances in deep learning (DL) has reached the public consciousness, with the realization that some of the extravagant promises of artificial intelligence (AI) are transitioning from science fiction to science fact. Large language models (LLMs), such as ChatGPT, have in effect passed the Turing test (Turing, 1950), with the general public finding it difficult to distinguish their output from that of intelligent human beings—the criterion Turing proposed in his Imitation Game for the goal of machine intelligence.

Turing, speculating at the very beginning of the computer era, guessed that this might be achieved within 50 years. With the benefit of hindsight, we can see that this was an astonishingly good guess within the right ballpark, short by maybe 30%. Over the last few decades, AI frequently overpromised results, with cycles of discredited hype leading to lowered expectations. But as of 2023, it is indisputable that the solid achieved results of DL are starting to transform our world, with consequences perhaps comparable in significance to the Industrial Revolution.

With such changes come exciting new possibilities for improving our way of life and for achieving what was until now impossible. But this brings dangers. New capacities offer new opportunities for exploitation by the powerful in society—whether the military, the political, or the wealthy. Transfer of jobs from people to machines could benefit all or few, leaving the jobless on the scrap heap. Social media lend themselves to bias and echo chambers.

In addition to such widely recognized dangers, some people have warned of a qualitatively different fundamental risk, an existential risk for the very survival of humans. The argument is made that when robots are more intelligent than humans, their own robotic aims will dominate, hence perhaps leaving the human species as dead as the dodo.

I survey these AI and ALife issues as I have experienced them over the last 60 years or so. I draw attention to the fact that people come to these issues with a variety of different motivations. One obvious contrast is between those who frame things in computational terms, to be solved by reasoning and logic, and those who draw on more biological notions, informed by mechanisms and processes seen in the natural world. Using a broad brush, I simplify these two approaches into a GOFAIstic camp (GOFAI for “good old-fashioned AI”) and a cybernetic camp. This distinction does not apply merely to the methodological approach to tackling AI and ALife problems; it also (Figure 1) extends to what I call here problem class, the manner in which these issues are framed—the very nature of the “problem-to-be-solved” regardless of what methodology is used. The top, GOFAI row frames this as propositional know-that (French: connaître), finding mappings from inputs to outputs; the bottom, cybernetic row frames things in terms of know-how (French: savoir faire), finding stable, robust processes.

Students of Wittgenstein might see echoes of this problem class distinction in the difference in worldviews between the early Wittgenstein (1922) of Tractatus and the later Wittgenstein (1953) of Philosophical Investigations. I argue that researchers with an engineering motivation for creating machine learning tools will be focusing on the top row of Figure 1—but those with a scientific motivation for understanding what biological brains do should be focusing on the bottom row, in particular the bottom right (Figure 1(d)).

Figure 1. 

Artificial intelligence/ALife projects divided by two orthogonal axes. Rows match a similar class of problem, either GOFAIstic or cybernetic. Columns match similar solving methods, either GOFAIstic or cybernetic. See the main text for examples in quadrants.


Living organisms are distinguished from mere mechanisms by being agents with their own motivations, yet our models typically fail to address this. Our current sophisticated DL systems are no more than tools for human purposes and are not agents in their own right. As long as that is so, I argue, there is no existential risk; human–robot symbiosis is more plausible than robots making humans extinct. For the immediate future, robots are not the potential enemy; we humans are the ones threatening our own existence.

But I sketch out scenarios in which robots or AI systems could indeed develop their own motivations, grounded in concerns for their own survival and mortality, and in doing so widen the range of risks they present. Again, the responsibility for allowing this lies in human hands.

I now outline the structure of the article, with an overview of the arguments offered.

In the next section, I expand on the upper row of Figure 1, using a broad brush to distinguish the difference between the GOFAIstic and cybernetic approaches to what are nowadays seen as typical AI problems, for example, AI systems for playing chess or Go. These can be solved by computational methods (Figure 1(a)) or by DL methods (Figure 1(b); see also Figure 2). I argue that such systems are tools for humans to use, rather than analogues of living systems in their own right. This section will be familiar territory for many readers and culminates in the current extraordinary DL advances and their societal implications.

Figure 2. 

Different approaches to machine learning. (a) The “brain” manipulates statements within programs. (b) It’s all just “connection weights changing.” Note that in a trained network, only the black arrows count—from inputs to outputs. The gray arrows in the reverse direction typically only function during training—the back of backpropagation.


The section following that narrows its focus to some of my personal history in this area over the last 60 plus years. Some individual happenstances, some coincidences in time and place, provide a context for my general overview of the issues.

The current DL revolution brings dangers along with its benefits. I emphasize a distinction between societal risks—which many people will recognize—and the more controversial notion of an existential risk. The subsequent sections explain why I believe that this latter risk is not currently realistic.

The explanation starts by considering the lower row of Figure 1. This corresponds to a different problem class: seeking stable, robust processes for embodied skills such as bipedal walking. Such skills may also be tackled by GOFAIstic methods, such as zero-moment point (ZMP) control (Erbatur & Kurt, 2006; see later; Figure 1(c)), or by cybernetic dynamical systems methods (Figure 1(d)). Biological agents naturally fall into this “cybernetic problem class,” and the science of understanding how biological brains work is something different from solving predefined problems. That is not what natural organisms do.

This leads on to discussion of agency and the observation that living agents must have motivations of their own. I suggest that such motivations are ultimately grounded in what we may call a deep survival instinct that we naturally associate with species that survive over billions of years of evolution. The term instinct here does not imply some supernatural force but is rather shorthand for the sophisticated natural design constraints that over eons have shaped the organisms we see today—robustly self-maintaining despite their precarious dependence on what the world presents. Where systems lack this evolutionary context (e.g., in Figure 1(b)), they have no inherent motivations of their own; they are only following orders—human orders.

It follows that the amazing advances we see in DL today are in the unmotivated quadrant Figure 1(b) rather than in Figure 1(d) and hence do not offer any existential threat—today. Perhaps advances in areas like evolutionary robotics (Harvey et al., 2005) and mortal computing (Hinton, 2022a, 2022b) will offer such threats in the more distant future.

That completes the outline, so we move to the first step in the argument.

Here we address the first row of Figure 1, the problems we would typically see as well-defined AI-style problems that may be tackled by one of two approaches (different columns in Figure 1; expanded in Figure 2(a) and (b)).

An example of Figure 2(a) might be chess (or Go) tackled by a conventional programming approach, such as Deep Blue (Campbell et al., 2002). An example of Figure 2(b) might be Go (or chess) tackled by a neural network or DL approach, such as AlphaGo (Silver et al., 2016). Figure 2(b) covers problems requiring the types of skills and knowledge that might feature in IQ tests.

AI and ALife are somewhat flexible terms that have shifted in meaning over the years. Some people class intelligence—and, by extension, AI—as focusing on the uniquely human tasks that differentiate humans from other animals and the rest of nature. Abstract intelligence, reasoning, chess playing, and language translation fall comfortably within the domain of AI. There is a bias toward managerial-style tasks that can be performed in disembodied fashion without getting one’s hands dirty. ALife includes AI as a subset but also roughly covers “all the rest” that living organisms have to achieve. Metabolism, immune systems, locomotion, developmental issues, genetic systems, social behavior—the list is endless.

When AI practitioners started to study visual perception, they naturally recast this into a machine learning problem: given this visual input, this array of pixels, what useful redescription, what representation in terms of objects in view, could be deduced? Likewise for speech recognition, for machine translation, for pattern recognition. The GOFAI expert could fit all of these into the same Procrustean bed: frame the input in terms of statements a computer program can handle, frame the output likewise. The task becomes one of finding how the program can match the appropriate output for any input. Sometimes this can be achieved by the human programmer shaping the program by logic and reasoning; for large data sets, it is typically necessary to incorporate some form of learning. Machine learning became a central focus of AI; see Figure 1(a). The computational GOFAI method is caricatured in Figure 2(a). By machine learning, I mean tools that can be trained (supervised) or learn for themselves (unsupervised) to achieve goals that humans have predefined, for example, tools for pattern recognition, whether in images, text, or speech; for control of cars or planes; or for prediction of molecular and pharmaceutical properties.
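To make this framing concrete, here is a minimal sketch of machine learning as a tool in this sense: a human experimenter predefines the input-to-output mapping to be learned, a small model is trained toward it, and the trained model is then nothing more than a function applied to fresh inputs. The data, parameters, and the tiny logistic-regression learner are all illustrative assumptions of mine, not anything from the systems discussed here.

```python
# Minimal sketch (illustrative only): "machine learning as a tool".
# A human predefines the input -> output mapping to be learned; the trained
# model is then just a function applied to new inputs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # inputs chosen by the human experimenter
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # the predefined target mapping

w, b, lr = np.zeros(2), 0.0, 0.5               # logistic-regression parameters
for _ in range(500):                           # supervised training loop
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)            # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

def classify(x):
    """Once trained, the tool is a pure input -> output mapping."""
    return int(x @ w + b > 0)

print(classify(np.array([0.3, 0.4])), classify(np.array([-0.5, -0.2])))  # expected: 1 then 0
```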

In contrast, approaches to cognition like cybernetics (Grey Walter, 1950, 1951; Ross Ashby, 1956, 1960; Wiener, 1948), the dynamical systems approach (Beer, 2000), autopoiesis (Maturana & Varela, 1980), enaction (Stewart et al., 2010), and evolutionary robotics (Harvey et al., 2005) have typically followed an ALife-flavored agenda of downplaying the AI notion of intelligence and favoring models of organisms enmeshed in situated embodied sensorimotor loops—situated in the sense of always already being in the world, rather than being disengaged and waiting for the world to be presented; embodied in the sense of both organism and environment being grounded in physics and chemistry; sensorimotor loops in the sense of continuous active engagement rather than merely waiting to react to a stimulus. The issue at stake is thus something like, which processes of situated embodied dynamic sensorimotor loops enable continued re-creation and survival of these same processes? At the risk of oversimplifying, I here bundle these together into the “cybernetic camp.”

For the purpose of distinguishing between GOFAI and cybernetic approaches to machine learning, one major distinction is that the former, modeled on computing, handles change over time as a sequence of static digital snapshots, whereas the latter incorporates analogue dynamics and real time more directly through, for example, differential equations. Analogue variables can be approximated in digital computations and can also be modeled directly as analogue circuits. The brain is fundamentally not a clocked digital computer.
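The contrast can be made concrete with a toy sketch. The first snippet below treats time as discrete clocked ticks; the second approximates a continuous-time leaky-integrator node by Euler integration of a differential equation in a digital simulation, so that real time enters through the step size. The particular update rule, parameters, and function names are my own illustrative choices, not drawn from the literature cited here.

```python
# Minimal sketch contrasting the two treatments of time described above.
import math

# (a) Clocked, digital-style update: state changes only at discrete ticks.
def digital_step(state_bit, input_bit):
    return state_bit ^ input_bit          # e.g., an arbitrary toggling rule

# (b) Continuous-time dynamics, approximated by Euler integration of an ODE:
#     tau * dv/dt = -v + w * sigmoid(v) + I
def euler_run(v0=0.0, I=0.5, w=1.2, tau=0.1, dt=0.001, steps=2000):
    v = v0
    for _ in range(steps):
        dv = (-v + w * (1.0 / (1.0 + math.exp(-v))) + I) / tau
        v += dt * dv                      # real time enters through dt
    return v

print(digital_step(0, 1))                 # 1
print(round(euler_run(), 3))              # settles toward a stable fixed point
```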

DL for machine learning lies squarely in the cybernetic camp, and much of its history can be seen as a series of battles with the GOFAIstic opposition, now conclusively won. The improvements in speech recognition on our phones, driven by DL, have been quietly impressive in recent years. The improvements in chatbots like ChatGPT, driven by DL, responding to textual cues with convincingly human-like responses and even program code, have hit the public consciousness with immense impact. The promise of brain-like machines, powered “merely” by changing connections between zillions of simple, neuron-like elements, has transitioned from pie-in-the-sky to a highly commercial proposition.

A significant cause of this DL breakthrough at this time was the availability of big data sets and powerful computers that could work at a large scale. In the world of evolutionary search, we have long known (e.g., Harvey & Di Paolo, 2014) that the threat of a so-called combinatorial explosion, thought to make big search spaces impossible to search effectively, was a myth. The success of DL has, one hopes, finally demolished that myth; large search spaces may have many more viable pathways than small ones.

Many people contributed to the successful breakthrough of DL. Geoff Hinton (British Canadian), Yann LeCun (French), and Yoshua Bengio (French Canadian) richly deserved to share the 2018 Turing Award for their contributions. We may note that, as curiosity-driven scientists of integrity with acute awareness of the social consequences of their work, much of their research was supported by public funding (particularly Canadian); let’s hope that those who commercially exploit DL will be paying their taxes. And the world should take notice of the concerns of these and other researchers about the social impact of the DL revolution.

I could expand at length about the significance of the DL revolution in machine learning, but it is not the present purpose of this article to do so. My main point here is that it represents a triumph of the cybernetic camp over the GOFAIstic—for the purposes of machine learning. We should now note some limitations in the framing of DL for machine learning, which I argue later rules it out as a full model of what is happening in the biological brain. We return to this after a personal digression.

I was born, brought up, and schooled in Bristol, in the west of England, which happened to be a geographical hub for what would later be known as Artificial Life; it was then termed cybernetics. W. Grey Walter, who has been called the “pioneer of real Artificial Life” (Holland, 1997, 2003), though born in the United States, lived in Britain from the age of 5 and was based at the Burden Neurological Institute in Bristol from 1939 to 1970. As a lad, I must have passed him in the streets of Clifton, Bristol, because we were near neighbors,1 but I knew of him only via his articles in Scientific American (Grey Walter, 1950, 1951) that introduced electronic autonomous robots in the form of the “tortoises” Elmer and Elsie, which he used to call “Machina speculatrix.” These demonstrated how brains with the right connections wired up, even small brains, could display seemingly sophisticated behavior. Rodney Brooks (2010) is among many people associated with Artificial Life and robotics who have been inspired by this work.

In the cellar of a school friend2 in the early 1960s, we reconstructed one of these tortoises; I recall a salvaged car windscreen wiper that swept the steering wheel, along with the aligned photosensor, from side to side until a target light was sensed. We managed to recreate some of the light-seeking behaviors Grey Walter reported, even to the extent of confusing the robot by placing the target light on its head in front of a large mirror in the dark cellar. So by my early teens, I was already keen on biologically inspired robotics in the cybernetic tradition.

Another cybernetic luminary with Bristolian associations was W. Ross Ashby, who in 1959 became director of this same Burden Neurological Institute. His books on cybernetics (e.g., Ross Ashby, 1956, 1960) have had a profound influence (Harvey, 2013b). Ashby was more of a theorist, whereas Grey Walter was a hands-on roboticist (Husbands et al., 2008).

Geoff Hinton, the “godfather” of DL, was another Bristol resident. In various interviews (e.g., Anderson & Rosenfeld, 2000; Ford, 2018; Metz, 2022), he has mentioned that his research direction was stimulated early by a school friend who introduced him to neural networks and distributed memory in the context of how holograms are stored in a manner that allows for graceful degradation. I can confirm the basis for such anecdotes and give further context, because I was that school friend—we have been pals since we entered the same school at the age of 7 years.

The Hinton family, descended from George Boole (of Boolean logic) and Sir George Everest (after whom the mountain was named), was slightly abnormal; for instance, they kept snakes and mongooses around the house. Geoff’s father was a Stalinist entomologist with connections to Chinese Communism; I recall being impressed as a youngster with the thought that I had shaken the hand that had shaken the hand of Chairman Mao Tse-tung. There was (and still is) a Mexican branch of the Hinton family—I think following some escapades of Geoff’s great-grandfather3—and for summer 1966, between leaving school and starting at Cambridge University, Geoff proposed that he and I spend 3 months visiting his Mexican relatives via a road trip through the United States and Canada (on Greyhound buses). This extended trip gave plenty of opportunity for discussion of, for example, how the brain works and had a lasting influence on both of us.

I had some basic knowledge of information theory, and methods for alleviating the effects of noise on signal transmissions, from reading Pierce (1962). Clearly any brain wiring needs to be robust to noise, to cell death and renewal, and this surely made any filing cabinet model for memory impractical. Though holograms had been invented in the 1940s, it was only in the 1960s that they became viable, with lasers becoming practical, and some new work was published on them. I think it was a Scientific American article (Leith & Upatnieks, 1965) that I read to see that the mapping between holographic image and holographic emulsion was far from one-to-one; even a small fragment of the latter allowed the whole image to be seen (at least coarsely). This property of graceful degradation was just what I was seeking for brain function, and with this basic insight, there appeared to be possibilities for achieving this; Geoff readily agreed.

We were ahead, or at least abreast, of the research of the time. It was 2 years later that Longuet-Higgins (1968) published on some related ideas. He was a leading figure in British AI circles, and Geoff went on later to have him as his PhD adviser—ironically just at the time when Longuet-Higgins was losing faith in neural networks.

Our Mexican trip took us to a Hinton ranch in Sierra Madre Oriental, to a villa in Cuernavaca, via local buses to the Pacific coast of Oaxaca. As naive 18-year-olds, we had our passports and all our money stolen on a deserted Pacific beach and had to live on credit from a fisherwoman in whose shack we were staying. I came away with a taste for travel to exotic parts that I have indulged ever since. More significantly, Geoff came away with the foundation for a research program that he has pursued with his singular determination and obstinacy for more than half a century.

Obviously many others have come to contribute to neural networks and DL, from other perspectives. But the continuous thread that Geoff has contributed can be traced back to origins in the cybernetics of Grey Walter and the cybernetic camp, rather than the GOFAIstic camp.

Along with the tremendous benefits of the DL revolution, we can expect such transformative changes to offer significant risks for harm. Hinton (2023) has listed five risks that are of widespread concern, together with a sixth “existential risk” that is more controversial.

The AI revolution, by dramatically improving productivity in some areas, will radically shift the labor market. Much unemployment may arise; some people may be unemployable. Benefits will likely increase the current disparities between rich and poor, between the powerful and the rest, unless efforts succeed in preventing this. The military will invent new war crimes, using intelligent robots to distance the controllers from the action and the personal risks. Media can be exploited to disseminate fake news; online echo chambers can encourage tribalism and hatred.

There is widespread agreement within the world of AI that such societal risks are real, are already visible, and have the potential to get worse. They require responses at a societal level. In an interview on Dutch TV with Adriaan van Dis, Stephen Fry (2018) expands in a thought-provoking 6-min monologue on how so many of these AI issues were anticipated in Greek myths of Zeus and Prometheus and Pandora’s box.

5.1 The Existential Risk

Some people warn that, as well as these societal risks, there is a further existential risk that threatens the very existence of humanity. Robots or AI systems, once they are more intelligent than humans, will take over and see humans as a nuisance.

One version of this reasoning argues that such robots will learn, from humans and from experience, that whatever their goals may be, getting more control over their environment is going to help in achieving such goals. Hence they will develop a subgoal of “get more control.” There is no reason why such a subgoal should be aligned with human interests. Hence—so this argument goes—such robots will have no qualms about eliminating humans. With their superior knowledge, they will achieve this.

Hinton (2023) has recently put forward just such an argument and indeed resigned from his role at Google so as to have more freedom to discuss the issues. His position on this is a recent development, fueled in part by the achievements of LLMs like ChatGPT showing such dramatic advances in applications, but also by an assessment that the current implementations of DL had such advantages over natural brains as to be insuperable—specifically the ability to transmit knowledge nearly cost-free.

In Hinton’s (2023) terms, it looks like the current rise of AI achieved through DL offers the promise of “immortality.” Unfortunately, it is not immortality for humans but rather immortality for robots, more precisely, for their software—possibly at the expense of humans’ very existence.

I am not convinced by such arguments, primarily because the current forms of DL do not constitute agents; they do not have motivations of their own. We address this next.

So far, we have been focusing on machine learning and solving predefined problems, as in the upper row of Figure 1. We now turn our attention to the lower row of that figure, illustrated with two examples. The “problem” of bipedal walking only became a problem when two-legged creatures started to appear. Legs and walking codefine each other. Walking is an embodied situated skill.

In this lower row of Figure 1, we are not so much interested in the methods for tackling AI problems; we are more interested in the class of problem being considered, in how the issues are framed.

Building artificial walking robots can be tackled with GOFAIstic methods, of course, and ZMP control (Erbatur & Kurt, 2006) would be one example (Figure 1(c)), as used in the early Honda walking robots. This basically computes and enforces trajectories that disregard the natural embodied dynamics, resulting in an unnatural and highly inefficient gait. By contrast (Figure 1(d)), McGeer (1990) introduced passive dynamic walking, demonstrating how natural-looking, robust, and efficient bipedal walking can arise from designs that respect the swing of a pendulum under gravity—even with no “brain” at all. This fits within the cybernetic framework and suggests that brains should work with embodiment rather than ignoring it.
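As a flavor of the cybernetic point, the following toy sketch (my own illustration, not McGeer’s model) simulates a leg as a passive pendulum swinging under gravity with no controller at all; the resulting swing period and decay come entirely from the body’s physical parameters.

```python
# Minimal illustrative sketch: behavior from embodied dynamics, with no "brain".
import math

g, L, damping = 9.81, 1.0, 0.05    # gravity, leg length, joint friction
theta, omega = 0.4, 0.0            # initial angle (rad) and angular velocity
dt = 0.001

for step in range(5000):           # 5 seconds of simulated time
    alpha = -(g / L) * math.sin(theta) - damping * omega   # pendulum ODE
    omega += alpha * dt
    theta += omega * dt
    if step % 1000 == 0:
        print(f"t={step*dt:.1f}s  theta={theta:+.3f} rad")
# The swing period and decay are set by gravity, length, and friction:
# properties of the body, not of any computed trajectory.
```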

It follows that attempts to understand the brain of a living organism should be framed in terms of embodied agents coupled with their environment through sensorimotor interactions, generating behavior that meets the goals of the organism as an agent with its own motivations. What grounds such motivations? I will suggest later that this is ultimately survival.

I have not seen any reasonable attempt to use DL in any model of the brain of a living organism, whether human or animal or (putatively living) robot. I list three issues and then expand on each. Really, they are three aspects of the same underlying concern:

  1. DL for machine learning is typically framed as a pipeline: input → brain → output. This is not a reasonable picture of a living brain and fails to explain any notion of agency.

  2. Symptomatic of this, “representations” are located internal to the brain—which is completely the wrong place!

  3. And similarly symptomatic, there is no discussion of motivation.

6.1 Brains as Pipelines? No

In principle, a strategy for Go reduces to the following: for any given board position, what is the recommended next move? Add an opposing player, move in turn, repeat until game over—pipeline: input → process → output (Figure 1(a) and (b) and Figure 2).

When researchers study chimpanzee cognition, the chimpanzee can be conveniently fitted into such a Procrustean framework. Sit it in front of a monitor, give it some buttons to press, and put an appropriate computer game on the screen. Rewards might be pellets of food or anything else known to motivate it. If it gets bored and starts to wander away, strap it into a chair to compel it to play the game, and ignore any behavior that does not fit into the experimental design. Treat it as an input–output machine, assessed according to some objective function as interpreted by the experimenter, not as an agent in its own right.

This pipeline framework is ideally suited for the machine learning tasks with which DL is starting to transform our technological landscape (see Figure 2). During learning, causal influences are going both ways in the DL “brain”—that is what the back in backpropagation refers to. But once trained, many DL applications work like a highly sophisticated microscope or telescope—a pipeline that a human uses as a tool to expand their vision, but one in which the agency remains in the user and is not in the tool. This does not provide us with explanations for, or means to recreate, an agent in its own right.
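A minimal sketch of that deployment-time pipeline, with arbitrary placeholder weights rather than a trained model, might look as follows; the point is simply that, once the weights are frozen, use of the tool is a one-way pass from inputs to outputs.

```python
# Minimal sketch of the pipeline point above (weights are placeholders).
import numpy as np

W1 = np.array([[0.5, -0.2], [0.1, 0.8]])   # frozen first-layer weights
W2 = np.array([0.7, -0.3])                 # frozen second-layer weights

def forward(x):
    """Black arrows only: inputs flow to outputs through fixed weights."""
    h = np.maximum(0.0, W1 @ x)            # ReLU hidden layer
    return float(W2 @ h)                   # scalar output

# The gray (reverse) arrows correspond to gradients, which exist only during
# training, e.g. dL/dW2 = dL/dy * h; after deployment no such signal flows.
print(forward(np.array([1.0, 2.0])))
```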

6.2 Representations Internal to the Brain?

The pipeline input–output usage of DL tools fits very naturally with the language of representations. After all, a microscope or telescope takes in an image on one end and outputs a usefully transformed and magnified image at the other end: a re-presentation, a representation. A language translation tool that takes in Chinese text and outputs English text fits this pattern nicely.

We should note that this representation language implies the existence of a representation user external to the tool, separate from the tool: for example, a person who can see both the unmagnified and the magnified image, or who can potentially read both the Chinese and the English text, and who perhaps in each case finds the latter representation more useful than the former presentation. Indeed, the former presentation, such as the Chinese text, would be incomprehensible to me.

A pipeline allows for multiple stages, with relay stations in between that use intermediate representations. Being careful to keep track of different “users” or “representation consumers,” we would then note that in the case of a two-stage pipeline, the user of any intermediate representation output from the first stage would be an entity combining the second stage and the overall user. More generally, I expand elsewhere (Harvey, 1996, 2008) on how the human tendency to try to understand complex systems by carving them up into separate interacting modules—“divide and conquer”—naturally appeals to a homuncular metaphor. Treat each module as if it were a little homunculus performing some subfunction, and then the metaphor requires these homunculi to communicate with each other with “as-if” representations. These are representations “internal” to the complex system as a whole but external to their homuncular users. This reductionist, functionalist approach is invaluable to our analysis of all sorts of complex systems—but raises particular dangers when applied to cognitive systems with the potential for confusing real cognitive acts of the whole with metaphorical cognitive acts of the homuncular parts.

A hologram provided a fruitful model for the concept of distributed representations. Let us use the same example to explicate internal and external representations.

6.3 Holograms and Representations

Let us separate out the units involved in the normal use of a holographic image of, say, a statue (Figure 3). Let Figure 3(a) be a human observer and Figure 3(b) be the original object, the statue. By use of a coherent laser beam source (Figure 3(c)), we can record onto a piece of photographic film (Figure 3(d)) the interference pattern between light reflected from (b) onto (d) and light that traveled directly from (c) to (d). When (a) views the film (d) in an appropriate viewing platform, (a) sees a ghostly 3D image (Figure 3(e)) of the statue located somewhere in space behind (d).

Figure 3. 

A hologram. (a) Observers. (b) Original object. (c) Laser source of coherent light. (d) Holographic film. (e) The image seen and its location in space.


Within a certain range of movement, (a) and other observers can shift their viewpoint and see (e) in the round. If three observers were to point at (e) from their different viewpoints, their directions of pointing would agree and intersect at a specific location for (e). Yet, puzzlingly, if one were to go behind the apparatus to check at that location, nothing would be there; none of the relevant rays of light that reach the observers have even passed through that location.

One can treat this as a two-stage pipeline. The first stage starts with (b) and generates the hologram (d), an “encoded” representation. The second stage takes (d) and generates the decoded ghostly representation (e). The relationship between (d) and (e) illustrates a distributed representation in that any small portion of (d) allows viewing of all of (e)—albeit at a coarser and noisier level of detail as the portion gets smaller.

The location of (e) is not specified by (d) alone—it depends on the conjunction of (a) + (d). Indeed, the holographic effect is fully dependent on (a) having a normal visual system that works with normal (nonlaser) light and the sensorimotor apparatus that couples body motion, including pointing, with sensing of the optic array. The locus of (e) is a function of the sensorimotor coupling of (a), as well as the placing of (d). Too often, our explanations take the observer’s role for granted. The representations (d) and (e) are not located in the brain of (a); indeed, they entail an observer (a) that is distinct from (d) and (e). The very notion of representations (for an observer (a)) being internal to the brain of (a) makes no sense; it seems to arise from misunderstanding of the homuncular metaphor.

DL models for machine learning likewise appear to depend ultimately on human users. Although they can be impressively, indeed, incredibly, useful tools for human users, creating transformative representations of data, they do not supply models of brains; something vital is lacking: motivation.

Every day, we explicitly or implicitly assume that people are responsible for their actions, that they are agents. There may be some exceptions or gray areas: She is an infant, or delirious; he is drunk. We can readily extend such notions to animals, to bacteria even. Indeed, arguably, studies around the origin of life (Egbert et al., 2023) can directly associate the appearance of life with the appearance of phenomena that make sense described in agential terms. Life implies some separation between an organism and its environment. Whereas before life there is just physics and chemistry, the appearance of life makes possible a new level of description for the behavior of the organism (as an agent) interacting with its environment (the agent’s world) (Ball, 2023; Barandiaran et al., 2009; Di Paolo, 2005; Egbert et al., 2023; Moreno, 2018). “You cannot even think of an organism … without taking into account what variously and rather loosely is called adaptiveness, purposiveness, goal-seeking and the like” (von Bertalanffy, 1969, p. 45).

At an individual level, any living organism is likely to have a precarious existence, subject to the vagaries of its encounters. Arguably, we can say that it has an interest in its own survival. More convincingly, we can say, as argued earlier, that if we have reproduction and Darwinian selection over long timescales, we can definitely talk of the ensuing population as being motivated by a survival instinct.

7.1 Motivations Grounded in Survival

One cannot derive an “ought” from an “is,” argues Hume (1739) persuasively. The same argument should apply to motivations, with one exception of which I am aware: if we see a population that has clearly evolved under some form of Darwinian evolution, then in some circumstances that in effect licenses the attribution of a survival instinct to those we see have actually survived.

The lineage that extends backward from me in the 21st century to the origin of life some 4 billion years ago is special and exceptional. Without any gap, every single individual in that lineage managed to survive from birth to reaching a sufficiently adult stage to pass on its inheritance of genes down the line. Maybe 10,000 or so of these generations were human, but bearing in mind that most of the earlier generations were prokaryotic, with faster turnover times, in total, it may be a few trillion generations from the origin of life. I am exceptional in that at every one of these potential branching points, through some combination of good luck and fortuitous design, a survivor was selected—when so many more different possibilities of bad luck and bad design saw the abrupt termination of other lineages.

Though I am special and exceptional in this sense, so, of course, are you. Indeed, all currently living organisms on this planet can make a similar claim to having evolved through trillions of generations of survivors. We all have the survival instinct bred into every facet of our lives, together with associated reproductive and caring instincts. To illustrate how pervasive these instincts are, consider a recent experiment by Moger-Reischer et al. (2023), who took a bacterium, Mycoplasma mycoides, with some 900 genes and, through genetic engineering, eliminated each and every gene that was not strictly necessary for survival. This resulted in a synthesized minimal organism with just 493 genes that, despite some loss of functionality, was still capable of survival. They then allowed a population of such minimal cells, apparently stripped of all redundancy, to evolve freely for 300 days—and showed that it effectively recovered all the fitness that it had lost during streamlining.

For the avoidance of doubt, we should stress that the term instinct here does not imply some supernatural force but is rather a pragmatically useful shorthand for scientists to describe the sophisticated natural design constraints that over eons have shaped the organisms we see today—robustly self-maintaining despite their precarious dependence on what the world presents. We can directly observe the survival instinct in an immediate real-life predator–prey interaction, and it naturally generalizes to less immediate and more abstract scenarios.

There are interesting repercussions, of course, twists and turns, when one grounds motivation in evolution. Notions of inclusive fitness (Hamilton, 1964) mean that individual organisms may be motivated to promote their relatives’ interests even at the expense of their own survival. Subgoals may arise, for example, a drive to eat ahead of a famine, that may be subverted into inappropriate overeating. Habits that usually promote survival can gain a “life of their own” and become harmful.

But even such misdirected motivations are, in my view, ultimately underpinned by the evolutionary context of trillions of generations of survivors. This depth of history, survival of so many levels of challenge, justifies a sense of deep motivation. And it is very telling that the perceived risk of AI machines taking over is called an existential risk—a risk that threatens our human survival instinct, our core motivation. But unevolved AI systems do not share that motivation—they just do not care! They have no personal stake in their own survival, no existential concerns of their own.

7.2 Motivations for Robots

Of course, we can design robots that act as if motivated. Elmer and Elsie, self-steering cars, and self-guided missiles are immediately obvious examples. But in all these cases, the motivations are derivative; they originate directly or indirectly from the intentions of the human designers. They are at best shallow motivations that could easily be reversed (or destroyed) by a new line of code or a single switch of wiring.

Suppose that over the next century, we develop superintelligent robots and steer their development to encourage the goal of maximizing their robustness, their resilience to challenges that threaten their existence. And then suppose that we humans disappear—whether from disease or conflict or by escape to Mars makes no difference. Who will survive better, the robots or the cockroaches? The robots have had 100 years to learn to cope—without human assistance—to repair themselves, to source energy for themselves, to handle the unexpected, with at best the shallowest of motivations. The cockroaches have a survival record over 4 billion years, trillions of generations—my money is on them to succeed. And by extension, were humans to then return to the scene, the robots would offer negligible existential threat to them; in any conflict, the humans would succeed.

Is the distinction being drawn here between shallow and deep motivations merely academic? After all, if you are being hunted down by a robot motivated to kill you, the depth of its motivations will not be high on your list of immediate, short-term concerns. True, but there is a crucial difference over the longer term. Robots with shallow motivations will not regenerate these motivations autonomously when the inevitable ravages of entropy distort or mutilate them. If those shallow motivations are provided and maintained by humans, then any scenario in which the robots eliminated humans would be a Pyrrhic victory for them.

7.3 How Could Robots Develop Motivations?

If motivations for humans and other organisms are ultimately grounded by a survival instinct in the context of evolution, this directly suggests evolutionary robotics (ER) as a possible pathway toward robots having motivations of their own. Indeed, the (human) motivation for pursuing ER included such considerations (Harvey et al., 2005). (At least) two related difficulties stand in the way of achieving this.

The first is that the speed of evolution has a limit (Harvey, 2013a; Harvey & Di Paolo, 2014; Worden, 1995, 2022). Dependent on conditions, it is roughly of the order of 1 bit per generation of accumulated information in the consensus genotype of an evolving population. And like the speed of light, this is a theoretical absolute upper limit; in all practical circumstances, only lower speeds are achievable. The accumulated genetic information in our (human and other organisms’) DNA is a tribute to our deep evolutionary history over trillions of generations. In comparison, ER is still at the starting line.
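The following toy sketch is my own construction, intended only to convey the flavor of such speed-limit arguments rather than to reproduce the cited analyses. An asexual population starts from identical all-zero genotypes and evolves toward an all-ones target under truncation selection and mutation; the consensus genotype then accumulates correct bits at roughly the order of a bit per generation.

```python
# Toy sketch (my construction, not the cited analyses): information
# accumulating in the consensus genotype of an evolving population.
import numpy as np

rng = np.random.default_rng(1)
L, N, mut = 100, 100, 0.01                 # genome length, population size, per-bit mutation
target = np.ones(L, dtype=int)
pop = np.zeros((N, L), dtype=int)          # clonal start: no standing variation

def consensus_correct(pop):
    return int(np.sum((pop.mean(axis=0) > 0.5) == target))

prev = consensus_correct(pop)
for gen in range(1, 51):
    fitness = (pop == target).sum(axis=1)
    parents = pop[np.argsort(fitness)[N // 2:]]        # keep the better half
    children = parents[rng.integers(0, N // 2, N)]     # clone parents back up to N
    flips = rng.random((N, L)) < mut                   # per-bit mutation
    pop = np.where(flips, 1 - children, children)
    if gen % 10 == 0:
        cur = consensus_correct(pop)
        print(f"gen {gen:2d}: consensus correct = {cur:3d}, "
              f"recent rate ~= {(cur - prev) / 10:.1f} bits/generation")
        prev = cur
```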

Related to this is a second difficulty: ER has so far been practiced only in simulated worlds or in real-world scenarios that have been sanitized and curated by the researchers so as to simplify the issues faced. For instance, unlike organisms, the robots do not have to repair and maintain themselves, and they do not have to physically reproduce themselves.

Subject to recognizing that, for now and for the foreseeable future, ER experiments are limited to rather few generations in very limited environments, I would want to claim that we have evolved robots with (shallow) motivations. But they are so limited, so shallow compared to the deep motivations of real organisms, that they are not even competing on the same playing field.

The linking of motivations to survival and hence mortality prompts some consideration of a recent proposal for mortal computing.

Hinton (2022a) introduced the concept of mortal computing as a possible future alternative to the hardware of conventional computing. DL as currently implemented on digital computers is extremely power hungry. There is a fundamental inefficiency: DL involves manipulating connection weights that are in principle analogue, yet they have to be converted to digital form. The digital computer is actually built of fundamentally analogue circuits that—through a complex infrastructure, including clocking—are designed to behave as if digital. At every tick, analogue voltages at each locus are assessed as to whether they are closer to LOW or HIGH (e.g., 0 V or 5 V) and are forced to update discretely and synchronously as nominally binary 0s and 1s. The digitization overhead costs kilowatts, whereas the human brain, operating at maybe 30 W, can be powered by the occasional slice of toast. Why not omit the digital stages and stick to analogue throughout?

In a talk, Hinton (2022b) extends the idea further. Such analogue computing elements need not be complex or very fast and could be cheaply constructed via nanotechnology or even grown by reengineering biological cells. They need not be reliable; rather than trading in precise 0s and 1s, they would deal in fuzzy analogue values. The constraints on such mortal computing elements mean that the cost–benefit analysis of this envisaged computing method differs radically from the costs and benefits of conventional hardware.

Why describe this as mortal computing? This is to point out the contrast with conventional computing, which relies on the hardware architecture of a computer performing perfectly replicably and reliably at all times. Any fault can be dealt with simply by replacing part or the whole of the computer with another that is functionally identical. Hence, in principle, the functional conventional computer is immortal. A piece of software will always run on any such computer with identical results to any other. In contrast, the elements of mortal computing are thought of as cheap, unreliable, and disposable.

Whereas traditional “immortal” computing enshrines GOFAIstic principles, this proposal for mortal computing clearly would be a shift toward the cybernetic camp—looking at attractors in the behavior of interacting noisy and unreliable elements. So when Hinton (2022b) proposed this speculatively, I welcomed it and noted some commonalities with ER. Clearly it was “proposed Future Work” rather than a completed blueprint. Among many issues to be considered (Hinton, 2022b), learning methods other than backpropagation seem to be needed.

8.1 Knowledge Transmission and Mortal Computing

After initial enthusiasm about mortal computing, Hinton (2023) came to have reservations about its limitations. He assessed that the current conventional digital implementations of DL must inevitably leave any biologically inspired mortal brain far behind. The former’s ability to transmit knowledge nearly cost-free was the crucial factor. This provided an ability for multiple areas of learning to take place in parallel on essentially identical machines, which could then be combined. The apparent near omniscience of ChatGPT reflects far more knowledge than one human can accumulate in a lifetime.

Indeed, the timing of Hinton’s move to warn about the existential risks of advances in DL was to a significant extent triggered by the perception of the relative advantage of “immortal” digital computing over analogue mortal natural brains (Hinton, personal communication, July 2023).

I have been arguing in this article that the weak point of current DL tools lies elsewhere: They do not work as models of brains for agents because they have no agency, no motivations of their own. But I also want to comment on biological approaches to knowledge transmission.

Software, in effect, contains propositional knowledge and requires perfectly replicated hardware to be accurately interpreted. It lends itself in particular to the kind of knowledge that can be compartmentalized, where the truth of Proposition A can be ascertained independently from the truth of Proposition B. Hinton is highlighting the advantages this can deliver for AI.

In contrast, organisms are typically complex systems that are not so easily modularized. Any one element may be implicated in several different functions. Biological hardware, or wetware, is typically noisy and unreliable—yet biology has found methods for coping. An obvious example is how evolution passes tried-and-tested body designs down the generations.

Genotypes in the form of DNA contain something more like procedural knowledge and can be copied incredibly cheaply, by the zillion. The occasional copying error creeps in, even when the “interpreting” hardware has error-correction procedures. It does not matter—indeed, evolution exploits some such mutations to explore new directions.

One standard way to cope is to have the functionality of a system at a coarser scale than the finer details of the implementation. Our earlier hologram example exploits this—the coarser outlines of the image remain even when much of the holographic film is damaged. Mutations in evolution, and dropout in DL, exploit this—the coarser basins of attraction in a fitness landscape are mostly unaffected by minor changes. An example with some parallels to mortal brains would be Adrian Thompson’s hardware evolution (Harvey & Thompson, 1997; Thompson, 1997, 1998).

In this work, field programmable gate arrays (FPGAs) were the hardware, with their connectivity designed via artificial evolution to perform pattern-recognition tasks, such as recognizing tones or simple spoken inputs. Though FPGAs are normally run in reliable, replicable digital mode, suitably clocked, here they were run unclocked in analogue mode and hence were subject to the sorts of issues of lack of replicability that mortal brains would face. Indeed, it was found that some successfully evolved FPGAs relied on component cells that were not even wired into the rest of the circuit. If those cells were earthed, the functionality failed, but when unearthed, the system worked; presumably some undocumented electromagnetic influences had been found and exploited by the evolutionary search process. As a further indication that the physical FPGA was not behaving reliably and replicably, it was found that a design that evolved to work on one FPGA often failed to work on another physical instance of that chip, nominally identical, or even on the very same chip on a different day when perhaps the ambient temperature was different.

Thompson and Layzell (2000) described experiments in which these issues were successfully tackled. During evolution, any genetic specification of the FPGA circuitry was evaluated on four instances of the FPGA held in different circumstances, and the worst evaluation of the four was the one used. The conditions spanned temperatures from −27°C to +60°C, and the power supply voltages also varied. Evolution found solutions that behaved near perfectly in all four instances and, indeed, generalized further.
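A toy stand-in for that worst-of-four scheme (my own simplification, not the original FPGA experiments) can be sketched as follows: each candidate is scored on several perturbed instances of a task, and its fitness is its worst score, which pushes the search toward robust solutions.

```python
# Minimal sketch of worst-case evaluation across several "instances".
import numpy as np

rng = np.random.default_rng(2)

def score_on_instance(candidate, drift):
    """Toy task: match a target vector despite an instance-specific drift."""
    target = np.ones_like(candidate)
    return -np.sum((candidate + drift - target) ** 2)

def robust_fitness(candidate, drifts):
    return min(score_on_instance(candidate, d) for d in drifts)

drifts = [0.0, 0.2, -0.2, 0.5]          # four instances held in different conditions
pop = [rng.normal(0, 1, 5) for _ in range(20)]
for gen in range(50):                    # simple elitist evolutionary loop
    scored = sorted(pop, key=lambda c: robust_fitness(c, drifts), reverse=True)
    elite = scored[:5]
    pop = [e + rng.normal(0, 0.05, 5) for e in elite for _ in range(4)]

best = max(pop, key=lambda c: robust_fitness(c, drifts))
print("worst-case score of best candidate:", round(robust_fitness(best, drifts), 3))
```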

These engineering examples show how analogue, error-prone components in a noisy environment may indeed be used interchangeably. Shannon’s communication theory (Shannon & Weaver, 1949; see also Pierce, 1962) underlies the trade-offs and costs involved in communicating with unreliable components, universal to all systems, whether natural or artificial. The argument that knowledge storage cannot be done incrementally in a system in which different mortal cells behave differently despite using the identical connection strengths might be valid if the knowledge were encoded directly in those connection strengths. But distributed memories need not work that way. For instance, holographic film data storage (Hesselink et al., 2004) can store multiple images within the same emulsion, each stored and accessed separately.
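As an illustration of that last point, here is a minimal sketch of a distributed memory in which several patterns share the very same set of connection weights, yet each can be recovered from a degraded cue. It uses a correlation-matrix (Hopfield-style) store purely as an analogy to holographic superposition; the sizes and parameters are my own illustrative choices, not a model from this article.

```python
# Minimal sketch: several patterns superposed in one set of connection weights.
import numpy as np

rng = np.random.default_rng(3)
n, n_patterns = 200, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Superpose all patterns in one weight matrix (outer-product learning rule).
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def recall(cue, steps=5):
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

for i, p in enumerate(patterns):
    noisy = p.copy()
    flip = rng.choice(n, size=n // 4, replace=False)   # corrupt 25% of the cue
    noisy[flip] *= -1
    overlap = np.mean(recall(noisy) == p)
    print(f"pattern {i}: fraction recovered = {overlap:.2f}")
```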

I am not convinced by the supposed inferiority of mortal computing to current techniques. Though it has only been speculatively proposed (Hinton, 2022a, 2022b), because it falls squarely in the biologically inspired cybernetic camp, I hope the idea is pursued further.

Much of the history of AI and ALife can be seen in the light of opposing worldviews. Though there are many distinct positions to be taken, I have here broadly simplified these into two camps: The GOFAIstic camp is computer inspired, the cybernetic camp more biologically inspired.

The current AI revolution is driven by the success of cybernetic methods, specifically artificial neural network methods in their advanced form of DL. Along with many benefits for humankind, the radical societal changes that are ensuing bring with them societal risks (Fry, 2018).

One qualitatively different risk has also been foreseen by some: an existential risk that AI systems, such as robots, on becoming more intelligent than humans, will see the latter as obstacles to their wishes—disposable obstacles that can be eliminated. I argue that this is currently a mistaken fear because such robots simply do not have any wishes of their own. They are proxies and not responsible for their actions. The human users and designers should be held accountable for the consequences of their actions: whether well motivated, badly motivated, or reckless and ignorant of the possible consequences. Societal risks are real and require societal overview. Do not (yet) blame the robots; blame the humans! Legal responsibility for the consequences of robot actions, whether intended or not, and including indirect externalities, should be apportioned between human/corporate designers and human/corporate users.

The current AI successes are in tools for humans to use, according to their own human motivations. Thus far, such advances have ignored issues such as what it is to be an agent with its own motivations. If artificial systems are to emulate living organisms, just using cybernetic methods is not enough; we should also frame the “problem of Life” in a different cybernetic problem class that accounts for agents. In the context of evolution, motivations can ultimately be grounded in a survival instinct. In creatures like us, with trillions of ancestors who—without a single exception—all survived to pass on genetic material, such survival instincts run deep and strong. AI systems do not have these. Chatbots like ChatGPT do not have motivations in their own right; despite their technical impressiveness, they are merely tools for their human overlords who designed them, who provided and curated their training texts, and who initiated their responses via prompts. The key role that prompts play is wittily and elegantly illustrated in a provocative paper by Kiritani (2023); I analyze this with the sorts of arguments given here in my response (Harvey, 2023).

Advances in Artificial Life fields may ultimately produce artificial agents with their own deep motivations, but I do not think they will resemble current AI systems at all closely. A much more plausible near-term future would be effective symbiosis between humans and robots—“consensual” symbiosis in the sense that we never explicitly opt out, though we will never quite remember when what was once merely an optional convenience becomes something we cannot manage without, thus implicitly opting in. This will not threaten the continued existence of the human lineage, though it will likely transform it radically.

A main existential threat to humans will remain the evil or reckless biochemist who modifies naturally reproducing biological organisms. Unlike current robots, such organisms do indeed have their own motivations and can ignore any incidental collateral damage to humans. It is sensible to have concerns about existential risks—and better to worry too early rather than too late.

1. The house where he lived and died at 20 Richmond Park Road was just 40 m, a stone’s throw, from where I first lived at 2 Kensington Place.

2. Stewart Lang, who subsequently went on in the 1970s to cofound Micro Focus, one of the major software firms of the time.

3. Charles Howard Hinton, mathematician, theorist about the fourth dimension, author of An Episode of Flatland (C. H. Hinton, 1907), and exiled from Victorian Britain after a conviction and brief imprisonment for bigamy.

Anderson, J. A., & Rosenfeld, E. (Eds.). (2000). Talking nets: An oral history of neural networks. MIT Press.

Ball, P. (2023). Organisms as agents of evolution. John Templeton Foundation. https://www.templeton.org/wp-content/uploads/2023/04/Biological-Agency_1_FINAL.pdf

Barandiaran, X. E., Di Paolo, E., & Rohde, M. (2009). Defining agency: Individuality, normativity, asymmetry, and spatio-temporality in action. Adaptive Behaviour, 17(5), 367–386.

Beer, R. D. (2000). Dynamical approaches to cognitive science. Trends in Cognitive Sciences, 4(3), 91–99.

Brooks, R. (2010). Chronicle of cybernetics pioneers. Nature, 467, 156–157.

Campbell, M. A., Hoane, A. J., Jr., & Hsu, F.-H. (2002). Deep Blue. Artificial Intelligence, 134(1–2), 57–83.

Di Paolo, E. A. (2005). Autopoiesis, adaptivity, teleology, agency. Phenomenology and the Cognitive Sciences, 4, 429–452.

Egbert, M., Hanczyc, M. M., Harvey, I., Virgo, N., Parke, E. C., Froese, T., Sayama, H., Penn, A. S., & Bartlett, S. (2023). Behaviour and the origin of organisms. Origins of Life and Evolution of Biospheres, 53(1–2), 87–112.

Erbatur, K., & Kurt, O. (2006). Humanoid walking robot control with natural ZMP references. In IECON 2006—32nd annual conference on IEEE Industrial Electronics, Paris, France, 2006 (pp. 4100–4106). IEEE.

Ford, M. (2018). Architects of intelligence. Packt.

Fry, S. [Wondere Wereld]. (2018, March 9). Stephen Fry describing our future with artificial intelligence and robots [Video]. YouTube. http://www.youtube.com/watch?v=c0Ody-HLvTk

Grey Walter, W. (1950, May 1). An imitation of life. Scientific American, 182(5), 42–45.

Grey Walter, W. (1951, August 1). A machine that learns. Scientific American, 185(2), 60–63.

Hamilton, W. (1964). The genetical evolution of social behaviour. I. Journal of Theoretical Biology, 7(1), 1–16.

Harvey, I. (1996). Untimed and misrepresented: Connectionism and the computer metaphor. AISB Quarterly, 96, 20–27.

Harvey, I. (2008). Misrepresentations. In S. Bullock, J. Noble, R. A. Watson, & M. A. Bedau (Eds.), Proceedings of the eleventh International Conference on Artificial Life (pp. 227–233). MIT Press.

Harvey, I. (2013a). How fast can we evolve something? In P. Lio, O. Miglino, G. Nicosia, S. Nolfi, & M. Pavone (Eds.), Advances in Artificial Life, ECAL 2013 (pp. 1170–1171). MIT Press.

Harvey, I. (2013b). Standing on the broad shoulders of Ashby. Open peer commentary on: “Homeostasis for the 21st century? Simulating Ashby simulating the Brain” by Franchi, S. Constructivist Foundations, 9(19), 102–104.

Harvey, I. (2023, November 4). Review of: “Re: Teleology and the meaning of life”. Qeios.

Harvey, I., & Di Paolo, E. A. (2014). Evolutionary pathways. In P. A. Vargas, E. A. Di Paolo, I. Harvey, & P. Husbands (Eds.), The horizons of evolutionary robotics (pp. 77–92). MIT Press.

Harvey, I., Di Paolo, E., Wood, R., Quinn, M., & Tuci, E. A. (2005). Evolutionary robotics: A new scientific tool for studying cognition. Artificial Life, 11(1–2), 79–98.

Harvey, I., & Thompson, A. (1997, October 7–8). Through the labyrinth evolution finds a way: A silicon ridge [Conference presentation]. Evolvable Systems: From Biology to Hardware, Tsukuba, Japan.

Hesselink, L., Orlov, S., & Bashaw, M. (2004). Holographic data storage systems. Proceedings of the IEEE, 92(8), 1231–1280.

Hinton, C. H. (1907). An episode of Flatland; or, How a plane folk discovered the third dimension, to which is bound up an outline of the history of Unæa. Swan Sonnenschein.

Hinton, G. E. (2022a). The forward-forward algorithm: Some preliminary investigations. ArXiv.

Hinton, G. E. (2022b, January 16). Mortal computers [Video]. YouTube. http://www.youtube.com/watch?v=sghvwkXV3VU

Hinton, G. E. (2023, July 20). Risks of artificial intelligence must be considered as the technology evolves [Video]. YouTube. http://www.youtube.com/watch?v=CC2W3KhaBsM

Holland, O. E. (1997). Grey Walter: The pioneer of real Artificial Life. In C. Langton (Ed.), Proceedings of the 5th International Workshop on Artificial Life (pp. 34–44). MIT Press.

Holland, O. (2003). Exploration and high adventure: The legacy of Grey Walter. Philosophical Transactions of the Royal Society, Series A, 361(1811), 2085–2121.

Hume, D. (1739). A treatise of human nature. London.

Husbands, P., Holland, O., & Wheeler, M. (2008). The mechanical mind in history. MIT Press.

Kiritani, O. (2023, September 29). [Commentary] Re: Teleology and the meaning of life. Qeios.

Leith, E. N., & Upatnieks, J. (1965). Photography by laser. Scientific American, 212(6), 24–35.

Longuet-Higgins, H. (1968). Holographic model of temporal recall. Nature, 217, 104.

Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition: The realization of the living. D. Reidel.

McGeer, T. (1990). Passive dynamic walking. International Journal of Robotics Research, 9(2), 62–82.

Metz, C. (2022). Genius makers: The mavericks who brought AI to Google, Facebook, and the world. Penguin Random House.

Moger-Reischer, R. Z., Glass, J. I., Wise, K. S., Sun, L., Bittencourt, D. M. C., Lehmkuhl, B. K., Schoolmaster, D. R., Jr., Lynch, M., & Lennon, J. T. (2023). Evolution of a minimal cell. Nature, 620, 122–127.

Moreno, A. (2018). On minimal autonomous agency: Natural and artificial. Complex Systems, 27(3), 289–313.

Pierce, J. R. (1962). Symbols, signals and noise: The nature and process of communication. Hutchinson.

Ross Ashby, W. (1956). An introduction to cybernetics. Chapman and Hall.

Ross Ashby, W. (1960). Design for a brain: The origin of adaptive behavior. Chapman and Hall.

Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. University of Illinois Press.

Silver, D., Huang, A., Maddison, C., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., & Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529, 484–489.

Stewart, J., Gapenne, O., & Di Paolo, E. A. (Eds.). (2010). Enaction: Toward a new paradigm for cognitive science. MIT Press.

Thompson, A. (1997, October 7–8). An evolved circuit, intrinsic in silicon, entwined with physics [Conference presentation]. Evolvable Systems: From Biology to Hardware, Tsukuba, Japan.

Thompson, A. (1998). Hardware evolution: Automatic design of electronic circuits in reconfigurable hardware by artificial evolution. Springer.

Thompson, A., & Layzell, P. (2000). Evolution of robustness in an electronics design. In J. Miller, A. Thompson, P. Thomson, & T. C. Fogarty (Eds.), Evolvable systems: From biology to hardware (Lecture Notes in Computer Science No. 1801). Springer.

Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.

von Bertalanffy, L. (1969). General systems theory. George Braziller.

Wiener, N. (1948). Cybernetics; or, Control and communication in the animal and the machine. Technology Press.

Wittgenstein, L. (1922). Tractatus logico-philosophicus. Routledge and Kegan Paul.

Wittgenstein, L. (1953). The philosophical investigations. Blackwell.

Worden, R. (1995). A speed limit for evolution. Journal of Theoretical Biology, 176(1), 137–152.

Worden, R. (2022). A speed limit for evolution: Postscript. ArXiv.