Abstract
Metaphors that compare the computer to a human brain are common in computer science and can be traced back to a fertile period of research that unfolded after the Second World War. To conceptualize the emerging “intelligent” properties of computing machines, researchers of the era created a series of virtual objects that served as interpretive devices for representing the immaterial functions of the computer. This paper analyzes the use of the terms “artificial” and “virtual” in scientific papers, textbooks, and popular articles of the time, and examines how, together, they shaped models in computer science used to conceptualize computer processes.
1. Introduction
Much of the computer technology that we use so effortlessly every day can be traced back to a fertile period of research and discovery that occurred in North America and Western Europe after the Second World War. Scientists working in a diverse range of fields including engineering, mathematics, cybernetics, and psychology set the foundation for the information age by focusing their attention on a new series of machines often called “giant brains” (Berkley 1949) for their ability to perform calculations quickly and efficiently. This post-war period is notable for the first uses of the terms “artificial” (in 1955) and “virtual” (in 1959) to describe the processes and structures of computing machines. The two terms were tremendously useful in understanding computing environments and their use proliferated quickly in the scientific literature to help people conceptualize the immaterial operations unfolding inside computers. The terms have since become familiar to modern ears as part of the noun phrases “artificial intelligence” and “virtual reality”, both considered cutting-edge sub-domains of computer science.
The terms can be thought of as part of the vocabulary-searching phase of the burgeoning field of computer science where new concepts were debated in the scientific community through a shifting set of metaphors and words borrowed from adjacent fields like psychology and physics (Hesse 1966; Brown 2003; Cuadrado and Duran 2013). “Artificial” and “virtual” were introduced in the 1950s as modifiers to familiar nouns (e.g., intelligence, reality, memory, machine), in order to mark the nouns’ use as metaphorical. Framing computers as giant brains capable of artificial intelligence influenced the direction of computer science research up until the present day, evidence of which we can see in terms such as “neurons,” “memory,” and “perception.” In a similar way, the introduction of the concepts “virtual memory,” “virtual machines,” and “virtual terminals” allowed software designers to portray the complex processes of computing visually. Our use of terms such as “desktop” and “window” stems from this process of virtualization/visualization and has come to influence how we conceptualize what computers are capable of doing and how they function.
This paper will analyze metaphors of the artificial and the virtual used by scientists and computer engineers working during the so-called golden age of mainframe computer research after the Second World War by examining scientific papers, textbooks, and other text artifacts for evidence of productive theorizing with these concepts. By tracing the metaphorical roots of arguments laid out in the scientific literature of the time we can uncover the relationships between our scientific models and our cultural models. In doing so we can better understand the strategies scientists use to make sense of immaterial and abstract concepts in the sciences.
2. Metaphor Wars
Many scientists and engineers harbor a deep-seated suspicion that metaphors have no place in scientific writing: they are thought to be too literary, too indulgent.
In the words of the Enlightenment scholar Thomas Hobbes, one of the greatest errors in language use is to “use words metaphorically; that is, in other sense than that they are ordained for, and thereby deceive others … such speeches are not to be admitted” (Hobbes 1651, p. 20). For Enlightenment thinkers, devoted as they were to rationality and objectivity, metaphor could have no place in scientific communication. Philosopher John Locke similarly counselled his readers to avoid using metaphors. “All the artificial and figurative application of words eloquence hath invented, are for nothing else but to insinuate wrong ideas, move the passions, and thereby mislead the judgment,” he wrote, “and so indeed are perfect cheats” (Locke [1689] 1998, p. 677). It seemed to him that the use of metaphor was more than just a literary conceit: it was a devious rhetorical trick that no honest scholar should ever consider.
Yet metaphor is ubiquitous in our language.1 Research in linguistics and discourse analysis has shown that the use of metaphor is not only essential for effective communication (Kovecses 2009; Cameron and Maslen 2010; Steen 2015; Ervas et al. 2017; Black 2018), but also quite possibly a necessary component of cognition itself (Deignan 2008; Gibbs Jr. 2008; Thibodeau and Boroditsky 2011). Instead of obfuscating the truth, metaphor often helps clarify inchoate concepts or new ideas. So-called ontological metaphors reduce abstraction and allow us to speak as if immaterial concepts such as data, knowledge, or emotions, are physical objects that can be collected, saved, given, or built up (Lakoff and Johnson 1980; Fusaroli and Morgagni 2013).2
Metaphors work through a process of analogical reasoning, mapping the properties of one domain (often called the source) onto a new domain (often called the target). The famous line spoken by Romeo, “Juliet is the Sun,” is a novel metaphor in which Juliet is the target and becomes associated with properties of the source domain, the Sun. But metaphors are also used in everyday discourse in clusters that point to a set of underlying cultural assumptions. A cluster of such metaphors can be found in spoken English, for example, that suggest the concept “time is money.” Time can be spoken of as being spent, saved, invested, and lost. “Time is money” is an example of an underlying conceptual metaphor (Lakoff and Johnson 1980) or root metaphor (Pepper 1972) that can help structure our thinking.
If people in a given field find particular conceptual metaphors, to paraphrase Claude Lévi-Strauss, “good to think with,” over time the metaphors will lose their metaphoricity and become “dead metaphors” considered as literal descriptions by most speakers (English 1998; Bowdle and Gentner 2005; Deignan 2008). English speakers do not think of the phrase “I spent two hours on the phone” as metaphorical; for most people, time has literally become something that can be spent and saved. This also gives us a broad definition of metaphor: any instance where a concept is represented with terminology or imagery taken from another domain and is meant to be interpreted non-literally, including instances of simile, analogy, synecdoche, metonymy, and visual metaphors (Schmitt 2005). Exploring the process by which novel metaphors lose their figurative associations can reveal cultural norms that become encoded in language. “Metaphors are reintroduced over and over again because they are satisfying instantiations of a ‘conventional’ or culturally shared model,” writes anthropologist Naomi Quinn (1991, p. 79). Good metaphors are culturally bound and “do not merely map two or more elements of the source domain onto the [target] domain,” she writes. “In doing so they map the relationship between or among elements as well” (Quinn 1991, p. 80). Yet in the process of that mapping, where certain similarities are emphasized, other elements become hidden or de-emphasized. Anthropologist Victor Turner warns us that metaphors “may be misleading; even though they draw our attention to some important properties of social existence, they may and do block our perception of others” (Turner 1974, p. 25). If metaphor can be understood as a filter on the world, it is just as useful to ask what the metaphors leave out as well as what they highlight. One way of understanding this selective property of metaphors is to focus on their entailments: any derivative comparisons or transfers of properties that follow from the original metaphor. “One must pick one’s root metaphors carefully,” writes Turner, “for appropriateness and potential fruitfulness” (Turner 1974, p. 25).
3. Metaphor in Science
One can see this culturally informed process at work in the history of science. Through repetition, successful metaphors become ingrained in the practice of science and come to influence the direction research takes. “An apt metaphor suggests directions for experiment,” writes chemistry professor Theodore Brown. “The results of experiments in turn are interpreted in terms of an elaborated, improved metaphor or even a new one. At some stage in this evolutionary process the initial metaphor has acquired sufficient complexity to be considered a model” (2003, p. 26). Think of the “solar-system model” of the atom (Gentner 1983), or the description of the human genome as the “book of life” (Keller 2003). They have become what science historian Richard Boyd calls a “theory-constitutive metaphor” (1979, p. 361), one upon which the entire field relies to make sense of observed phenomena. In scientists’ attempts to describe the physical world—with mathematical formulas, with models, or with words—the use of metaphor becomes a cognitive tool, a means of reasoning through the consequences of a proposed theory. Indeed, there may not be any other means of conceiving scientific concepts we cannot see, or communicating experiences that are inchoate, than by relying on metaphor (Fernandez 1974). Much to Hobbes’ and Locke’s chagrin, studies show that scientific discourse often has a higher rate of metaphor use than does everyday speech (Gibbs Jr. 2017). As linguist Alice Deignan states, “it is very difficult to find words that are not metaphorical to describe certain abstract things” (2008, p. 17).
Novel metaphors, in particular, are more common in scientific literature than they are in everyday speech as scientists strive to make sense of their new discoveries and communicate them to others (Beger 2016). One of the examples Aristotle used to illustrate the formula for analogy found in Rhetoric was to draw a parallel between the propagation of light and the propagation of sound through different densities of media (Leatherdale 1974, p. 31). When metaphors like this are used in scientific writing, other scientists are essentially invited to interrogate the domain mapping to see where it breaks down. The entailments of the metaphor can be analyzed for their coherence with observed phenomena (Hesse 1980). Through this process of stress-testing a metaphor, the body of scientific knowledge expands and the metaphor loses its novelty. Yet traces remain in our language that reveal the original metaphor, even if we no longer consider it as such. For example, in the nineteenth century, when experiments first revealed evidence of electricity, an analogy was made to fluid dynamics. Terms such as “current,” “flowing,” and “resistance” are now indispensable tools for speaking about electricity (English 1998). In examining scientific research papers from the historical record, we can trace this network of influence back to its roots and interrogate our language for clues as to the cultural context that informed the authors’ choice of metaphor.
4. Metaphor in Computer Science
The field of computer science is replete with metaphors. Desktops, firewalls, viruses, and daemons all feature prominently in discourse about computers (Colburn and Shute 2008). The word “computer” itself is such a metaphor, originally referring to people who performed calculations. The word began to be used to refer to machines that aided in such calculations at the end of the nineteenth century (OED 2022). By the Second World War the term was commonly used in this way to describe “machines that could compute,” and by the 1960s this definition had supplanted the original one, so much so that the original meaning needed to be marked through the use of the term “human computers.” In fact, much of the language that we are familiar with today surrounding personal computers and technology can trace its roots back to this post-war period. Two words, “artificial” and “virtual”, were used in this period as modifiers to familiar nouns to mark them as being used metaphorically. The two terms were introduced by different scientists working on different problems in computer science, but, as we shall see, they perform a similar function, allowing scientists to theorize about, and speak clearly about, the highly abstract processes unfolding in the unseen interiors of computing machines.
Written texts, in science, gain their power through reference to other written texts (Latour 1987). By working backwards to find the moment when certain metaphors first become “entextualized” into “text artifacts” such as scientific papers and textbooks, we can trace the connections between our cultural models and our scientific models (Silverstein and Urban 1996). To do this we begin by looking for the introduction of novel metaphors in scientific discourse and subsequent “reuse with modifications” of the same metaphor or analogy by peers in the field (Goodwin 2018). Often, writers will use “metapragmatic markers” to draw a reader’s attention to a novel metaphor (Cameron 2003; Knudsen 2003; Muller and Tag 2010; Steen 2011; Steen 2017). These can include the use of simile, which adds the words “like” or “as” to a metaphor thereby instructing the reader to interpret the phrase as nonliteral. Writers also often include hedges such as “imagine …” or “a good comparison might be …” to gently introduce new comparisons that might be unfamiliar to readers. Quotation marks can also perform this function, working as ‘ironic quotes’ to signal that a non-literal interpretation is required. Adjectives can also be used as modifiers of nouns to suggest metaphor. When taken up by other scientists and used as frameworks for thinking about certain theories or processes, these novel metaphors tend to shed their marked status and the metaphorical nature of their roots fades away. Terms lose their ironic quotation marks, similes become metaphors, and hedges disappear. Words like client, debug, folder, or window are today likely to be used without any markers and native speakers often need to reflect for a moment for the metaphorical origin of the term to become obvious. In this way, figurative language becomes normalized through reuse as terms travel through webs of discourse. Accordingly, scientists come to rely on these terms, and the entailments therein, to build up their field of study.
5. The Artificial
In 1955, computer scientist John McCarthy wrote “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” a bid to the Rockefeller Foundation to financially support a summer-long meeting at Dartmouth College for scientists and engineers. This is the foundational text of the field of AI and is the first time the term “artificial intelligence” appears. The document can be understood as having a rhetorical or pedagogical purpose: to convince the Foundation to fund the project by explaining exactly what it is they hope to accomplish. The readers of the text are assumed to be non-specialist members of a review committee at the Rockefeller Foundation and, as such, the proposal is not overtly technical. The document is structured more as a memo than as a formal scientific paper, including seven short, digestible summaries, each addressing a different “aspect of the artificial intelligence problem,” for lay readers (McCarthy et al. 1955, p. 1).
“Dartmouth,” as the event is metonymically referred to, is “generally recognized as the official birth date of the new science of artificial intelligence” writes AI researcher Daniel Crevier (Crevier 1993, p. 49), even though only six people showed up in the summer of 1956 (plus the four organizers) after the funding document was approved. The narrative that collocates the first use of the term “artificial intelligence” with the field’s “birth” suggests that the event can be understood as a type of naming ceremony (Sapir 1994). Such a ceremony calls into being the entity being named, often as a birth or re-birth. When looking back on the conference, the standard narrative is that the rapidly advancing fields of computer science, cybernetics, game theory, and psychology had started to converge, and a new name was needed to cement the identity of the new field. For funders at the Rockefeller Foundation, a clear name representing a new field of study would have been an enticing proposition. “To label a discipline is to define its boundaries and identity,” writes Crevier (1993, p. 50). “The term was chosen to nail the flag to the mast, because I (at least) was disappointed at how few of the papers in Automata Studies dealt with making machines behave intelligently,” says John McCarthy about the summer of 1956 (Moor 2006, p. 87). Two participants, frequent collaborators Allen Newell and Herbert Simon, reportedly did not like the name artificial intelligence and persisted in using “Information Processing Language” (Crevier 1993, p. 51) to describe the field, but that name did not stick, perhaps because it was not as evocative a metaphor. There is a sense of striving in a good metaphor, a momentum that suggests future actions and ideas for expansion.
After the term “artificial intelligence” was first used in 1955, it proliferated quickly and suggested entailments that pointed to the underlying root metaphor that “a machine is a brain” or “a machine is a person.” Terms such as “memory,” “deep learning,” or “neural networks”3 rely specifically on comparisons to the brain. The fact that a machine can be said to read, write, sleep, and catch a virus can be considered entailments of the broader anthropomorphic comparison to a person. Even the term computer itself, as explained above, points to the same root metaphor, as it takes a term used to describe human labour and applies it to the domain of the machine. “We are not going to apologize for a frequent use of anthropomorphic or biomorphic terminology,” writes computer scientist Oliver Selfridge, “they seem to be useful words to describe our notions” (1959, p. 513). In coining the new name for his field of study, McCarthy likely assumed readers’ familiarity with some of the existing discourse in computer science that anthropomorphised machines. Giant Brains, or Machines That Think (1949) by computer scientist Edmund Berkley was released just a few years before the term artificial intelligence was first used. It was an extremely popular book written specifically to explain the “machine is a brain” metaphor to a mass audience. “Of course, the machines are not living,” writes Berkley, “but they do have individuality, responsiveness, and other traits of living beings …” (Berkley 1949, p. v). Berkley continues with a sentence replete with hedges, explaining that “these machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves” (Berkley 1949, p. 2).
If the machines in question are cast as brains, which are usually found within the heads of people, what kind of people are they? A closer analysis of McCarthy’s text can give us clues. The machines are cast as obedient people, faithfully executing the commands of the scientists who program them. “One expects the machine to follow this set of rules slavishly and to exhibit no originality or common sense,” McCarthy writes in his proposal (McCarthy et al. 1955, p. 6), explaining for the readers which elements from the source domain of human cognition must be applied to the target domain of machine computing. While the verb “make/made” is applied to the machine as a subject four times in McCarthy’s text (e.g., “the machine … could make reasonable guesses”), the word appears twenty times in the text applied to the machine as the object, in the sense of the scientist forcing the computer to do something (e.g., “how to make machines use language” or “make the machine form and manipulate concepts”), further emphasizing the passive and obedient nature of the machines.4 The scientists themselves are described as agents who control the behavior of the machines. A verb like “try” appears seven times in the text, but only once does it apply to a machine. “Trying” implies flexibility of approach, which implies ambiguity, and perhaps failure, whereas it is assumed the machines will execute the tasks provided to them accurately and successfully. The breadth of the agency granted to machines is shown here with a complete list of the verbs found in McCarthy’s proposal where “machine” or “computer” is the subject of the sentence, followed by a list of the nouns used to describe a computer’s (possible) personality.
Machines Are People that can:
Copy, execute (x3), obey, form (x8), formulate (x7), make (x4), do (x8), work (x3), try (x2), manipulate, operate (x3), acquire, respond, improve (x4), self-improve (x2), find, solve (x7), guess (x3), simulate (x8), predict (x3), transmit, learn (x4), develop, be imaginative (x3), acquire, exhibit (x4), be trained (x2), abstract (verb), perform, assemble, explore, get confused, behave (x3).
Machines Are People that have (or could have):
Behavior (x8), memory (x2), a laborious manner, internal habits, language, a sophisticated manner, symptoms, character, higher function, originality (x8), common sense, intuition (x2), strategy.
One of the puzzles that remained for scientists using this root metaphor to understand computers was how programmers and users could communicate with the “ghost in the machine” (Ryle [1949] 2009), the intelligent agent lurking somewhere within the circuits. If machines were capable of intelligence, originality, and productive problem-solving, how could researchers make sense of the machine processes that were producing them?
6. The Virtual
As computing machines got faster and more complex, programmers started to use them for a wide range of applications and, as a result, getting time on a mainframe computer was fiercely competitive. In the mid-1950s, when the term artificial intelligence was first used, the concept of “time-sharing”5 was introduced to allow several programmers to use a computer at the same time (Popek and Goldberg 1974). By the early 1960s, programmers at Dartmouth College began working on a time-sharing system based on a new language called BASIC. The computer would segment tasks from different users and run the programs in parallel, but from the user’s perspective the computer appeared to be running only their program. Behind the scenes, the computers were swapping information in and out of memory at a furious pace in a process far too complex for any individual programmer to track. Another key metaphor, that of virtual memory, was introduced in 1959 as a way to hide the complex inner workings of the physical memory systems from the users (Cocke and Kolsky 1959). Programmers were presented with a visual interface that assigned chunks of data to virtual addresses, a simplified version of the physical addresses where the data was actually stored. A New Scientist article from 1974 describes virtual memory by invoking yet another metaphor: “Each computer has a main fast memory (like stacks of paper on a desk) and a slow memory (like a filing cabinet)” (“ICL and Burroughs …” 1974, p. 258). Computers automatically swap information to and from slow and fast memory based on programming requirements, yet the process remains hidden from users. “The use of the adjective ‘virtual’ in computing is roughly equal to ‘apparent’, ‘notional’, or ‘idealized’” (“ICL and Burroughs …” 1974, p. 258), the article explains. According to computer scientist Peter Denning, the term “virtual” was borrowed from optics where a virtual image (such as that shown in a mirror or a convex lens) is distinguished from a real image by the fact that it cannot be projected onto a screen (Denning 2001, p. 73).6
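The desk-and-filing-cabinet picture can be made concrete with a minimal sketch in Python; every name, the tiny capacity, and the crude eviction rule here are invented for illustration and are not the mechanism described by Cocke and Kolsky. The point is only that the programmer addresses a single, uniform set of pages while a hidden layer decides where each page physically resides.

```python
# A purely illustrative sketch of the virtual-memory idea: the user reads and
# writes simple page numbers; a hidden layer shuffles pages between fast
# memory ("the desk") and the slow backing store ("the filing cabinet").

FAST_CAPACITY = 4                 # pretend the "desk" holds only four pages

fast_memory = {}                  # virtual page -> data, currently resident
slow_memory = {}                  # virtual page -> data, swapped out


def _make_room():
    """Move an arbitrary resident page to the filing cabinet when the desk is full."""
    if len(fast_memory) >= FAST_CAPACITY:
        victim, data = fast_memory.popitem()
        slow_memory[victim] = data


def store(page, data):
    """Write data to a virtual page, evicting an old page if necessary."""
    _make_room()
    fast_memory[page] = data


def load(page):
    """Read a virtual page; swap it in from slow memory if it is not resident."""
    if page not in fast_memory:   # a "page fault", invisible to the user
        _make_room()
        fast_memory[page] = slow_memory.pop(page)
    return fast_memory[page]


# From the user's point of view there are simply ten pages to read and write;
# the swapping between desk and filing cabinet never shows.
for n in range(10):
    store(n, f"record {n}")
print(load(0), load(9))
```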
As complexity in computer design grew, by the 1970s terms such as “virtual terminal” and “virtual disk” “soon evolved as operating system designers adopted the strategy of simulating machine components and hiding the real machine’s complexity behind a simple interface” (Denning 2001, p. 73). The word “simulate” used here comes from the Latin root similis (similar), which also gives us the word “simile,” defined above as a metaphor that has been metapragmatically marked as such. Thus, the simulation of machine components with “virtual” objects can be seen as a metaphor where a visual interface is created to stand in for the real thing. Here the source domain is nothing less than the visual world, while the target domain is the immaterial world of computer processing. Historian Anne Friedberg defines the virtual as “of, relating to, or possessing a power of acting without the agency of matter; being functionally or effectively, but not formally of its kind” (Friedberg 2006, p. 8). This definition acts as a reminder that, even though virtual entities are not “real” in the material sense of the word, they can, nonetheless, possess the “power of acting”7 and perform useful functions. As such, “the virtual image begins to have its own liminal materiality, even if it is of a different ontological order,” writes Friedberg (Friedberg 2006, p. 9). A layer of virtual objects was introduced in this period to do some very real work: as a site of translation between the artificial world of computer cognition and the real world of the human researchers.
The introduction of the term virtual in the 1960s gave scientists a new cognitive tool with which they could conceptualize the inner workings of the computer. Says one popular computer science textbook from 1968,
[T]he term virtual was introduced … to distinguish the machine language seen by the user from the physical facilities actually used internally to execute computer programs. In the discussion below, virtual will be used repeatedly as an adjective to distinguish facilities of the system seen by the user from corresponding physical characteristics. For example, ‘virtual memory’ will distinguish the memory seen by each user from the physical memory of the actual computer (Wegner 1968, p. 82).
The “virtual computer,” in particular, came to define the interface between user and computer that we still have with us today. In his 1968 book, Wegner defines the virtual computer as “the computer configuration which each user sees when he writes his program, distinguished from the physical computer that is actually available” (Wegner 1968, p. 82). The real components of the computer remain hidden from view, represented by simplifying graphics, graphs, and counters. “A virtual machine is taken to be an efficient, isolated duplicate of the real machine,” write Popek and Goldberg a few years later (1974, p. 413). In this way, the invocation of virtual components that culminated in the virtual machine or virtual computer fulfilled Alan Turing’s call for a universal computational machine. “With sufficient memory, any computer can simulate any other if we simply load it with software simulating the other computer,” writes Denning (2001, p. 73). As Friedberg explains, the term virtual itself “does not imply direct mimesis, but a transfer—more like a metaphor—from one plane of meaning and appearance to another” (2006, p. 11).
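Denning’s observation that, with sufficient memory, any computer can simulate any other can be suggested with a deliberately tiny sketch in Python. The three opcodes and all names below are invented for illustration and do not correspond to any historical machine; the “program” exists only as data, interpreted by software running on the real computer.

```python
# A toy "virtual machine": a made-up instruction set executed by an interpreter.

def run(program):
    """Interpret a list of (opcode, argument) pairs on a one-register virtual machine."""
    accumulator = 0
    output = []
    for opcode, arg in program:
        if opcode == "LOAD":       # place a constant in the accumulator
            accumulator = arg
        elif opcode == "ADD":      # add a constant to the accumulator
            accumulator += arg
        elif opcode == "PRINT":    # record the current accumulator value
            output.append(accumulator)
        else:
            raise ValueError(f"unknown opcode: {opcode}")
    return output


# The program below is written for the virtual machine, not the physical one.
print(run([("LOAD", 2), ("ADD", 3), ("PRINT", None)]))   # -> [5]
```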
The virtual computer interface gave the programmer access to the artificial person, with their artificial brain, lurking inside the machine. In 1972 IBM was the first company to channel virtual computers into separate virtual terminals set up at individual work stations (Denning 2001, p. 73). This gave the user the illusion that they were working on their own stand-alone personal computer when they were really all connected back to one mainframe. This pattern of virtualizing concepts to make them more comprehensible can still be seen today with metaphors like the cloud, which we can imagine as the “place” our data is stored online, even though it is distributed amongst a network of servers (Hwang and Levy 2015). Folders on Google Drive appear to be discrete and individually assigned to one particular user, but the reality behind the scenes is much more chaotic. Virtual servers are similarly provided to small businesses and hosting companies to give them the illusion that they have a dedicated server at their disposal.
7. Virtualizing the Artificial
If computers were giant (obedient) brains, with memory, language, and behavior, then they needed (inter)faces with which they could communicate with their human programmers. The need for an engaging visual interface between programmer and machine meant that computers in the 1970s were created by designers as much as they were by engineers. Designers needed a way to create virtual representations of the giant brain’s thoughts—they needed to virtualize the artificial. Care was taken to design screens that were as intuitive as possible, creating fidelity not to the physical reality of the computer’s interior, but to the virtual world that had been created for the programmer’s benefit. Graphical User Interfaces (GUIs) were first considered in the 1960s as a way to represent the virtual mechanics of the computer with familiar visual icons and terminology. In 1969, the term “window” was first used by engineer Alan Kay to describe a variable-sized “virtual screen” placed within the actual screen that could visually contain information relevant to a particular virtual process (Friedberg 2006, p. 227). The year prior, engineer Douglas Engelbart delivered a famous demonstration at a conference of the first mouse, which he used to manipulate boxes on a computer screen he called a desktop (Friedberg 2006, p. 224). “As a metaphor for the screen,” writes Anne Friedberg, “the window proposes transparency, a variable size, and the framed delimitation of a view” (2006, p. 15). Mixing the window metaphor with that of the desktop allowed users to comfortably manipulate familiar virtual objects to communicate with the immaterial computer. By 1985, windows had morphed into Windows™, and these metaphors became so familiar they would come to define how people thought computers actually worked.
The strength of the window as metaphor is not the window itself: it’s the frame. A frame on a computer screen provides an “ontological cut” (Friedberg 2006, p. 5) serving to delimit a program’s processes from the undifferentiated mass of electrical signals beneath the surface. The content of a frame can be contained and understood as an isolate, as a process disconnected from other programs. At the same time as the invention of the GUI, the concept of frames was also being used as a productive metaphor in the field of artificial intelligence. Instead of programming computers by brute force to logically consider all the possible pathways of an unfolding program, computer scientist Marvin Minsky suggested programming computers with heuristics, rules of thumb that guided higher-order decision making. In 1974 he suggested “frames” as a way to structure data in chunks that represented stereotypical situations like “being in a certain kind of living room, or going to a child’s birthday party” (Minsky 1974, p. 33).8 Minsky specified that he had in mind window-frames as a model as opposed to picture frames (Crevier 1993, p. 173) because one could peer through a window-frame in three dimensions. “For visual scene analysis, the different frames of a system describe the scene from different viewpoints, and the transformations between one frame and another represent the effects of moving from place to place,” writes Minsky (1974, p. 2). The entailments of this metaphor suggest morphemic variations of the word, many of which Minsky uses, including framework, frame-systems, inter-frame structures, niche-frame, space-frame, conventional frame, frame-oriented scenario, and super-frame. Minsky goes on to explain that “a great collection of frame systems is stored in permanent memory” (1974, p. 8), a process only made visible through a virtual memory interface.
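A minimal sketch in Python can suggest the kind of data structure the frame metaphor points toward: a stereotyped situation whose slots carry default expectations that observed details can override. The class, slot names, and defaults below are invented for illustration; Minsky’s actual proposal, with its frame-systems and inter-frame transformations, is considerably richer.

```python
# A hypothetical, simplified "frame": default expectations plus observed fillers.

class Frame:
    def __init__(self, name, defaults=None):
        self.name = name
        self.defaults = defaults or {}   # expectations that hold unless contradicted
        self.fillers = {}                # details actually observed in the scene

    def fill(self, slot, value):
        self.fillers[slot] = value

    def get(self, slot):
        # Observed details win; otherwise fall back on the stereotype.
        return self.fillers.get(slot, self.defaults.get(slot))


# A stereotypical child's birthday party, one of Minsky's own examples of a
# situation a frame might represent (the slots themselves are hypothetical).
party = Frame("birthday_party",
              defaults={"food": "cake", "activity": "games", "gift": "required"})
party.fill("activity", "magician")

print(party.get("food"))       # "cake"      -- supplied by the default
print(party.get("activity"))   # "magician"  -- overridden by observation
```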
8. The Art of Illusion
The Graphical User Interface, first developed in the 1970s, was not immune to criticism. Much as Hobbes and Locke criticized the use of metaphor as a deceitful rhetorical trick designed to fool unsuspecting readers, many computer scientists disliked the GUI upon its market launch in the 1980s, dismissing it as “a cheap facade” (Stephenson 2009). Their complaint was that it hid the mechanics of the actual computer so effectively that users would forget about them and mistake the simulacra for the real. Computer programmer turned sociologist Sherry Turkle warned (echoing Baudrillard 1981) that this could lead to a “culture of simulation” and to an uncritical and technically illiterate population (2001). In a landmark paper in 1970, Peter Denning summarized the strengths and weaknesses of virtual memory by writing that “it gives the programmer the illusion that he has a very large main memory at his disposal, even though the computer actually has a relatively small main memory” (1970, p. 156). The problem with this illusion is that “the problem of storage allocation … thus vanishes completely from the programmer’s view,” giving the user the impression that memory is unlimited, which can slow down the processing speed of the computer. At its worst, an overtaxed memory system could suffer from “fragmentation” or “thrashing,” a form of “complete performance collapse … when memory is overcommitted” (Denning 1970, p. 157).
Yet for many programmers, the illusory quality of virtual interfaces is their very strength. To borrow a concept from Roland Barthes, the ontological metaphors explored here have the power of myth, which “organizes a world which is without contradictions because it is without depth” (Barthes 1957, p. 143). The “blissful clarity” of the desktop and the window (frame) gives the messy, underlying reality “a clarity which is not that of an explanation but that of a statement of fact” (Barthes 1957, p. 143). In 1968, textbook author Wegner admitted to being seduced by this illusory world. In an epigraph titled “Commentary on a Concept of Plato” he writes: “In performing computation we do not handle objects of the real world, but merely representations of objects. We are like people who live in a cave and perceive objects only by the shadows which they cast upon the walls of the cave … We go even further, forgetting altogether about the real objects that cast the shadows, treating the patterns of shadows as physical objects, and studying how patterns of shadows can be transformed and manipulated” (1968, p. vi).9 This reification of the virtual is often described in the field of computer science as an illusion or a trick. “Software might even be considered a form of incantation,” writes historian Nathan Ensmenger, “words are spoken (or at least written) and the world changes” (2012, p. 763). Morton Heilig, inventor of the Sensorama, a proto-virtual reality system, wrote in 1955 that “every capable artist has been able to draw men into the realm of a new experience by making (either consciously or subconsciously) a profound study of the way their attention shifts. Like a magician he learns to lead man’s attention with a line, a color, a gesture, or a sound” (1955, p. 248). After all, participants in virtual worlds often enjoy being tricked. “It is not that they are confused about which world is real,” writes cinematic artist Myron Krueger, “it is just that they are ascribing greater significance to the illusory world for the moment, as they do when watching a movie or a play” (1991, p. 201). Krueger distinguishes between two types of movements that are possible in what he calls artificial reality: magic or metaphor. Bodily movements that “could happen” in the real world, like waving a hand or bouncing a ball, are metaphors, while entirely new models of cause and effect, like flying or teleporting to Ancient Greece, belong to the realm of magic. It is clear where his interest lies: “traditional computers are pure magic,” he writes (Krueger 1991, p. 116).
Scientists in the field of artificial intelligence also embraced the art of the illusion. Computer scientist Joseph Weizenbaum released a paper entitled “How to Make a Computer Appear Intelligent” in 1961 that suggested a computer program’s ability to trick users into thinking it was intelligent was the true measure of the “success” of an AI program (Weizenbaum 1961). Soon thereafter he created a program, ELIZA, that could hold “conversations” with a user through a text interface (Weizenbaum 1966). His most famous script, DOCTOR, cast the computer as a Rogerian psychotherapist, responding to statements posed by the user with questions that simply parroted back the main clause of the statement in the form of a question.10 He meant it as a parody of the then-current fashion for “intelligent-seeming” programs, but many reporters and scientists did not catch the meaning of the word “appear” in the title of his earlier paper and gushed about the anticipated use of intelligent machines to transform the field of psychology (Suchman 2007, p. 47; Natale 2019). But the willingness of people to read intention and agency into the program’s responses is the very purpose of virtual interfaces; software is designed as a gateway to access the inaccessible. “Software is … what defines our relationship to the computer,” writes Ensmenger. “It is what we experience when we interact with the machine. It turns the generic, commodity computer configuration—screen, keyboard, and the (quite literally) black boxes that contain all of its essential circuitry—into a multipurpose collection of capabilities that reflects our particular requirements and desires” (2012, p. 761). In this sense developers are translators: parsing the messages from the artificial brain inside the computer and representing them in simple, familiar, virtual patterns.
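The flavor of that parroting trick can be conveyed with a minimal sketch in Python. The two patterns, the pronoun table, and every name below are invented for illustration and are far cruder than Weizenbaum’s actual script; the point is only how little machinery is needed to produce an “intelligent-seeming” reply.

```python
import re

# Match a fragment of the user's statement, swap the pronouns, and hand it
# back as a question: a deliberately crude, hypothetical DOCTOR-style rule set.

PRONOUN_SWAP = {"i": "you", "me": "you", "my": "your", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"you are (.*)", re.IGNORECASE), "What makes you think I am {}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "Why do you say you are {}?"),
]


def reflect(fragment):
    """Swap first- and second-person words so the clause can be parroted back."""
    return " ".join(PRONOUN_SWAP.get(word.lower(), word) for word in fragment.split())


def respond(statement):
    """Return a canned question built from whatever fragment the rules capture."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."           # catch-all when nothing matches


print(respond("You are not very aggressive"))
# -> "What makes you think I am not very aggressive?"
```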
9. Conclusion
Computer science differs from other sciences. It is a relatively new field and “a discipline that creates its own subject matter. Computers and computational processes are both studied and created by computer scientists” (Colburn and Shute 2008, p. 528). The virtual world that computer scientists created to help them visualize computer processes has become a fully-inhabited world, one that in turn suggests new procedures and processes for scientists to test. Although it remains virtual, it becomes its own place; it has lost its status as a metaphor and become an ontological reality for people to think and play with. “Virtuality implies that the seemly, the ‘almost real’ no longer depends upon a reality to imitate, but is itself a state of being, an irreducible world orientation that brings with it the reassurance of intuitive, first-hand experience,” write Darren Tofts et al. on the history of cyberculture (2004, p. 106; see also Halpern 2014). New inventions in computer science do not lie somewhere “out there” in the world, waiting patiently for a naturalist to discover them. Instead, they are suggested by, and then irreducibly woven into, the fabric of the virtual world.
Metaphors of the artificial and the virtual changed the field of computer science tremendously in the years after the Second World War. The root metaphor that “a machine is a person,” or “a machine is a brain,” helped frame the entire field as an investigation into a special kind of intelligence: one that was created artificially, through copper wires rather than biological neurons. “As new media are introduced, metaphor functions as accommodation, wrapping the newly strange in the familiar language of the past,” writes Anne Friedberg (2006, p. 15). To access this new world of artificial cognition, programmers needed an interface they could use to communicate instructions to, and receive insights from, the computer. The hardware of the computing machine became decoupled from the software that ran it. “A computer program is invisible, ethereal, and ephemeral,” writes Ensmenger. “It exists simultaneously as an idea, as language, as technology, and as practice” (2012, p. 763). As computers morphed from room-sized machines with physical knobs, dials, and punch-cards to digital systems with incomprehensible electrical micro-circuits running within, programmers needed a way to visualize what was happening inside the black (or beige) box. This visualization occurred through virtualization: the creation of an entire world of virtual objects on the screen which could serve as emissaries between the real world and the artificial one emerging from the depths of the machine.
Virtual computing environments have become tremendously valuable as a place where virtual experiments are run, to test models on everything from climate change to economic growth. “This is a fundamental shift in the epistemological foundations of the scientific enterprise,” writes Ensmenger (2012, p. 770). The production of knowledge from these virtual worlds is powerful and influences events in the real world. As such, virtual entities gain a sort of materiality, achieved by exercising agency, a property that emerges only through their interactions with people. In this sense, the metaphors used in (computer) science can themselves be considered as virtual objects that gain materiality over time as they shed their metaphorical connotations. “[Metaphors] reside in the immateriality of language, yet they refer to the material world,” writes Friedberg (2006, p. 12). “A computer metaphor acquires near materiality as a virtual object,” she continues, through the process of entextualization and reuse by peers in the scientific community (Friedberg 2006, p. 220). The realm of the virtual is, in this sense, the realm in which metaphors live, yet they exercise their agency on how we live our lives in the real world, as scientists, designers, and philosophers.
Notes
Including, ironically, in the writings of both Locke and Hobbes. In Leviathan (1651), Hobbes goes on to say that “metaphors … are like ignes fatui [will-o’-the-wisps]” (Hobbes 1651, p. 30), which is notable not only because it is a metaphorical statement about metaphor but because it is explicitly marked as such with the words “are like,” which turns it into a simile. The book’s title is also a metaphor, one sustained over the entire volume, which compares the state to the great sea beast from the Bible. Consider also Locke’s “empty box” or the “tabula rasa” (blank slate) used to describe the mind at birth, both (extremely effective) metaphors (see Forrester 2010).
The term metaphor is itself an ontological metaphor based on the Greek word meaning the movement or transfer (metaphorà) of a physical object. It was first used by Aristotle in his Poetics to describe the transfer of an idea from one place to another, thereby objectifying the idea and giving it an ontological reality (Garett 2007, 1457b1–30).
The prefix neuro- is remarkably versatile, as seen in terms such as neuroinformatics, neural processing unit, and neuromorphic computing, and can be considered a metaphorical morpheme, a word part that can be used creatively by scientists theorizing on different structures or experiments. The concept itself was used by AI pioneer (and Dartmouth attendee) Marvin Minsky for his PhD thesis entitled “Theory of Neural-Analog Reinforcement Systems and its Application to the Brain-Model Problem” (1954).
In 1964, engineers at Dartmouth College first used the metaphor of “master” and “slave” to refer to the hierarchy of virtual machines in their time-sharing system. Now the terms are used in several different ways in computer science. Debates are currently underway on whether to explicitly excise the terms from the discipline considering the traumatic connotations conjured up by their use (see Eglash 2007).
Note the congruence with the conceptual metaphor mentioned above that treats time as an ontological reality like money that can be shared, divided, or saved.
The concept of virtual image is discussed in Arianna Borrelli’s contribution to this volume.
In a moment of foresight appreciated by philosophers, in his volume Matter and Memory (1896), Henri Bergson describes (human) memory as inherently virtual because of its immateriality. In addition, he describes perception as a form of “virtual action” and anchors his metaphor explicitly in the source domain of optics, the field from which the term virtual was initially borrowed.
In the same year, sociologist Erving Goffman proposed using frames as a metaphor for organizing human experience in the social sciences in his book Frame Analysis: An Essay on the Organization of Experience (1974).
Computer scientists love Plato’s allegory of the cave. During the 1960s, a team at the University of Illinois built a computer-based teaching system called PLATO (Programmed Logic for Automatic Teaching Operations). They later designed an exhibit called CAVE (Cave Automatic Virtual Environment), which consisted of a room with rear-projection screens as walls through which a visitor could wander, interacting with 3D objects.
One can see that the cadence in the example below closely matches that of a stereotypical conversation with a therapist:
Patient (P): You are not very aggressive, but I think you don’t want me to notice that.
Artificial Therapist (T): What Makes You Think I Am Not Very Aggressive
P: You don’t argue with me.
T: Why Do You Think I Don’t Argue With You
P: You are afraid of me.
T: Does It Please You To Believe I Am Afraid Of You
P: My father is afraid of everybody.
T: What Else Comes To Your Mind When You Think Of Your Father … (Weizenbaum 1966)