A Danish proverb, sometimes attributed to Niels Bohr, states “Prediction is very difficult, especially about the future.”1 Predicting the future is a challenging problem, particularly when dealing with an open system. That doesn't seem to stop us from trying.

Science fiction has a long history of making predictions. Some examples are laughably quaint: robot maids and butlers, a staple of 1950s-era science fiction, seem less and less plausible as we make advances in robotics and artificial intelligence—real-world AIs seem much more likely to be task-specific, rather than generalists, at least in our lifetimes. Other imagined technologies turn out to have been quite prescient. H. G. Wells described an automatic sliding door in When the Sleeper Wakes, published in 1899; the first one was installed in 1960. In 1968, the movie 2001: A Space Odyssey showed characters using tablet computers in everyday life. I remember computer watches and computer books from Inspector Gadget and other fiction in my childhood; the GPS watch and tablet computer I got in grad school felt like the realization of those tropes. Sometimes the science fiction even spurs the technical invention; Martin Cooper's work on mobile telephones at Motorola was driven, in part, by the desire to bring about the communicators from the television franchise Star Trek.

Even harder to predict than the specific technological advances is how those advances will impact society. Star Trek envisioned a future where automation and replicator technology would eliminate material need. Yet an episode in its first season still revolved around a group of women going off into space to meet unknown men so they could become wives; the writers failed entirely to realize that, in a post-scarcity society, there would be no reason for women to subject themselves to such treatment, as many actually did in historical settings such as the American West of the late 1800s in search of better economic opportunities. It's easy to project unchallenged assumptions about society into a future where those assumptions would be completely unfounded.

How, then, can one make any sort of reasonable predictions about the future?

One path, taken by Zach and Kelly Weinersmith in their new book Soonish, is to focus on relatively near-term technology and its ramifications. Near-term, because it's much more likely that we can predict technical changes 20 years from now than 200; technology, because it's easier (though still difficult) to see what technologies might become possible, and what effects those specific technologies could have, than to predict other large-scale societal changes. In this book, the authors lay out ten emerging technologies that could have huge ramifications on a time frame ranging from a decade or so (if things progress rapidly) to a century (if the technology actually works, but it takes us, as a species, a while to figure it out). They present the current state of each technology, with citations to the literature and interviews with domain experts. They point out what seem to be the biggest technological barriers, and explore experts' current thinking on how to overcome them. Finally, the authors present an array of possibilities for each technology, both beneficial and harmful; after all, the subtitle of the book is “Ten Emerging Technologies That'll Improve and/or Ruin Everything.”

The Weinersmiths aren't trying to specifically say what will happen with these technologies; they're trying to explain the range of foreseeable results, and identify the decisions society will have to make about these tools. This is a key point of their approach, and something that sets them apart from many futurists, who attempt to predict specific changes unfolding in one particular manner. Interestingly, of the ten technologies addressed here, five—programmable matter, robotic construction, synthetic biology, bioprinting, and brain-computer interfaces—are of obvious interest to the artificial life community. The other five—cheap access to space, asteroid mining, fusion power, augmented reality, and precision medicine—all have potential implications in ALife as well, but are somewhat less central to our field.

This book is aimed at a general audience; the intended readership is more science enthusiast than professional scientist. But it's also clear that the authors have done their homework. Kelly is a Ph.D. biologist who studies parasites that alter their host's behavior in ways that benefit the parasite; Zach is an internet cartoonist whose monthly reading list consists of one or two dozen nonfiction books. The point is not for the reader to become an expert in each of the technologies presented, but to get a general sense of what we can do, what looks plausible in the near term, what the hurdles are, and why a technology could have a very large impact. And, of course, to have some fun while reading it; you shouldn't pick up a book where one of the authors is a cartoonist and be surprised that there are cartoons and jokes spread throughout.

Given the scope of the book, perhaps the best way to get a sense of it is to look a little more deeply at an individual chapter. Here I delve into brain-computer interfaces; while the technology itself is less central to artificial life than that in some of the other chapters, its relevance is still clear, and its societal implications align with many other topics in artificial life.

In the chapter on brain-computer interfaces, the Weinersmiths first provide an overview of the most prominent current methods of detecting brain signals, ranging from noninvasive measurements, whether electromagnetic (such as EEG) or metabolic (such as fMRI), to far more invasive procedures such as placing electrodes directly onto the surface of the brain, or even piercing the brain with them. Individual approaches generally lack either spatial or temporal resolution, and/or they trade off between giving a fuzzy global picture and a detailed view of a small subsection. The authors point out that, because of the large risks involved in brain surgery, currently the only people undergoing it are those for whom the potential upside might be huge. For example, patients with severe epilepsy that doesn't respond to medication might have their symptoms alleviated by electrical impulses applied directly to the brain via implants. The authors make the point that we honestly don't know why some of our therapies work, but they're used on patients for whom the benefit-to-risk ratio is pretty extreme. And even if our understanding is poor, a neuroscientist and neurosurgeon they interviewed (Eric Leuthardt) argues the field has the potential to follow the general arc of plastic surgery: procedures that were initially therapeutic (say, to deal with traumatic burns or birth defects) spread to people looking for enhancements (pick your favorite celebrity here).

There are many potential benefits to better brain-computer interfaces. Some are obvious extensions of current trends. How many phone numbers do you remember off the top of your head? If you're like most people, it's notably fewer than you could 20 years ago (presuming you were old enough to know any phone numbers 20 years ago); this is information storage that a large fraction of the population has offloaded to their phones. How many other benefits of a better memory can you think of? Beyond this, the potential upsides of better brain-computer interfaces discussed include far better prosthetics—up to and including using an interfaced device to move a natural limb that has been paralyzed by spinal cord damage; therapeutics for people with neurological diseases like Parkinson's or Tourette's; enhanced communication; and providing sensory data to people who currently lack it. All have the potential to be revolutionary for at least a portion of the population.

But not all the possible changes envisioned here are positive. The most obvious potential pitfall is one of privacy: as concerned as many are about the security of their social media and bank accounts, imagine someone having access not only to every fool thing you say, but also to the things you think without saying. Other possible downsides identified include the potential for others to directly influence your brain without your permission; the dissipation of long-standing communities with their own perspectives (for example, the way that some members of the deaf community object to cochlear implants); the pressure that could be exerted on many to undergo dangerous procedures if even a small proportion of people do so in order to get an advantage; and the possible obliteration of certain minority groups (in 1972, experiments were performed to try to electrically induce heterosexuality in a gay man, something we would find morally troubling now, less than 50 years later).

The strengths of this book are obvious. It is interesting; it lays out a range of possible futures instead of trying to claim definitively that one specific future will come to pass; and it points out the range of tough choices society will have to make if the technology advances as expected. The weaknesses stem primarily from this being a popular science book for a general audience: the chapters whose content seems most interesting to you will also be the least informative, because if you're reading the Artificial Life journal, you're already at a much higher level of expertise than the book's intended audience. Thus, I'd recommend this book primarily for the sections about technology you are much less familiar with; that puts you more in line with the authors' objective, which is to give the reader a general sense of where we are and an array of things that might unfold, but without the technical detail you'd be tempted to look for in material related to your specific research interests.

We don't know which of these scenarios will unfold. We don't even know that one of them will unfold; something else entirely could happen, maybe driven by progress in a seemingly unrelated field. As the authors point out in the introduction, one of the devices we're now using to read brain signals requires superconducting materials, which we have primarily due to a chain of events involving Michael Faraday accidentally liquefying a gas 200 years ago while making glassware, which spurred advances in refrigeration technology, which led to the ability to make things extremely cold by immersing them in liquid helium, which eventually led to superconductors. Technology is often nonlinear. Even so, I'm willing to make the prediction that predicting the future is an activity unlikely to go away anytime soon. This book is an appropriate way to go about it.

Note

1. I say this, knowing full well that my most cited research article is one where I predict the rate of evolutionary change in a bacterial system based on only the early data from it. Turns out, I like predicting things too.