In 1875, the electrophysiologist Richard Caton discovered that the brain produces electrical signals and promptly reported his findings to the British Medical Association. In 1929, the German scientist Hans Berger published a report on the alpha signal emitted by the human brain. The rhythm was originally nicknamed “Berger’s wave,” and his report was the first time the human electroencephalogram (EEG) had been formally discussed in print. Five years later, Professor (Lord) Adrian and Dr. (Sir Bryan) Matthews demonstrated the actual electrical activity of the human brain during a meeting of the British Physiological Society in Cambridge. This breakthrough both revealed and, in effect, “performed” the brain (though no one thought of it as such at the time) using an amplifier and oscillograph placed on a stage. The performance (or experiment), both visual and sonic, was fairly basic: a moving pen, held in place by a mechanical arm, traced wavy lines on a piece of paper. The EEG device was connected by electrode pads to the subject’s scalp, directly above the occipital lobe. The results were then magnified and projected onto a screen for all to see. The scientist/performer would open and close his eyes or concentrate intensely while trying to solve a problem such as tying and untying a knot. Using graphical recording instruments to display brainwaves was a popular practice at the time, part of a larger program intended to introduce new discoveries that could be adapted for wider public engagement. The trend originated with the French physiologist Étienne-Jules Marey, who pioneered both motion capture and the use of machines to display the circulatory, respiratory, and muscular systems. These events also demonstrate how scientific research into the human brain’s electrical activity was intertwined with live mixed-media performance right from the start.

After World War II, propelled by the Cold War, Sputnik, and the desire to adapt the human organism to space travel, the posthuman condition arose. In 1960, Manfred E. Clynes and Nathan S. Kline truncated the words “cybernetic” and “organism,” inventing the term “cyborg.” A cyborg was a new type of human entity required to survive the rigors of space travel. Artists began using brain-computer interfaces (BCIs) around the same time in underground practices spearheaded by Alvin Lucier, Nina Sobell, Richard Teitelbaum, John Cage, Nam June Paik, and David Rosenboom. On May 5, 1965, Lucier created Music for Solo Performer directly from his own alpha-wave brain signals at Brandeis University’s Rose Art Museum; at the time he directed the university’s chamber chorus. Cage, whom he had met around 1960 in Italy while on a Fulbright grant, helped spatialize Lucier’s audio waves during the performance. This was a crucial intervention, as Lucier was busy incorporating timpani, bass drums, gongs, and other percussion instruments into his brain-powered repertoire.

On Music for Solo Performer, Lucier collaborated with Edmond M. Dewan, a physicist who worked at Hanscom Field Air Force Base in Bedford, Massachusetts, not far from Brandeis. Dewan was researching brainwaves for the U.S. Air Force and generously supplied the necessary technical equipment: two Tektronix Type 122 preamplifiers, one Krohn-Hite Model 330M bandpass filter set to a range of 9 Hz to 15 Hz, an integrating threshold switch, and electrodes. Dewan was good friends with the cybernetics theorist Norbert Wiener, who had encouraged him to study brainwaves. In his research, Dewan found that changes in the alpha rhythm could be used as a control system to turn a lamp on or off, or to produce a beep signaling on or off that could spell out words in Morse code. Lucier’s performance inadvertently forged a seminal link between military and scientific research and brain art, one that subtly zigzags back and forth even today. In 1967, Dewan published the results of his experiments in Nature, demonstrating that individuals could learn to control their alpha rhythms in order to send Morse code. This utilitarian use of brain signals highlights the baked-in tension between clinical, social, and practical deployments of BCIs and their use for more purely artistic speculation. Lucier suggested that everyone has their own electronic studio inside their head anyway, so producing electrical brainwave performances was not such a big leap. Lucier wanted brainwave performance to leave the hospital and research centers and enter the world of the concert hall, which it already had, at least within underground art scenes.
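
To make the principle concrete, here is a minimal sketch in present-day terms rather than Dewan’s analog circuitry: band-pass the EEG around the alpha range his filter used, threshold the signal’s power, and read sustained “on” periods as Morse dots and dashes. The sample rate, window length, and threshold are assumptions for illustration.

```python
# Schematic reconstruction of the alpha-as-switch principle (not Dewan's
# actual circuit): band-pass the EEG in the alpha band, threshold its
# power, and read sustained on/off runs as Morse dashes and dots.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 256                    # sample rate in Hz (assumed)
ALPHA_BAND = (9.0, 15.0)    # the 9-15 Hz range of Dewan's filter setting
WINDOW = FS // 2            # half-second analysis windows (assumed)

def alpha_power(eeg, fs=FS, band=ALPHA_BAND):
    """Per-window alpha-band power of a 1-D EEG signal."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, eeg)
    n_windows = len(filtered) // WINDOW
    trimmed = filtered[: n_windows * WINDOW].reshape(n_windows, WINDOW)
    return (trimmed ** 2).mean(axis=1)

def to_on_off(eeg, threshold):
    """Binarize the alpha envelope: alpha 'on' (e.g. eyes closed) vs. 'off'."""
    return alpha_power(eeg) > threshold

def on_off_to_morse(states, dash_windows=3):
    """Read runs of 'alpha on' as dots (short runs) or dashes (long runs)."""
    symbols, run = [], 0
    for on in np.append(states, False):   # sentinel to flush the final run
        if on:
            run += 1
        elif run:
            symbols.append("-" if run >= dash_windows else ".")
            run = 0
    return "".join(symbols)
```

Dewan’s subjects drove such a switch by opening and closing their eyes or shifting attention, actions that raise and suppress alpha power in exactly this way.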

In 1967, the composer Richard Teitelbaum used alpha rhythms, heartbeats, and breath to make In Tune. In 1972, the mixed-media and sound artist David Rosenboom created Portable Gold and Philosopher’s Stones, using feedback to control brainwave processes. The following year, Nam June Paik created A Tribute to John Cage, in which Cage wore an electrode attached to an oscilloscope that was controlled by Paik. Also in 1973, the visual artist Nina Sobell fashioned her Interactive Brainwave Drawings as the set and props for an electronic theatre piece, encouraging participants to become both actors and observers of a mental and physical event. The literary theorist N. Katherine Hayles refers to such performative visual and sonic works as “cognitive assemblages”; as such, these early assemblages situate emergent brainwave art within the quest for self-exploration of the 1960s and 1970s.1 These experiments also belong to what the philosopher Michel Foucault called “technologies of the self,” as well as to the scientist Gregory Bateson’s theories of the “cybernetics of self” and the “circuited self,” phrases that rose in tandem with the now very common term “interface.”2

Jacques J. Vidal, a scientist at UCLA’s Brain Research Institute, published a research paper, “Toward Direct Brain-Computer Communication,” in 1973, coining the term BCI, the acronym of his Brain Computer Interface Project, which recorded the brain’s “evoked responses” to external stimuli.3 It was part of a “man-computer dialogue” where the interface was also the work. Vidal speculated that EEG signals had the potential to control prosthetic devices, and perhaps even spaceships. He understood, however, that it would be decades before the technology would be ready, an astute and accurate observation. It was not until the millennium that clinical applications of BCIs really gained traction. This took place in 1999 at the first international meeting on BCIs in New York, during which computer scientists and human-computer-interaction designers presented applications for administrative, scientific, and industrial uses. Creative practice was not even on their radar, despite artists having already experimented with brainwaves for almost thirty-five years.

As computing power continues to grow, humans are incrementally merging with technology. This is part of the continuum of the posthuman, or human-animal prosthesis, in which invasive or non-invasive means integrate humans with intelligent machines. It is a phenomenon easily observed in daily life when people constantly check their mobile devices for hours on end or obsess over their Fitbit wristbands. The integration began in earnest in 1984 when the artist and software engineer Steve Mann invented the first prototype of the EyeTap, a wearable computing device replete with a camera that constantly records whatever is in front of it. In 2012, Mann permanently secured the EyeTap to his skull. This unfortunately resulted in his being attacked at a McDonald’s in Paris by people who believed their privacy was being invaded. Donna J. Haraway, a prominent technology theorist, has prophesied the social disruption cyborgs will create, since their presence undoes hierarchical structures and controls.

In 1998, Kevin Warwick, often called the first cyborg engineer, merged his body with technology by implanting a silicon chip into his arm that allowed a computer to monitor his movements via radio waves. These developments occurred as the personal computer was first being popularized (alongside the growth of the internet and the World Wide Web) and as digital health care and entertainment technologies were first adopted. The Australian artist Stelarc has had decades of experience with “the posthuman,” although he explicitly and insistently denies he is part of that discourse. By and large, Stelarc is fascinated by the different modes that allow beings to navigate their environments: bats seeing with ultrasound, snakes perceiving in infrared, dogs seeing in black and white, people with disabilities completing their daily tasks. The Catalan artist and composer Neil Harbisson is a day-to-day functional cyborg who has the eyeborg, an antenna-like device, embedded in his skull. The eyeborg translates colors into sound, which he perceives through a chip surgically implanted in his skull. Harbisson helps people adapt to life with technology incorporated into their bodies through the Cyborg Foundation, an organization he founded with the dancer and fellow cyborg Moon Ribas.

Born with achromatopsia, a condition of the retinal pathway in which he sees only in black, white, and shades of gray, Harbisson is legally permitted to display the eyeborg implanted in his skull in his official passport photo. It arches over his head, dangling in front of his forehead, and can only be removed surgically. The eyeborg senses three hundred and sixty different colors and sends the wavelengths of the color frequencies to a computer chip that translates them into sound. By hearing different tones, Harbisson is able to discern what color he is viewing. Harbisson has three holes cut into his skull: one to hold the eyeborg, one to anchor the chip, and another to allow the vibrations from the device to travel through his occipital bone. This is how he hears colors, including ultraviolet and infrared, frequencies outside the spectrum humans normally detect. The device is charged via a USB port in the back of his head every few days. The eyeborg was developed with the help of the engineer Adam Montandon, who gave a talk about cybernetics in 2003 at Dartington College of Arts, where Harbisson was a student. It was later augmented by the software developer Peter Kese and the engineer Matias Linza, who helped Harbisson refine the device into an implantable chip. Montandon came up with the idea of a device that could translate color-wave frequencies into sound-wave frequencies. (The eyeborg started off as a webcam connected to a computer in a backpack attached to a set of headphones before being refined into an implantable chip and sensor.)
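
The underlying transposition can be sketched in a few lines. The following is an invented illustration of the sonochromatic idea, not the eyeborg’s actual calibration: reduce a camera pixel to one of 360 hue steps and map that step onto an audible frequency range (the range and the logarithmic spacing are assumptions).

```python
# Invented illustration of the color-to-sound principle; the frequency
# range and mapping are assumptions, not the eyeborg's calibration.
import colorsys

def hue_degrees(r, g, b):
    """Hue of an RGB color (0-255 channels) as a whole degree, 0-359."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return int(h * 360) % 360

def hue_to_tone(hue, low_hz=110.0, high_hz=880.0):
    """Map the 360 hue steps onto a hypothetical three-octave range."""
    return low_hz * (high_hz / low_hz) ** (hue / 360)

# Example: a saturated red and a saturated blue produce distinct tones.
print(hue_to_tone(hue_degrees(255, 0, 0)))   # red  -> 110.0 Hz, bottom of the assumed range
print(hue_to_tone(hue_degrees(0, 0, 255)))   # blue -> 440.0 Hz, two octaves higher
```

A logarithmic mapping keeps equal hue steps at equal musical intervals, so one full turn around the color wheel spans a fixed number of octaves.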

Harbisson breaks down the category of the cyborg into three distinct camps: mechanic, electronic, and cybernetic. Mechanic refers to someone who might be missing a leg and gets it replaced with a carbon prosthetic. If the leg has sensors that inform the user of something, such as how many steps the user has taken with it, then it is electronic or bionic. However, if the carbon leg allows the user to sense variations in temperature, such as hot or cold, or other types of nerve experiences, then the leg is said to be cybernetic. Harbisson sees the technology as both an art form and a social movement, and his foundation emphasizes both aspects. The technology has unavoidably seeped into his daily life: he dresses to sound good and loves to go to supermarkets because the rows of cleaners and detergents are so bright and interesting to listen to. Because what he experiences is not synesthesia, he made up a new word for his experiential state, calling it “sonochromatism” or “sonochromatopsia.”4 It derives from the Latin sono (sound) and the Greek chroma (color) or opsia (seeing).

Nina Sobell attaching electrodes in EEG: Video Telemetry Environment (also known as Brainwave Drawings, 1975). Neuropsychology Lab, Sepulveda VA Medical Center and Contemporary Arts Museum Houston. Photo: © Nina Sobell. Courtesy the artist.

Noor: A Brain Opera—Is There a Place in Human Consciousness Where Surveillance Cannot Go? (2016) by Ellen Pearlman. Photo: © Ellen Pearlman. Courtesy the artist.

In 2014, Harbisson composed music based on the colors of Barcelona’s Palau de la Música and directed musicians and singers to perform his music based only on the colors they saw in front of them. It was the first time a chorus performed solely from color. That same year, he created the world’s first skull-transmitted painting. He asked members of the public to paint a canvas in Times Square in New York. Their results were captured on video and Skyped back to HyphenHub, a performance space some twenty blocks away. The Skype transmission was uploaded to an iPad, and with a special app the frequency of each painted color was transmitted into Harbisson’s head through a wireless USB connection at the back of his skull. After listening to the frequency of the color, he painted it onto a canvas as a performance in front of a live audience, matching the color being painted in Times Square.

A subset of this cyborg implanting is the “grinder” movement, “biohackers” or “biopunks” who self-implant devices into their bodies. Grinders have their own websites, forums, and chat channels, such as biohack.me. They self-implant RFID chips, magnets, and other devices, and if they cannot do it themselves (DIY), they post on the forum trying to find covert operators who will perform these procedures. Biopunks discuss what type of painkillers to use and post photos of their implants in progress. They use histrionic subject lines such as “Looketh Upon Me. Tis I Who Be Magneto” and marvel that a magnet implanted in a finger can effortlessly lift paper clips.5 Grinders have their own manifesto, itself inspired by Eric Hughes’s “A Cypherpunk’s Manifesto.” Oddly enough, representatives of IARPA, the Intelligence Advanced Research Projects Activity (under the Office of the Director of National Intelligence), have attended a number of biohacking conferences in the past. Created in 2006, IARPA consolidated a number of other spy and military research programs in order to explore disruptive technologies, and it is particularly concerned with neuroscience and wearables as ways to track and investigate human biometrics. IARPA reaches out to artists directly, and in 2023 sponsored an “Arts Proposers Day.” Although the solicited topics were relatively diverse, IARPA clearly recognizes the entwinement of military biometric research and arts practice. Haraway has called this union the “illegitimate offspring of militarism and patriarchal capitalism.”6

This acceleration, fueled by gaming culture, neuro-gaming, and the military, engenders a new type of neocortical warfare. It is augmented through expanded network capacity, big data, mass surveillance, the miniaturization of technology, and new breakthroughs in neurophysiology, computer science, and human-computer interaction. It is a new type of arms race, focused on gathering personal information and monitoring interaction behavior, which produces a neuro-psycholinguistics of control, combining the biologic with the non-biologic. This activity includes robust data harvesting and the use of brainwave data as an endless conduit of information. The Defense Advanced Research Projects Agency (DARPA) pursues posthuman applications and currently funds studies of direct implantation of devices into the human brain. Some of these initiatives are based on medical research designed to help those with disabilities or those wounded in war, but they also facilitate the development of brain-powered killer drones as part of an incipient wave of neocortical warfare.

Brain biometrics, unlike fingerprints or eye scans, cannot be duplicated or copied. Biologic information becomes a source code where the body is “compiled.” This serves as a new type of surveillant assemblage, with information systems that translate data into new versions of abstracted data. This data is then reassembled and decontextualized into data flows from which even newer types of information are extracted. Extraction forces an electronically mediated psycholinguistics producing new modes of cognition and signifiers, like virtuality and a simulated incorporeality. New interpretations of the human are necessary to distinguish between the biological and more diffused channels of information. Accordingly, DARPA’s goal of “total information awareness,” part of this simulated incorporeality, imagines soldiers communicating telepathically and robots as intelligent as cats and mice. DARPA also aims to produce small memory implants combining virtual reality, normal vision, and internal visual perceptions. A related program, Perceptually Enabled Task Guidance (PTG), combines artificial intelligence, machine perception, automated reasoning, and augmented reality. Its stated goal is to allow mechanics, medics, and other actors to perform physical tasks beyond their normal capabilities with greater accuracy.

This new virtuality can only be developed or enabled by countries with robust military and strategic centers of power. It engenders a looming “theatre” of neocortical warfare built on microelectronics, the physical foundation on which the virtuality of artificial intelligence and its decisions are processed. When implantable chips are placed into the human animal, they become the tangible locations of potential grids of control. Plans are underway at DARPA to develop three-dimensional systems, or Next-Generation Microelectronics Manufacturing (N3M2). These systems are tasked with improving warfighter performance by investigating new neural architectures and data-processing algorithms, linking the human nervous system to multiple devices, and facilitating new types of human-computer interaction. This includes non-invasive neuro-technologies under the moniker of the Neural Signal Interfaces and Applications (NSIA) program. The United States is not the only country investigating these technologies; nations around the world are commencing their own decades-long initiatives to map the human brain for themselves. These initiatives are usually partnerships between research science centers, national defense departments, and private start-ups that contribute to the cortical arms race. For example, in 2023 Elon Musk’s brain-implant company Neuralink announced that it had received approval from the U.S. Food and Drug Administration to begin human clinical trials of brain implants. Neuralink’s “sewing machine robot,” designed to place neural-thread implants, was originally developed at the University of California, San Francisco and funded by DARPA.

BCIs are classified according to their invasiveness, that is, how deeply they are implanted. BCIs are also rated for their vulnerability to neural cyberattacks. The names of some of these attacks sound like they could be indie record labels: Neural Flooding, Neural Jamming, Neural Scanning, Neural Selective Forwarding, Neural Spoofing, Neural Sybil, Neural Sinkhole, and Neural Nonce. These attack styles have already been simulated in a mouse’s visual cortex by intercepting neuromodulation and spiking or subverting the signal. Depending on the attack style, spike reductions of five to twelve percent were achieved, meaning the mouse’s brain deviated from its normal behavior by that same margin. Depending on the scale of the attack, some styles were deemed best for short-term spiking, while others were more effective at producing long-term changes. Extrapolating from these results to spiking or subverting the human brain’s functionality is not a one-to-one correlation, but it is certainly a possibility in the near future.

Meta, the parent company of Facebook and Instagram, is building an AI platform that uses functional magnetic resonance imaging (fMRI) to decode brain activity into words and sentences correctly about seventy-five percent of the time. This is accomplished by reading books to volunteers, scanning their brains, and predicting what words they might be hearing. Currently, this is a non-invasive procedure focused only on speech perception. The generative AI model Stable Diffusion, released in 2022, has also been used with both text and visual information to better understand what is going on inside the brain. That experiment likewise used fMRI brain scans to detect changes in blood flow to different regions of the brain, drawing specifically on data gathered from the occipital and temporal lobes, both involved in image perception. The algorithm was able to generate an image displaying the general content, layout, and visual parameters of a photograph that a test subject had been shown. The occipital lobe activity revealed by the fMRI was then used to recreate the layout and general perspective of the photos that the subject had just seen. (The study also utilized keywords taken from the image captions that accompanied the photos.) All of this means that by scanning the brain, scientists will soon be able to recreate facsimiles of what a subject is perceiving. Scientists can already predict what a mouse sees by decoding brain signals through a machine-learning algorithm called CEBRA (pronounced “zebra”) that reveals hidden structures within brain data. CEBRA does this by training a model on visual-cortex recordings made while the mouse watches a movie and then decoding, from new brain signals, what the animal is seeing.
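
The common core of these studies can be reduced to a decoding step: learn a mapping from brain responses to a numerical description of the stimulus, then apply it to unseen scans. The sketch below is a generic stand-in using ridge regression on synthetic arrays; it is not Meta’s system, the Stable Diffusion pipeline, or CEBRA, and the array names, shapes, and regularization value are hypothetical.

```python
# Generic stand-in for the decoding step these studies share: learn a
# linear map from fMRI voxel responses to a feature vector describing
# the stimulus, then predict features for held-out scans.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, n_features = 400, 5000, 64

voxels = rng.standard_normal((n_trials, n_voxels))                 # fMRI responses (synthetic)
stimulus_features = rng.standard_normal((n_trials, n_features))    # e.g. image-embedding targets

X_train, X_test, y_train, y_test = train_test_split(
    voxels, stimulus_features, test_size=0.2, random_state=0
)

decoder = Ridge(alpha=100.0)          # heavy regularization: far more voxels than trials
decoder.fit(X_train, y_train)
predicted = decoder.predict(X_test)   # predicted stimulus features for unseen scans

# In the published pipelines, vectors like `predicted` would condition a
# generative model that renders a candidate image of what the subject saw.
```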

As part of developing the brain for use in cortical warfare, Australian soldiers are able to control killer robots using eight sensors inside a military helmet that works in coordination with a Microsoft HoloLens augmented-reality headset. An AI decoder translates the soldier’s brain signals into simple instructions that are sent to a robot, so that the soldier can stay focused on his or her surroundings. The augmented brain-robot interface (aBRI) has a ninety-four percent accuracy rate in human-directed movement. A soldier only has to imagine the direction they want the robot to move, and it does so. The only training needed involves learning how to focus on the flickering of the HoloLens headset.
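
Although the military system’s internals are not public, the description of soldiers focusing on a flickering display matches the textbook SSVEP (steady-state visually evoked potential) approach sketched below: each direction cue flickers at its own frequency, and the decoded command is whichever flicker frequency dominates the EEG spectrum. The frequencies, sample rate, and command names here are assumptions.

```python
# Textbook SSVEP-style sketch (not the Australian system itself): pick
# the command whose flicker frequency carries the most spectral power.
import numpy as np

FS = 256  # EEG sample rate in Hz (assumed)
FLICKER_HZ = {"forward": 8.0, "left": 10.0, "right": 12.0, "stop": 15.0}  # assumed cues

def band_power(eeg, target_hz, fs=FS, half_width=0.5):
    """Spectral power of a 1-D EEG epoch within +/- half_width Hz of target_hz."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= target_hz - half_width) & (freqs <= target_hz + half_width)
    return spectrum[mask].sum()

def decode_command(eeg_epoch):
    """Return the direction whose flicker frequency dominates the epoch."""
    scores = {cmd: band_power(eeg_epoch, hz) for cmd, hz in FLICKER_HZ.items()}
    return max(scores, key=scores.get)

# Example: a synthetic 2-second epoch dominated by a 10 Hz flicker decodes as "left".
t = np.arange(2 * FS) / FS
epoch = np.sin(2 * np.pi * 10.0 * t) + 0.3 * np.random.default_rng(1).standard_normal(len(t))
print(decode_command(epoch))  # -> "left"
```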

Brain art and performance are now a common occurrence in galleries, museums, alternative spaces, and universities. Most of these performances involve artists who compose in real time in a multimodal and multimedia context with both performers and audience members. Currently, there are a number of artistic BCI environments that allow users to play with and modify animations and soundscapes, as well as examples of BCI control of instruments and other tools for artistic expression and exploration. These works can be described along five categories: input, mapping, output, format, and audience involvement. Most brain artworks fit into some combination of these methods. There are also four basic types of BCI control within these methods: passive, which relies on preprogrammed artist material; selective interaction, which allows the user to modulate emotions in order to change the end result; direct, where the user chooses specific outputs like musical notes or brush strokes; and collaborative, in which multiple users interact either individually or with one another. Most of these processes are non-invasive, meaning they do not involve implanted sensors.
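
One way to keep these descriptive axes straight is to treat them as a small data model. The sketch below simply encodes the categories named above; the field values and the example artwork are invented.

```python
# Organizational sketch of the taxonomy described above; the category
# names come from the text, all field values below are invented.
from dataclasses import dataclass
from enum import Enum, auto

class ControlMode(Enum):
    PASSIVE = auto()        # preprogrammed artist material
    SELECTIVE = auto()      # user steers the result via emotional state
    DIRECT = auto()         # user picks specific outputs (notes, brush strokes)
    COLLABORATIVE = auto()  # multiple users interact

@dataclass
class BrainArtwork:
    input: str                  # e.g. "consumer EEG headset"
    mapping: str                # e.g. "band-power thresholds -> media cues"
    output: str                 # e.g. "real-time projection and sound"
    format: str                 # e.g. "live performance"
    audience_involvement: str   # e.g. "audience watches the signal as it is produced"
    control: ControlMode
    invasive: bool = False      # most brain artworks use non-invasive sensing

example = BrainArtwork(
    input="consumer EEG headset",
    mapping="band-power thresholds -> media cues",
    output="real-time projection and sound",
    format="live performance",
    audience_involvement="audience watches the signal as it is produced",
    control=ControlMode.SELECTIVE,
)
```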

Noor: A Brain Opera—Is There a Place in Human Consciousness Where Surveillance Cannot Go?, my first brainwave opera, was developed as a performative work reflecting my understanding of the pervasive and increasing use of surveillance, both networked and biometric. Noor, which debuted in 2016, also responded to the acceleration of the posthuman and to new global initiatives within the scientific and military communities regarding the exploration of consciousness and, more specifically, the brain. This thinking formed the theoretical foundation for the creation of the brain opera in a 360-degree immersive theatre. During the opera, the performer’s brainwaves launched videos, a sonic environment, and a libretto while she directly interacted with an audience, who simultaneously watched her brainwaves displayed in real time. Noor employed an Emotiv EEG brainwave headset that measured the performer’s brainwaves against preset emotional thresholds of interest, excitement, meditation, and frustration. The opera, I believe, suggests that the posthuman, consciousness, and surveillance are coalescing at an accelerated rate. I applied the technique of the surveillant assemblage throughout the development of Noor, taking suggested images and phrases about the life of Noor Inayat Khan, the historical figure on whom the opera is based, and replaying them at random moments according to the different emotional states of the performer wearing the brainwave headset.
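
The triggering logic at the heart of this setup can be sketched simply. The fragment below assumes a stream of normalized (0-to-1) emotion metrics is already arriving from the headset driver; the threshold values, cue file names, and the play_cue stand-in are hypothetical, not the opera’s actual cue list or the Emotiv SDK.

```python
# Minimal sketch of threshold-triggered media cues, assuming a stream of
# normalized (0-1) emotion metrics from the headset driver. Thresholds,
# cue file names, and play_cue are hypothetical placeholders.
import random

THRESHOLDS = {"interest": 0.7, "excitement": 0.6, "meditation": 0.8, "frustration": 0.5}

CUE_BANKS = {  # invented placeholders for the opera's video/sound banks
    "interest": ["video_bank_a.mov", "drone_texture.wav"],
    "excitement": ["video_bank_b.mov", "chase_theme.wav"],
    "meditation": ["stillness.mov"],
    "frustration": ["interrogation.mov"],
}

_last_above = {}  # remembers which metrics were already over threshold

def play_cue(path):
    """Stand-in for whatever actually launches video or sound in the theatre."""
    print(f"launching {path}")

def on_metrics(metrics):
    """Fire a random cue from a bank each time its metric crosses its threshold."""
    for emotion, value in metrics.items():
        above = value >= THRESHOLDS[emotion]
        if above and not _last_above.get(emotion):  # trigger on the rising edge only
            play_cue(random.choice(CUE_BANKS[emotion]))
        _last_above[emotion] = above

# One frame of metrics arriving from the headset:
on_metrics({"interest": 0.75, "excitement": 0.4, "meditation": 0.2, "frustration": 0.9})
```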

In 2020, I created AIBO, an emotionally intelligent artificial intelligence brainwave opera that asked, “Can an AI be fascist?” It built upon the themes of Noor by using a BCI connected to a bodysuit of light worn by the performer as if it were an exterior nervous system. In performance, she would trigger banks of emotionally themed images and sounds. AIBO included a custom-built “sicko,” or perverted, AI character that ran live in the Google cloud and whose emotions were analyzed for synthetic feelings. The perverted AI was built from scratch using public-domain materials detailing male perversion as well as war-focused books and movie scripts. AIBO also worked with the surveillant assemblage, expanding it beyond BCIs alone to include the built demons of AI, now a focus of DARPA’s research.

1. N. Katherine Hayles, “Human and Machine Cultures of Reading: A Cognitive-Assemblage Approach,” PMLA 133, no. 5 (2018): 1225.

2. Michel Foucault, Technologies of the Self: A Seminar with Michel Foucault, ed. Luther H. Martin, Huck Gutman, and Patrick H. Hutton (Amherst: University of Massachusetts Press, 1988); Gregory Bateson, “The Cybernetics of ‘Self’: A Theory of Alcoholism,” Psychiatry: Journal for the Study of Interpersonal Processes 34, no. 1 (1971): 1–18.

3. Jacques J. Vidal, “Toward Direct Brain-Computer Communication,” Annual Review of Biophysics and Bioengineering 2 (1973): 157–180.

4. See Ellen Pearlman, “I, Cyborg,” PAJ: A Journal of Performance and Art 37, no. 2 (2015): 86.

5. Frank Matheson, “Looketh Upon Me. Tis I Who Be Magneto.,” Biohack.me, April 2015, https://forum.biohack.me/index.php?p=/discussion/875/looketh-upon-me-tis-i-who-be-magneto.

6. Donna J. Haraway, “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century,” in Simians, Cyborgs, and Women: The Reinvention of Nature (New York: Routledge, 1991), 151.