Genesis of New Forms of Electronic Music
“Joel Chadabe is the only composer I know of who is seriously involved with automatic electronic music,” wrote Tom Johnson back in 1972 in the Village Voice [1]. Of course, there had been many composers who had explored the uncharted territory of musical composition generated by machines. Their work is well documented, and they have been quite influential: Lejaren Hiller, famous for his string quartet in four movements, the Illiac Suite, composed at the University of Illinois at Urbana-Champaign (1956–1957), opened the way to “serious” musical composition generated by a computer; Hiller went on to compose several works by programming a computer with specific algorithms handling musical operations. On the other side of the ocean, in France, Iannis Xenakis [2] and Pierre Barbaud [3] also engaged with computational music around the early 1960s.
The distinction in the Village Voice statement is that these pioneers programmed a machine to generate musical scores, which, in turn, were given to human performers. Soon, however, with the arrival of Max Mathews’s acoustic compilers, such as MUSIC III (1960), MUSIC IV (1963) and MUSIC V (1969), it became feasible to program algorithmic compositions and have computer-generated sound synthesis as the result. The computer seemed a tool well suited to algorithmic music, where the generation of pseudorandom numbers was a contribution particular to the software approach.
However, during the 1960s, a number of inventors developed analog circuits for the purpose of creating electronic music: in California, Donald Buchla; in Rome, Paolo Ketoff; and on the East Coast, Robert Moog. Later, individual musicians, not entirely satisfied with the synthesizers available on the market, created their own systems. This was the case with Salvatore Martirano. During the same period when Chadabe was assembling a system of commercial electronic modules around the idea of interactive music, Martirano, starting in 1969, was building the SalMar Construction, a very large synthesizer designed for creating music through automated systems reacting to gestural inputs. David Rosenboom, who on occasion collaborated with Martirano, published an excellent description of the architecture and the purpose of the SalMar Construction [4].
Chadabe, in a way, did the same, but with the synthesizer modules made by Moog. He was led to this by his first contacts with electronic music. These occurred in Rome, where he was in residence. He became friends with another American composer, John Eaton, who had great hopes for using electronic technology in live performance, a very difficult endeavor at that time. Rome was actually a good place for this project, because the American Academy in Rome had an electronic music studio, which had been set up by Otto Luening, visiting composer and cofounder of the Columbia University electronic music studio, which, in 1959, became the Columbia-Princeton Electronic Music Center [5]. It was also a good location because a brilliant electronic engineer, Ketoff, had just built a synthesizer, the Fonosynth, now at the Deutsches Museum in Munich. Under Eaton’s guidance, Ketoff undertook the design of a small, portable synthesizer he called the Synket, for Synthesizer-Ketoff. Eaton went on to realize the first-ever composition with live synthesizer, Songs for RPB, for Synket, two pianos and voice, in 1965 [6]. Chadabe witnessed all this as he became attracted to electronic music, just before he returned to the United States to be hired that same year by the State University of New York at Albany (SUNY Albany).
Emergence of Interactivity
The school asked Chadabe to set up a studio, and he turned to Moog, who worked in Trumansburg, New York. Progressively, with the help of successive grants, he and Moog built the largest ensemble of Moog modules, which he then called the Coordinated Electronic Music Studio (CEMS) [7]. He described the project as a “new approach to analog-studio design” [8]. The system thus assembled through the collaboration of Chadabe and Moog consisted of a large number of modules, which were divided into three categories: the audio system (oscillators, noise source, filters, random signal source, mixers, amplifiers); the control system (envelope generators, sequencers and other ancillary devices); and the timing system, which provided the clock pulses for the sequencers. It was to be built without a keyboard, unlike the instruments made by Moog. Instead, the sound-producing modules were to be placed under the command of control voltages, which, in the CEMS, mostly came from a bank of various sequencers. Random generators were used to produce automatic performance controls. This way, music could be programmed by setting the steps of the sequential switches and the speed of the clocks; the random generators controlled pitches and durations, so that when the system was started, music was automatically produced. At the same time, Chadabe, through joysticks attached to the device, would interact with the random processes. This way, he wrote, “the performer, in turn, influences the instrument by performing. A performer and an interactive instrument have a relationship that is conversational” [9].
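The logic of the CEMS, as described above, can be suggested in a few lines of code: a clocked sequencer draws pitches and durations from random generators, while a performer’s joystick value steers the otherwise automatic process. This is a minimal sketch for illustration only; the function name, pitch ranges and joystick scaling are assumptions of mine, not features of Chadabe’s actual analog design.

```python
import random

def cems_sketch(steps=16, clock_period=0.25, joystick=0.0, seed=1):
    """Toy model of the CEMS idea: a clocked sequencer whose pitches and
    durations come from random generators, nudged by a joystick value.
    All names and ranges here are illustrative assumptions."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    for _ in range(steps):
        # A random generator picks a pitch; the joystick (-1..1) biases it
        # up or down, standing in for the performer steering the process.
        pitch = 48 + rng.randrange(24) + round(joystick * 12)
        # Durations are whole multiples of the clock pulse, as with the
        # sequencer speeds set on the timing system.
        duration = clock_period * rng.choice([1, 2, 4])
        events.append((round(t, 3), pitch, duration))
        t += duration
    return events
```

Running the same seed with different joystick values shows the point of the design: the automatic stream is reproducible, yet the performer’s gesture audibly bends it.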
At a time when microcomputers did not exist, this was a major step toward a studio under the control of analog devices. Because Chadabe had added an important feature to his setup, the joysticks attached to the sequencers and the clock, a musician could alter and divert the course of the programming. He inaugurated this approach in the 1971 piece Ideas of Movement at Bolton Landing, in which he used the handles to modify the behavior of the system: This was one of the first interactive pieces of electronic music. With Ideas of Movement at Bolton Landing, he became an actor in the emergence of interactivity. Thereafter, Chadabe started to investigate all sorts of systems enabling this relation between the composer/performer and the electronics.
From this early start, Chadabe saw electronic music composition as a way to place the interaction between the musician and the devices at center stage.
A Fragile Medium
Chadabe was aware that electronic music was a fragile medium. Technology changed rapidly; it was not easy to connect to a large audience; distribution of recordings was unstable and reached only a small circle. Furthermore, the history of this music was curiously poorly understood: Composers tended to focus on their work by adopting the latest technology, and scholars did not have the necessary training to understand something that, often, was only produced by loudspeakers [10]. When musicians used real-time techniques, it was even worse for them, because it led to a puzzling kind of materiality whereby gestures activated sounds without instruments, through mysterious interfaces. Of course, there was also an increasing number of mixed pieces, in which instruments or the voice played with electronic sounds, but even this did not much motivate music scholars. The 1960s and the 1970s saw several attempts to make this music and the technology behind it more accessible, and Chadabe did participate in this effort. He collaborated on the ephemeral journal launched by Moog, the Electronic Music Review [11], and in 1975, he wrote a chapter for a book edited by Jon Appleton and Ronald Perera, The Development and Practice of Electronic Music [12].
In fact, the 1970s saw quite a few papers by Chadabe: in Musique en Jeu in Paris, Computer Music Journal, Melos in Germany. … It was as if teaching, programming synthesizers and computers and composing were not enough: He strongly felt the need to communicate and share his knowledge, his ideas and also his questions.
Chadabe as Programmer
Chadabe had eclectic tastes in music; he often confessed privately that he was fond of American musicals—from which he would sing songs for his friends—and he made no secret of loving jazz, which he performed effortlessly on the piano. Among his favorite tunes were “Stella by Starlight” and “Misty,” which he enjoyed playing. This may explain his move from serious electronic music in the 1960s—an example would be Blues Mix (1966)—to a somewhat lighter kind of electronic music. This happened around the time when he invented a program that generated music, often melodic, with a steady beat. This software, which he called M, developed from the late 1980s for the Macintosh computer, created a form of light electronic music, the sounds being produced by a Yamaha DX7 digital synthesizer. This is when Chadabe ventured into compositions with a strong jazz feeling, such as those gathered on the album After Some Songs (1986–1995) [13]. This development may also have been influenced by his collaboration with percussion player Jan Williams, with whom he toured extensively.
An Ear to the Earth
Like many other musicians of the late twentieth century, Chadabe approached the climate change crisis from a musician’s perspective. Around 2005, he started a movement he chose to call “Ear to the Earth.” He wanted to use the microphone to capture environmental sounds that our perception was missing because of our built-in way of focusing on some sounds while masking the rest. He was aware that, contrary to human perception, the microphone captured all sounds without discrimination. As Luc Ferrari once said of microphone recording: “It picks up a straw in the neighbor’s eye and reports a beam” [14]. This had already been observed by Rudolf Arnheim, who in 1936 wrote:
It [the microphone] artificially cuts out slices of reality, by this isolation making them the objects of special attention, sharpening acoustic powers of observation and drawing the listener’s attention to the expression and content of much that he ordinarily passed by with deaf ears [15].
The website attached to Ear to the Earth no longer exists, but other initiatives now cover this ground. Ricardo Dal Farra’s Balance-Unbalance series of conferences deals with much more than music but gives a prominent place to artists’ perspectives. In Asia, PerMagnus Lindborg, a longtime Singapore resident now operating from Hong Kong, addresses environmental awareness in his compositions and as chair of the Data Art for Climate Action (DACA) dual conference (Hong Kong and Graz, scheduled for 12–15 January 2022). It is also of note that the keynote speaker for DACA, Andrea Polli, is an artist whose work addresses the climate crisis in various ways. One approach in particular relies on sonification, by which physical phenomena that can hardly be observed visually are transformed into sounds. Artists have long been interested in sonification. One of them, Leon Theremin, invented his thereminvox around 1920 in a Saint Petersburg laboratory to convert the output of a gas variation–measurement device into changes in the pitch of an electronic oscillator. From the observation that moving his hands in front of the circuit modulated the frequencies emitted by the device, he developed his invention, which he then called the etherphone. It went on to be widely used in music composition and film sound design and is still popular today.
It was during the 1990s that sonification became a research field. Led by Gregory Kramer and others, the scientific approach was often accompanied by artistic creations [16], as composers took a significant role in making their art a component of social and environmental issues. Ear to the Earth became a part of a larger group of like-minded artists, such as Dal Farra, whose Balance-Unbalance found an echo in Leonardo [17]; Polli, who, as a visual and sound artist, has been at the forefront of creative perspectives in that field; and Will Schrimshaw, creator of an approach to “sonifying the earth” through geophonography [18].
In some cases, sonification took a whimsical turn, such as when the poet Rainer Maria Rilke described in 1919 what he called “primal sound.” He associated the lines that can be observed on the surface of a skull with those of a phonograph record and wondered if they could be turned into sounds. As he put it:
The coronal suture of the skull (this would first have to be investigated) has—let us assume—a certain similarity to the closely wavy line which the needle of a phonograph engraves on the receiving, rotating cylinder of the apparatus. What if one changed the needle and directed it on its return journey along a tracing which was not derived from the graphic translation of sound, but existed of itself naturally—well, to put it plainly, along the coronal suture, for example [19].
Interaction with Digital Systems
I consider myself very lucky to have been close to Chadabe. I first met him when I co-organized with Barry Truax a UNESCO-sponsored computer music workshop and conference in Aarhus, Denmark. Chadabe had made the trip from New York, and this was where he presented his piece for two theremin antennas, Solo [20]. He had just acquired the first commercial digital synthesizer, a Synclavier, from a small Vermont company, New England Digital Corporation. At that time, this was a microcomputer driving a bank of 16 sine wave oscillators arranged in a frequency modulation configuration. The device had to be programmed using the XPL language, a derivative of Algol. This particularly suited Chadabe, as he could write programs that generated music. The software for Solo was written by Roger Meyers from Chadabe’s ideas. Because the system operated in real time, Chadabe could interact with the Synclavier. He had long been friends with Moog, who, besides building synthesizers, had been making theremins since the 1950s, so he asked Moog to provide two 6-foot antennae based on his theremin instruments (which Chadabe soon replaced with car radio antennae for ease of transportation; see Fig. 1). With the left-hand antenna, he acted upon the tempo at which the Synclavier generated events; the right-hand one was used to change the software patches for a variety of timbres—what would be called in MIDI parlance a “program change.” The ability to change the orchestration on the fly has since been a large part of real-time computer music systems, and Chadabe was a pioneer [21].
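The two-antenna control scheme of Solo can be suggested in a brief sketch: one continuous input sets the rate of event generation, the other selects among discrete timbre patches, the equivalent of a MIDI program change. This is an illustrative model under my own assumptions; the value ranges, tempo bounds and patch count are invented for the example and are not the Synclavier’s actual parameters.

```python
def map_antennas(left, right, n_patches=8):
    """Illustrative mapping in the spirit of Solo: the left antenna value
    (0..1, roughly hand proximity) sets the tempo of event generation;
    the right antenna selects a timbre patch (a 'program change').
    Ranges and patch count are assumptions, not historical values."""
    # Clamp both control values to the assumed 0..1 range.
    left = min(max(left, 0.0), 1.0)
    right = min(max(right, 0.0), 1.0)
    tempo_bpm = 40 + left * 160                          # closer hand, faster events
    patch = min(int(right * n_patches), n_patches - 1)   # discrete timbre slot
    return tempo_bpm, patch
```

The design point the sketch makes is the asymmetry of the two hands: one shapes a continuous parameter of the ongoing process, the other switches the orchestration on the fly.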
Fig. 1. Performance device with gestural input from two antennae, from Joel Chadabe’s patent No. 4,716,804, filed in 1985.
Actions in the Community
After he retired from SUNY Albany in 1998, Chadabe continued teaching at New England colleges, such as Bennington College; he had already started teaching there in 1971 and was the director of its electronic music studio. He also joined the Manhattan School of Music (in 2001), as director of the electronic music studio, and shortly after, New York University (from 2002), as visiting professor. After he discovered the Kyma system [22], he made sure that those sites were equipped with it, and it became his main teaching and composing tool. His work as an educator had a deep influence on his students. He was always transmitting new ideas on the place of sound, music and the environment in his teaching, as well as in his writings.
Chadabe will be remembered as someone who always encouraged others to share information on their activities, undertake new ones and take part in the artistic and social networks he fostered. This is how he envisioned the Electronic Music Foundation (EMF), which was, in his mind, a receptacle for collecting primary information and documents on the history and the current state of electronic music.
Its website hosted a number of interviews with living composers, for instance. Following the demise of the site, these precious documents found their way to online journals such as eContact!, thus preserving the work of the foundation. The Ear to the Earth website was also conceived on a similar basis: Musicians, environmentalists and sound artists were to share their production and performances through the platform. Finally, Chadabe’s most recent endeavor was to develop the Intelligent Arts website. In this project, past and upcoming concerts were listed, often with pictures and sound clips. One could also find the various books published by Intelligent Arts. These were mostly small eBooks, which Chadabe called “ebookini.” One can find books by Richard Kostelanetz, Xenakis, Leigh Landy, Ramon Sender and others. He had just published a short one himself, devoted to the technology used to create, store and disseminate music. He called it simply It’s About Sound [23].
Much remains from Chadabe’s multiple activities. His music was assembled in a four-record set in 2020; this covers a large portion of his musical output. His book Electric Sound: The Past and Promise of Electronic Music, published in 1997, in addition to a full history of the field, presents numerous interviews with composers [24]. To accomplish the task of collecting these testimonies, Chadabe traveled the world, sat down with his colleagues and conducted interviews with well-prepared questions. The book gives a living and exciting view of the development and current practice of composers and sound artists involved in the creation of electronic music, and this makes it a priceless publication. The Electronic Music Foundation Institute (EMFI) [25] is now, under the guidance of Benjamin Chadabe and several dedicated and passionate friends of Joel, pursuing Chadabe’s dreams. These are but some of his powerful legacies.