Hubert Howe (see Figure 1) received AB, MFA, and PhD degrees from Princeton University, where he studied with J. K. (“Jim”) Randall, Godfrey Winham, and Milton Babbitt. As one of the early researchers in computer music, he was a principal contributor to the development of the Music 4B and Music 4BF programs. In 1968, he joined the faculty of Queens College of the City University of New York (CUNY), where he became a professor of music and director of the electronic music studios. He also taught computer music at the Juilliard School in Manhattan for 20 years.
Howe has been a member of the American Composers Alliance since 1974 and served as its President from 2002 to 2011. He is also a member of the New York Composers Circle and has served as its Executive Director since 2013. He is currently active as Director of the New York City Electroacoustic Music Festival, which he founded in 2009. Recordings of his music have been released on the labels Capstone and Centaur, among others.
This conversation took place over Zoom during March and April 2022. It begins with a look at Howe's student years at Princeton and traces his pioneering journey through to his musical activity today. Aspects of his composition and programming work are discussed, as well as his thoughts on pitch structure and timbral approaches to composition. More information about his music and work can be found at http://www.huberthowe.org.
Early Days
Mark Zaki: Tell me about your early days; what initially drew you to music?
Hubert Howe: I grew up in Los Angeles in a time before television. We used to have a radio that I would listen to at night for a program called Symphony Hall. I had been singing as a child, but when it came time to play an instrument, I chose the oboe, because the other kids wanted to play flute or clarinet, and they needed oboists. I quickly realized I was getting myself into something a lot more difficult than what most of the others were doing. But I liked playing oboe, and I started studying with a teacher and became pretty good at it.
When you play an instrument like the oboe, they always need you. My teacher was a freelance musician, and he would take me along as a second oboist. I got to play in several community symphonies from Orange County to the San Fernando Valley.
When I was a senior in high school, I started playing with the UCLA orchestra. I got to play English horn on works like Debussy's Pelléas et Mélisande and the West Coast premiere of Samuel Barber's Vanessa. These were great experiences, as they were big, long pieces that took an awful lot of work, and there were several performances of them.
Zaki: Did you continue to play when you went on to college?
Howe: Yes, I played in several groups as an undergraduate when I got to Princeton. There were opportunities to play new music by graduate students. Any time there was a group that needed an oboist, I would be it. I also played in the orchestra, which has always been a rather serious thing at Princeton.
Zaki: It's a strong organization and they have always had some great players. Did you know anything about the Princeton faculty before you got there?
Howe: When I got to Princeton [in 1960] there was a shock of reality. The only person I had heard of before was Roger Sessions, who was truly one of the great early 20th-century American composers. I didn't know much about him. He turned out to be rather nice, but Milton Babbitt was the leading light there. Everybody was interested in him.
Zaki: Babbitt was certainly an iconic figure. Did you get to work with him as an undergraduate?
Howe: A little bit. The crucial thing for me about Babbitt initially was that I heard a performance of “Vision and Prayer” [1961, soprano and synthesized tape], which just floored me. I started talking to Babbitt about it in detail. I was interested in the sounds and the synthesizer and everything about it. I started studying what I could, although there was very little available at that point. I eventually came to see Babbitt as a role model and mentor.
First Compositions, the Computer, and Writing Music 4B
Zaki: You came to Princeton to study music. How were you introduced to the computer? Was it through Babbitt?
Howe: No. I had roommates who were studying engineering and they were using the mainframe at school. I was interested in what they were doing, and I learned how to program. Then Jim Randall got me involved in a research project to do things like compute the number of chords and subcollections and so on. I did several studies with him like that. I also computed all the possible all-interval sets for Babbitt, and he was very happy with that. From that experience, I ultimately wrote my first article, called “Some Combinational Properties of Pitch Structures,” which was published in one of the first issues of the then-new journal Perspectives of New Music (Howe 1965).
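[Editor's note: One reading of that computation, enumerating the all-interval twelve-tone rows, fits in a few lines of modern Python. The backtracking sketch below is illustrative, not Howe's original program; the published count of such rows beginning on a fixed pitch class is 3,856.]

```python
def all_interval_rows(prefix=(0,)):
    """Backtracking search for twelve-tone rows beginning on 0 whose
    eleven successive intervals are all different."""
    if len(prefix) == 12:
        yield prefix
        return
    used = {(b - a) % 12 for a, b in zip(prefix, prefix[1:])}
    for pc in range(12):
        if pc not in prefix and (pc - prefix[-1]) % 12 not in used:
            yield from all_interval_rows(prefix + (pc,))

print(sum(1 for _ in all_interval_rows()))  # expected: 3856
```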
Eventually Randall got this idea of going up to Bell Labs to learn computer music. He wanted to bring a copy of the program they were using back to Princeton and run it there. I started going up with him, and during that time I met Max Mathews and learned about Music IV. I had no idea how important a person Max was, but he was always very nice and generous to me. Joan Miller, who was one of the programmers there, was exceptional. She was an acoustics researcher, and when we started studying her code, we found it was amazingly well written.
Zaki: You were playing and doing some programming. How did you get into composing then?
Howe: I did not compose until I got to Princeton. When I started composing, it was not professional work, and I didn't yet see composition as my life's work. At that time, I was writing tonal music and trying to be original, in ways that I thought at the time really were original. That was the music I was studying then, and that is what they teach you during your undergraduate years. But I quickly realized that my tonal writing was not going to be innovative; I came to feel that it was not possible to write tonal music without sounding like something that had been done before. What's more, I was sitting in this context of people taking all kinds of original, really new approaches to music, so I naturally started thinking along those lines. I started hearing all this other stuff that became much more interesting to me.
Zaki: And you stayed at Princeton to pursue graduate work in 1964.
Howe: Yes, I decided to stay because that was exactly when Princeton was importing Music IV. As I was starting grad school, I developed a very close relationship with Godfrey Winham and I started studying composition with him. Godfrey and I later wrote Music 4B, which kept the basic concepts of Music IV but changed the score format and began to add numerous new units to the program. [Note: Whereas Bell Labs used Roman numerals in naming their music programs, Winham and Howe used Arabic numerals.]
Godfrey was a very interesting and intelligent person and was independently wealthy, so he didn't have to think about making a living. He basically devoted his life to thinking about music and writing some of it. He said to me that he did not think that he was primarily a composer, and he did not write very much music; but everything he did write was outstanding. His best works were his articles. His senior thesis at Princeton was remarkably well done, and of course his dissertation, “Composition with Arrays,” was highly influential on me.
We would have our composition lessons on Saturday night for about three hours at a time. He would go over everything. I think one of the nicest comments he made to me about a piece that I wrote was “Well, that wasn't so bad.” But when he would go through it, he would look at every little detail. To see his insight and the way his mind worked was illuminating.
Zaki: While you were at Princeton did you have any connection with the activity at Columbia?
Howe: I was one of the first students to go from Princeton to Columbia to study electronic music in the studio there. During my time at Columbia, I got to know Vladimir Ussachevsky quite well. I also met Pril Smiley and Alice Shields and was able to observe how they worked. At the time it was frustrating, because you could not do something seemingly simple like take a recording and just transpose it up. They had procedures where you might manipulate the sound using tape speed, for example, but that would change the durations as well as the frequencies. Overtones would get moved into different areas, so it wouldn't sound the same.
What I found was that working in the studio was interesting from the standpoint of learning about sound. What they were doing then, which is generally the case with electroacoustic music now, is that you would hear sounds that you would never hear in music otherwise. You start thinking about the musical qualities of these sounds. I began getting the feeling that I wanted to be able to compose with those qualities, and of course this is what you can do with the computer, if you can figure out how to specify it.
Zaki: Today we walk around with exponentially more powerful devices in our pockets that can do all sorts of operations that must have been unimaginable then.
Howe: Working with mainframes in those days was a completely different experience than what you find today with modern computers. Punch cards were the only interface with the computer, and there were a lot of problems with them. Cards would get out of order, and everything would be screwed up. You put your deck together and submitted it. Hours later you would get a printout.
To use the computer to generate music you had to write in assembly language. The rate we were using was 10,000 samples per second. That meant that you would be doing a ton of operations just to generate something like a sine wave. The only way you could make that work was to have an efficient assembly language program that would put that all together. Music 4B, which was a revision of Bell Labs’ Music IV, was all written in assembly language.
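[Editor's note: The workhorse of the Music IV family was the table-lookup oscillator: a waveform stored once in memory and scanned with an increment proportional to frequency, costing only a few machine operations per output sample. A minimal Python sketch of the idea follows; the parameters are illustrative, and the original was of course hand-coded in assembly language.]

```python
import math

SR = 10_000              # the sampling rate Howe describes using at Princeton
TABLE_SIZE = 512         # length of the stored waveform (illustrative)

# One cycle of a sine wave, computed once: the "stored function."
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def osc(amp, freq, dur):
    """Truncating table-lookup oscillator: per output sample, one table
    read, one multiply, and one phase increment."""
    phase, incr, out = 0.0, freq * TABLE_SIZE / SR, []
    for _ in range(int(dur * SR)):
        out.append(amp * table[int(phase) % TABLE_SIZE])
        phase += incr
    return out

# Even one second of one tone means 10,000 samples to compute.
print(len(osc(0.5, 440.0, 1.0)))
```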
Zaki: Did Princeton have a digital-to-analog converter, or were you still going up to Bell Labs?
Howe: Originally, we had no converter, but Bell Labs gave us their old one, which was 10 kHz, monophonic. If you wanted stereo and wanted a higher sampling rate, you had to go to Bell Labs. What I would do is work everything out at 10k mono, and when I felt that it was ready to go, I would make a stereo version and take it to Bell Labs on a digital computer data tape. Then they would convert it and you would get back an analog audio tape. The most you could put on one data tape was about two minutes’ worth of music. If you were producing a larger piece of music, you would have to splice the final product together from multiple audio tapes. If you had a nine-minute piece, that was a lot of splicing. The data we walk around with on a flash drive today is so much more than what we had on the reels in those days.
Zaki: I think one advantage you had, working that way, is that you had time to think about what you were doing.
Howe: There are advantages, which I later came to appreciate. For one thing, you really must know what you are doing. You can't do anything until you describe it completely, and in ways that people who write [conventional] scores don't think about. You had to figure out what you wanted, to specify that in detail, and then put together a computer program that would work. You had to make sure you had all the bugs out of it. It was difficult, but it was worthwhile.
Music 4BF
Zaki: How did you come to develop Music 4BF?
Howe: The IBM 7094 at Princeton was getting overrun and out of date and just could not keep up with the demand. Princeton upgraded to the next level of computers that IBM came out with, the System/360. That meant that we could no longer use our assembly language programs. We had to rewrite everything, and that was when I wrote Music 4BF. I translated all the resources of Music 4B into Fortran.
We would have started a new project to design an assembly language program for the 360, but then Barry Vercoe moved to Princeton, and he just wrote the program [Note: Music 360, part of which was written in 360 assembly language and part in Fortran].
Vercoe was a fantastic programmer. Music 360 took the score and orchestra resources from Music IV and Music 4B, and those resources persisted all the way through Csound. People started using that from then on. It was a great program and well thought out, even better than Music 4B.
An Affinity for Atonal Music
Zaki: I imagine that the environment at Princeton was conducive to exploring nontonal music. There certainly was a strong serial influence and presence there because of Babbitt.
Howe: When I was at Princeton, I would say about half of the composers were writing tonal music and the other half were serial composers. But I was not attracted to serialism. I was more interested in the kind of atonality where you didn't have to complete the aggregate every so often. I was more interested in the relationships between the smaller groups of notes. What I did was develop a conception of music that was based on pitch relationships. I think that is the way people hear music; they hear what's happening locally. They only develop a picture of the larger aspects of the piece upon repeated listening.
When I started thinking about not writing tonal music, I got really interested in the atonal music of Arnold Schoenberg, particularly works like The Book of the Hanging Gardens (Op. 15), the Three Piano Pieces (Op. 11), the Five Pieces for Orchestra (Op. 16), and the Five Piano Pieces (Op. 23). Later, I got into the music of Bartók, particularly the Mikrokosmos.
Zaki: Not Schoenberg's serial music?
Howe: Some of it, but mostly his transitional music. However, I also read George Perle's Serial Composition and Atonality, which is a well-written and insightful book. I read the theories of Allen Forte, but I was never really convinced that he understood the music. He once wrote an article claiming that Schoenberg's Op. 11, No. 1 was a perfect demonstration of his atonal theories. I rewrote some of the Schoenberg passages discussed in the article in a way that I felt conformed better to the theory Forte was describing. That helped me work out my thinking as well. As all this was going on, I began to develop my own larger vision of musical coherence. This is something I ultimately worked into my dissertation, “Multi-Dimensional Arrays.”
Thinking about Pitch Structure in Atonal Music
Zaki: You have an interesting way of dealing with your musical material compositionally.
Howe: One important point I would stress is that all my music, whether electronic or acoustic, is based on pitch relations in many different dimensions. My first insight was that to make sense of music not based on triads, you must have some meaning and identification attached to any group of notes. That led to the concept of pitch structures, and I think every pitch structure has a unique identity, defined by its interval structure. We don't have a name for a structure like (0, 1, 3), but it's unique and does not sound like anything else.
My use of pitch structure as a basic concept is not the same as Allen Forte's theory of set complexes. He thinks that the interval content is the defining thing, regardless of where the intervals occur within the notes. For example, to Forte, (0, 1, 3) and (0, 2, 3) are the same. They are not the same for me; they are related by inversion.
Zaki: Is it an ordering issue?
Howe: No, it's a structural issue. Forte looks at (0, 3, 7) and (0, 4, 7) and says they are the same because the interval content is the same. But one is a major triad and the other a minor triad. If there is no difference between those, then we have no understanding of tonal music.
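[Editor's note: The distinction is easy to check computationally. In the illustrative Python sketch below (not Howe's own software), the two triads return identical interval vectors, Forte's measure, yet reduce to different transposition-class structures, which are related by inversion.]

```python
from itertools import combinations

def interval_vector(pcs):
    """Count of interval classes 1-6 over all pairs (Forte's measure)."""
    vec = [0] * 6
    for a, b in combinations(pcs, 2):
        vec[min((a - b) % 12, (b - a) % 12) - 1] += 1
    return vec

def structure(pcs):
    """A canonical representative of the set's transposition class:
    the smallest interval pattern measured from each member in turn."""
    pcs = sorted(set(pcs))
    n = len(pcs)
    return min(tuple((pcs[(i + j) % n] - pcs[i]) % 12 for j in range(n))
               for i in range(n))

minor, major = (0, 3, 7), (0, 4, 7)
print(interval_vector(minor) == interval_vector(major))  # True
print(structure(minor), structure(major))                # distinct structures
```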
Zaki: He was just checking off intervals.
Howe: Yes, and I think that is one of the basic things he got wrong. The basic structure in my music is an array. These are structures of at least two dimensions, where you must consider the structure of both the chords and the voices, like a series of chords. Most arrays that I use are 3×4, 4×4, or 5×4.
Another thing I learned from Jim Randall is that there are four multiplicative operations, each of which can generate a unique structure when applied to a set of notes. Everybody is familiar with identity and inversion, but there is also a cycle of fifths equivalence and a cycle of fourths equivalence. These four operations are referred to as M1, M5, M7, and M11.
[Editor's note: For further explanation, see, for example, https://en.wikipedia.org/wiki/Multiplication_(music).]
Zaki: Multiplying each pitch class in the set by the interval number?
Howe: Yes, but not all sets produce four different intervallic structures. In fact, there are sets that have only one form, (0, 2, 4) for example. This will be the same no matter which of the four multiplicative operations you apply. Many structures have just two different forms, but several have four. I classify all the structures that exist by type, whether they have just one form, or two, or four, and by the operations that produce them. If there are two, they can be related by inversion, by identity, or by cycle-of-fourths or cycle-of-fifths equivalence.
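[Editor's note: A hedged sketch of this classification, using multiplication modulo 12. The canonical form here is simply the smallest transposition-equivalent interval pattern, so that sets count as the same form exactly when they are transpositions of one another.]

```python
def structure(pcs):
    """Canonical representative of a set's transposition class."""
    pcs = sorted(set(pcs))
    n = len(pcs)
    return min(tuple((pcs[(i + j) % n] - pcs[i]) % 12 for j in range(n))
               for i in range(n))

def m_forms(pcs):
    """Distinct structures produced by the operations M1, M5, M7, M11."""
    return {structure([(k * p) % 12 for p in pcs]) for k in (1, 5, 7, 11)}

for s in [(0, 2, 4), (0, 1, 3)]:
    print(s, "->", len(m_forms(s)), "distinct form(s)")
# (0, 2, 4) -> 1 distinct form(s)
# (0, 1, 3) -> 4 distinct form(s)
```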
Zaki: Do you see the array as the horizontal being the temporal ordering and the vertical dimension as simultaneity?
Howe: That is a simple way of looking at it, but they can also be turned on their end where you can make the voices the chords and vice versa; and the elements of the array do not always end up being adjacent in the music. I use groups of notes that produce four structures and create arrays from those. So, you can have arrays that are 3×4, 4×4, 5×4, and 6×4.
It turns out that when you do this, there are large families of groups that are related by some basic identity. There are four particularly important families, which are related by the tetrachords (0, 1, 2, 7), (0, 2, 3, 5), (0, 3, 4, 7), and (0, 3, 6, 9). The last of these is the diminished seventh chord, the complement of which is the octatonic scale. There are many other smaller families.
Some arrays produce just eight notes, and some produce all twelve notes. With a 3×4 array you can have three collections of four notes that generate eight pitch classes, and you can have four collections of three notes that can generate all twelve. The same is true with tetrachords. There are sets of tetrachords where two groups of four tetrachords only produce eight notes, and two of them produce all twelve. When you get to pentachords and hexachords, they always produce all twelve. There is also the concept of weighted pitch classes, where a particular pitch class might appear twice in the array.
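[Editor's note: A small sketch of the pitch-class bookkeeping involved. The two 3×4 arrays below are the editor's illustrative constructions, drawn from the octatonic and diminished-seventh families mentioned above, not arrays from Howe's music.]

```python
def coverage(array):
    """Distinct pitch classes generated by an array whose rows are
    voices and whose columns are chords."""
    return sorted({pc % 12 for row in array for pc in row})

# Three transpositions of (0,1,3,4): the rows generate only eight
# pitch classes (together they form an octatonic collection).
eight = [[0, 1, 3, 4], [3, 4, 6, 7], [6, 7, 9, 10]]
# Three transpositions of the diminished seventh (0,3,6,9): all twelve.
twelve = [[0, 3, 6, 9], [1, 4, 7, 10], [2, 5, 8, 11]]

for arr in (eight, twelve):
    chords = list(zip(*arr))    # the four three-note chords (columns)
    print(len(coverage(arr)), "pitch classes; chords:", chords)
```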
To give you an example of how this can work, you can have a 3×4 array contained within a 4×4 array. You can also have a 4×4 array contained within a 5×4 array or a 6×4 array. That leads to possibilities where you can create note successions based on common tone relations. In some of my pieces I have entire sections that are related in a manner like that.
Zaki: You are giving it a pivot point, so to speak.
Howe: Something like that. In twelve-tone music, it's very often these combinatorial relations that determine how things go. This is similar, but it is a little more complicated because you must look at both the chords and the voices.
Queens College and Teaching
Zaki: After you finished your PhD, you were offered a teaching position at Queens College, CUNY.
Howe: Yes, in 1967. Queens College had a really distinguished faculty when I started teaching there. There were twelve composers on the staff. The big names were George Perle and Hugo Weisgall. I saw that people like Perle and Weisgall were not writing what I was writing, but they were clearly very decent and serious composers. The problem for me was that nobody there was interested in what I was doing. A typical response was “I don't care about your computers.” Later, when everybody started using computers for word processing and such, they were asking me for recommendations on what computers they should buy.
Zaki: Nobody was doing electroacoustic music when you started at Queens?
Howe: No, nothing. One thing to remember is that computers were expensive. Few places had the ability to do this kind of stuff. Bell Labs was ahead of everybody, and I think that was just because Max Mathews decided that this is what they were going to do, and he had the power to do it. They were going to get there first.
Zaki: What about the facilities at Queens?
Howe: After I had been at Queens for about a year, I got the chance to set up an electronic music studio. We got a couple of tape recorders and a Moog synthesizer. Later we got a Buchla system, but Moog was the standard.
One of the first challenges was that Queens did not have a computer. As soon as I got there, I let the people in the sciences know that I was interested in using a computer, and I got onto the committee that was deciding which computer to buy. I told them it had to have a converter, and they were very receptive to that idea. Ultimately, they got an XDS Sigma 7 (XDS, originally Scientific Data Systems, was later taken over by Xerox). It had a digital-to-analog converter and was a decent mainframe machine.
I wrote an assembly language program like Music 4B for that machine called Music 7, which we used for several years. I think we got the machine in the early 1970s and it lasted through the late 1980s. At that point, CUNY had set up its own computer center in Manhattan, so we could go in there and use a big 360 machine. I was able to use Music 360 there and write tapes. That was much faster than the XDS machine.
Zaki: Were you working with synthesizers at this point too?
Howe: Actually, I was. I never wrote any music involving the Moog or Buchla synthesizers, but I learned a lot about sounds from them. Much later, I did write music for synthesizers, mainly the Yamaha DX7. One important thing to mention is that Bob Moog went to Queens College. He was delighted when we put the studio in, and he came to the college and formally opened it for us.
I found it very interesting to work with the sounds on the Moog synthesizer, and probably one of the most interesting things about it was the filter. I later wrote a piece called “Mosaic” [2001, computer music] that used filtering in different ways. And I got to play that piece for Moog at a concert. In the introduction, I talked about how his filter had influenced me in this, even though I hadn't used his instrument to make the sounds.
NeXT Steps
Zaki: When did you start moving away from working with big metal?
Howe: Around 1990 personal computers began to get more powerful. The NeXT machine was the first one that had D/A and A/D converters built into it, and built-in converters later became much more common. Then there was a period where you could get quite good sound cards with converters to put into a personal computer. Now of course, you don't even need that. The computers that we have today have everything built right into them.
Zaki: Did you work on the NeXT platform?
Howe: I did for a while, yes. When the NeXT came out it was great; it was way ahead of everything else at the time. But then Intel came out with its series of Pentium processors, and each one outperformed the NeXT's processor. You could overclock them, and NeXT just got swamped. Plus, NeXT computers were always very expensive.
Zaki: I remember at Princeton in the early 1990s, you could not turn around without bumping into a NeXT machine. Paul Lansky was a real fan, and the whole program was centered around them. I cut my teeth on the NeXT platform.
Howe: They were great machines. But now your personal computer is much more powerful and can run rings around them. Since the 1990s I have been using personal computers that were powerful enough to make real computer music.
Spectral Considerations
Zaki: In addition to your computer music, you have written a considerable amount of music for traditional instruments and conventional forces. Does your compositional approach using pitch structures and arrays change with respect to the medium, or is there crossover between your acoustic and electroacoustic music?
Howe: All of my music uses the same structural basis of arrays. While I had written small instrumental pieces all during this period, I had a major opportunity to write instrumental music when I was a visiting professor at the University of Alabama in 1988–89. I had a lighter teaching schedule, with only a few lectures and such to give, which left more time to write music. I wrote my first symphony there, which is the longest piece I've ever written, over 40 minutes long. Only the second movement has ever been performed. I'm not so sure now that I would want the whole thing to be played.
Both that piece and my second symphony, which was written in 1992, have a kind of 19th-century sense of harmonic prolongation. I came to feel that that sense of prolongation wastes time; you don't want to hear an atonal chord prolonged as long as you might want to hear a triad.
So, I began a new direction with my quintet [1994, flute, clarinet, violin, cello, and piano], which I think is one of my better pieces. At 18 minutes, it might require too much of an attention span for many people today. This is my first piece in which I used the idea of instruments fading in and out together. I thought that was a very interesting and beautiful idea, and it became a bigger part of my computer music where fading can really be controlled more accurately.
Zaki: That sounds like a precursor to your interest in spectral merging.
Howe: Possibly, but that was an idea that goes back to Milton Babbitt, who asserted that you need three successive partials for the fundamental to be heard. I found that is not true. When you have a group of harmonic partials entering individually, at some point the fundamental jumps out. A lot of my timbral music plays with that concept. There is one level of pitches going on in the overtones and another in the fundamentals. The fundamentals don't often emerge until the sound has been going for a while.
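[Editor's note: The effect Howe describes, a fundamental that "jumps out" as harmonic partials accumulate, can be sketched with a few lines of additive synthesis. The parameters below are the editor's illustrative choices.]

```python
import numpy as np

SR = 44_100
f0 = 110.0                      # the fundamental that will eventually emerge
dur, fade = 12.0, 4.0           # total length and per-partial fade-in (sec)
t = np.arange(int(dur * SR)) / SR

sig = np.zeros_like(t)
for n in range(2, 10):          # partials 2-9 enter one at a time;
    start = (n - 2) * 1.2       # the 110-Hz fundamental is never played
    env = np.clip((t - start) / fade, 0.0, 1.0)
    sig += env * np.sin(2 * np.pi * n * f0 * t) / n

sig /= np.abs(sig).max()        # as partials pile up, the ear begins to
                                # supply the missing fundamental
```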
Timbral Music
Zaki: How did you get interested in timbral music?
Howe: In those early years, I wrote several timbre studies, and one of my early pieces, “Improvisation on the Overtone Series” [1977, computer music], was composed of overtones. As you listen to it, you realize how all that affects the timbres that go by. You realize that timbre is much more than what instruments can produce.
Another thing I had done in my student days was to study acoustics extensively. I concluded that nobody really understood musical instruments particularly well. Acousticians describe the ways instruments are constructed and talk a little bit about timbre, but instrumental sounds are much more complex. There are all kinds of nuances and things like pitch instability, rhythmic inaccuracies, vibrato, and intonation deviation, and they are all mysteries. These are all things that are not specified in music notation. Early authors of books on acoustics simply ignored all these ideas, but then their topic wasn't really music.
When I was going to Bell Labs, I got to know people like Jean-Claude Risset. People there were interested in these things, and they did some early studies in acoustics. Risset studied trumpet and violin tones. His study of trumpet sounds is interesting. He did an experiment where he had a room full of professional trumpet players and had a curtain from behind which he played sounds. He asked the trumpet players to tell him which sound was a real trumpet, and which one was a computer-generated trumpet. They couldn't tell the difference! So that says something about how accurate their understanding of it was.
Zaki: These were samples?
Howe: No, Risset was synthesizing sounds using Music V. He did an analysis of the sounds, which revealed things like the envelope for each harmonic. He used the properties from the analysis to build the sounds.
Then, people like John Chowning at Stanford started analyzing sounds and did similar studies. The thing about all these guys, and this is true of Bell Labs as well as Stanford, is that there seemed to be a real focus on identifying instruments, or voices, or qualities of sound. We want to say that is a cello. Or that is a violin.
Zaki: By identify, you mean to label a particular sound's signature?
Howe: You have a complex of properties that go into creating a sound, and the goal is to identify what it is. There is no idea that you can be creative with these properties, like vibrato, intonation deviation, envelopes, timbre, and other things. One example I sometimes use is that in instrumental music if you want to have a pizzicato sound you must use a stringed instrument. In computer music, there is no reason we can't think of having a pizzicato on a clarinet tone. That type of creativity was just beyond their realm of focus. When Chowning came up with the idea of spectral fusion—and I think the first time was in his piece “Phoné” [1980–81, four-track tape, based on sung-voice synthesis]—he had this brassy sound that went on for a long time and that finally morphed into a female voice singing. I think it was important to him that you had two endpoints along that continuum that were clearly identifiable. To me, what was interesting was what was going on…
Zaki: … in the middle, right.
Howe: Exactly, that's where you find things that you do not hear in instrumental music. I am very interested in using those properties creatively.
Overtones and Undertones
Zaki: Some of those properties you are describing, like vibrato for instance, are spontaneous and variable means of injecting humanity into a performance. It gives the music a humanizing element.
Howe: You could think of it that way. But the thing is, when using vibrato in computer music you are not limited to what musical instruments can do. You can have vibrato at a much faster speed than anybody can physically play, and much slower too. You can have it changing in a structural way between one and the other.
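[Editor's note: A minimal sketch of vibrato treated as a structural variable. The rate glides from well below to well above anything a player could produce; all figures are illustrative.]

```python
import numpy as np

SR, dur = 44_100, 8.0
t = np.arange(int(dur * SR)) / SR

rate = np.linspace(0.5, 40.0, t.size)         # vibrato rate in Hz, gliding
vib_phase = 2 * np.pi * np.cumsum(rate) / SR  # integrate the changing rate
depth = 0.03                                  # +/- 3 percent frequency deviation

freq = 440.0 * (1.0 + depth * np.sin(vib_phase))
tone = np.sin(2 * np.pi * np.cumsum(freq) / SR)  # the vibrato-bearing tone
```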
Zaki: Paul Lansky used to talk about the corporeal aspect of a sound, and how for a listener, if you could understand the physicality in the sound, you would have a different relationship to the sound than if it was purely abstract. If you can imagine the hamster wheel going around, or it sounded like something was being struck or plucked for example, you would have a certain kind of understanding of that sound. So, I wonder if there is a point when you try to “pluck a clarinet sound” and we do not understand the mechanism, if that makes it confusing somehow? Or create a kind of distance?
Howe: If you think about it, with most computer music, the listener is constantly wondering what produced the sound. The computer can take instrumental or acoustic sounds and transform them in ways that you could never have in the physical world. I am sure that, when people are listening to computer music, they are wondering what is producing it. But I don't see doing something without that kind of physicality, or if the listener can't imagine a physicality, as invalidating the sound.
Zaki: Oh no, I'm not implying that it is invalid, I would just suggest that listeners might have a different kind of relationship to the sound, and that this could frame their understanding of it. Of course, all listeners bring their own baggage and associations to bear on what they are hearing.
Howe: That is certainly true. One thing about my computer music is that I use a lot of created sounds, sounds that are not associated with traditional instruments. That doesn't mean that there would not be parts of them that would remind people of those instruments, but I am trying to do something different, do something new. Originality in sound is an important point.
One of the main emphases in many of my pieces concerns overtones, going back to my study of acoustics. My compositions “Improvisation on the Overtone Series,” the Timbre Studies, and the Harmonic Fantasies involve harmonic generator instruments that are long and complex; I have many different components in them. Each component might be devoted to a different overtone and so forth.
I also use overtones in a different way in my inharmonic studies. This goes back to a piece I heard by Risset called “Inharmonique” [1977, soprano and two-track tape]. It was a piece for voice and computer dealing with inharmonic sounds, and I thought it was really interesting. As I explored this idea, I developed several ways to deal with the inharmonic components of a sound, like pitch compression, which takes all the notes and compresses or expands them into a certain interval. There are also frequency shifting and the undertone series. Another approach was to use irrational numbers like square roots or pi as the basis for a spectrum.
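[Editor's note: Howe does not spell out his mappings here, but both devices admit a plausible sketch. Below, pitch compression is read as refitting a spectrum's whole span into a chosen interval on a log-frequency scale, and a pi-based spectrum replaces the octave ratio 2 with pi; both readings are the editor's assumptions.]

```python
import math

f0, N = 220.0, 12                   # illustrative fundamental, partial count
harmonic = [n * f0 for n in range(1, N + 1)]

def compress(freqs, semitones):
    """Refit the spectrum's total span into the given interval,
    preserving the relative spacing of partials in log frequency."""
    lo, hi = min(freqs), max(freqs)
    span = math.log2(hi / lo)
    return [lo * 2 ** (math.log2(f / lo) * semitones / (12 * span))
            for f in freqs]

# Partial n at f0 * pi ** log2(n): partial 2 sits a "pseudo-octave" of
# ratio pi above the fundamental, so nothing aligns with the harmonics.
pi_spectrum = [f0 * math.pi ** math.log2(n) for n in range(1, N + 1)]

print([round(f) for f in compress(harmonic, 7)])   # squeezed into a fifth
print([round(f) for f in pi_spectrum])
```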
We don't hear low tones very well, at least until you get up to the range of the piano, and even there we have problems with the lower notes. To make realistic undertones, you must start above the highest note on the piano, so that the series comes down into an area where you can hear it. I found that if you compress undertones, then instead of heading down toward zero they can be fitted into a range where every member remains audible.
Zaki: You really need a very high fundamental then.
Howe: Yes, and the undertone series goes down from the fundamental.
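[Editor's note: A brief sketch with illustrative numbers: the undertone series taken down from a very high starting pitch, then compressed so that it no longer heads toward 0 Hz. The compression mapping is the editor's assumption.]

```python
import math

top = 8372.0                      # C9, an octave above the piano's top note
undertones = [top / n for n in range(1, 13)]   # 8372, 4186, 2791, ...

def compress_span(freqs, octaves):
    """Refit the series into a fixed span so its lowest members stay
    comfortably audible."""
    hi, lo = max(freqs), min(freqs)
    span = math.log2(hi / lo)
    return [hi / 2 ** (math.log2(hi / f) * octaves / span) for f in freqs]

print([round(f) for f in compress_span(undertones, 2.0)])  # two-octave span
```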
One of my pieces, “Emergence (Timbre Study No. 8)” [2012, computer music], deals with the idea of when you hear enough overtones to actually hear the fundamental. Part of the point of these pieces is the idea that you are going to hear pitches on different levels, as overtones and fundamentals.
That idea also works into my music based on filtering. When you use filters, you can get things that are similar to both instrumental and vocal sounds, vowels and so forth. They are all characterized by having formants in particular areas. Rather than trying to imitate instruments, when I use filters, I am usually focusing on pitches, so that you have one pitch level where you hear the fundamentals, and then, a higher level where pitches are brought out by the filtering, which might be changing as well.
I wrote a piece, “Timbre Study No. 6” [1997, computer music], using both fixed and variable filters, which sounds as if vowels and diphthongs are going on at the same time, all of them creating some sort of pitch object. The fundamentals are also creating related pitch objects. The challenge is that, to make all this work, there must be at least four octaves between the fundamental and the overtone. One consequence is that on extremely low notes there is a lot you can do, but on higher notes there is less. If you listen to those pieces, you will find a lot of very low notes that go down to the bottom of the piano range and even below.
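[Editor's note: The four-octave requirement has a simple arithmetic reading, sketched below with illustrative figures: four octaves is a frequency ratio of 16, so the filtered pitch must be at least the 16th harmonic, and from there upward adjacent harmonics lie about a semitone or less apart, close enough for a narrow filter to select pitches chromatically.]

```python
import math

f0 = 55.0                 # a low fundamental (illustrative)
formant = 880.0           # center frequency of a fixed filter

n = round(formant / f0)   # the harmonic the filter brings out
print(n, n * f0)          # 16 880.0 -- exactly four octaves above f0

# Harmonics 16 and 17 are only ~105 cents apart:
print(round(1200 * math.log2(17 / 16)))
```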
Thinking about the differences between fundamentals and overtones, I developed an idea that I called “usable frequencies in music.” You can't really hear things accurately below the bottom end of the piano. Down there, tones don't really make a pitch; they start to make individual beats, so to speak. The way you incorporate low frequencies like that into the music is to use them as vibrato speeds or for other variable qualities. I have done things where I will have a note, and the vibrato speed is six octaves below that. You would never know that, but that is a fact of what is happening at that point.
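[Editor's note: The arithmetic is straightforward: a note at A4 (440 Hz) with its vibrato six octaves down gives a rate of 440 / 2^6 = 6.875 Hz, an entirely ordinary vibrato speed, derived structurally from the pitch itself.]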
Zaki: So it acts as a low-frequency oscillator.
Howe: Yes, I think of it as a frequency that is doing something in the music while something else is happening. At the upper end, you have the problem that different people have different upper limits to the frequencies they can hear. As you get older you lose some of the sensitivity to high frequencies. In fact, a lot of young people lose the high end because they are listening to music that is too loud.
So, I think that 12 or 13 kHz is about as high as you should go in music. That does restrict the overtones that you can use, but the highest note on the piano is a little bit over 4 kHz. 8 kHz is an octave above that, and 16 kHz is another octave above that. So somewhere about an octave and a half above the end of the piano is the limit that you can consider. That is for everything: overtones, fundamentals, and everything else.
Zaki: That is still pretty high.
Howe: We're not used to hearing things that high in music, but that does not mean that they do not exist. Live music is very complicated. In instrumental music, it is certainly possible to produce frequencies that go that high. They are not necessarily heard in and of themselves, but they can create combination tones and beating and other things like that.
Creating an Electroacoustic Music Festival in New York
Zaki: You mentioned your time in Alabama, where James Paul Sain was one of your students. There is a direct lineage there with the establishment of the New York City Electroacoustic Music Festival.
Howe: Yes, he was one of my students and later got a job at the University of Florida. After he got there, he started an annual event called the Florida Electroacoustic Music Festival (FEMF), which I attended for many years. When James said that he was going to stop FEMF in 2008, I happened to have a good group of computer music students at Queens. I asked them if they would be interested in doing something in New York, and they agreed. We put a festival together and presented the first New York City Electroacoustic Music Festival (NYCEMF) at the CUNY Graduate Center in 2009. We had nine concerts and late-night events at Galapagos and the Issue Project Room in Brooklyn. It was a big success. The next year we organized a similar kind of festival that was a bit bigger and just as much of a success.
Following this, the students all got jobs and left the university. I wanted to continue NYCEMF, but I realized it was necessary to take a different approach. I contacted everybody in the New York City area who was serious about electroacoustic music, and we formed a steering committee. The committee's members stretch from Rutgers University to Stony Brook, but otherwise it is centered in New York City. We also have some exceptional performers on board like Madeleine Shapiro and Esther Lamneck, who are mainstays of the festival.
The timing didn't work in 2012, so we next presented it in 2013, still at the CUNY Graduate Center. After that, I realized that CUNY was too difficult a place to work. We next moved to the Abrons Arts Center, which worked out well for several years. Probably the most significant thing that happened during that time was that the New York Philharmonic included us as part of their biennial. Alan Gilbert was the director at the time, and he was quite interested in new music. Unfortunately, he ended up getting increasingly frustrated by the reception he was getting for his efforts to bring new music programming into New York.
Zaki: Yes, the Philharmonic has always had a tricky relationship with new music.
Howe: As it turned out, the next director of the Philharmonic was not interested, so regrettably that connection fizzled.
Zaki: I have been around NYCEMF since the beginning, but I am curious, from your vantage point as director of a major festival and as a curator, what changes have you seen in the music and the approach to electroacoustic music over the years?
Howe: There has been a big expansion of music coming from Asia. We get many submissions from China, for example. They have a lot of interest in electroacoustic music. Europe is also very big, although I get the feeling that European music is culturally ingrained and compartmentalized in certain ways. They continue to do what they have done for many years.
Zaki: Like the acousmatic traditions you see in the UK, for example.
Howe: Absolutely. But one thing for sure, those places get much better funding.
Zaki: The Covid pandemic has also created a challenging environment for the arts. How did you manage that with NYCEMF, and do you see any changes or adjustments continuing in the long term once the pandemic shakes out?
Howe: We did virtual festivals the last two years, and we can leave everything online so people can revisit the music. I think that has been beneficial, and we will be leaving some of the music online from now on. That brings a kind of persistence to the festival; it shows that the event happened but is still there, in a virtual sense.
Looking Forward, Looking Back
Zaki: You are retired from Queens College now, but you are not showing any signs of slowing down.
Howe: Well, the good thing is I have been able to compose much more and do a lot of other things I have wanted to do. I have also written several articles since I retired. I always thought I would teach forever, and when it came to academic work, teaching is what I liked the most. That is what I miss about it, too.
Zaki: It is interesting to note that you have come full circle from your childhood introduction to music in a sense and have taken up singing again. Has that influenced your composition at all?
Howe: No, not yet, anyway. I have thought about writing some choral music, in which I might bring back tonality. I'll think about it as we get into the nitty-gritty. The choir in which I sing is performing two rather difficult pieces: Leonard Bernstein's Chichester Psalms and an oratorio by the composer R. Nathaniel Dett called The Ordering of Moses. That is a rather interesting, impressionistic piece that has some strange harmonies that come out of nowhere. It has certainly been a worthwhile piece to learn. So, we will see.
Zaki: Singing is about as fundamental as it gets.
Howe: Yes, that's for sure.