Abstract

On several keyboard instruments the produced sound does not depend exclusively on a discrete key-velocity parameter, and minute gestural details can affect the final sonic result. By contrast, variations in articulation beyond velocity normally have no effect on the produced sound when the keyboard controller uses the MIDI standard, as the vast majority of digital keyboards do. In this article, we introduce a novel keyboard-based digital musical instrument that uses continuous readings of key position to control a nonlinear waveguide flute synthesizer with a richer set of interaction gestures than would be possible with a velocity-based keyboard. We then report on the experience of six players interacting with our instrument and reflect on their experience, highlighting the opportunities and challenges that come with continuous key sensing.

Several keyboard instruments offer a more-or-less subtle position- or gesture-dependent control over the timbral and temporal characteristics of the sound of a note, as reviewed by McPherson (2015) and Moro (2020, chapter 2). The Ondioline, an electronic synthesizer invented in 1941 by Georges Jenny, is a particularly outstanding demonstration of how continuous key position, combined with side-to-side vibrato, can make for a remarkably expressive instrument, even by today's standards (Fourier, Roads, and Perrey 1994). Nevertheless, for many years it has widely been accepted that the scalar parameter of onset velocity is enough to characterize the qualities of a note for the purposes of synthesizing or analyzing a performance on a keyboard instrument (Ortmann 1925; Moore 1988).

The complex gestural language of a digital musical instrument (DMI) performer is reduced in dimensionality and bandwidth according to the mechanical, sensorial, and software constraints of the interface, projected down through a bottleneck and then expanded out again into the parameters that control the sound generation (Jack, Stockman, and McPherson 2017). Any data not actively selected for digitization will neither reach the sound generator nor affect the resulting sound, and is consequently lost in the process. When a keyboard DMI is designed to let through its bottleneck only the information relative to note pitch and velocity, all those more-or-less subtle forms of control available on instruments whose behavior is not entirely explained by discrete key presses and velocity will disappear. Some of the attempts to overcome the discrete characteristics of the keyboard interface and widen its interaction bottleneck, such as the Seaboard (Lamb and Robertson 2011) and the Continuum (Haken, Tellman, and Wolfe 1998), did so by completely transforming the mechanics of the instrument, its haptic and tactile response, and the technique required to play it, eventually retaining little similarity to the traditional keyboard beyond the spatial location of the notes.

The instrument we present in this article follows a different approach: We entirely preserve the mechanical and tactile response of a traditional keyboard and augment it by sensing the vertical position of each key. The key is no longer an on/off switch; instead, it becomes a continuous controller whose instantaneous value and temporal evolution can be used to control sound generators with a degree of detail in certain respects similar to that of the Continuum or the Seaboard, but with the advantage of preserving a largely familiar interface. In the process of widening the bottleneck represented by the keyboard, it is inevitable that we partly defamiliarize the keyboard interface: The meaning of existing sound-producing gestures is altered, gestures that would normally be equivalent assume distinct meanings, and the range of available gestures is expanded. The experience of trained piano players with this instrument gives us an insight into the possibilities that continuous key sensing opens up to expand the gestural and sonic vocabulary of keyboard playing, how these are balanced by the disruptions to the player's expectations, and how they relate to the player's preexisting technique. We propose to analyze the experience of players encountering a new instrument as they progress through three stages: expectation, understanding, and execution.

Background

Gestures that transcend the mere concept of velocity have an established place in keyboard practice: Pianists and researchers alike have long recognized that the apparently limited interface of the keyboard nonetheless supports a rich gestural technique. The concept of touch on the piano encompasses not only the finger–key interaction but also elements of body and hand posture, gesture, and motion (MacRitchie 2015). The vocabulary of whole-body preparatory gesture that leads to a key press can vary greatly from one performer to the next and from one note to the next. Pianists of the last two centuries placed a strong emphasis on the importance of touch and of its effect on the performance of a phrase or even of a single note (Doğantan-Dack 2011). Scientists, on the other hand, have traditionally concentrated on easily measurable quantities such as timing and intensity, and seemingly concluded early on that differences between types of pianist touch would lead uniquely to variations in the intensity of the produced tone (Ortmann 1925).

In piano literature, a strong emphasis is placed on the difference between two classes of touch: pressed and struck. A pressed touch (also called legato or nonpercussive) starts with the finger resting on the surface of the key before pressing it. A struck touch (also called staccato or percussive), on the other hand, occurs when the finger is already moving as it engages the surface of the key. It must be noted that these categories are the two extremes of a continuous spectrum of possible variations on the key-press gesture. Pressed and struck touches can each be used across a wide range of dynamics, and there is a wide overlap between the dynamics achievable with each of them, even though the loudest dynamics can only be achieved comfortably with a struck touch (Goebl, Bresin, and Galembo 2005). Although Ortmann found that the use of one or the other touch could indeed affect the distribution of the acceleration of the key during its downward motion, his conclusions showed that there was no intrinsic sonic difference between the two. Later research showed that the accessory noises of the finger–key impact and of the key–keybed impact can indeed give the listener a cue to the type of touch used (Goebl, Bresin, and Fujinaga 2014) and that struck touches have a brighter spectrum than their pressed counterparts for the same final hammer velocity, because of the microoscillations induced on the hammer by such a touch (Vyasarayani, Birkett, and McPhee 2009; Chabassier and Duruflé 2014). The use of pressed or struck touch has also been shown to produce audible differences on other keyboard instruments such as the harpsichord and the Hammond organ (MacRitchie and Nuti 2015; Moro, McPherson, and Sandler 2017).

The capability of shaping the tone of a note on a keyboard instrument is typically confined to the instants of its onset and release. On the acoustic piano, the release of a key can be used to continuously control the return of the felt dampers, thus allowing the performer to change the sound of the release transient, or effectively moving from a “release instant” to a more prolonged “release gesture.” Other instruments provide a more-radical continuous control during the entire duration of the note, some of which we mention here, but for an exhaustive review we refer the reader to McPherson (2015). The clavichord allows one to slightly change the pitch of the note throughout its duration, as the “tangent” of the key is itself resting on the string, acting as a bridge, and so varying pressure during the duration of the note can achieve a vibrato effect (Kirkpatrick 1981). On tracker pipe organs, the key opens the valve that controls the airflow into the pipe, thus allowing the performer to continuously control the emission to a certain extent (Le Caine 1955). The Hammond organ, an electromechanical organ introduced in 1934, has nine contacts per key that close at slightly different points in the key throw, allowing some control during the onset transient (Moro, McPherson, and Sandler 2017).

Early electronic keyboards such as the ondes Martenot and the Ondioline had some form of continuous control dependent on key position (Fourier, Roads, and Perrey 1994; Quartier et al. 2015). Remarkably, these features have virtually disappeared from later instruments, with the exception of some prototypes such as Hugh Le Caine's touch-sensitive organ (Le Caine 1955), Robert Moog's Multiply Touch-Sensitive (MTS) keyboard (Moog 1982), and Andrew McPherson's Magnetic Resonator Piano (McPherson and Kim 2012) and piano scanner (McPherson 2013). In commercial devices, the use of key position as a modulation source has been largely limited to aftertouch, that is, pressing into the keybed once the key has reached the end of its travel. Aftertouch was featured on many monophonic synthesizers, as early as 1972 on the ARP Pro Soloist. Polyphonic aftertouch was famously available on Yamaha's flagship polyphonic synthesizers, the GX-1 and the CS-80, and even became a part of the MIDI standard (MMA 1983), although in its polyphonic version it is rarely implemented in commercial devices to this day (McPherson 2015). For completeness, we mention that the Bösendorfer CEUS grand pianos can sense and output continuous key position (Goebl et al. 2008). This does not affect the sound generation, as the piano is entirely acoustic, but it can be used for performance analysis (Bernays and Traube 2013).

Nonlinear Waveguide Flute Synthesizer with Continuous Key Sensing

Perry Cook (2001) outlines principles for designing music controllers, encouraging instrument designers to find and take advantage of the player's "spare bandwidth." By augmenting an existing instrument, as opposed to creating a completely new interface, the instrument designer can add to its control capabilities while capitalizing on preexisting sensorimotor skills. As long as the augmentation fits largely within the spare bandwidth of the player, the disruption to regular playing techniques can be minimized (McPherson, Gierakowski, and Stark 2013). As we discussed in the previous section, differences in the type of touch used on the piano produce relatively minor differences in the sonic outcome. Associating clearly distinct sonic outcomes with different types of touch is therefore one opportunity for augmentation on the keyboard; another is the use of the vertical position of the key as a continuous controller.

To study how keyboard playing skills generalize to changes in the mapping of the keyboard interface, and to explore the potential of continuous keyboard controllers, we designed a keyboard instrument based on a physical model of a flute that associates several continuous gestures on the key with a clear sonic effect. We were looking for a sound model well suited to continuous control that could also produce a plausible sound in response to percussive gestures, which is why we ultimately settled on a flute. Physical modeling synthesis is particularly attractive for this application because physical models lend themselves well to reproducing the behavior of acoustic instruments, as well as exhibiting some unexpected behaviors of their own that often yield remarkably rich and natural-sounding results (Borin, De Poli, and Sarti 1992; Castagne and Cadoz 2003). Although we are not concerned with the realism of the model, we expected that a connection to reality through physical plausibility could help players understand the behavior of the instrument more intuitively.

The instrument uses an off-the-shelf weighted keyboard controller without any mechanical modifications but with expanded sensing capabilities. By using it as a continuous controller, we extend the concept of the keyboard beyond its common understanding and challenge some of the basic assumptions underpinning most keyboard instruments: the discreteness of presses, the effect of touch, and the independence of keys. The key becomes a continuous controller, and the key position affects the sound throughout the duration of a note, not just at the onset and release. Onset velocity is not used as such by the sound generator, but the percussiveness of onsets is detected and produces a percussive sound, thus assigning a distinctive sonic meaning to different types of touch. The relation between keys is also recast: Pressing two neighboring keys at the same time makes them interact, producing a pitch-bend gesture with the second key acting as a continuous controller on the pitch of the first. A high-level block diagram of our instrument is displayed in Figure 1.
Figure 1

Block diagram of the physical modeling flute controlled with continuous key position.


Sensing, Control, Sound

We combined Andrew McPherson's (2013) keyboard scanner for sensing key position with a Bela embedded computer (McPherson and Zappi 2015) for sound generation. By using a high-speed serial bus between the two we implemented a custom real-time environment to streamline the communication and achieve an action-to-sound latency consistently below 5 msec. A block diagram of the system comprising the keyboard scanner, the Bela board, and all the relevant peripherals and communication buses is detailed in Figure 2. We have published further details of the technical implementation elsewhere (Moro and McPherson 2020).
Figure 2

Block diagram of the system comprising the keyboard scanner and the Bela board.


Two boards of the scanner were fitted on a Yamaha CP-300 digital keyboard, covering the range from B3 to B6 (38 notes), with the actual sounding pitch transposed one octave below. None of the sounds or electronics from the Yamaha were used, only its weighted keyboard. A picture of the instrument is shown in Figure 3.
Figure 3

The keyboard scanner installed on the Yamaha CP-300.


Vertical Position

The keyboard scanner uses optical-reflectance sensors to detect the vertical position of the key by shining an infrared LED on the surface of the key and measuring the amount of light reflected back into a phototransistor. It uses an acquisition technique based on differential readings to reduce the effect of ambient light, and it supplies a reading for each key at a 1-kHz sampling rate with 12-bit resolution. The distance of the key from the sensor is approximately inversely proportional to the amount of reflected light, and we compute the normalized vertical position of the key by linearizing the scanner's light readings after calibration.
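
A minimal C++ sketch of this linearization step is given below. The strict inverse-proportionality assumption and the calibration structure are simplifications for illustration; the actual calibration procedure is more involved.

    // Convert a raw differential reflectance reading into a normalized key
    // position (0 = key at rest, 1 = key bottom; values above 1 indicate
    // aftertouch). Calibration values are hypothetical.
    struct KeyCalibration {
        float lightAtRest;    // raw reading with the key fully released
        float lightAtBottom;  // raw reading with the key fully depressed
    };

    float normalizedKeyPosition(float rawLight, const KeyCalibration& cal)
    {
        if (rawLight <= 0.0f)
            return 0.0f; // guard against invalid readings
        // Distance is roughly inversely proportional to the reflected light,
        // so linearize by working with the reciprocals of the readings.
        float dRest = 1.0f / cal.lightAtRest;
        float dBottom = 1.0f / cal.lightAtBottom;
        float position = (1.0f / rawLight - dRest) / (dBottom - dRest);
        return position < 0.0f ? 0.0f : position;
    }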

Gesture Detection

A percussiveness metric can be computed by analyzing the temporal evolution of the key position, which the high sampling rate of the keyboard scanner makes possible. Bernays and Traube (2012) obtain a percussiveness metric from the ratio of the key depression at half the attack duration to the maximum key depression and the average of the key depression curve. This approach presents two disadvantages that make it unsuitable for our application, because it postpones the computation of the metric until the key has reached key bottom: It adds latency to the detection, and it does not work in the presence of incomplete key presses. The approach we use builds on the one introduced by McPherson (2013), which instead considers the ballistic collision that causes the key to bounce off the finger shortly after the initial finger–key impact, using a state machine to segment the key motion and extract features as soon as they are available.

Figure 4 shows the key and velocity profiles of a typical percussive key press played on the Yamaha CP-300, sensed through the keyboard scanner. As the key is hit by the finger, kinetic energy is transferred from the finger to the key, and the key starts a fast downward motion while it temporarily loses contact with the finger, which is still moving downwards but more slowly. The key is moving freely downwards and the kinetic energy progressively dissipates until the key stops and eventually starts moving upwards. Shortly after that moment, the finger, which has kept moving down all along, catches up and the key starts moving downwards again, this time under the direct pressure of the finger. This behavior is reflected in the velocity profile by an initial spike due to the impact, and in the key position profile by a local maximum during the early part of the onset, corresponding to the point where the key starts the upwards motion.
Figure 4

The position and velocity profile of a percussive key press.


Our percussion-detection algorithm starts by detecting a local maximum in the key position during the early part of the key onset. When a maximum is found, the program looks back at the recent history of the key position to find the maximum value of the velocity, and that value is then used as the percussiveness metric.
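
The following C++ sketch illustrates the idea of this detector running at the scanner's 1-kHz rate. The buffer length, thresholds, and re-arming logic are illustrative assumptions rather than the exact implementation.

    #include <deque>

    class PercussionDetector {
    public:
        // Call once per sample (1 kHz) with the normalized key position.
        // Returns a percussiveness value > 0 when a percussive onset is
        // detected, and 0 otherwise.
        float process(float position)
        {
            float velocity = position - lastPosition_;
            lastPosition_ = position;
            history_.push_back(velocity);
            if (history_.size() > kHistoryLength)
                history_.pop_front();

            bool inEarlyOnset = position > kOnsetThreshold && position < kEarlyOnsetLimit;
            // Local maximum: the key was moving down and now starts moving back up.
            bool localMaximum = lastVelocity_ > 0.0f && velocity <= 0.0f;
            lastVelocity_ = velocity;

            if (inEarlyOnset && localMaximum && !triggered_) {
                triggered_ = true;
                float peak = 0.0f;
                for (float v : history_) // look back for the velocity peak
                    if (v > peak)
                        peak = v;
                return peak; // percussiveness metric
            }
            if (position < kOnsetThreshold)
                triggered_ = false; // re-arm once the key is released
            return 0.0f;
        }

    private:
        static constexpr unsigned kHistoryLength = 20;   // 20 msec at 1 kHz (assumption)
        static constexpr float kOnsetThreshold = 0.05f;  // assumption
        static constexpr float kEarlyOnsetLimit = 0.6f;  // "early part" of the throw (assumption)
        std::deque<float> history_;
        float lastPosition_ = 0.0f;
        float lastVelocity_ = 0.0f;
        bool triggered_ = false;
    };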

Aftertouch

Aftertouch is the term used to indicate the extra pressure put into the key once it reaches key bottom. Some keyboards provide dedicated aftertouch capability that is often achieved by placing a strip of compressible material, whose electrical properties change with the applied pressure, under each key. On keyboards without aftertouch, a fully depressed key is held against a felt padding; if the player presses further into the key, the padding is compressed, so that the key can travel a bit more. This extra motion into the padding can be sensed with the keyboard scanner, as long as the key bottom point, where the aftertouch region starts, is determined accurately. During calibration we record the key bottom position and the maximum amount of key displacement achieved by pressing into the padding for each key. These two values are used to normalize the aftertouch range across the keys.
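
A minimal sketch of this normalization, assuming per-key calibration values recorded as described above:

    // Positions beyond the calibrated key bottom are rescaled into a separate
    // [0, 1] aftertouch value. Variable names are illustrative.
    float aftertouchAmount(float position, float keyBottom, float maxIntoFelt)
    {
        if (position <= keyBottom)
            return 0.0f; // key not yet pressed into the felt padding
        float amount = (position - keyBottom) / (maxIntoFelt - keyBottom);
        return amount > 1.0f ? 1.0f : amount;
    }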

Monophonic Key Detection

Monophonic synthesizers require a strategy to decide which note is currently active when several keys are pressed at the same time. Common strategies on traditional keyboards are lowest-key, highest-key, or most-recent-key priority. These priority schemes are only really meaningful in the context of discrete key presses, however, where a key can only be "pressed" or "not pressed" at any given time. In the case of an instrument where key position continuously shapes the sound, like ours, a more complicated model is needed for the interaction to be intuitive. For monophonic mode we created a priority algorithm that can be described as "most-recent and deepest" priority. It aims to be intuitive for the player: The most recent key that has seen considerable action is the active one, unless it is being released, in which case another key that is partially pressed and moving down can take priority. To know when a key is being released, we use an expanded version of the key-motion state machine presented by McPherson (2013), modified to work with continuous gestures. The state machine then informs the dynamic activity thresholds that ultimately decide which key is active at any time.
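
A simplified C++ sketch of such a priority rule follows. It collapses the per-key state machine and dynamic activity thresholds into a single depth threshold and a pressing/releasing flag, purely for illustration.

    #include <cstddef>
    #include <vector>

    struct KeyState {
        float position = 0.0f;   // normalized key depth
        float velocity = 0.0f;   // positive when the key is moving down
        unsigned lastActive = 0; // timestamp of the last significant movement
    };

    int selectActiveKey(const std::vector<KeyState>& keys)
    {
        const float kActiveDepth = 0.1f; // assumption: minimum depth to count as "seen action"
        int bestPressing = -1, bestAny = -1;
        unsigned bestPressingTime = 0, bestAnyTime = 0;
        for (std::size_t i = 0; i < keys.size(); ++i) {
            const KeyState& k = keys[i];
            if (k.position < kActiveDepth)
                continue;
            if (bestAny == -1 || k.lastActive > bestAnyTime) {
                bestAny = static_cast<int>(i);
                bestAnyTime = k.lastActive;
            }
            if (k.velocity >= 0.0f
                && (bestPressing == -1 || k.lastActive > bestPressingTime)) {
                bestPressing = static_cast<int>(i);
                bestPressingTime = k.lastActive;
            }
        }
        // A key that is still moving down takes priority over one being released.
        return bestPressing != -1 ? bestPressing : bestAny;
    }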

Sound Generation

The starting point for the sound engine is a nonlinear waveguide physical model of a flute developed in the Faust programming language (Michon and Smith 2011). This model inserts a nonlinear, passive, all-pass filter (NLFM), modulated by the input signal, into the waveguide delay line to create interesting natural and unnatural effects. We modified the model to provide control over the length of the delay of the air jet between the mouth and the mouthpiece, allowing the user to generate overblown tones and interesting turbulent and multiphonic timbres when this delay is set to noninteger fractions of the bore delay (McIntyre, Schumacher, and Woodhouse 1983). We also added an auxiliary input to inject an arbitrary signal into the waveguide. The resulting model is shown in Figure 5.
Figure 5

Block diagram of the nonlinear waveguide flute model. Bold labels indicate the parameters exposed for real-time control.


The Faust compiler produces a C++ file that contains the DSP code as well as wrapper code for the platform on which it will run; we modified this wrapper to integrate it with the keyboard scanner library. Our full code is available online, and implementation details can be found in the first author's dissertation (Moro 2020, chapter 5).
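
As an illustration of this integration, the following C++ sketch drives a Faust-generated DSP class from a Bela-style audio callback. The class name Flute, the parameter path "/flute/pressure", and the hard-coded key position are assumptions made for the example; the actual wrapper and scanner interface are organized differently.

    #include <Bela.h>
    #include "faust/gui/MapUI.h"
    #include "Flute.h" // C++ class generated by the Faust compiler (assumed name)

    static Flute gFlute;            // Faust-generated dsp subclass
    static MapUI gUI;               // string-based access to the model's parameters
    static float gIn[1024] = {0};   // auxiliary injection input (left silent in this sketch)
    static float gOut[1024];        // model output (assumes blocks of at most 1,024 frames)

    bool setup(BelaContext* context, void*)
    {
        gFlute.init((int)context->audioSampleRate);
        gFlute.buildUserInterface(&gUI); // register the model's parameters with the MapUI
        return true;
    }

    void render(BelaContext* context, void*)
    {
        // In the real instrument the key position arrives from the scanner over
        // the serial bus; here a fixed value stands in for it.
        float keyPosition = 0.5f;
        gUI.setParamValue("/flute/pressure", keyPosition); // assumed parameter path

        float* inputs[1] = { gIn };
        float* outputs[1] = { gOut };
        gFlute.compute(context->audioFrames, inputs, outputs);

        for (unsigned int n = 0; n < context->audioFrames; ++n)
            for (unsigned int ch = 0; ch < context->audioOutChannels; ++ch)
                audioWrite(context, n, ch, gOut[n]);
    }

    void cleanup(BelaContext*, void*) {}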

From Discrete to Continuous

The original Faust implementation of this synthesizer would, upon receiving a MIDI note input, trigger envelopes applied to the air pressure, providing smooth fade in and fade out of the note and introducing a 5-Hz modulation to produce a delayed vibrato effect.

When using a continuous keyboard controller, all the automations are replaced by the player's action on the key itself, and the parameters from the physical model can be controlled by the performer's gestures. The air pressure (intensity of breath) is controlled by the vertical position of the active key. The pitch (length of the bore) is controlled by the current active key and during bending gestures by the vertical position of the bending key. The jet ratio (angle between lips and mouthpiece) is changed during a pitch bend alongside the pitch parameter. If a key is struck percussively, a percussive sound is injected into the resonant bore via the auxiliary audio input.
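
The following C++ sketch summarizes this mapping. The parameter names, helper functions, and numerical constants are illustrative assumptions, not the exact values used in the published implementation.

    #include <cmath>

    // Convert a key index (0 = B3, the lowest sensed key) to a frequency,
    // sounding one octave below the played key as described earlier.
    static int keyToMidiNote(int keyIndex) { return 59 + keyIndex - 12; }
    static float midiToFrequency(int note) { return 440.0f * std::pow(2.0f, (note - 69) / 12.0f); }

    struct FluteParams {
        float pressure;   // breath intensity
        float pitch;      // fundamental frequency in Hz
        float jetRatio;   // jet delay as a fraction of the bore delay
        bool injectBurst; // true when a percussive onset was detected
    };

    FluteParams mapKeysToFlute(int activeKey, float activePosition,
                               int bendKey, float bendPosition,
                               float percussiveness)
    {
        FluteParams p{};
        // The vertical position of the active key drives the breath pressure.
        p.pressure = activePosition;
        // The base pitch comes from the active key; a bending key pulls the
        // pitch towards its own note proportionally to its depth.
        float basePitch = midiToFrequency(keyToMidiNote(activeKey));
        if (bendKey >= 0) {
            float targetPitch = midiToFrequency(keyToMidiNote(bendKey));
            p.pitch = basePitch + bendPosition * (targetPitch - basePitch);
            // The jet ratio is detuned alongside the pitch so that the
            // transition sounds turbulent rather than a plain glide.
            p.jetRatio = 0.5f + 0.3f * bendPosition; // illustrative values
        } else {
            p.pitch = basePitch;
            p.jetRatio = 0.5f;
        }
        p.injectBurst = percussiveness > 0.0f;
        return p;
    }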

Gestures and Sounds

Figures 6–11 display the pressure and key position (top), a time-domain representation of the generated sound (middle), and a frequency-domain representation of the generated sound (bottom), as well as the notation we used to indicate the gesture (left). Audio recordings of these and other examples can be found in the supplementary materials at https://doi.org/10.1162/comj_a_00565.
Figure 6

Two notes (a), the first fully depressed (forte dynamic), the second one partly depressed (mezzoforte dynamic). In this and the following figures we see musical notation (a) and graphic representations of the resulting audio. The graphs display pressure and key position (top) and the generated sound represented in time domain (middle) and frequency domain (bottom). Audio recordings of these and the following examples can be found in the supplementary materials at https://doi.org/10.1162/comj_a_00565.


Figure 7

A note fade in and fade out by progressively pressing and releasing the key.


Figure 8

Pressing the key into the keybed (aftertouch), to obtain a “growl” sound. Notice that the mapping between key position and air pressure changes when entering the aftertouch region (above key position 1.0).


Figure 9

Growl vibrato, obtained by repeatedly pressing heavily into the keybed.


Figure 10

A percussive key press. Notice the spike of the key position at the beginning of the note, which is detected as a percussive gesture, in turn injecting the noisy burst into the audio signal.


Figure 11

Pitch bend from G4 to B4 and back down to the G4.


The mapping of key position to air pressure means that when the player presses the key with a swift, decisive gesture, similar to a forte on a piano, the corresponding sound attacks immediately. A regular key press that goes all the way to the bottom of the key gives a full, rich tone, and notes of different dynamics can be obtained by pressing the key partially and sustaining it at that level (see Figure 6). Conversely, to fade a note in or out, the player can press or release the key more slowly. The tone will then transition from an airy, inharmonic, breathy sound to a fuller tone, richer and richer in harmonics as the air pressure increases (see Figure 7). The intensity and timbre of the note once the key has reached the bottom will always be the same; what changes between a slow and a fast press is only the shape of the onset transient. Pressing into the keybed in the aftertouch region gives access to an extended range of pressure that yields a growling sound (see Figure 8). At any point in the key throw, vertical oscillating motions on the key naturally translate into a tremolo effect. When pressing into the keybed, a gentle vibrato effect can be obtained by pressing lightly, or a more intense one, which reaches the growl point, by pressing harder (see Figure 9).

When a percussive key press is detected, a percussive sound, a prerecorded sample of a person vocalizing a "T" sound into a microphone, is injected into the resonant bore of the physical model through the auxiliary audio input. This is not strictly equivalent to the effect of a flute player pronouncing a "T" sound into the mouthpiece, although the resulting "chuff" is reminiscent of the sound of a sharp attack on a flute. Figure 10 shows an example of a percussive press.

A pitch bend is generated by holding one key down and progressively pressing one of the keys within a major third interval. The vertical position of the bending key then controls the pitch of the tone. Bending a note on a transverse flute is done in practice by changing the distance between the upper lip and the mouthpiece, resulting in a timbral change during the bend, before changing the pressed keys to jump to the destination note. Our sound model does not include toneholes, as it implements a slide flute; however, the sound of a pitch change obtained by simply adjusting the length of the bore is rather flat and uninteresting. We therefore implemented a hybrid approach in which we change both the bore length and the jet ratio, producing a more turbulent and unstable transition sound, akin to the one obtained when gliding between notes on a transverse flute. An example of how jet ratio and pitch (bore length) change during a pitch-bend gesture is shown in Figure 11. A state machine comprising a leaky integrator is implemented in software so that, if the player lingers in the pitch-bending space, the sound can become unstable and break into a multiphonic sound, which produces unique sonic results. If, at that point, the player quickly depresses the bending key fully, the instrument enters the "high state," an overblown mode where the jet ratio is fixed to 2, which corresponds to a second-harmonic overblow.
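
The following C++ sketch shows one way such a leaky-integrator state machine could be structured. The thresholds and rates are illustrative assumptions and not the values used in the actual instrument.

    // A leaky integrator accumulates while the bending key sits in an
    // intermediate range; past a threshold the tone is allowed to break into a
    // multiphonic, and a subsequent full press switches to the overblown state.
    class BendStateMachine {
    public:
        enum class State { Normal, Multiphonic, High };

        State update(float bendPosition, float dt)
        {
            bool lingering = bendPosition > 0.2f && bendPosition < 0.9f;
            // Leaky integrator: charge while lingering, decay otherwise.
            charge_ += lingering ? dt : -kLeakRate * dt;
            if (charge_ < 0.0f) charge_ = 0.0f;

            if (state_ == State::Multiphonic && bendPosition >= 0.95f)
                state_ = State::High;         // quick full press: overblown mode
            else if (charge_ > kInstabilityTime)
                state_ = State::Multiphonic;  // lingering: unstable, multiphonic
            else if (bendPosition < 0.05f) {
                state_ = State::Normal;       // bending key released: reset
                charge_ = 0.0f;
            }
            return state_;
        }

        // In the high state the jet ratio is held at 2 (second-harmonic overblow).
        float jetRatio(float normalRatio) const
        {
            return state_ == State::High ? 2.0f : normalRatio;
        }

    private:
        static constexpr float kLeakRate = 2.0f;        // assumption
        static constexpr float kInstabilityTime = 0.5f; // seconds, assumption
        float charge_ = 0.0f;
        State state_ = State::Normal;
    };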

Playing the Continuous Keyboard

We conducted a study in which each keyboard player spent four hours with the instrument across two sessions. The player initially explored the instrument freely and alone, then received guided training on the techniques and capabilities of the instrument, and concluded with unsupervised work towards the composition of a short piece on the instrument. A total of six professional musicians took part in the study, all of them classically trained piano players; three had extensive experience in contemporary piano practice and the other three in popular music. One player, P1, had performed and recorded on several occasions with another continuous keyboard, the Magnetic Resonator Piano (McPherson 2010), and, alongside P2, had taken part in an earlier study with another continuous controller (Moro 2020, chapter 4).

The first session started with the individual participants freely exploring the instrument for 15 minutes, after which the investigator would gather each player's impressions and then briefly explain the basic capabilities of the instrument. This was followed by a guided training session in which the player was taken through several short exercises to learn the fundamental techniques of the instrument. For each exercise, a score and prerecorded audio examples were provided. Throughout the study the investigator would give feedback to the player on the execution of the techniques. We prepared eight études that were slightly more difficult than the exercises and simple melodic fragments presented during the training. The first four études had accompanying prerecorded audio, and the player was given a score in which only notes and rhythm had been indicated, while the instrument's extended techniques were omitted. Listening to the recording, the player had to annotate the score with the extended techniques and then play the piece. The remaining four études were fully annotated but had no audio examples, so the player had to perform them based solely on the notated techniques. Examples of the training materials and études are provided in Figure 12, and the full set is available in the first author's PhD dissertation (Moro 2020, appendix B; also reproduced in the supplementary materials). The last 20 minutes of the first session were reserved for the player to start working autonomously on a short composition on the instrument.
Figure 12

Selected examples of the materials provided during the training session: first, simple initial exercises concentrating on the fundamental techniques of nonpercussive (a) and percussive touch (b); then, integration of techniques in simple phrases (c); and finally, techniques in longer musical contexts (d and e). Audio recordings of these examples, as well as further examples, both as audio files and music notation, are included in the supplementary materials at https://doi.org/10.1162/comj_a_00565.


The second session started with up to 60 minutes dedicated to finalizing the composition, which was then performed in front of the investigator. The investigator then gave the player access to some of the internal parameters of the instrument to fine-tune the key response. Last, the études were played once again.

Throughout the two sessions the investigator conducted four semistructured interviews (between 10 and 30 minutes each) to gather the players' findings, their impressions and struggles, and their insights on the evolution of their technique, the compositional process, and the affordances of the instrument. We refer the reader to the first author's dissertation (Moro 2020, chapter 5) for full details on the study. Here we focus on some of the most relevant outcomes.

Initial Discovery

The first few minutes of contact between a player and a new instrument are a particularly insightful moment. The extent to which players discovered and understood the capabilities of the instrument during their initial 15-minute exploration is summarized in Table 1.

Table 1:

Summary of Results of Initial Discovery

Participant  Continuous control  Pitch bend  Aftertouch  Percussion  Multiphonic  High state
P1           U[0:00]
P2           U[4:23]
P3           U[9:20]
P4           U[6:58]
P5           U[4:25]
P6           U[0:10]

For all participants we indicate which features of the instrument they discovered and to what extent the features were then explored.

N: not produced. The effect was not audible at any point of the exploration.

P: produced. Participant produced the effect but did not actively explore it thereafter.

E: explored. Participant spent time investigating the effect.

U: understood. Following exploration, the participant managed to reliably produce the effect and understand the techniques involved.

In brackets is the time (mm:ss) from the beginning of the task at which the participant first became aware of the effect of continuous control on the pressure of the sound.

By the end of the task all participants realized that the key position could control the produced sound in a continuous fashion. Only P1 and P6, however, discovered it at the very beginning of the session, and P1 did so because of previous familiarity with the keyboard scanner. It took everyone else several minutes to realize it. Participant P4 had already discovered and explored the pitch-bending effect for over one minute before having the idea to explore the effect of key position in a single-key gesture. A handful of seconds into the exploration, P3 executed a short series of repeated partial key presses of increasingly greater depth, but did not notice the effect of the key position on the sound generation. Subsequently aftertouch and pitch bending were discovered and explored for several minutes before the effect of key position in single-key gesture became clear. Participant P5 had an epiphany moment while playing in the lower register of the instrument, where the attack of the sound is by nature slower. From the video, it seems that this participant initially thought that the velocity would affect the ramp-up time of the onset, and only after three repeated slow presses noticed that the control was position-dependent, reacting by smiling visibly.

Most players discovered the pitch-bending gesture, although not everyone fully understood that the primary key had to be held down for pitch bending to take place. Several players also encountered the multiphonic effect achievable with multikey gestures, and often spent a significant amount of time playing with it whenever they would stumble across it, but only P1 was able to elaborate a strategy to achieve it systematically.

The only one to notice the effect of percussive gestures during this task was P5. The discovery took place while playing two-handed fast repetitions on the same key. A fairly reliable technique for this effect was quickly achieved and immediately integrated into a funk-style bass line. The other participants did not notice the effect during this task, even when they had inadvertently triggered it. Only one player discovered the aftertouch.

Execution of Techniques

After the initial discovery and our explanation of the techniques, participants had a grasp of what the effect of individual techniques was, but they had not fully realized what potential musical results they could expect when using them in the context of a musical phrase, or a larger piece. The training session thus helped them to better understand the expressive potential of the instrument and in the remainder of the study they explored and developed their techniques further.

Participants found that holding a key in a partly pressed state, although an easy concept to grasp, was not always easy to achieve; they would try to compensate for their uncertainty when performing partial and progressive key presses in several ways. To provide a stable anchoring point for their movements, most of our participants would rest the palm of their hand on the frame of the keyboard, at the front of the keys, something that they would never do while playing conventional keyboards. Several of them also reported that they looked at the keys more than they normally would, using their eyes as an aid when performing continuous gestures. Another strategy that was adopted to exert more control on progressive key presses was to use more than one finger per key. All players used two hands for most of their playing, especially from the end of the first session onwards. The second hand was often used to prepare the following note in advance in a slow passage involving partly pressed notes. We asked two of the players to repeat a short passage without looking at the keys and without resting their hand on the frame. In both cases, executing the passage without these aids resulted in a performance very close to what they had previously achieved, suggesting that using visual cues and the keyboard's frame as a reference point could be avoided easily through further training and increased confidence.

Percussiveness was by far the hardest technique to learn for most participants. The training exercises on this technique were found to be the most challenging by the players, especially when several percussive notes were played in a sequence, when the percussive note was at the end of the phrase, or when it fell on the fifth finger, whereas placing the marcato on the first note of the phrase was normally easier. For each player's last performance of the études at the end of the study, we computed the hit rate as the ratio between the number of percussive notes successfully played and the number of percussive notes in the score, as reported in Table 2.

Table 2:

Metrics of Percussion Accuracy per Participant

Participant  P1    P2    P3    P4    P5    P6
Hit rate     0.61  0.67  0.34  0.68  0.86  0.90

Participants P1, P3, and P4 struggled throughout the study while exploring several different gestures trying to find a reliable one. Although P2 achieved a relatively low hit rate in the études, in the participant's own composition several percussive touches were included and were executed well. Participants P5 and P6 achieved a reliable technique and did so fairly quickly, without experimenting with several different techniques. In general, the acoustic accessory noise produced by finger–key and key–keybed impact seemed to be louder for those players who were struggling the most, as if they were trying to put more energy into the gesture than those who found a more reliable technique.

New Techniques

Our players spent over an hour alone working towards their compositions. During this time they had the opportunity to further explore the instrument, and some of them managed to develop original new techniques. The monophonic character of the instrument was seen as a limitation by many, and several of the new techniques were aimed at circumventing this limitation and recreating a sensation of polyphony. Participant P3 partially depressed two keys and, with microscopic adjustments, alternately made one or the other the deeper of the two, obtaining a rapidly alternating, glitchy effect as the keyboard controller gave priority to one or the other. Participant P4 would hold one note partially or fully pressed, or even in aftertouch mode, while fully depressing another note for a short period of time with a finger of the other hand, as if plucking it.

While holding a note fully pressed with the left hand, P5 played an arpeggio of sixteenth notes with the right hand. During the sixteenth-note rests in the right hand, the left-hand pedal note would then play because of the monophonic voice stealing we implemented, as shown in Figure 13. Another technique was developed by P5, taking advantage of a glitch in our sound generator, creating what was dubbed “air noise”: By rapidly pressing a key without percussion, and keeping the weight on the key until the end so that it is immediately pushed into the aftertouch region, a note would be produced with no harmonic content but only some colored noise. If the key is slightly released, the note starts a harmonic oscillation at the expected pitch.
Figure 13

Transcription of one bar of a performance by participant P5, with notation of the separate hands, as played, and the resulting sound as heard.


A multikey technique was developed by P6 in which a partially pressed key in the high register was held while playing a staccato ostinato on fully pressed keys in the low register. The high key would initially be pressed only slightly, so that it would not produce a periodic tone, and over time its depth would be changed, as shown in Figure 14. Depending on the vertical position of the high key, its effect on the produced sound would vary from colored noise (when lightly depressed), to pitched, decaying resonances (when depressed further), to fully sounding periodic tones (when fully depressed). This technique would not be achievable on a regular monophonic synthesizer, because there would be no easy way of obtaining different timbres for the two notes, the way it is possible here by controlling the depth of the held note.
Figure 14

Participant P6 holding one note partially in the high register (D5) while playing an arpeggio in the low register.


A fade-out vibrato technique was also developed by P6 as an extension to the regular "pressure vibrato" we introduced in our training exercises. The player started a pressure-vibrato oscillation while the key was fully depressed, then kept the oscillating motion going while progressively releasing the key.

General Feedback

Participant P5 pointed out that the lack of mechanical support from the key made continuous gestures more complicated: "Where is the focal point in the weight of my arm to hold that note and control it?" This was in contrast to this participant's experience as a brass player, where the mouthpiece provides the required support: Even though comparably small movements affect the sound, they all happen against the mouthpiece.

The process of learning the instrument often involved relearning techniques previously acquired on the piano, focusing the attention on previously ignored aspects of the gesture. As mentioned above, the technique used for percussiveness seemed unnatural to P1 and went against skills acquired over thousands of hours of training. Similarly, it would often be the case that the attack of a note would not be “clean” because of slightly depressing a neighboring key. This gesture, which would normally not produce any sound on a piano, resulted in an unwanted pitch bending or a transient glitch on our instrument, requiring participants to pay more attention to the cleanliness of their technique.

There was a general consensus among participants that the skills acquired and the time spent learning this instrument would bring improvements to their regular piano and keyboard playing. The additional cleanliness and attention to unwanted movements required by the instrument were seen as improving their overall technique and control.

Many participants shared the opinion that controlling the dynamics of the notes was not straightforward. Although fade-ins and fade-outs could be achieved with good accuracy, attacking and sustaining a note at levels other than the forte dynamic corresponding to key bottom was challenging. Physical modifications to the instrument were also suggested for improving dynamic control. Two players suggested that an extended key travel would increase the tolerance for the very accurate movements that are currently required, and another suggested that dynamic haptic feedback could be added to the key to facilitate maintaining a given intermediate position. When asked whether they would be able to control each finger on a polyphonic version of the instrument with the same accuracy they do now, most acknowledged it would be hard, but not impossible to learn. P6 suggested that, in the polyphonic case, fine individual control may be less important than a global sense of modulation and variation.

Insights

Despite the fundamentally uncommon capabilities we built into it, our instrument presents itself on the surface as a remarkably "normal" keyboard. Most of our participants played it for several minutes before discovering that the key position controls the dynamics of the sound. They would start playing using their normal technique and expectations, and the instrument responded in a largely expected way. That is, the instrument emitted a sound of the appropriate pitch. In a matter of seconds, they realized that the instrument could play only one note at a time and that the velocity of the press would not affect the loudness of the resulting sound. To their eyes and ears, the initial experience must not have been very different from their previous experience playing monophonic synthesizers. Even when P5 autonomously discovered the "percussive" effect, the understanding of the technique was still heavily grounded in the common notion of key velocity, and so was the player's first attempt at describing the effect of key position. Simply observing these initial responses gives us a clear indication that preexisting techniques can easily be used on our instrument, which in turn denotes the presence of a strong expertise transfer (Krakauer et al. 2006).

When the effects of continuous key position were discovered, autonomously or after being introduced by the investigator, players did not struggle to understand them. Gestures such as slowly depressing the key to fade a note in or out, or holding the key partially pressed to achieve a dynamic change, are fairly intuitive. Executing them accurately, however, comes with several difficulties, as the training and motor skills required to control the micromovements of the key for a sustained period of time differ substantially from those needed to obtain the discrete events common on regular keyboards. In other words, for these techniques piano skills do not necessarily generalize to the instrument. The aftertouch, growl, and vibrato gestures are easier to perform, because the key rests against the felt at the bottom of its travel, which offers mechanical resistance and acts as a reference point for the performer's finger.

Fast or Percussive

Many players found it hard to perform the gesture required to trigger the percussive effect. We have seen indications that even understanding the gesture required to obtain the effect was a challenge in itself. Some players tended to think about it in terms of a "high velocity" gesture, or one otherwise requiring a large amount of energy, whereas all that was needed was controlling the initial impact of the finger as the key press started. Explaining the mechanical behavior of the finger–key system that our algorithm expects during a percussive touch seemed to help some performers understand it, yet putting it into practice was not always straightforward. Several players modified their percussive technique in the course of the study, each of them settling, in the end, for their own very personal approach, often without managing to achieve a reliable strike. Comments made by P1, in particular, showed that training as a pianist was an obstacle in the quest for the percussive touch: Being used to "keep the weight on the key" made it harder to let the key bounce off the finger, as required by our instrument. It was further suggested that inexperienced players would find it easier to learn this and other techniques, as they would have no embodied preconceptions. This can be seen as a case of interference between standard piano training and the technique needed to play our instrument.

Freedom of Choice … or Lack Thereof

We know from the literature that on the piano, given the relatively low bandwidth of a single key press, a player is free to choose from an enormous number of different gestures to obtain a desired sonic outcome (MacRitchie 2015). The choice of gesture can depend on training, personal preference, musical context, and musical momentum, but it is ultimately largely irrelevant for the sonic outcome. Several of our participants mentioned that the techniques they used for the percussive gesture were drawn from their piano experience. This was often accompanied by a remark that, on the piano, the specific technique would not really make a difference in the sound produced, and they felt free to choose one depending, as P4 expressed it, on "mood, strength, stamina, or fingering." This seems particularly revealing. When we start assigning special meanings to some of these gestures, as we did with percussiveness, the degrees of freedom available to the players decrease and they have to find and learn what the "right" gesture is. Playing these new gestures is then difficult at two levels. In terms of execution, there is the intrinsic difficulty of learning a gesture. At a higher, conceptual level, however, there may be an even more fundamental problem: Players lose the freedom of choice in the moment. No matter the stamina or the mood, their choice of technique will be restricted to the one that gives the expected outcome. This could possibly have a bigger impact in the long term than simply learning and adopting a new gesture, as it requires a new, much stricter, performance discipline.

When unwanted movements of other keys caused by the idle fingers resulted in unexpected audible results, players were quick to learn to control them and overcome these minor interferences. The small adaptations needed were seen by the players as enriching their regular keyboard technique, as they required cleaner playing and increased awareness and control. This can be seen again as a sort of transfer, but this time taking the skills from our instrument back to the traditional keyboard.

Appropriation

The literature presents several examples of musical instruments whose limited affordances stimulate players to explore the constraints and to develop new techniques to push the boundaries beyond the original intentions of the instrument designer. An example of this can be found in a paper by Gurevich, Stapleton, and Marquez-Borbon (2010), in which a rich set of gestures, interactions, and playing styles emerges from players engaging with a simple one-button instrument. Thor Magnusson (2010) suggests that affordances in musical instruments tend to be more obvious (e.g., a key is to be pressed) than constraints, and that exploring the latter tends to be a large part of the discovery process of an instrument. Zappi and McPherson (2014) suggest that constraints stimulate the exploration of the capabilities of an instrument, and ultimately lead to appropriation—that is, they “adapt and adopt the technology around them in ways the designers never envisaged” (Dix 2007). All of our participants initially lamented the lack of polyphony in our instrument as a limitation. In the course of the study, however, four of them elaborated their own original techniques precisely to overcome this limitation and to be able to establish a sort of harmonic structure in their pieces with multikey gestures taking advantage of the characteristics and capabilities of our instrument. Interestingly, these gestures are not rooted in piano technique, and the sound they produce does not even have a counterpart in flute playing: They are entirely new techniques, developed specifically around our instrument. Our players therefore reacted to one of the constraints that the designers put on the affordances of the instrument by appropriating it.

During our training session, we did not inform our participants about some of the affordances we built into the instrument, namely, the “multiphonics” and the “high state.” Participant P5's “air noise” technique exploited an error we made in the sound generator, thus revealing to us, the creators, an unknown affordance. We expect that, from the perspective of the player, all of these must have appeared to be unexpected behaviors, “glitches” in the instrument. Yet, each of these made their way into some of the pieces that were composed during the study, thus making the instrument's imperfections a signature characteristic of the instrument's sound. This is another case of appropriation, and it adds to a long-standing practice of taking advantage of less-than-ideal behaviors and technological failures for creative purposes, so much so that they become part of the identity of the instrument and of the repertoire, even when they were not part of the instrument designer's original idea (McSwain 2002; Cascone 2000; McPherson and Kim 2012).

Learning to Play or Learning the Player

In our analysis we have looked at the hit rates in Table 2 to get an idea of how reliable each player was in performing the percussive gesture. The outcome was that only two out of six participants seem to have reliably learned how to play this gesture. The question we have been asking so far is whether the performer can find the right gesture that will make the instrument play in the manner expected: Can the performer learn how to play? Because P6 quickly became proficient with a technique that satisfies the current requirements of our detector, it is legitimate to expect that anyone else could learn the same technique. Looking at it from a different perspective, however, the hit rate values in Table 2 can be interpreted as an indication of how good our percussiveness detector is, and the outcome is that it was not particularly good at detecting "percussive gestures" the way P1, P2, P3, and P4 meant them. On the other hand, it did a good job at detecting "percussive gestures" the way P5 and P6 intended them.

If we were to ask people to play a C-major chord on a guitar and they did not succeed, we could with a certain confidence say that those people cannot play guitar, rather than claiming the guitar (or the luthier who built it) was not up to the task. Our instrument, however, has not gone through the centuries of iterations that have brought the guitar to what it is today. Ours is a newborn, and therefore when someone struggles with it, we have to ask ourselves whether it is the player's or the instrument's (i.e., our) fault. Additionally, although we cannot easily change the spacing of the strings or frets on a guitar to adapt to a player's hands, expertise level, or technique, the behavior of our instrument can largely be altered in software. Therefore, where the player's training is at odds with the behavior expected by the instrument, the instrument can be "taught" about the players, their gestures, their technique, and their preferences, making it easier for players to play the instrument. This is a general characteristic of digital instruments that designers can take advantage of, including in the case of continuous keyboards, to smooth the learning process.

Additional Remarks

Our instrument was designed as a probe for studying the generalization of keyboard playing skills to changes in the mapping of the keyboard interface. We observed a significant transfer of skills, especially in the horizontal navigation of the pitch space, with a subject-dependent interference, at times strong, on a particular gesture (percussiveness). The continuous gestures, on the other hand, require a technique change where the piano's gestural language, involving upper body and arm weight, has to be adapted to a technique that is based on fine hand and finger movements. Continuous gestures did not suffer from interference, but also showed minimal transfer. In other words, they have to be learned. To what extent they can be learned, however, remains an open question. We can argue that the "ceiling on virtuosity" (Wessel and Wright 2002) of our instrument is very high, in that it allows more complex performances than a regular synthesizer, and the "entry fee" is low, at least for players already familiar with keyboard instruments. Some of the features of our instrument, those that really set it apart from more traditional keyboards, may still be subject to an excessively slow learning curve, however. An indication of this risk comes from the fact that several of our participants highlighted the difficulty of performing some of the continuous gestures, and that it is currently difficult to obtain notes of even loudness, to attack quiet notes fast, and, more generally, to master fine-grained control of the key position. As a possible workaround, some expressed the desire to have a global performance control to adjust the key response or the overall dynamic level of the instrument. "Good musical instruments must strike the right balance between challenge, frustration, and boredom," writes Sergi Jordà (2004, p. 331). A longitudinal study would be the most effective way of understanding how practical it is to learn and become proficient at these techniques, or whether the instrument is actually too complex and will eventually, as Jordà continues, "alienate the user before [its] richness can be extracted."

Conclusion

In this article we described a keyboard-based musical instrument that supports extended techniques for controlling the physical model of a wind instrument. The visual appearance and mechanical characteristics of the keyboard have not been modified, but the mapping between the keys and the sound generator has been subverted by adopting a paradigm in which the instantaneous position of the key continuously controls the sound generator, as opposed to the more traditional approach based on discrete key presses. Multikey gestures and percussive hits were also assigned new sonic meanings. We introduced six trained piano players to the instrument in a study that included guided training in the new techniques achievable on it. We can analyze their encounters as consisting of three stages: expectation, understanding, and execution. At each stage, their existing training and experience as keyboard players shaped their encounter, aiding or impairing them in the process.

Players approach the instrument with certain expectations based on their experience as keyboard players. This cultural baggage made our participants somewhat resistant to noticing the fundamentally different response of the instrument to touch, so that most of them initially played for several minutes without realizing that the keyboard responded to continuous key position. In a different study we designed a Hammond emulator in which continuous key sensing was used only to trigger individual harmonics at different points in the key throw (Moro 2020, chapter 4). After over an hour of playing it like a regular keyboard in various tasks, when participants finally had the chance to explore the instrument more freely, eight out of ten failed to discover its capabilities for continuous control. With both instruments, players' expectations of keyboard behavior based on discrete key presses were so strong that a substantial amount of evidence was required before those expectations were questioned, so much so that players would unconsciously ignore or misinterpret auditory feedback that contradicted them.

Once players are aware of the new capabilities and techniques of the instrument, they need to understand the gestures required to obtain the desired sonic outcome. The techniques involving continuous motion of individual keys (fades, vibrato, partial presses, and aftertouch) were relatively intuitive to understand because of the simple mapping between key position and pressure in the flute model. Multikey techniques were less immediate, as the microdetails of the relative motion of the two keys involved in the gesture assumed a relevance uncommon in traditional keyboard playing. The percussive gesture was the hardest to understand: Some players struggled with the concept of percussiveness and would instead think in terms of velocity, a parameter to which they could relate more easily and which was more strongly rooted in their experience.
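That single-key mapping can be pictured as a direct scaling of key position into the blowing pressure of the flute model, smoothed to avoid audible steps in the control signal. The sketch below is a schematic reconstruction under that assumption, not the instrument's actual mapping code; the class name and coefficients are hypothetical.

```python
class KeyToPressure:
    """Map normalized key position (0 = rest, 1 = key bed) to blowing
    pressure for a flute-like physical model, with one-pole smoothing.
    Scaling and smoothing values are illustrative assumptions."""

    def __init__(self, max_pressure=1.0, smoothing=0.99):
        self.max_pressure = max_pressure
        self.smoothing = smoothing
        self._state = 0.0

    def process(self, key_position):
        # Deeper key position -> higher blowing pressure, so fades,
        # partial presses, and vibrato translate directly into dynamics.
        target = max(0.0, min(1.0, key_position)) * self.max_pressure
        # One-pole low-pass to avoid zipper noise on the control signal.
        self._state = self.smoothing * self._state + (1.0 - self.smoothing) * target
        return self._state
```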

The execution of the techniques is partly conditioned by some of the instrument's intrinsic characteristics. For instance, the lack of mechanical support makes continuous gestures harder to execute accurately along the key throw than when compressing the key felt in the aftertouch region. Preexisting piano technique also played a crucial role in the execution of some techniques—especially in percussive key strokes, as some players felt that the required technique was at odds with the sensorimotor skills ingrained in their piano technique. Another characteristic of the instrument that is at odds with traditional practice is the fact that percussive and pressed touches are associated with clearly distinct sonic outcomes. The player has to become more aware of the touch used for each note and adopt a stricter performance practice.

One of the most important advantages of MIDI is its generality: As long as a sound generator can understand note and velocity information, it can be played with a keyboard (or any other MIDI controller). A large part of keyboard technique transfers well across instruments, for instance from the piano to the organ and vice versa; however, each instrument has its own characteristics that may limit the suitability of a given keyboard controller for performing a specific sound. Thanks to MIDI, it is straightforward to play a piano sound on an unweighted, 37-key keyboard, or a Hammond sound on a weighted keyboard with velocity response enabled, but these are arguably two poor choices: Piano playing often requires several octaves of weighted keys, whereas Hammond players expect an expression pedal, no velocity response, and the possibility of performing palm glissandos, which cannot be done as easily on a weighted keyboard. If the characteristics of the controller are not well suited to the sound generator in use, the way the instrument is played will be drastically affected, making it harder to perform the idiomatic gestures that are part of an instrument's sound. In the case of our instrument, controller and sound generator are coupled even more tightly: The player is required to act on the keys in new and unusual ways because of the specific characteristics of the mapping between gesture and sound. Were we to replace the sound generator, it would be an arduous task to preserve the exact meaning of each gesture, and the performer would have to make an effort to adapt to the new mappings. By widening the bottleneck of our DMI, we have gained in the amount of control available and in the character of the instrument, but at the same time the controller has become more specific to the sound generator.

Acknowledgments

This work was funded by EPSRC grants EP/L019981/1 (“Fusing Semantic and Audio Technologies for Intelligent Music Production and Consumption”) and EP/N005112/1 (“Design for Virtuosity”).

References

Bernays, M., and C. Traube. 2012. “Piano Touch Analysis: A MATLAB Toolbox for Extracting Performance Descriptors from High-Resolution Keyboard and Pedalling Data.” In Actes des Journées d'Informatique Musicales, pp. 55–64.

Bernays, M., and C. Traube. 2013. “Expressive Production of Piano Timbre: Touch and Playing Techniques for Timbre Control in Piano Performance.” In Proceedings of the Sound and Music Computing Conference, pp. 341–346.

Borin, G., G. De Poli, and A. Sarti. 1992. “Algorithms and Structures for Synthesis Using Physical Models.” Computer Music Journal 16(4):30–42.

Cascone, K. 2000. “The Aesthetics of Failure: ‘Post-Digital’ Tendencies in Contemporary Computer Music.” Computer Music Journal 24(4):12–18.

Castagne, N., and C. Cadoz. 2003. “10 Criteria for Evaluating Physical Modelling Schemes for Music Creation.” In Proceedings of the International Conference on Digital Audio Effects. Available online at www.eecs.qmul.ac.uk/legacy/dafx03/proceedings/pdfs/dafx62.pdf. Accessed January 2021.

Chabassier, J., and M. Duruflé. 2014. “Energy-Based Simulation of a Timoshenko Beam in Non-Forced Rotation: Influence of the Piano Hammer Shank Flexibility on the Sound.” Journal of Sound and Vibration 333(26):7198–7215.

Cook, P. R. 2001. “Principles for Designing Computer Music Controllers.” In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 3–6.

Dix, A. 2007. “Designing for Appropriation.” In Proceedings of the British HCI Group Annual Conference on People and Computers, pp. 27–30.

Doğantan-Dack, M. 2011. “In the Beginning Was Gesture: Piano Touch and the Phenomenology of the Performing Body.” In E. King and A. Gritten, eds. New Perspectives on Music and Gesture. Farnham, UK: Ashgate, pp. 243–265.

Fourier, L., C. Roads, and J.-J. Perrey. 1994. “Jean-Jacques Perrey and the Ondioline.” Computer Music Journal 18(4):19–25.

Goebl, W., R. Bresin, and I. Fujinaga. 2014. “Perception of Touch Quality in Piano Tones.” Journal of the Acoustical Society of America 136(5):2839–2850.

Goebl, W., R. Bresin, and A. Galembo. 2005. “Touch and Temporal Behavior of Grand Piano Actions.” Journal of the Acoustical Society of America 118(2):1154–1165.

Goebl, W., et al. 2008. “Sense in Expressive Music Performance: Data Acquisition, Computational Studies, and Models.” In P. Polotti and D. Rocchesso, eds. Sound to Sense—Sense to Sound: A State of the Art in Sound and Music Computing. Berlin: Logos, pp. 195–242.

Gurevich, M., P. Stapleton, and A. Marquez-Borbon. 2010. “Style and Constraint in Electronic Musical Instruments.” In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 106–111.

Haken, L., E. Tellman, and P. Wolfe. 1998. “An Indiscrete Music Keyboard.” Computer Music Journal 22(1):30–48.

Jack, R. H., T. Stockman, and A. McPherson. 2017. “Rich Gesture, Reduced Control: The Influence of Constrained Mappings on Performance Technique.” In Proceedings of the International Conference on Movement Computing, Art. 15.

Jordà, S. 2004. “Instruments and Players: Some Thoughts on Digital Lutherie.” Journal of New Music Research 33(3):321–341.

Kirkpatrick, R. 1981. “On Playing the Clavichord.” Early Music 9(3):293–306.

Krakauer, J. W., et al. 2006. “Generalization of Motor Learning Depends on the History of Prior Action.” PLOS Biology 4(10):1798–1808.

Lamb, R., and A. Robertson. 2011. “Seaboard: A New Piano Keyboard-related Interface Combining Discrete and Continuous Control.” In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 503–506. Available online at http://www.nime.org/proceedings/2011/nime2011_503.pdf.

Le Caine, H. 1955. “Touch-Sensitive Organ Based on an Electrostatic Coupling Device.” Journal of the Acoustical Society of America 27(4):781–786.

MacRitchie, J. 2015. “The Art and Science behind Piano Touch: A Review Connecting Multi-Disciplinary Literature.” Musicae Scientiae 19(2):171–190.

MacRitchie, J., and G. Nuti. 2015. “Using Historical Accounts of Harpsichord Touch to Empirically Investigate the Production and Perception of Dynamics on the 1788 Taskin.” Frontiers in Psychology 6:Art. 183.

Magnusson, T. 2010. “Designing Constraints: Composing and Performing with Digital Musical Systems.” Computer Music Journal 34(4):62–73.

McIntyre, M. E., R. T. Schumacher, and J. Woodhouse. 1983. “On the Oscillations of Musical Instruments.” Journal of the Acoustical Society of America 74(5):1325–1345.

McPherson, A. 2010. “The Magnetic Resonator Piano: Electronic Augmentation of an Acoustic Grand Piano.” Journal of New Music Research 39(3):189–202.

McPherson, A. 2013. “Portable Measurement and Mapping of Continuous Piano Gesture.” In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 152–157.

McPherson, A. 2015. “Buttons, Handles, and Keys: Advances in Continuous-Control Keyboard Instruments.” Computer Music Journal 39(2):28–46.

McPherson, A., and V. Zappi. 2015. “An Environment for Submillisecond-Latency Audio and Sensor Processing on BeagleBone Black.” In Proceedings of the 138th Audio Engineering Society Convention, paper 9331.

McPherson, A. P., A. Gierakowski, and A. M. Stark. 2013. “The Space between the Notes: Adding Expressive Pitch Control to the Piano Keyboard.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2195–2204.

McPherson, A. P., and Y. E. Kim. 2012. “The Problem of the Second Performer: Building a Community around an Augmented Piano.” Computer Music Journal 36(4):10–27.

McSwain, R. 2002. “The Social Reconstruction of a Reverse Salient in Electric Guitar Technology: Noise, the Solid Body and Jimi Hendrix.” In H.-J. Braun, ed. Music and Technology in the Twentieth Century. Baltimore, Maryland: Johns Hopkins University Press, pp. 186–198.

Michon, R., and J. O. Smith. 2011. “Faust-STK: A Set of Linear and Nonlinear Physical Models for the Faust Programming Language.” In Proceedings of the International Conference on Digital Audio Effects, pp. 199–204.

MMA. 1983. “MIDI Musical Instrument Digital Interface Specification 1.0.” Technical Report. Buena Park, California: MIDI Manufacturers Association.

Moog, R. 1982. “A Multiply Touch-Sensitive Clavier for Computer Music.” In Proceedings of the International Computer Music Conference, pp. 155–159.

Moore, F. R. 1988. “The Dysfunctions of MIDI.” Computer Music Journal 12(1):19–28.

Moro, G. 2020. “Beyond Key Velocity: Continuous Sensing for Expressive Control on the Hammond Organ and Digital Keyboards.” PhD dissertation, School of Electronic Engineering and Computer Science, Queen Mary, University of London.

Moro, G., and A. McPherson. 2020. “A Platform for Low-Latency Continuous Keyboard Sensing and Sound Generation.” In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 97–102.

Moro, G., A. P. McPherson, and M. B. Sandler. 2017. “Dynamic Temporal Behavior of the Keyboard Action on the Hammond Organ and Its Perceptual Significance.” Journal of the Acoustical Society of America 142(5):2808–2822.

Ortmann, O. 1925. The Physical Basis of Piano Touch and Tone. Abingdon-on-Thames, UK: Routledge.

Quartier, L., et al. 2015. “Intensity Key of the Ondes Martenot: An Early Mechanical Haptic Device.” Acta Acustica united with Acustica 101(2):421–428.

Vyasarayani, C. P., S. Birkett, and J. McPhee. 2009. “Modeling the Dynamics of a Compliant Piano Action Mechanism Impacting an Elastic Stiff String.” Journal of the Acoustical Society of America 125(6):4034–4042.

Wessel, D., and M. Wright. 2002. “Problems and Prospects for Intimate Musical Control of Computers.” Computer Music Journal 26(3):11–22.

Zappi, V., and A. McPherson. 2014. “Dimensionality and Appropriation in Digital Musical Instrument Design.” In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 455–460.