Abstract

Established in 1988 by composers Laura Bianchini and Michelangelo Lupone, the Centro Ricerche Musicali (CRM) in Rome was officially recognized in 1990 as a Music Research Center by the Ministry for University Education and Scientific and Technological Research. The Center focuses on musical production in relation to new technologies, in order to create a continual interaction among musical language, scientific thought, and technological resources. The staff at CRM, comprising musicians, technicians, visual artists, architects, information technology specialists, engineers, and researchers, aim to promote study of the aesthetic, analytical, and scientific aspects of music.

In the beginning, research at CRM concerned the design and development of specific hardware devices for live electronics and composition, such as the Fly10 (1983–1985) and Fly30 (1990) systems. Subsequent studies on a physical model for the bow-and-string system in 1997 gave rise to the development of virtual musical instruments. From 1999 onwards, other areas of research have included interactivity and adaptivity applied to musical forms, the development of specific technologies for sound art installations and sculptural–musical works, and augmented instruments such as the Feed-Drum, SkinAct, WindBack, and ResoFlute.

This article presents a brief history of CRM and some artistic productions by composers working at the Center.

The Centro Ricerche Musicali

The Centro Ricerche Musicali (CRM) in Rome, founded in 1988 by composers Laura Bianchini and Michelangelo Lupone, was officially recognized as a Music Research Center in 1990 by the Ministry for University Education and Scientific and Technological Research. The Center grew out of the two composers' desire to create a joint project expressing their experience of musical and scientific research, bringing together a group of musicians and researchers with multidisciplinary skills. The Center's objective is to encourage a constant exchange of ideas, experiences, and studies, in order to achieve a deep understanding of the creative, artistic, and technical spheres. The interaction between various disciplines and skills, a fundamental feature of CRM's research, has made it possible to create works and technologies that develop musical language, guided by aesthetic and cultural principles and by a deep sense of social and environmental responsibility. CRM's clear objective is innovative artistic production, arising from constant interaction between scientific research and musical composition (Bianchini and Lupone 1992).

The wide range of activity and research undertaken by CRM over more than 30 years has resulted in technical-scientific and artistic inventions known as “new lutherie,” which constitutes the principal activity of CRM and continues to expand actively. In the beginning, research concerned the design and development of new hardware devices for live electronics and composition, such as Lupone and Bianchini's Fly10 system of 1983 (see Figure 1), one of the first systems in Italy to use multiple digital signal processing (DSP) chips for real-time sound synthesis (Lupone 1984, 1985), and Lupone and Pellecchia's Fly30 system of 1990 (see Figure 2), the first Italian system for real-time sound synthesis and processing operating in floating point (De Vitis, Lupone, and Pellecchia 1991; Pellecchia 1991; De Vitis and Pellecchia 1992). The two systems have been used in an industrial context at the Fiat Research Center and have given rise to many original electroacoustic and performative musical works realized with real-time sound synthesis and processing.
Figure 1

Fly 10 (Release 2, 1986), using four Texas Instruments TMS32010 DSP chips.

Figure 2

Fly 30 (1991), which used the Texas Instruments TMS320C30 floating-point DSP chip. Digital Signal Patcher, visual programming language (a); envelope generator (b); polar synthesis, filter design tool (c).

Beginning from this context, CRM's technological and musical research has focused on two principal sectors of interest, very different from the point of view of the works created but intimately related in their scientific assumptions and their artistic and cultural results: sound art installations and augmented instruments. Since 1993, research in acoustics and psychoacoustics has sought a deeper understanding of oscillatory phenomena and of the ways in which materials vibrate and radiate energy. These studies have led CRM to develop original technologies for sound spatialization, diffusion, and processing. These technologies are used to produce innovative musical works that are integrated with the environment, as well as with sculpture-like, plastic forms and with lighting design.

Integration of acoustic features with electronic technologies represents one of the main research topics at CRM. Studies of sound spatialization techniques began with creative application of specific acoustic phenomena to the creation of original technologies such as Holophones (sound projectors), Sound Pipes, the Reflecting Screen, Planephones (vibrating surfaces that generate plane waves), and software for the control of multiple independent loudspeakers. These experiences have encouraged CRM to create “virtual listening spaces.” They have also allowed CRM to design new, cutting-edge, multichannel sound diffusion systems for the reproduction of acoustic and electronic music, as well as vibrating systems for sound diffusion (Bianchini 2000a). These technologies have given rise to innovative sound art installations and sculptural-musical adaptive works, often inside museums in Europe, Asia, and Latin America, as well as projects for major events and environmental installations (both temporary and permanent), with the collaboration of visual artists, including Michelangelo Pistoletto, Günther Uecker, Mimmo Paladino, and Licia Galizia (Bianchini and Schiavoni 1996; Bianchini 2000b). Tables 1, 2, and 3 list a selection of these works; see also Figure 3.
Figure 3

Laura Bianchini and Licia Galizia: Via dei Canti, Terra e Cielo, permanent environmental installation, Trevi nel Lazio, 2019.

At the same time, after 1996, studies by Marco Palumbi and Lorenzo Seno on a physical model for the string-and-bow system gave rise to the development of virtual musical instruments (Palumbi and Seno 1998; Seno 1998; Palumbi and Seno 1999) and new digital calculation algorithms produced by CRM, with which Michelangelo Lupone explored the limits of signal processing, producing the musical works “Canto di Madre” for computer (1998) and “Corda di metallo” for string quartet and electronics (1997, premiered by the Kronos Quartet in Rome). The use of extended performance techniques to produce unusual sounds using orchestral instruments, the invention of algorithms and digital technologies for real-time sound transformation, and acoustic and psychoacoustic studies on vibration of materials and sound diffusion, have been the starting points for research on augmented instruments undertaken by CRM since 1999. Augmented instruments such as the Feed-Drum, SkinAct, WindBack, and ResoFlute have been developed at CRM through the principle of feedback control and have entered the musical domain, renewing and extending performance techniques and musical composition criteria. Technological augmentation of musical instruments through feedback makes it possible to extend the acoustic characteristics of sound production and radiation, as well as the methods for sound generation and control. Integration of technology extends the expressive, performative, and adaptive possibility of instruments and allows the creation of a new electroacoustic performance category for musical works (Bianchini et al. 2019).

Sound Art Installations

Music's primary element is sound, which derives from the excitation and resonance of vibrating bodies. Musical instruments, and all other objects having material that can oscillate, produce vibrations that can be perceived through both hearing and touch, with features dependent on the vibrating body's form and material, as well as on the excitation criteria. Some CRM inventions—particularly the instruments discussed in the previous section—selectively use various properties of materials and forms to propagate sound. These inventions can be modeled in plastic and acoustic terms, according to aesthetic criteria, as well as functionally. The formal and plastic features necessary to achieve acoustic objectives can be fashioned to achieve visual and sculptural works that create a dialogue with space and light.

Planephones are vibrating plates that allow one to exploit the vibrational features of natural and synthetic materials (such as metal, wood, paper, glass, and their derivatives) with musical meaning and sculpture-like, plastic form. They can be considered both as planar sound radiators that allow diffusion of plane waves and as tools for sound processing and spatial diffusion when integrated into sound art installations. The acoustic field surrounding the Planephone offers strong spatial gradients, allowing variable listening conditions according to the point of proximity and a “plastic” sensation of sound space (Seno 2005a; Bianchini 2010). The Planephones were designed and created by Michelangelo Lupone at CRM in 1996 as a result of collaboration with the Fiat Research Center. The first installation with Planephones was Stanza del Legno e del Metallo (at the Acquario Romano museum in Rome, 1998), comprising two distinct timbre and space zones, followed by Lupone's Infinito (2002), based on a dialogue between harmonic wood and steel (see Figure 4).
Figure 4

Michelangelo Lupone's installation Infinito, area containing wood and metal surfaces with Planephones, for the MusicaScienza 2002 event at the Goethe Institute in Rome.

In 1998, Lupone developed the first prototype of the Holophone, a multichannel sound diffusion system provided with precise controls that permit creative modulations of the wave front (Lupone 2005, 2008). The Holophone consists of a parabolic surface and a band-limited loudspeaker that is placed in the parabola's focal point, facing the rear (i.e., towards the parabola) to achieve approximately unidirectional sound dispersion (see Figure 5). When Holophones are used as speakers in a traditional concert, the radiation angle is controlled by the mechanism that supports the speaker and the parabola. When Holophones are used to process sound, it is possible to dynamically change the radiation angle through phase modulation. Based on the emission of plane waves, the Holophone is designed to ensure an accurate control of the sound-wave movement and profile by appropriate regulation of the phase, amplitude, and frequency of the musical signal. This type of acoustic propagation permits the construction of highly coherent sound-radiation lobes that can traverse space with minimal energy dissipation compared with diffusion with traditional loudspeakers. The dynamic controls for sculpting the wave front are managed by a computerized system that controls the processes of approach, separation, localization, velocity, and raising and lowering of the wave front with respect to the listener. The composer has the possibility of defining a particular spatial characteristic for each region of the frequency spectrum and can dynamically process the modalities with which sounds are conveyed to the listener.
Figure 5

Holophone sound projectors (a); Quartetto di Cremona at the Santo Spirito monastery in Ocre, Italy, in 2011, using sound spatialization with Holophones (b).
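The wavefront control described above rests on a simple relation between steering angle and per-channel delay (i.e., phase). The following Python sketch is not CRM's control software; it is a minimal illustration of that relation, with the channel count, spacing, and test signal chosen arbitrarily.

```python
# Minimal sketch (not CRM software): steering a coherent radiation lobe by
# per-channel delays, the phase relation that Holophone control exploits.
import numpy as np

C = 343.0  # speed of sound in air, m/s

def steering_delays(n_channels: int, spacing_m: float, angle_deg: float) -> np.ndarray:
    """Per-channel delays (in seconds) that tilt a shared wavefront by angle_deg."""
    angle = np.radians(angle_deg)
    return np.arange(n_channels) * spacing_m * np.sin(angle) / C

def apply_delays(signal: np.ndarray, delays_s: np.ndarray, sr: int) -> np.ndarray:
    """Write one mono signal onto several channels, each delayed by interpolation."""
    t = np.arange(len(signal)) / sr
    return np.stack([np.interp(t - d, t, signal, left=0.0) for d in delays_s])

sr = 48_000
mono = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)        # 1-s test tone
feeds = apply_delays(mono, steering_delays(4, 0.5, 20.0), sr)
print(feeds.shape)  # (4, 48000): one steered feed per sound projector
```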

The strong connection between plastic form, dimensions, volume, material, music, and light creates an effective link with the surrounding natural and architectural environment, together with multisensory involvement by the public, allowing a creative approach that considers multimodal and intermedial aspects.

Interactive and Adaptive Installations

The interactive and adaptive installations produced by CRM represent a category of works designed to integrate music with plastic forms and with methods of sound radiation, as well as to interact with people and the environment in order to modify and develop the musical form. The concept and creation of interactive and adaptive works are closely linked to new ideas about musical production and to criteria for articulating musical form, aspects of technological and artistic innovation that are central to CRM's aesthetic and ethical objectives. The concepts of interactivity and adaptivity, introduced by Lupone in the 1990s, have been the subject of experimentation, alongside the consolidation of systems for processing information drawn from sensors and actuators. This information is of primary importance for digital sound processing and for structuring methods of musical composition.

Active public participation is always implied by the possibility of freely choosing the most congenial listening time and conditions. This makes it possible to follow the development of musical phenomena on paths where movement is an integral part of the compositional criteria. The interaction occurs through interfaces or sensors—for example, when the actions of the performer or the public influence the spatial–temporal progress of installations or performances. Indeed, the term “interactive” is normally used to identify systems that respond specifically to the user's action, such as the personal computer, whose software makes it possible to undertake voluntary actions in order to obtain predictable responses. In the case of an interactive system, the user stands before a complex yet finite system. The responses that can be received, or the processes that can be activated, will always be consequent on the action undertaken. An interactive system allows users the freedom to choose their own sequence of actions and follow their own logical and intuitive route creatively, within finite and predictable possibilities.

A work with adaptive features, by contrast, can present very different expressive behavior after only a few hours. Theoretically, the music can become infinite and offer new content and formal relations on the basis of its previous history. Indeed, the term adaptive identifies a system that can evolve, like a living organism. The adaptive system perceives or receives external stimuli and modifies its status in an unpredictable, or only partially predictable, manner. Users of adaptive systems will receive answers that consider both their current action and the succession of previous actions—in more advanced cases, the entire environmental context. Fully or partially, the system is able to “learn” and “adapt to” external conditions. Users always have the freedom of choosing the sequence of actions to be undertaken on the system, but unlike with an interactive system, it is difficult to replicate the results, because identical stimuli in different environments do not have the same effect. Experiences in this context are similar to artificial intelligence applications and represent a fascinating frontier for all art, as they express deep, conscious critical action and awareness of the role of technology in human expression and communication (Bianchini 2010; Inverardi et al. 2012; Lupone et al. 2015; Capanna and Lupone 2019).
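The distinction can be made concrete with a toy example. The following Python sketch is purely illustrative and is not CRM code: the “interactive” mapping always answers the same stimulus the same way, whereas the “adaptive” one keeps a leaky memory of past stimuli, so identical stimuli produce drifting responses.

```python
# Illustrative toy (not CRM code): identical stimuli, two kinds of response.
class Interactive:
    """Stateless: the response depends only on the current action."""
    def respond(self, stimulus: float) -> float:
        return 2.0 * stimulus

class Adaptive:
    """Stateful: the response also depends on the history of stimuli."""
    def __init__(self) -> None:
        self.memory = 0.0                                  # trace of past stimuli
    def respond(self, stimulus: float) -> float:
        self.memory = 0.9 * self.memory + 0.1 * stimulus   # leaky accumulation
        return 2.0 * stimulus * (1.0 + self.memory)        # the past shapes the present

inter, adapt = Interactive(), Adaptive()
for s in [1.0, 1.0, 1.0]:
    print(inter.respond(s), adapt.respond(s))  # interactive repeats; adaptive drifts
```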

Selected adaptive works are listed in Table 4; see also Figure 6.
Figure 6

Lupone and Galizia: Oasi_1, adaptive sculptural-musical installation, MACRO Museum, Rome, 2014.

Table 1:

Selected Works for Major Events

Title | Creators | Location | Year
Paesaggio sonoro di Roma | Lupone, Bianchini, Caputo | Colosseum, Rome | 2003
Una città da ascoltare | CRM | G8 Summit, L'Aquila | 2009
Ludi Multifonici | Lupone, Bianchini, Mentuccia | Trajan's Market (Museo Mercati di Traiano), Rome | 2015
Riflessi di Luna | Bianchini, Lupone, Mentuccia | New Year's Eve celebrations, Rome | 2019
Table 2:

Selected Environmental Works

Title | Creators | Location | Year
La Piazza | Lupone, Bianchini, Mentuccia, Cianciusi, Lanzalone, Pizzaleo | Festa Europea della Musica, Rome | 2009
Echi d'Acqua | Lupone, Bianchini, Lanzalone, Gabriele, Colangelo | Matera (European Capital of Culture 2019) | 2019
Made To Sound – Ferrari | Lupone | Piazzale della Farnesina, Rome | 2006
Riflessi di Dolmabahçe | Lupone | Dolmabahçe Palace, Istanbul | 2009
Table 3:

Selected Permanent Environmental Works

Title | Creators | Location | Year
Gioco delle risonanze | Lupone, Bianchini, Studio Annunziata | Large Palaestra, Pompeii excavations | 2006–2015
Sorgenti Nascoste | Paladino, Lupone | Mount Pizzuto, Solopaca | 2007
Via dei Canti | Bianchini, Galizia | Trevi nel Lazio, Frosinone | 2019
Table 4:

Selected Adaptive Works

Title | Creators | Location | Year
Volumi Adattivi | Lupone, Galizia | Goethe-Institut, Rome | 2006
Musica in Forma | Lupone, Galizia | Italian Cultural Institute, Belgrade | 2008
Vibrations | Lupone, Bianchini, Galizia | FGTecnopolo Building, Rome | 2011
Oasi | Lupone, Galizia | MACRO: Museum of Contemporary Art of Rome | 2015
Forme Immateriali | Lupone | National Gallery, Rome | 2015
Acque | Lupone, Bianchini, Galizia | Galleria Anna Marra, Rome | 2018
Forme Sensibili | Lupone, Galizia | Italian Embassy, New Delhi | 2018

Augmented Instruments

The term “instrument” denotes a means, or a tool, as well as the result of a technological process. It is an expression of the thinking that permeates a given cultural context. Science and art meet in the creation and use of the instrument, where know-how is applied to manual or intellectual knowledge. The musical instrument, the main purpose of which is to produce sound to create an expressive musical effect, is a complex mixture of cultural conditions. Its technological features and complex structure should make it possible to portray a specific musical language, with aesthetic, expressive, and stylistic aspects that consolidate performance techniques, or at least an interpretation shared by a given musical context. The transformation of orchestral instruments in Western music during the Renaissance and Romantic periods provides emblematic examples of how musical practice and the experience of instrument makers have favored certain models over others. Seen in historical perspective, this process has two objectives, one evolutionary and one conservationist: (1) amplification of acoustic and expressive features, alongside (2) coherence with categories defined by performing techniques, style, and aesthetics, even while applying remarkably different solutions. The acoustic and musical parameters involved in this adaptation process may therefore be classified into five major groups: tuning systems, acoustic power boosting, improvement of sound diffusion features, enhancement of timbre-related features, and ergonomic optimization.

The use of extended techniques for traditional musical instruments, along with digital sound-processing techniques and management criteria for related control parameters, may play a central role in the multiple microstructural variations affecting complex timbre-related transformations. Live electronic techniques have undergone extensive development since the 1970s, first using analog technologies and then digital. This period can undoubtedly be considered a starting point for the musical culture that seeks new ways of amplifying the possibilities of musical instruments regarding aspects of timbre and properties of sound-and-gesture articulation.

In recent years, the evolution of technology has resulted in simultaneous development of metainstruments, hyperinstruments, or augmented instruments, attracting increased attention from musicians and scientific researchers. Many musical works created with live electronics involve interaction with a traditional instrument through musical control criteria, from structured organization of the slightest gestures to algorithms involving macrostructure control. Real-time processing can be undertaken with common input devices, through relatively sophisticated specific digital or analog–digital music-related controllers, or by means of sensors added to a classical instrument in order to allow interpretation of a performer's gestures. The “new instrument,” whether it be sophisticated digital technology or an acoustic instrument modified through the application of sensors and detectors, may acquire efficiency and flexibility for the most complex requirements of electronic sound management. Whether it be the result of synthesis or of transformation through digital processing, however, the sound itself is usually not as closely related to the performer's gestures as it is in the case of a traditional acoustic instrument. Emission of a digitally processed and generated sound, through loudspeakers, is unrelated to the process of sound production, especially with regard to the perceptual identification of the sound's location (Lupone et al. 2015).

Musical, scientific, and technological research undertaken at CRM since the 1990s has produced electroacoustic musical works conceived and designed to integrate sound production and diffusion within a single system. In the field of augmented instruments, CRM studies acoustic phenomena in traditional instruments in order to develop augmented instruments featuring a tight correlation between gesture, mechanical structure, and sound. The creation of an augmented musical instrument through careful study of traditional instruments involves both the amplification of the instrument's potential and expressive techniques and the renewal of performing techniques on the basis of the new instrument's features.

The metainstrument, then, is the result of an aesthetic focus realized through combined technologies: a technically modified classical instrument whose gestural aspects, acoustics, timbral possibilities, and diffusion features are enhanced, all of them closely related and almost inseparable from the original mechanical instrument. Moreover, owing to research conducted in this field, both musical composition and performance techniques have undergone significant transformations.

Feed-Drum and SkinAct

The aforementioned studies and physical models, prepared by Lorenzo Seno and Marco Palumbi in 1997 at CRM (cf. Palumbi and Seno 1998; Seno 1998; Palumbi and Seno 1999), as well as experiments made with Planephones (Seno 2005a), provided the skill and experience that served as a starting point for further exploration of ways to modify the structural features of traditional acoustic membrane instruments. Indeed, studies on membranophones allowed Michelangelo Lupone to create two new types of augmented instrument—the Feed-Drum and the SkinAct (see Figures 7 and 8)—by combining information technology with electroacoustic systems (Seno 2005b; Lupone and Seno 2006; Cianciusi and Seno 2012; Lanzalone 2015; Capanna and Lupone 2019).
Figure 7

Feed-Drum.

Figure 8

SkinAct.

The Feed-Drum was designed by Lupone for his work “Gran Cassa.” To explore the timbral richness of the attack phase and to isolate vibrational modes, a system of electronic membrane conditioning was created. The signal produced by the excited membrane is returned through a speaker (placed under the membrane), which, by taking advantage of the feedback principle, can potentially generate infinitely long sounds. Damping the membrane's motion leads to sound decay. The input energy level can be dynamically adjusted to isolate high-frequency vibration modes. Although the skin allows excitation of a considerable number of high-frequency modes, their duration is too short to be noticed by the listener, apart from their timbral contribution to the sound's attack stage. The possible variations of emission modes, adequate for a sufficient acoustic response from the resonator (the shell), are limited and not particularly adaptable. The basic or fundamental frequency, obtained through skin tension, is influenced by a nonhomogeneous distribution of tensioning forces, resulting in a complex spectrum for the real modes. The subsequent phases of listening, analysis, cataloguing, and identification of possible degrees of continuity between the instrument's sounds—including those produced by unconventional modes of excitation (e.g., rubbing and jeté with wire brushes)—were fundamental for the next stage: the work of musical composition.
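The conditioning principle lends itself to a compact sketch. The following Python fragment is a minimal sketch under stated assumptions, not the Feed-Drum's actual control chain: it servoes the loop gain so that the membrane signal returned to the loudspeaker settles at a target amplitude, which is what allows a sound to be sustained indefinitely or left to decay.

```python
# Hedged sketch (assumed parameters, not the Feed-Drum electronics): a feedback
# loop whose gain is servoed so the membrane signal neither dies out nor saturates.
import numpy as np

def conditioned_feedback(sensor_block: np.ndarray, state: dict,
                         target_rms: float = 0.1, rate: float = 0.05) -> np.ndarray:
    """Compute the block to send to the loudspeaker placed under the membrane."""
    rms = np.sqrt(np.mean(sensor_block ** 2)) + 1e-12
    state["gain"] *= (target_rms / rms) ** rate        # nudge gain toward target level
    state["gain"] = min(state["gain"], 10.0)           # hard safety ceiling
    return np.clip(state["gain"] * sensor_block, -1.0, 1.0)

state = {"gain": 1.0}
block = 0.01 * np.random.default_rng(0).standard_normal(256)  # stand-in sensor input
out = conditioned_feedback(block, state)
print(round(state["gain"], 3), out.shape)
```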

The stability of the signal obtained with this conditioning system made it possible to experiment and design a preliminary simplified map of the oscillatory modes on the skin surface based on Bessel functions. In the 2012 release, the map was limited to 14 diameters and 21 nodal circles (two circles are in common), the latter divided into even semicircles (on the left) and odd semicircles (on the right), as shown in Figure 9 (Lupone and Seno 2006).
Figure 9

Map of the oscillatory modes in the 2012 release of the Feed-Drum (a). Scheme of the map with 14 diameters and 21 circles (b).
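The map rests on the classical theory of the ideal circular membrane, whose modal frequencies scale with the zeros of Bessel functions of the first kind. The following Python sketch uses SciPy to print a few nominal mode frequencies, scaled to the Feed-Drum's 30-Hz fundamental; as noted elsewhere in the text, nonuniform skin tension makes the real modes deviate from these ideal values.

```python
# Nominal modal frequencies of an ideal circular membrane from Bessel zeros,
# scaled so that mode (0,1) equals the Feed-Drum's 30-Hz fundamental.
from scipy.special import jn_zeros

F0 = 30.0  # Feed-Drum fundamental, Hz (from the text)

def mode_frequency(m: int, n: int) -> float:
    """Mode with m nodal diameters and n nodal circles (counting the rim)."""
    j01 = jn_zeros(0, 1)[0]        # first zero of J_0: the fundamental mode
    jmn = jn_zeros(m, n)[n - 1]    # nth zero of J_m
    return F0 * jmn / j01

for m, n in [(0, 1), (1, 1), (2, 1), (0, 2)]:
    print(f"mode ({m},{n}): {mode_frequency(m, n):6.1f} Hz")
```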

Electronic conditioning of the instrument left the topology and primary acoustic features unaltered, yet increased the scope of vibrational criteria and control. This was used to distinguish different pitches of various vibrational modes, in order to obtain long notes that could be modulated (similarly to vibrato on a stretched string) and to adapt acoustic energy independently of the emitted frequencies. The complexity of the vibrational phenomena also requires analysis of the instrument's mechanical parts, to identify and reduce dispersions and nonlinear components due to vibrations of structural materials and their combinations. The Feed-Drum instrument was designed and produced not only to extend the acoustic properties of a bass drum, but also to allow ergonomic use of new performing techniques. In particular, the vibrational attitude was transformed by eliminating the lower skin, a decision that simplified tuning of the instrument's basic frequency (30 Hz) and reduced excitation rise time in the upper modes.

A synthetic membrane with isotropic features and high flexibility was applied. The previously described map was drawn over it, with colors highlighting the areas of performance. The shell and tensioning hoop were made from steel and aluminum. In particular, compared with the original instrument (wooden hoop and resonator), the tensioning hoop was made stiffer, its height was reduced, and the adhesion surface was increased. The suspension system was created to isolate the Feed-Drum completely from the supporting structure on the ground. All the mechanical parts in contact with one another were separated by an intermediate layer of antivibration material. Although many aspects remained to be studied, it was possible to verify the reproducibility of the classified sounds and modulations, the capacity of the excitation and control modes, the extension in frequency, and the pitch features. We have experimented with many techniques to impose the necessary constraints on the frequencies emitted: using beaters of various weights, forms, and dimensions, or positioning one's hands on individual points and on combinations of diameters and semicircles (see Figure 10).
Figure 10

Feed-Drum, some methods for excitation and frequency control of the membrane. Performing techniques with two hands (a) and with fingers (b).

The Feed-Drum's behavior is extremely complex, and many of its aspects still have to be clarified. A more detailed study (Lupone and Seno 2006), in relation to the classic oscillation theory for a circular nonrigid membrane, is the current reference point for CRM's experimentation and musical composition.

Musical notation is an important aspect for future development (see Figures 11, 12, and 13). The references for position, pressure, damping, and speed on the membrane are much more complex to manage than in traditional techniques. Symbolic notation can be integrated with specific descriptions of actions, but we must consider that the pitches generated by feedback can have different attack times, depending on the constraint position, the feedback energy, and the previous vibrational state of the membrane. Our works have used mixed notation, which has allowed interpreters such as Jean Geoffroy, the Ensemble Ars Ludi, and Philippe Spiesser to apply personal methods of sound control, especially hand configurations, to obtain adequate pressure and a constraint area that is easy to reproduce during a performance.
Figure 11

Lupone: “Gran cassa, canto della materia” for Feed-Drum (1999–2002). Page from the score, mixed notation: standard rhythmic and dynamic notation, with textual description of the type of percussion and the membrane preparation.

The Feed-Drum's basic electronics design (see Figure 14) includes a sensor placed on the membrane, a sensor placed on the loudspeaker, an analysis system, and digital signal processing. For the signal processing, a Windows computer was used, initially with the CPS or SuperCollider development environments, and later with Max. We are currently experimenting with Faust and Pure Data (Pd) on the Raspberry Pi platform.
Figure 12

Lupone: “Feedback,” for three Feed-Drums (2002). Page from the score, mixed notation: tablature with graphical symbols and performer actions.

Figure 13

Bianchini, “Terra,” for Feed-Drum (2012). Page from score, notation of performer actions with graphical symbols and textual description of the feedback methods.

Figure 14

Feed-Drum, basic electronics scheme.

The basic electronics scheme allows one, at several points of the electroacoustic chain, to take the membrane signal for specific processing and to introduce signals from other sources as well. These options make the instrument flexible for composition projects and allow the interpreter to adapt the performance ergonomics. For example, in “Terra” by Laura Bianchini, the electronics design includes an external microphone and four sensors: one on the shell, two on opposite ends of a membrane diameter, and one on the membrane diameter perpendicular to it (see Figure 15). Three independent pedals control the membrane excitation, the feedback, and the external signal. The sound is emitted simultaneously by the Feed-Drum and the loudspeaker system in the listening room.
Figure 15

“Terra,” for Feed-Drum (2012). Excerpt from setup. [Editor's note: The distinction between the CRM equipment and the local equipment is clearest in the color version of this figure, available at http://www.doi.org/10.1162/COMJ_a_00570.]

The choices of the quantity and types of controls, of the listening method, and of the performance features are necessarily correlated with the compositional choices, both technical and expressive. In this case, the work is inspired by the earth, in particular:

  1. the earth's ability to collect, absorb, process, and return whatever is offered to it or imposed in a different form;

  2. the perception of the energy that is released from it, even when it appears arid or resting;

  3. the materiality of its layers (which constitute its vital character), as opposed to abstraction (the latter characterizing the space in which materiality is placed as a lifeless entity, something over which you fly but on which you cannot walk); and finally,

  4. its concrete nature.

The performer listens, explores the instrument, and provokes and participates in transformations while controlling their contexts, finding forms of adaptation that make it possible to interact with the musical event in real time, creating images and sounds that metaphorically represent the music's stages and dynamism (see Figure 16).
Figure 16

“Terra,” for Feed-Drum, premiere at Aujourd'hui Musique Festival, Perpignan, France, 2012. Performer: Philippe Spiesser.

“Terra” comprises four parts, each dedicated to one specific aspect of the earth's layers. Each layer is characterized by different sound-generation and sound-processing techniques, from rarefied and fluid sounds obtained by controlling the feedback with a cloth and the fingers, to continuous rhythms obtained using the hands, sticks, and the vibration of the membrane. The overall sound intensity and feedback modulations are controlled by the performer using the pedals and tactile damping of the membrane. The algorithms for real-time sound processing were implemented in the Max programming environment. In particular, algorithms were used to transform sound in the time domain (granular synthesis, amplitude modulation) and in the frequency domain (dynamic filters).
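As a concrete illustration of the time-domain side, the following Python sketch implements a bare-bones granulator; the grain length, density, and Hann envelope are assumptions made for the example, not the parameters used in “Terra.”

```python
# Hedged sketch of time-domain granular synthesis (parameters are assumptions).
import numpy as np

def granulate(x: np.ndarray, sr: int, grain_ms: float = 40.0,
              density: int = 200, seed: int = 0) -> np.ndarray:
    """Scatter Hann-windowed grains of x across an output buffer of equal length."""
    rng = np.random.default_rng(seed)
    glen = int(sr * grain_ms / 1000)
    env = np.hanning(glen)
    out = np.zeros(len(x))
    for _ in range(density):
        src = rng.integers(0, len(x) - glen)   # where each grain is read
        dst = rng.integers(0, len(x) - glen)   # where it is written
        out[dst:dst + glen] += env * x[src:src + glen]
    return out / np.max(np.abs(out))

sr = 48_000
tone = np.sin(2 * np.pi * 110 * np.arange(sr) / sr)
print(granulate(tone, sr).shape)  # (48000,): granulated texture
```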

The SkinAct constitutes an advancement in the study of the vibrational features already observed in the Feed-Drum, and it considerably develops their interactive nature. Unlike in the Feed-Drum, the vibrational sensor and actuator are placed directly on the skin. A map of circles and diameters, drawn on the skin's surface, indicates the vibrational nodes that allow tone selection. The skin thus hosts two systems concurrently: excitation and resonance. It puts a vibrational detector and several actuators into a feedback condition. This particular feature allows the selection of various performing techniques and tunings (base frequency between 30 and 45 Hz) while maintaining “tone” selection opportunities; the performer chooses these by imposing vibrational nodes at the intersections of selected circles and diameters of the map. The skin's features have also been carefully studied and chosen to allow the dynamic projection of light in relation to the performer.

The SkinAct was designed by Michelangelo Lupone for his work “Spazio Curvo” (2012), later presented in a complete version renamed “Coup au Vent” for three SkinActs, which was premiered in 2015 at the Electronic Music Week Festival in Shanghai (see Figure 17). The creation of the instruments required a lengthy period of study and experimentation, both to design the instruments (the SkinAct can be played in two positions: vertically, with the performer standing at the instrument, or horizontally, as with the Feed-Drum) and to finalize the system for feedback generation. Unlike the Feed-Drum's, the SkinAct's feedback is generated not by a loudspeaker but directly on the membrane. The instrument's ergonomics were designed by the architect Emanuela Mentuccia, CRM project and visual designer. The percussionist Philippe Spiesser assisted in experimenting with many performance techniques. “Coup au Vent” begins with the concept of reproducing acoustic space not only around the performer but also around the listener, with coherent features of spatial mobility and localization. To achieve a musical composition that could highlight this idea, Lupone was inspired by the concept of space as perceived and measured over time. The work is formally divided into five sections, each of which proposes a different concept of rhythm: (1) “in sound” (beats obtained with partial membrane frequencies); (2) “in space” (rhythms deriving from the perception of sounds' movements and localizations in acoustic space); (3) “of sound” (rhythms obtained through granular synthesis); (4) “with sound” (rhythm organized with accents and pauses); and (5) “polyrhythms” (various rhythms superimposed in time).
Figure 17

Lupone: “Coup au Vent,” for three SkinActs, Shanghai Symphony Hall (2015), performer Philippe Spiesser. Membrane excitation techniques: beating with the hands (a), in resonance with the trumpet (b).

Placed close together, the three SkinActs influence one another, increasing the sound complexity that results from the control techniques. This influence is used in the composition to obtain deviations in the individual instruments' pitches and modulations that change the piece's overall timbre. The trumpet (played by the percussionist) also creates a specific interaction; its sound pressure level and emitted frequency are used in the piece to modify the SkinAct's energy and feedback frequencies, entailing the superposition of many oscillatory modes and the generation of inharmonic chords.

Each drum has its own identity, responding in a different way to the energy, speed, position, sequences of beats, and gestures that the performer imposes on the membrane. The performer generates and models the vibrating material, accompanying the sound's movement in the listening space with gestures. The membrane's vibration radiates its resonances into the air, creating a counterpoint of abstract and immaterial forms. Like the Feed-Drum, the SkinAct allows dynamic control of sensors and actuators, with algorithms specially developed in Max (Cianciusi and Seno 2012; Lanzalone 2015; Bianchini et al. 2019).

WindBack and ResoFlute

Two augmented wind instruments are currently produced at CRM: WindBack and ResoFlute. They developed out of research on how to extend, enhance, and transfigure the features of traditional wind instruments (Lanzalone 2015; Lupone, Lanzalone, and Seno 2015; Bianchini et al. 2019). WindBack is a prototype system, applied to the alto saxophone, that uses acoustic feedback to modulate the instrument's air column (see Figure 18). The system was designed and created in 2011 by Michelangelo Lupone for his work “In Sordina,” which premiered at the Recoleta Cultural Center in Buenos Aires the same year.
Figure 18

WindBack release 2, system based on acoustic feedback to modulate the saxophone's air column.

The purpose of this project is to meet creative requirements for manipulating the instrument's timbre and pitches to obtain four main features.

  1. Independent multiphonic sounds: Normally, although multiphonic techniques can produce two simultaneous pitches, it is impossible to control their time variations independently. The WindBack, however, can produce two independently controllable pitches, which allows the management of simple yet repetitive polyphony.

  2. Polyrhythm: Microvariations in the internal rhythm of two simultaneous sounds produce beats, whose frequency can be efficiently controlled and used for rhythmic purposes (see the worked example after this list).

  3. Harmonic structure modification: Obtained through sound emission inside a pipe to deform the air column and condition natural reed excitation.

  4. Selective resonance: The instrument's resonant modes can be made to emerge in a selective manner by means of high acoustic pressure, allowing transformation of the timbre.
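To make item 2 concrete, here is a short worked example (illustrative only, not WindBack code) showing that two pitches a few hertz apart produce an amplitude envelope beating at the difference frequency.

```python
# Two close pitches beat at the difference frequency: the "internal rhythm"
# that the WindBack turns to rhythmic use.
import numpy as np

sr = 48_000
t = np.arange(2 * sr) / sr
f1, f2 = 220.0, 223.0                           # pitches 3 Hz apart
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trigonometric identity: mix = 2 cos(pi (f2 - f1) t) * sin(pi (f1 + f2) t),
# i.e., a carrier at the mean pitch with an envelope beating at |f2 - f1|.
print(f"beat rate: {abs(f2 - f1):.1f} Hz")      # 3 beats per second
```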

These musical objectives suggested various ways of instrumenting the saxophone. Initially, vibrational sensors on the reed and bell were used to obtain significant data for analysis. Later, miniature internal and external microphones were used, with an actuator placed on the body of the instrument. Finally, the actuator was replaced with a loudspeaker placed on the bell. The definitive system is based on an acoustic feedback process with complex behavior, as the loudspeaker sends acoustic pressure into the instrument in the direction opposite to the breath. The nonlinear dynamics of the physical phenomena deriving from this configuration are difficult to analyze and understand fully. The turbulence and dissipation at various points of the instrument, as well as the time-variable pressure, interact through fluid-dynamic phenomena that require further study and research. Currently, for stability and reproducibility of the sound parameters, a dual control has been inserted to govern the dynamic system and to control the frequency of the instrument and the WindBack. In particular, the input signal's dynamics are processed automatically, whereas the output signal depends on a pedal worked by the performer. In this way, it is possible to obtain both controlled modulation of the overall energy of the instrument's air column and real-time alterations of pitch and timbre for compositional objectives.
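The dual control lends itself to a compact sketch. The following Python fragment is an assumption-laden illustration, not CRM's implementation: an automatic gain stage levels the input, while the performer's pedal scales what reaches the bell loudspeaker.

```python
# Hedged sketch of the dual control described above (parameters are assumptions).
import numpy as np

def agc(block: np.ndarray, state: dict, target: float = 0.2,
        attack: float = 0.2) -> np.ndarray:
    """Automatic input leveling: pull the gain toward the target RMS."""
    rms = np.sqrt(np.mean(block ** 2)) + 1e-12
    state["g"] += attack * (target / rms - state["g"])
    return state["g"] * block

def output_stage(block: np.ndarray, pedal: float) -> np.ndarray:
    """The performer's pedal (0..1) scales what the bell loudspeaker receives."""
    return np.clip(pedal, 0.0, 1.0) * block

state = {"g": 1.0}
sax_block = 0.05 * np.random.default_rng(2).standard_normal(256)  # stand-in feed
to_speaker = output_stage(agc(sax_block, state), pedal=0.6)
print(to_speaker.shape)
```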

The choice of electroacoustic components and the software implementation arise from a series of experiments and surveys undertaken in collaboration with an audio technician, Maurizio Palpacelli, and a saxophonist, Enzo Filippetti. The sound of the alto saxophone is captured by an AKG C411 condenser microphone placed on the saxophone's neck and a DPA 4060 omnidirectional condenser microphone placed in the middle of the instrument's body (see Figure 19). The signal is transmitted to an analog-to-digital conversion system and then processed by an algorithm implemented in Max. The microphones can be used individually or together. The signal is analyzed, and its frequency and amplitude envelopes are extracted. The data obtained are sent to a synthesis algorithm that manages the energy of the formants, the alterations of the pitch, and the addition of partials to the input signal. The result of the processing is controlled in amplitude from a MIDI pedal and sent to the digital-to-analog converter (DAC) and to the WindBack loudspeaker. The amplifier and the loudspeaker were chosen as the pairing that is acoustically most effective in the alto saxophone's frequency range. Further hardware controls are possible at additional points of the feedback circuit, and the signal processing can be modified on the basis of the composer's musical needs.
Figure 19

WindBack, basic electronics scheme and algorithm flow diagram.
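The analysis stage described above amounts to tracking pitch and amplitude in real time. The Python sketch below shows one common way to do this (an autocorrelation pitch estimate plus an RMS envelope); it is a stand-in for, not a transcription of, the Max analysis patch.

```python
# Hedged sketch: frequency and amplitude envelope extraction of the kind the
# text describes, via autocorrelation and RMS (not the actual Max algorithm).
import numpy as np

def rms_envelope(x: np.ndarray, hop: int = 256) -> np.ndarray:
    """Blockwise RMS amplitude envelope."""
    return np.array([np.sqrt(np.mean(x[i:i + hop] ** 2))
                     for i in range(0, len(x) - hop, hop)])

def pitch_autocorr(frame: np.ndarray, sr: int, fmin: float = 50.0) -> float:
    """Crude fundamental estimate from the first autocorrelation peak."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sr / 1000.0)                # ignore lags corresponding to > 1 kHz
    hi = int(sr / fmin)                  # ...and to < fmin
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 48_000
frame = np.sin(2 * np.pi * 440 * np.arange(2048) / sr)
print(round(pitch_autocorr(frame, sr), 1), "Hz")  # close to 440
print(rms_envelope(frame).shape)
```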

As with the membrane instruments described earlier, some techniques of sound production also change in the WindBack. Special musical training is needed to blow into the mouthpiece while the loudspeaker is introducing air pressure into the bell, because the air pressure's resistance to the breath requires more effort and control than in ordinary performance practice. Internal resonance variations, due to the opening and closing of keys according to ordinary criteria, allow the instrumentalist to tune the feedback, while pressure variation on the reed makes it possible to manage beat frequencies. The performance techniques have a strong influence on how the score is written. Works for WindBack have used various compositional techniques and mixed notation, ranging from precise notation to aleatory and graphic notation. The first musical score was created as a result of the several experiments to which the instrument had previously been subjected, undertaken with the help of saxophonist Enzo Filippetti. The score included a traditional staff, supplemented with a second staff indicating required foot-pedal actions and references to certain performing modes or air-column perturbation effects caused by the loudspeaker.

“In Sordina,” by Michelangelo Lupone (see Figure 20), is written on two staves: one notating the saxophone part, and the other, the WindBack pedal control. The pedal's time trajectory is described by a function with discretized line segments on five levels.
Figure 20

Lupone: “In Sordina” (2011). A page of the handwritten music score.

In this case, the pedal performs a dual action: First, to fix and stabilize the pitch of the feedback when the function goes beyond the lowest staff line; and, second, to increase the feedback sound level and consequently the air pressure in the instrument. The pedal movement is synchronized with the saxophone notation and modifies the instrument's timbre and rhythm.

The piece is divided into four sections in the form of “variations” developed around reference tones (Zentraltöne) and multiphonic sounds. The former are developed in such a way as to generate internal rhythm variations (beats), progressively more articulated through pitch movements generated with the WindBack, whereas the latter are modulated internally from the variations of the acoustic pressure. In particular, multiphonic sounds are explored through the feedback that freezes some frequencies, making it possible to increase spectral density until the sound is fully transformed.

“S4EF” by Giuseppe Silvi (see Figure 21) presents a definite structure of tones played in free rhythm, both in conjunction with and in counterpoint to samples synthesized by the algorithm used in the path object (Markidis, Fernandez, and Silvi 2016) in the Pd environment. The piece also uses the S.T.ONE spatialization system created by the composer, which in this piece is used to develop a spatial dialectic of sounds: the direct and localized source of the WindBack, and the unidirectional diffusion positioned in the listening room through the S.T.ONE system. The piece develops polyphonic sound grains that are distributed in space; the performer follows a graphic score that describes the methods of interaction with synthesized and prerecorded sounds.
Figure 21

Giuseppe Silvi: “S4EF” (2016). A page of music score: pitch structure.

“Âme lie” (see Figure 22) was composed by Alessio Gabriele at CRM during the WindBack research process. The piece is built in three sections, starting from a succession of six notes that correspond to a musical cryptogram of the title. It is a sound picture in which voice, augmented saxophone, and live electronics develop functional and harmonic relationships in terms of dialogue versus contrast and fusion versus splitting, to explore the possibilities of merging different acoustic elements into the saxophone (see Figure 23). The instrument acts as an unconventional emitter and resonator, projecting into a more complex space enriched with acoustic and tonal distortions introduced by feedback, as well as with interactions between the performers. There are three main stages of real-time signal processing: (1) spectral processing for voice and saxophone, through which, after analysis by a fast Fourier transform (FFT), each sinusoidal partial can be treated by varying its magnitude and delay; furthermore, groups of contiguous FFT frames can be processed through a real-time, stochastic spectral freeze technique to produce an elongation of short fragments of the live sound (much of the signal useful for feedback is generated in this stage); (2) signal accumulation, transposition, and selective polyphonic repetition; and (3) selective routing of individual musical elements on the multichannel broadcasting system (the WindBack is treated as a “special” input/output audio channel).

Figure 22

Alessio Gabriele: “Âme lie” (2018). A page of the handwritten music score.

Figure 23

“Âme lie,” for soprano and WindBack (2016). Soprano Eleonora Claps, WindBack Enzo Filippetti.
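Of these stages, the stochastic spectral freeze is the most distinctive. The Python sketch below illustrates the general technique under stated assumptions (pool size, window, and randomized phase are mine); the actual processing in “Âme lie” may differ.

```python
# Hedged sketch of a stochastic spectral freeze: hold a pool of recent FFT
# frames and resynthesize by drawing magnitudes from the pool with random phase.
import numpy as np

def spectral_freeze(x: np.ndarray, n_fft: int = 1024, pool: int = 8,
                    out_frames: int = 200, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    hop = n_fft // 4
    win = np.hanning(n_fft)
    # Magnitude spectra of the most recent input frames form the freeze pool.
    frames = [np.abs(np.fft.rfft(win * x[i:i + n_fft]))
              for i in range(0, len(x) - n_fft, hop)][-pool:]
    out = np.zeros(out_frames * hop + n_fft)
    for k in range(out_frames):
        mag = frames[rng.integers(len(frames))]          # stochastic frame choice
        phase = rng.uniform(-np.pi, np.pi, mag.shape)    # randomized phase
        out[k * hop:k * hop + n_fft] += win * np.fft.irfft(mag * np.exp(1j * phase))
    return out / (np.max(np.abs(out)) + 1e-12)

sr = 48_000
fragment = np.sin(2 * np.pi * 330 * np.arange(sr // 2) / sr)
print(spectral_freeze(fragment).shape)  # an elongation of the short fragment
```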

During the WindBack's development, an important goal was to obtain an ergonomic, self-contained, and acoustically more efficient system containing all the electronics necessary for the input and output of the audio signal and for the performer's interaction with the control and signal-processing algorithms. The result is a more efficient amplifier–driver system with a completely redesigned enclosure, produced through 3-D printing, and an ad hoc control interface that allows easier and more immediate interaction with the performer. Research at CRM and further experiments on the sonorities producible with resonators of different shapes and materials have allowed Silvia Lanzalone to create new works for original sound-diffusion systems made from terracotta sculptures, glass resonators selected for the reproduction of vocal sounds (Lanzalone 2008a, 2009), and certain modified musical instruments, such as the clarinet (Lanzalone 2008b), harpsichord (Lanzalone 2012), and flute (Lanzalone 2015; Lupone, Lanzalone, and Seno 2015).

The ResoFlute is an augmented instrument comprising a classical Western concert flute modified with six miniature microphones applied inside the instrument, a sensor, some pedal controls, and an aluminum pipe for sound diffusion and resonance (see Figure 24). One microphone is mounted inside the instrument's head joint, in place of the traditional cork; five other miniature electret microphones are placed on the body alongside certain key holes, fixed through holes made in the pipe according to the positions of the keys. (The microphone project was undertaken with the participation of Antonio Marra, audio technician and flute repair specialist.) The six microphones detect the sound-pressure variations found in different parts of the flute. The body of the instrument also features a piezo-film sensor that detects the position of the right-hand thumb, which a flutist normally uses solely to hold the instrument (see Figure 25). The piezo-film signal is passed through a threshold circuit to produce on/off commands that the performer generates according to the score.
Figure 24

ResoFlute. Setup of the equipment for the piece “Èleghos” by Lanzalone.

Figure 25

ResoFlute, electret microphones on the body of the instrument (a); MEAS piezo-film sensor at the right-hand thumb position (b).

The sound of the traditional instrument is augmented through the microphones and the resonant aluminum pipe alike. The pipe not only projects the flute's sound, but also filters it, enhancing the frequencies corresponding to its normal modes. The pipe (180 cm long and 10 cm in diameter) is mounted on a wooden case featuring both the speaker and a second cabinet containing other electronic devices, including a power amplifier. The speaker is connected to the pipe through a conical joint, which improves the effectiveness of the acoustic power transfer. All other signal-processing controls are performed via MIDI pedals sending Control Change and Program Change messages.
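For a rough sense of the frequencies the pipe favors, the following worked estimate treats it as an ideal tube open at both ends, with standard end corrections; this is a textbook idealization, not a CRM measurement, and the coupling to the speaker will shift the real modes.

```python
# Idealized resonances of the 180-cm x 10-cm pipe (open-open tube assumption).
C = 343.0                    # speed of sound in air, m/s
L = 1.80                     # pipe length, m (from the text)
R = 0.05                     # pipe radius, m (from the text)

L_eff = L + 2 * 0.61 * R     # end correction of ~0.61 R at each open end

for n in range(1, 5):
    f = n * C / (2 * L_eff)  # open-open pipe: f_n = n * c / (2 * L_eff)
    print(f"mode {n}: {f:5.1f} Hz")
```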

During the creation of the ResoFlute prototype, some construction difficulties came to light: the cables running along the body of the instrument had to be arranged so as not to hinder playing. Therefore, to refine these design issues and to add extra controls on the body of the instrument, a new ResoFlute prototype is under development in collaboration with the architect Emanuela Mentuccia. The project, entitled ResonāBIT, was included in the “Make Your Idea 2017” program, dedicated to the development of new projects in the Lazio Region's FabLab network, and is still underway.

The applied technology, conceived as a means to enhance the flute's timbral possibilities, was created so that the resonance and spatial diffusion features of the instrument's sound passing through the resonant pipe could be emphasized and integrated.

The entire system comprises an algorithm for generating synthesized sounds and real-time processing of flute sounds, completed by the phenomenon of feedback in the air. The resonant pipe is responsible for feedback and for sound diffusion. The feedback depends on the flute's position relative to the pipe, as well as the positioning of the microphones inside the flute. A balance between the instrument's natural sounds and electronic processing was achieved, resulting in a complex output sound from the resonant pipe that entailed the best timbre integration and spatial diffusion.

The ResoFlute was designed and created by Silvia Lanzalone specifically for the performance of her “Èleghos” for ResoFlute, which premiered in 2014 at the ArteScienza Festival in Rome. (A previous version, “Studio su Èleghos,” premiered at Tor Vergata University of Rome in 2014.) The process of music creation was optimized at the performance stage, thanks to collaboration with the experienced flutist Gianni Trovalusci, the first performer of the piece (see Figure 26). The title “Èleghos” evokes aspects of archaic Hellenistic culture, from the time when early wind instruments, with or without reeds, appeared in Greece. Indeed, the ancient term elegheia, from the 5th century BCE, is related to the word eleghos, which means “song accompanied by flute.”

Figure 26

“Èleghos,” for ResoFlute (2014). Flutist Gianni Trovalusci.

The principal sound elements used in the piece are the voice, wind, and the classical sound of the instrument, through writing that evokes an ancient virtuosity that is now stylized. The instrument's voice is the Dionysian aspect, often hidden by its apparently ethereal nature. The musical composition phase was preceded by numerous experiments related to optimizing technical aspects for the production of timbre materials necessary for the piece's creation, on the basis of the initial idea from which the formal project began.

The sound processing algorithm (see Figure 27) is structured to highlight four aspects associated with performance practice of the 20th-century repertoire for flute, chosen for composition but transformed through modification and transfiguration of the associated timbre: (1) the harmonic nature of the spectra of long sounds, (2) microvariations in the timbre of trills and tremolo achieved with microtonal fingerings, (3) the use of vocal and breath sounds together with the flute sound, and (4) internal articulations in the tube achieved with the mouth in “closed” position. The algorithm structure, fully implemented in Max, comprises two banks of sound filters, one applied to the signal from the microphones, and the other to the signal immediately before it is sent to the DAC. Both banks of filters are “tuned” to harmonic frequencies calculated from the instrument's tempered notes. The microphone signal is also analyzed to extract data from the amplitude envelope and information related to pitch variations, while the sensor signal is analyzed to extract amplitude peaks.
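The patch itself is realized in Max and is not reproduced here; purely as an illustrative sketch of the structure just described, the following Python fragment shows one way to tune a bank of two-pole resonators to the harmonics of a tempered note and to follow the amplitude envelope of the microphone signal. All names and parameter values are hypothetical, not taken from the “Èleghos” patch.

```python
import numpy as np
from scipy.signal import lfilter

SR = 44100  # sample rate in Hz

def tempered_freq(midi_note):
    """Frequency of an equal-tempered note (A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

def resonator_coeffs(freq, q, sr=SR):
    """Two-pole resonator centered on freq, with bandwidth freq / q."""
    r = np.exp(-np.pi * (freq / q) / sr)          # pole radius
    b = [1.0 - r * r]                             # rough gain normalization
    a = [1.0, -2.0 * r * np.cos(2.0 * np.pi * freq / sr), r * r]
    return b, a

def harmonic_filter_bank(x, midi_note, n_harmonics=8, q=60.0, sr=SR):
    """Sum of resonators tuned to the harmonics of a tempered note."""
    f0 = tempered_freq(midi_note)
    y = np.zeros_like(x, dtype=float)
    for n in range(1, n_harmonics + 1):
        if n * f0 < sr / 2:                       # stay below Nyquist
            b, a = resonator_coeffs(n * f0, q, sr)
            y += lfilter(b, a, x)
    return y

def envelope_follower(x, attack=0.005, release=0.05, sr=SR):
    """One-pole amplitude envelope with separate attack/release times."""
    ga, gr = np.exp(-1.0 / (attack * sr)), np.exp(-1.0 / (release * sr))
    env, prev = np.zeros(len(x)), 0.0
    for i, v in enumerate(np.abs(x)):
        g = ga if v > prev else gr
        prev = g * prev + (1.0 - g) * v
        env[i] = prev
    return env
```
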
Figure 27

ResoFlute. Real-time algorithm for “Èleghos.”

The processing placed between the two filter banks consists of FM synthesis and comb filters whose parameters evolve according to data drawn from the real-time analysis. The feedback is activated through sequences of gestures that the score asks the flutist to execute while standing close to the aluminum pipe.
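Again as a hedged sketch rather than the actual Max processing, FM synthesis and a feedback comb filter of the kind described might be prototyped as follows; the parameter mapping from the analysis data is invented for illustration.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def fm_tone(dur_s, carrier_hz, ratio, index, sr=SR):
    """Two-oscillator FM; 'index' (a scalar or per-sample array)
    sets the modulation depth and hence the timbral brightness."""
    t = np.arange(int(dur_s * sr)) / sr
    modulator = np.sin(2.0 * np.pi * carrier_hz * ratio * t)
    return np.sin(2.0 * np.pi * carrier_hz * t + index * modulator)

def comb_filter(x, delay_s, feedback=0.7, sr=SR):
    """Feedback comb filter; resonates at multiples of 1/delay_s Hz."""
    d = max(1, int(delay_s * sr))
    y = np.asarray(x, dtype=float).copy()
    for i in range(d, len(y)):
        y[i] += feedback * y[i - d]
    return y

# Example: an envelope from the analysis stage could drive the FM index;
# here a constant index and a comb tuned near a low pipe resonance.
tone = fm_tone(2.0, carrier_hz=440.0, ratio=1.5, index=3.0)
out = comb_filter(tone, delay_s=1.0 / 92.0)
```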

Considerations on Augmented Instruments

Augmented instruments produced at CRM have used feedback “processed” in electronic, digital, and acoustic contexts, according to criteria and objectives specific to each instrument. Electroacoustic feedback is of particular interest because it is directly correlated to the physical, mechanical, and electronic features of the electroacoustic components involved, above all their capacity to sustain themselves through the loop's response. This makes the phenomenon particularly sensitive to microvariations of the system in which it is generated, variations the performer can introduce when the feedback loop is integrated with the musical instrument. Thus, including a feedback system inside the electromechanics of a musical instrument can give the new instrument a particular originality in sound production. Its timbral “vitality” depends on the variations the performer imposes on the system, to which the instrument responds adaptively, allowing substantial sound variations in response to small performance gestures.
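As general acoustical background (not stated in this form in the CRM literature), the onset of electroacoustic feedback is commonly framed by the loop-gain condition: writing $G(\omega)$ for the electronic chain (microphone, processing, amplifier, transducer) and $H(\omega)$ for the acoustic path back to the pickup, the loop self-oscillates at frequencies where

$$|G(\omega)\,H(\omega)| \ge 1 \quad \text{and} \quad \arg\bigl(G(\omega)H(\omega)\bigr) = 2\pi k, \quad k \in \mathbb{Z}.$$

Because the performer's gestures continually perturb $H(\omega)$, small physical changes shift which frequencies satisfy this condition, which is one way to read the sensitivity to microvariations described above.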

The unpredictable nature of the feedback response must be controlled by inserting signal-processing algorithms calibrated to the features of the specific instrument. The Feed-Drum and SkinAct augmented percussion instruments produce long sounds by reproducing the signal on the membrane, which is excited either by a loudspeaker placed below it or structurally, through attached vibrational actuators. Damping of the membrane's motion, and of the resulting sound, can be regulated by the performer's pressure: placing the hands on specific points of the membrane isolates particular high-frequency vibration modes and selects the amount of feedback energy. WindBack develops its feedback system directly inside the air column of a saxophone, injecting the sound produced by the performer, whereas ResoFlute achieves feedback in the air by exploiting the radiation and resonance features of the flute and the external aluminum pipe, which interact directly in the air and are regulated by the flutist's distance from the pipe.
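As a hypothetical illustration of such calibration (not CRM's actual algorithm), a minimal safeguard regulates the loop gain and softly limits the re-injected signal, letting the feedback sustain without diverging:

```python
import numpy as np

def controlled_feedback(x, delay_samples, loop_gain=0.95, ceiling=0.9):
    """Re-inject a delayed copy of the output through a gain stage and
    a tanh soft limiter, so the loop sustains but cannot run away."""
    y = np.zeros(len(x))
    for i in range(len(x)):
        fb = y[i - delay_samples] if i >= delay_samples else 0.0
        y[i] = np.tanh((x[i] + loop_gain * fb) / ceiling) * ceiling
    return y
```

In practice the delay, gain, and limiting curve would be tuned to the vibrational behavior of the specific instrument, as the text indicates.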

The augmented instruments created at CRM integrate the means of sound production and diffusion to yield “new instruments,” called “augmented” because of their timbral possibilities, acoustic diffusion, and performance techniques. Their creation rests on a strict correlation between gesture, mechanical structure, and sound, achieved by integrating acoustic features with electronic technologies. The Center constantly investigates this correlation and integration through the creative use of nonlinear phenomena in acoustic instruments, as well as through theoretical and experimental studies on the vibration of materials, resonant bodies, and reflective surfaces, all phenomena closely connected to the acoustics of musical instruments (Bianchini et al. 2019). In any event, the principal criteria that have always guided CRM in the research and creation of new prototypes are closely tied to the composer's culture and aesthetics, which express and guide the design and the creative projects, as seen in all the examples described.

Conclusion

Study and experiments on perceptual space are expressed through forms that imply a different approach to fruition and artistic creation. The relationship between artistic creator and observer has substantially changed. There is a need to create innovative expressive forms, and forms of fruition, that are coherent with the content proposed and that relate to the context in a historical perspective, for personal and community awareness. For many years, CRM has pursued this research in the conviction that active and conscious public participation can reactivate systems of social relations. Space, understood as the “context of extended fruition,” has a fundamental role because it redefines artistic spaces. Timbre, understood as an “extension of relations between sounds,” frees perception from predefined codes that presuppose a diachronic reading over time, opening onto synchronic, “associative” readings that develop spatially.

A new approach to creation must draw on scientific knowledge of acoustics and psychoacoustics, allowing “flexible” musical forms, based on complex sound nuclei with a definite style, to establish independent potential relations that influence the imagination and evoke emotions. Such musical forms must be coherent and able to adapt to their spatial dimension. New forms of adaptive art and CRM's augmented instruments allow the public and the performer to participate in the creative act by redesigning relations without essentially modifying the content proposed by the artist. Such artistic experiences have made it possible for CRM to commit to the Music Emotions project (a program at the CRM laboratories in collaboration with the psychiatric unit of Rome's Tor Vergata University Hospital) for the rehabilitation of individuals with mental health issues through integrated, interactive artistic forms and augmented instruments. The objective is to allow those with physical and mental disabilities to find a way of participating that enables them to establish relationships with others or with works of art (Capanna and Lupone 2019).

References

Bianchini, L. 2000a. “Designing a Virtual Theatrical Listening Space.” In Proceedings of the International Computer Music Conference, pp. 406–409.
Bianchini, L. 2000b. “From the Drama of Speech to the Dramaturgy of Listening: A Hypothesis of Virtual Theatre.” In Atti del Colloquio di informatica musicale, pp. 98–101.
Bianchini, L. 2010. “Scenari dell'ascolto: Le nuove forme dell'arte musicale.” In Atti del Convegno “La terra fertile,” pp. 154–157.
Bianchini, L., and M. Lupone. 1992. “The Activities of CRM: Centro Ricerche Musicali.” Leonardo Music Journal 2(1):111–113.
Bianchini, L., and S. Schiavoni. 1996. “Disegno di un teatro virtuale dell'ascolto.” In Atti del Convegno “La Terra fertile,” pp. 70–75.
Bianchini, L., et al. 2019. “Augmented Instruments at CRM—Centro Ricerche Musicali of Rome: Feed-Drum, SkinAct, WindBack and ResoFlute.” In Proceedings of the International Computer Music Conference, pp. 510–515.
Capanna, A., and M. Lupone. 2019. “Ambiente–Forma–Opera.” L'Architettura delle città 11(15):59–79.
Cianciusi, W., and L. Seno. 2012. “Feed-Drum e SkinAct: L'ultima frontiera dello strumento aumentato.” In Atti del Colloquio di informatica musicale, pp. 72–78.
De Vitis, A., M. Lupone, and A. Pellecchia. 1991. “CRM: From the Fly10 to the Fly30 System.” In Atti del Colloquio di informatica musicale, pp. 367–371.
De Vitis, A., and A. Pellecchia. 1992. “Fly30: Un sistema programmabile per l'elaborazione numerica dei segnali musicali in tempo reale.” In Atti del Convegno Nazionale di Acustica in Italia, pp. 455–459.
Inverardi, P., et al. 2012. “Ad-Opera: Music-Inspired Self-Adaptive System.” In J. Zander and P. J. Mosterman, eds. Computation for Humanity: Information Technology to Advance Society. Boca Raton, Florida: CRC Press, pp. 359–380.
Lanzalone, S. 2008a. “Suoni scolpiti e sculture sonore: Alcuni esempi di installazioni d'arte elettroacustica.” In Atti del Colloquio di informatica musicale, pp. 149–156.
Lanzalone, S. 2008b. “The ‘Suspended Clarinet’ with the ‘Uncaused Sound’: Description of a Renewed Musical Instrument.” In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 273–276.
Lanzalone, S. 2009. “Suoni scolpiti/sculture sonore: Un percorso (est)etico tra re-incarnazione e inte(g)razione.” Le arti del suono 2:109–132.
Lanzalone, S. 2012. “Clavecin électrique: Studio e realizzazione dell'opera.” In Atti del Colloquio di informatica musicale, pp. 104–111.
Lanzalone, S. 2015. “Strumenti aumentati.” In R. Spagnolo, ed. Acustica: Fondamenti e applicazioni. Turin: UTET, pp. 877–894.
Lupone, M. 1984. “Lo studio per l'informatica musicale di Roma: Il sistema per la sintesi in tempo reale.” In Atti del Colloquio di informatica musicale, pp. 62–69.
Lupone, M. 1985. “System Fly.” In Atti del Colloquio di informatica musicale, pp. 315–322.
Lupone, M. 2005. “Musica elettronica.” In S. Cingolani and R. Spagnolo, eds. Acustica musicale e architettonica. Turin: UTET, pp. 527–586.
Lupone, M. 2008. “Musica e mutazione.” HiArt 1. Available online at www.torrossa.com/en/resources/an/4534451 (subscription required). Accessed March 2021.
Lupone, M., S. Lanzalone, and L. Seno. 2015. “New Advancement of the Research on the Augmented Wind Instruments: Windback and Resoflute.” In Proceedings of the International Conference Electroacoustic Winds, pp. 156–163.
Lupone, M., and L. Seno. 2006. “Gran Cassa and the Adaptive Instrument Feed-Drum.” In Actes des Rencontres musicales pluridisciplinaires, pp. 27–36.
Lupone, M., et al. 2015. “‘Forme Immateriali’ by Michelangelo Lupone: Structure, Creation, Interaction, Evolution of a Permanent Adaptive Music Work.” In Proceedings of the International Conference Electroacoustic Winds, pp. 176–183.
Markidis, M. M., J. M. Fernandez, and G. Silvi. 2016. “Real-Time Sound Similarity Synthesis by an Ahead-of-Time Feature Extraction.” In Proceedings of the International Pure Data Convention, pp. 87–92.
Palumbi, M., and L. Seno. 1998. “Physical Modelling of Bowed Strings: A New Model and Algorithm.” In Actes des Journées d'informatique musicale, pp. G2-1–8.
Palumbi, M., and L. Seno. 1999. “Physical Modelling by Directly Solving Wave PDE.” In Proceedings of the International Computer Music Conference, pp. 325–328.
Pellecchia, A. 1991. “Real-Time DSP System Fly30.” In Proceedings of the International Workshop on Man–Machine Interaction in Live Performance, pp. 79–106.
Seno, L. 1998. “Lo strumento virtuale: Implicazioni teoriche e pratiche nell'uso dei modelli fisici.” In Atti del Colloquium “La terra fertile,” pp. 75–78.
Seno, L. 2005a. “Nuova liuteria: Caratterizzazione acusto-vibrazionale dei Planofoni; un nuovo strumento (e concetto) per la musica e l'arte contemporanee.” In Proceedings of the Annual Meeting of the Audio Engineering Society, Italian Section, paper 5009.
Seno, L. 2005b. “Modelli fisici e strumenti musicali.” In S. Cingolani and R. Spagnolo, eds. Acustica musicale e architettonica. Turin: UTET, pp. 455–494.

Author notes

All artwork in this article from the Archives of the Centro Ricerche Musicali. Used with permission.