Abstract
Gesture-to-sound mapping is generally defined as the association between gestural and sound parameters. This article describes an approach that brings forward the perception–action loop as a fundamental design principle for gesture–sound mapping in digital musical instruments. Our approach considers the process of listening as the foundation—and the first step—in the design of action–sound relationships. In this design process, the relationship between action and sound is derived from actions that can be perceived in the sound. Building on previous work on listening modes and gestural descriptions, we distinguish between three mapping strategies: instantaneous, temporal, and metaphorical. Our approach makes use of machine-learning techniques to build prototypes, from digital musical instruments to interactive installations. Four different examples of scenarios and prototypes are described and discussed.