In this article, we introduce the coadaptive audiovisual instrument CAVI. The instrument uses deep learning to generate control signals from muscle and motion data captured from a performer's actions; the generated signals drive time-based live sound-processing modules. How does a performer perceive such an instrument? Does it feel like a machine learning–based musical tool, or is it an actor with the potential to become a musical partner? We report on an evaluation of CAVI after it had been used in two public performances. The evaluation is based on interviews with the performers, audience questionnaires, and the creator's self-analysis. Our findings suggest that whether CAVI is perceived as a tool or as an actor correlates with the performer's sense of agency. This perceived agency changes throughout a performance, shaped by several factors: perceived musical coordination, the balance between surprise and familiarity, a shared "common sense," and the physical characteristics of the performance setting.