The situated spatial presence of musical instruments has been well studied in acoustics and music perception research, but it has not yet been a focus of human–AI interaction. We address this gap by seeking to reembody interactive electronics using data derived from natural acoustic phenomena. Two musical works, composed for human soloist and computer-generated live electronics, are intended to situate the listener in an immersive sonic environment in which real and virtual sources blend seamlessly. To this end, we experimented with two contrasting reproduction setups: a surrounding Ambisonic loudspeaker dome and a compact spherical loudspeaker array for radiation synthesis. A large database of measured radiation patterns of orchestral instruments served as a training set for machine learning models that control spatially rich 3-D patterns for electronic sounds. During performance, these patterns are controlled in response to live sounds captured with a spherical microphone array; the captured sounds also serve to train computer models of improvisation and to trigger corpus-based spatial synthesis. We show how AI techniques make complex, multidimensional, spatial data usable in the context of computer-assisted composition and human–computer interactive improvisation.
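To make the corpus-based pipeline concrete, the following is a minimal sketch, not the authors' implementation: it assumes the measured radiation patterns are stored as spherical-harmonic coefficient vectors, each paired with a timbre descriptor, and that a live feature extracted from the spherical microphone array selects the closest pattern by nearest-neighbor lookup. All names, dimensions, and the random stand-in data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical corpus: 100 measured radiation patterns, each stored as
# 16 spherical-harmonic coefficients (third-order Ambisonics), paired
# with a 12-dimensional timbre descriptor (e.g., MFCC-like features).
corpus_patterns = rng.standard_normal((100, 16))
corpus_features = rng.standard_normal((100, 12))

def match_pattern(live_feature: np.ndarray) -> np.ndarray:
    """Return the radiation pattern whose stored descriptor is closest
    (Euclidean distance) to the live input feature."""
    dists = np.linalg.norm(corpus_features - live_feature, axis=1)
    return corpus_patterns[np.argmin(dists)]

# A feature vector extracted from the live spherical-microphone signal
# would replace this random stand-in during performance.
live = rng.standard_normal(12)
pattern = match_pattern(live)
print(pattern.shape)  # 16 SH coefficients to drive the loudspeaker array
```

In a real system the lookup could be replaced by the learned models the abstract mentions; the point here is only the data flow from live feature to a spatial radiation pattern.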