Abstract

Over the last five years, KIMA, an art and research project on sound and vision, has investigated visual properties of sound. Previous iterations of KIMA focused on digital representations of cymatics—physical sound patterns—as media for performance. The most recent development incorporated neural networks and machine learning strategies to explore visual expressions of sound in participatory music creation. The project, displayed on a 360-degree canvas at the London Roundhouse, prompted the audience to explore their own voices as intelligent, real-time visual representations. Machine learning algorithms played a key role in the meaningful interpretation of sound as visual form. The resulting immersive performance turned the audience into cocreators of the piece.