In recent years there has been a proliferation of sound-based algorithmic practices in visual programming environments such as Max or Pure Data, environments that enforce a real-time paradigm for sound synthesis and processing. By contrast, in these same environments, combining sounds in an out-of-time manner proves surprisingly complex: simple editing operations are awkward, and more elaborate mechanisms are nearly impossible to achieve.

This article introduces the ears library: a collection of externals for Max designed to streamline sound-based offline algorithmic practices. As the fourth-born in the bach family of libraries for computer-aided composition (Agostini and Ghisi 2013), ears combines seamlessly with the bach ecosystem and complies with its programming patterns. It contains tools to manipulate sound buffers for input, formatting, editing, mixing, signal processing, time and pitch manipulation, spectral analysis and synthesis, partial tracking, feature extraction, audio compression, waveset manipulation, score rendering, spatialization, and output. Whereas its older siblings in the bach family were intended to make note-based “compositional” practice more performative, ears is designed to make sound-based “performative” practices more “compositional.”
