In this paper, we present a set of time-domain algorithms for the low- and high-level analysis of audio streams. These include spectral centroid, noisiness, and spectral spread at the low level, and dynamicity, heterogeneity, and complexity at the high level. The low-level algorithms provide a continuous measure of the features and can operate with short analysis frames. The high-level algorithms, on the other hand, are original designs, informed by both perception and complexity theory, for the analysis of musically meaningful information in both short sounds and articulated streams with long-term nontrivial variations.
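To illustrate the kind of time-domain estimation involved (this sketch is not the paper's algorithm; the function name, scaling, and estimator choice are illustrative assumptions), a brightness proxy related to the spectral centroid can be computed per frame from the ratio of the first-difference energy to the signal energy, without any Fourier transform:

```python
import math

def td_brightness_hz(frame, sample_rate):
    """Illustrative time-domain brightness proxy (hypothetical helper).

    Estimates a power-weighted mean frequency in Hz from the ratio of
    the first-difference RMS to the signal RMS. For a pure sine well
    below Nyquist, this returns approximately the sine's frequency.
    """
    if len(frame) < 2:
        return 0.0
    energy = sum(x * x for x in frame)
    if energy == 0.0:
        return 0.0  # silent frame: no meaningful brightness estimate
    diff_energy = sum((frame[n] - frame[n - 1]) ** 2
                      for n in range(1, len(frame)))
    # The first difference approximates the derivative scaled by 1/fs;
    # the RMS ratio therefore maps to an angular-frequency estimate.
    return (sample_rate / (2.0 * math.pi)) * math.sqrt(diff_energy / energy)
```

Because the estimate needs only a running sum of squared samples and squared differences, it can run continuously over very short frames, which is the property the low-level algorithms above exploit.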
These algorithms are suitable for implementing real-time audio analysis in diverse live performance setups that require the simultaneous extraction of information from several streams. For example, the low-level algorithms can be deployed in large audio networks of adaptive agents, or in small-to-large ensembles to analyze various characteristics of the instruments for computer-assisted performance. Furthermore, the high-level algorithms can be implemented as part of fitness functions in music systems based on evolutionary algorithms that follow musically informed criteria, or as analysis tools to assess the quality of some characteristics of a musical output.
Musical applications of these algorithms can be found in a companion paper in this issue of Computer Music Journal: “Complex Adaptation in Audio Feedback Networks for the Synthesis of Music and Sounds.”