Gérard Assayag: 1–5 of 5 results
Journal Articles
Publisher: Journals Gateway
Computer Music Journal (2022) 46 (4): 7–25.
Published: 01 December 2022
Abstract
Somax2 is an artificial intelligence (AI)-based multiagent system for human–machine “coimprovisation” that generates stylistically coherent streams while continuously listening and adapting to musicians or other agents. The model on which it is based can be used with little configuration to interact with humans in full autonomy, but it also allows fine real-time control of its generative processes and interaction strategies, behaving in that case more like a “smart” digital instrument. An offspring of the OMax system, conceived at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), the Somax2 environment is part of the European Research Council Raising Cocreativity in Cyber–Human Musicianship (REACH) project, which studies distributed creativity as a general template for symbiotic interaction between humans and digital systems. It fosters a mixed musical reality involving cocreative AI agents. The REACH project puts forward the idea that cocreativity in cyber–human systems results from the emergence of complex joint behavior, produced by interaction and featuring cross-learning mechanisms. Somax2 is a first step toward this ideal and already demonstrates full-scale achievements. This article describes Somax2 extensively, from its theoretical model to its system architecture, through its listening and learning strategies, representation spaces, and interaction policies.
Computer Music Journal (2019) 43 (2-3): 109–124.
Published: 01 June 2019
Abstract
This article focuses on learning the hierarchical structure of what we call a “temporal scenario” (for instance, a chord progression) to perform automatic improvisation consistently over several different time scales. We first present a way to represent hierarchical structures with a phrase structure grammar. Such a grammar enables us to analyze a scenario at several levels of organization, creating a “multilevel scenario.” We then develop a method to automatically induce this grammar from a corpus, based on sequence selection with mutual information. We applied this method to a corpus of transcribed improvisations based on the chord sequence of George Gershwin's “I Got Rhythm,” including chord substitutions. From these we obtained multilevel scenarios similar to the analyses performed by professional musicians. We then present a novel heuristic approach that exploits the multilevel structure of a scenario to guide the improvisation with anticipatory behavior, within an improvisation paradigm driven by a factor oracle. This method ensures consistency of the improvisation with regard to the global form, and it opens up possibilities when playing on chords that do not exist in memory. The system was evaluated by professional improvisers during listening sessions and received excellent feedback.
Includes: Multimedia, Supplementary data
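The factor oracle cited in these abstracts is an automaton built online over a sequence, whose forward transitions and suffix links support navigation through the sequence's repeated factors. As background only (not the code of the systems described above), here is a minimal sketch of the standard online construction by Allauzen, Crochemore, and Raffinot; the function name and data layout are illustrative assumptions:

```python
def build_factor_oracle(seq):
    """Online factor-oracle construction: one state per symbol plus an
    initial state 0, forward transitions, and suffix links."""
    n = len(seq)
    trans = [dict() for _ in range(n + 1)]  # trans[i][symbol] -> target state
    sfx = [-1] * (n + 1)                    # suffix link of each state
    for i, sigma in enumerate(seq):
        new = i + 1
        trans[i][sigma] = new               # factor transition i -> i+1
        k = sfx[i]
        # Walk up the suffix links, adding shortcut transitions by sigma.
        while k > -1 and sigma not in trans[k]:
            trans[k][sigma] = new
            k = sfx[k]
        sfx[new] = 0 if k == -1 else trans[k][sigma]
    return trans, sfx
```

Improvisation systems of this family typically navigate such an oracle by alternating forward transitions (literal continuation of the learned memory) with suffix-link jumps (recombination at stylistically similar points), which is what scenario or model guidance steers.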
Computer Music Journal (2018) 42 (2): 52–66.
Published: 01 June 2018
Abstract
This article presents two methods for generating automatic improvisation by training over multidimensional sequences. We consider musical features such as melody, harmony, and timbre as dimensions. We first present a system combining interpolated probabilistic models with a factor oracle. The probabilistic models are trained on a corpus of musical works to learn the correlations between dimensions, and they are used to guide navigation in the factor oracle to ensure a coherent improvisation. Improvisations are thus created with the intuition of a context enriched by multidimensional knowledge. We then introduce a system that creates multidimensional improvisations based on communication between dimensions via probabilistic message passing. This communication induces anticipatory behavior on each dimension, influenced by the others, creating a consistent multidimensional improvisation. Both systems were evaluated by professional improvisers during listening sessions. Overall, the systems received good feedback and showed encouraging results: first, on how multidimensional knowledge can improve navigation in the factor oracle and, second, on how communication through message passing can emulate the interactivity between dimensions or musicians.
Includes: Supplementary data
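The interpolation idea in the abstract above — combining per-dimension probabilistic models to score a candidate continuation — can be sketched as follows. This is a toy illustration under assumed data shapes (bigram dictionaries per dimension, linear interpolation weights), not the evaluated system:

```python
# Hypothetical per-dimension bigram models: (previous, next) -> probability.
models = {
    "melody":  {("C", "D"): 0.6, ("C", "E"): 0.4},
    "harmony": {("Cmaj", "G7"): 0.7, ("Cmaj", "Dm7"): 0.3},
}
weights = {"melody": 0.5, "harmony": 0.5}  # interpolation coefficients, sum to 1

def interpolated_score(context, candidate):
    """Score a candidate next event by linearly interpolating the
    probability each dimension's model assigns to it; unseen pairs
    get a small floor probability."""
    return sum(w * models[d].get((context[d], candidate[d]), 1e-9)
               for d, w in weights.items())

context   = {"melody": "C", "harmony": "Cmaj"}
candidate = {"melody": "D", "harmony": "G7"}
score = interpolated_score(context, candidate)  # 0.5*0.6 + 0.5*0.7 = 0.65
```

In a factor-oracle setting, a score of this kind would rank the transitions available from the current oracle state, so that the chosen continuation is plausible in every dimension at once rather than in one dimension only.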
Computer Music Journal (2013) 37 (2): 61–72.
Published: 01 June 2013
Abstract
In this article, we consider the possibility of mixing two main paradigms of electroacoustic music: the writing-oriented and the performance-oriented paradigms. We show that these two opposing paradigms are the consequence of two corresponding conceptions of time. In addition, we assume that the temporal aspects of a performer's interpretation of a musical composition can be linked to both paradigms. Based on this theoretical study, we propose a formalism for composing pieces of electroacoustic music that can be interpreted in performance.
Computer Music Journal (1999) 23 (3): 59–72.
Published: 01 September 1999