Rodrigo F. Cádiz: 1–8 of 8 results
Computer Music Journal, 1–16. Published: 23 October 2024.
Abstract
Quadratic difference tones belong to a family of perceptual phenomena that arise from the neuromechanics of the auditory system in response to particular physical properties of sound. Although such "ghost" or "phantom" tones have long been deployed by sound artists, improvisers, and computer musicians, this article addresses a new problem: how to synthesize the complex tone needed to evoke a quadratic difference tone spectrum (QDTS) whose target fundamental and harmonic overtone series are specified. We propose a numerical algorithm that solves the problem of synthesizing a QDTS for a target distribution of amplitudes. The algorithm aims to find a solution that matches the desired spectrum as closely as possible for an arbitrary number of target harmonics. Results from experiments using different parameter settings and target distributions show that the algorithm is effective in the majority of cases, with at least 99% of cases solvable in real time. An external object for the visual programming language Max is described. We discuss musical and perceptual considerations for using the external, and we describe a range of audio examples that demonstrate the synthesis of QDTSs across different cases. As we show, the method makes it possible to match QDTSs to particular instrumental timbres with surprising efficiency. Also included is a discussion of a musical work by composer Marcin Pietruszewski that makes use of QDTS synthesis.
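The article's algorithm is not reproduced here, but a minimal sketch can illustrate the forward model such a numerical search must invert. It assumes the standard quadratic-nonlinearity result that two primaries with amplitudes a_i and a_j evoke a difference tone at |f_j − f_i| with amplitude proportional to a_i·a_j; the primary spacing (multiples of the target fundamental), the target spectrum, and the least-squares optimizer below are all illustrative choices, not the authors' method.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

F0 = 100.0           # target fundamental of the difference-tone spectrum (Hz)
N_PRIMARIES = 6      # primaries spaced F0 apart, so pair differences land on k * F0
TARGET = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])  # desired amplitudes at k * F0

def qdt_spectrum(amps):
    """Predicted quadratic-difference-tone amplitude at each harmonic k * F0.

    Assumes a memoryless quadratic nonlinearity: each primary pair (i, j)
    contributes amps[i] * amps[j] to the component at (j - i) * F0.
    """
    out = np.zeros(N_PRIMARIES - 1)
    for i, j in itertools.combinations(range(N_PRIMARIES), 2):
        out[j - i - 1] += amps[i] * amps[j]
    return out

def loss(amps):
    return np.sum((qdt_spectrum(amps) - TARGET) ** 2)

result = minimize(loss, x0=np.full(N_PRIMARIES, 0.5),
                  bounds=[(0.0, 1.0)] * N_PRIMARIES)
print("primary amplitudes:", np.round(result.x, 3))
print("evoked QDT spectrum:", np.round(qdt_spectrum(result.x), 3))
```

In a full system the optimized amplitudes would drive sinusoids at actual primary frequencies (e.g., a carrier plus k·F0); the published algorithm additionally handles arbitrary numbers of target harmonics under real-time constraints.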
Computer Music Journal (2023) 47 (1): 22–43. Published: 01 March 2023.
Abstract
One of the main research areas in the field of musical human–AI interactivity is how to incorporate expressiveness into interactive digital musical instruments (DMIs). In this study we analyzed gestures rooted in expressiveness by using AI techniques that can enhance the mapping stage of multitouch DMIs. This approach not only considers the geometric information of various gestures but also incorporates expressiveness, which is a crucial element of musicality. Our focus is specifically on multitouch DMIs, and we use expressive descriptors and a fuzzy logic model to mathematically analyze performers' finger movements. By incorporating commonly used features from the literature and adapting some of Rudolf Laban's descriptors—originally intended for full-body analysis—to finger-based multitouch systems, we aim to enrich the mapping process. To achieve this, we developed an AI algorithm based on a fuzzy control system that takes these descriptors as inputs and maps them to synthesis variables. This tool empowers DMI designers to define their own mapping rules based on expressive gestural descriptions, using musical metaphors in a simple and intuitive way. Through a user evaluation, we demonstrate the effectiveness of our approach in capturing and representing gestural expressiveness in the case of multitouch DMIs.
Includes: Multimedia, Supplementary data
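To make the mapping idea concrete, here is a self-contained sketch of a Mamdani-style fuzzy rule system mapping one gestural descriptor to one synthesis variable. The descriptor "speed," the rule set, and the filter-cutoff output are hypothetical examples standing in for the article's expressive descriptors, not the descriptors or rules the authors used.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function evaluated at point x."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical expressive descriptor: normalized finger speed in [0, 1].
speed_sets = {"slow": (-0.5, 0.0, 0.5), "moderate": (0.0, 0.5, 1.0), "fast": (0.5, 1.0, 1.5)}

# Hypothetical synthesis variable: filter cutoff in Hz, with fuzzy output sets.
cutoff_sets = {"dark": (100, 400, 1500), "mid": (400, 1500, 5000), "bright": (1500, 5000, 9000)}

# Designer-defined rules in the spirit of "if the gesture is fast, the sound is bright".
rules = [("slow", "dark"), ("moderate", "mid"), ("fast", "bright")]

def map_gesture(speed, resolution=500):
    """Mamdani inference with centroid defuzzification."""
    xs = np.linspace(100, 9000, resolution)
    aggregated = np.zeros_like(xs)
    for in_set, out_set in rules:
        firing = trimf(speed, *speed_sets[in_set])  # rule activation level
        out_mf = np.array([trimf(x, *cutoff_sets[out_set]) for x in xs])
        aggregated = np.maximum(aggregated, np.minimum(firing, out_mf))  # clip, then OR
    if aggregated.sum() == 0:
        return float(xs.mean())
    return float((xs * aggregated).sum() / aggregated.sum())  # centroid of the output set

print(f"cutoff for a fast gesture: {map_gesture(0.85):.0f} Hz")
```

The appeal of this structure, as the abstract notes, is that the mapping is stated as human-readable rules rather than as opaque coefficients, so a DMI designer can edit musical behavior directly.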
Computer Music Journal (2022) 46 (1-2): 5–7. Published: 01 June 2022.
Computer Music Journal (2022) 46 (1-2): 136–140. Published: 01 June 2022.
Computer Music Journal (2021) 45 (2): 48–66. Published: 01 June 2021.
Abstract
This article presents an extension of Iannis Xenakis's Dynamic Stochastic Synthesis (DSS) called Diffusion Dynamic Stochastic Synthesis (DDSS). This extension solves a diffusion equation whose solutions are used to map particle positions to the amplitudes of several breakpoints in a waveform, following the traditional DSS approach of directly shaping the waveform of a sound. One significant difference between DSS and DDSS is that the latter includes a drift in the Brownian trajectory that each breakpoint traces through time. DDSS can also be used in other ways, such as controlling the amplitudes of an oscillator bank in additive synthesis, thereby shaping the spectrum rather than the waveform. This second modality goes against Xenakis's original desire to depart from classical Fourier synthesis. Spectral analyses of the DDSS waveform approach, implemented in the Max software environment, are discussed and compared with those of a simplified version of DSS; despite the similarity in the overall form of the frequency spectrum, noticeable differences are found. In addition to the Max implementation of the basic DDSS algorithm, a MIDI-controlled synthesizer is also presented here. With DDSS we introduce a real physical process, in this case diffusion, into traditional stochastic synthesis. This sort of sonification can suggest models of sound synthesis that are more complex and grounded in physical concepts.
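A minimal sketch of the breakpoint-walk core that DDSS extends may help: each waveform breakpoint takes a Brownian step, here with the drift term that distinguishes DDSS from DSS. Step sizes, bounds, and the drift value are illustrative, and clipping stands in for proper barriers; this is not the solution of the authors' diffusion equation.

```python
import numpy as np

N_BREAKPOINTS = 12        # amplitude breakpoints per waveform period
DRIFT = 0.002             # deterministic bias per step: the drift DDSS adds to DSS
SIGMA = 0.02              # standard deviation of the Brownian step
SAMPLES_PER_SEGMENT = 32  # output samples between consecutive breakpoints
N_PERIODS = 200

rng = np.random.default_rng(0)
amps = rng.uniform(-0.5, 0.5, N_BREAKPOINTS)  # initial breakpoint amplitudes

periods = []
for _ in range(N_PERIODS):
    # Each breakpoint takes a Brownian step plus drift; clipping to [-1, 1]
    # stands in for the reflecting barriers of a full implementation.
    amps = np.clip(amps + DRIFT + rng.normal(0.0, SIGMA, N_BREAKPOINTS), -1.0, 1.0)
    # Draw one period by linear interpolation between breakpoints.
    pts = np.append(amps, amps[0])  # close the cycle
    xs = np.arange(N_BREAKPOINTS * SAMPLES_PER_SEGMENT) / SAMPLES_PER_SEGMENT
    periods.append(np.interp(xs, np.arange(N_BREAKPOINTS + 1), pts))

signal = np.concatenate(periods)  # slowly evolving stochastic waveform
```

Setting DRIFT to zero recovers plain drift-free Brownian breakpoints; a nonzero value biases every breakpoint's trajectory, which is the qualitative behavior the abstract attributes to DDSS.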
Computer Music Journal (2014) 38 (4): 5–23. Published: 01 December 2014.
Abstract
This article describes methods of sound synthesis based on auditory distortion products, often called combination tones. In 1856, Helmholtz was the first to identify sum and difference tones as products of auditory distortion. Today this phenomenon is well studied in the context of otoacoustic emissions, and the “distortion” is understood as a product of what is termed the cochlear amplifier. These tones have had a rich history in the music of improvisers and drone artists. Until now, the use of distortion tones in technological music has largely been rudimentary and dependent on very high amplitudes in order for the distortion products to be heard by audiences. Discussed here are synthesis methods to render these tones more easily audible and lend them the dynamic properties of traditional acoustic sound, thus making auditory distortion a practical domain for sound synthesis. An adaptation of single-sideband synthesis is particularly effective for capturing the dynamic properties of audio inputs in real time. Also presented is an analytic solution for matching up to four harmonics of a target spectrum. Most interestingly, the spatial imagery produced by these techniques is very distinctive, and over loudspeakers the normal assumptions of spatial hearing do not apply. Audio examples are provided that illustrate the discussion.
Includes: Multimedia, Supplementary data
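A sketch of the single-sideband idea mentioned above, assuming the standard Hilbert-transform SSB modulator: the input spectrum is shifted up around a carrier and the carrier itself is added, so that quadratic distortion in the ear between carrier and sideband re-creates the input as a difference-tone percept. The carrier frequency, gains, and test signal are illustrative, not the article's exact settings.

```python
import numpy as np
from scipy.signal import hilbert

SR = 44100
CARRIER = 4000.0                       # illustrative carrier; primaries sit above it
t = np.arange(SR) / SR                 # 1 s of audio
x = 0.5 * np.sin(2 * np.pi * 220 * t)  # stand-in for a live input signal

# Analytic signal x + j*H{x}; multiplying by exp(j*2*pi*CARRIER*t) and taking
# the real part shifts the input's whole spectrum up by CARRIER (upper sideband).
sideband = np.real(hilbert(x) * np.exp(2j * np.pi * CARRIER * t))

# Adding the carrier itself yields primary pairs separated by the input's own
# frequencies, so the ear's quadratic distortion produces difference tones that
# mirror the input spectrum (at sufficient playback level).
primaries = 0.5 * sideband + 0.5 * np.cos(2 * np.pi * CARRIER * t)
```

Because the sideband tracks the input sample by sample, amplitude and spectral changes in the source carry over to the distortion-product percept in real time, which is the dynamic quality the abstract emphasizes.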
Computer Music Journal (2014) 38 (4): 53–67. Published: 01 December 2014.
Abstract
This article describes a synthesis technique based on the sonification of the dynamic behavior of a quantum particle enclosed in an infinite square well. More specifically, we sonify the momentum distribution of a one-dimensional Gaussian bouncing wave packet model. We chose this particular case for its relative simplicity and interesting dynamic behavior, which make it suitable for a novel sonification mapping that can be applied to standard synthesis techniques, resulting in the generation of appealing sounds. In addition, this sonification might provide useful insight into the behavior of the quantum particle. In particular, the model exhibits quantum revivals, minimizes uncertainty, and shows similarities to a classical bouncing ball. The proposed model has been implemented in real time in both the Max/MSP and Pure Data environments. The algorithm is based on concepts of additive synthesis, in which each oscillator describes one of the eigenfunctions that characterize the state evolution of the wave packet. We also provide an analysis of the sounds produced by the model from both a physical and a perceptual point of view.
Includes: Multimedia, Supplementary data
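A sketch of the underlying physics-to-amplitude mapping, assuming the textbook infinite-square-well eigenstates: a Gaussian packet is expanded in eigenfunctions, evolved in time, and its momentum distribution is binned into weights for an oscillator bank. The packet parameters, bin count, and bin-to-oscillator mapping are illustrative, not the article's implementation.

```python
import numpy as np

L, N_MODES, NX = 1.0, 64, 512
HBAR = MASS = 1.0                        # natural units
x = np.linspace(0.0, L, NX)
ns = np.arange(1, N_MODES + 1)

# Textbook infinite-square-well eigenfunctions and energies.
phis = np.array([np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L) for n in ns])
energies = (ns * np.pi * HBAR) ** 2 / (2.0 * MASS * L ** 2)

# Gaussian packet with a momentum kick so that it bounces between the walls
# (position, width, and kick are illustrative parameters).
x0, sigma, k0 = 0.3 * L, 0.04 * L, 60.0
psi0 = np.exp(-((x - x0) ** 2) / (4 * sigma ** 2) + 1j * k0 * x)
psi0 /= np.sqrt(np.trapz(np.abs(psi0) ** 2, x))

coeffs = np.array([np.trapz(p * psi0, x) for p in phis])  # c_n = <phi_n|psi0>

def oscillator_amplitudes(t, n_oscillators=16):
    """Momentum-distribution weights driving an additive-synthesis bank at time t."""
    psi_t = (coeffs * np.exp(-1j * energies * t / HBAR)) @ phis  # eigenexpansion
    momentum = np.abs(np.fft.fft(psi_t)[: NX // 2]) ** 2         # |Phi(p, t)|^2
    bins = momentum.reshape(n_oscillators, -1).mean(axis=1)      # one bin per oscillator
    return bins / bins.max()

print(np.round(oscillator_amplitudes(t=0.01), 3))
```

Interference among the eigenstate phases makes the binned momentum weights evolve in time, producing the revivals and bouncing behavior that the sonification renders audible.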
Computer Music Journal (2006) 30 (1): 67–82. Published: 01 March 2006.