1-6 of 6
Rodrigo F. Cádiz
Journal Articles
Computer Music Journal (2021) 45 (2): 48–66.
Published: 01 June 2021
Abstract
This article presents an extension of Iannis Xenakis's Dynamic Stochastic Synthesis (DSS) called Diffusion Dynamic Stochastic Synthesis (DDSS). This extension solves a diffusion equation whose solutions can be used to map particle positions to amplitude values of several breakpoints in a waveform, following traditional concepts of DSS by directly shaping the waveform of a sound. One significant difference between DSS and DDSS is that the latter includes a drift in the Brownian trajectories that each breakpoint experiences through time. Diffusion Dynamic Stochastic Synthesis can also be used in other ways, such as to control the amplitude values of an oscillator bank using additive synthesis, shaping in this case the spectrum rather than the waveform. This second modality goes against Xenakis's original desire to depart from classical Fourier synthesis. The results of spectral analyses of the DDSS waveform approach, implemented using the software environment Max, are discussed and compared with those of a simplified version of DSS; despite the similarity in the overall form of the frequency spectrum, noticeable differences are found. In addition to the Max implementation of the basic DDSS algorithm, a MIDI-controlled synthesizer is also presented here. With DDSS we introduce a real physical process, in this case diffusion, into traditional stochastic synthesis. This sort of sonification can suggest models of sound synthesis that are more complex and grounded in physical concepts.
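The breakpoint scheme the abstract describes can be sketched in a few lines: each waveform breakpoint takes a Brownian step with an added drift term, and the waveform for one period is interpolated between the breakpoints. This is only an illustrative sketch, not the article's Max implementation; the parameter names are hypothetical, and clipping to [-1, 1] stands in for whatever barrier policy the actual algorithm uses.

```python
import numpy as np

def ddss_cycle(breakpoints, drift, sigma, dt, rng, samples_per_cycle=64):
    """One synthesis cycle: advance each breakpoint along a drifted
    Brownian step, confine it to [-1, 1], interpolate one waveform period."""
    steps = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(len(breakpoints))
    breakpoints = np.clip(breakpoints + steps, -1.0, 1.0)
    # Piecewise-linear interpolation between breakpoints gives one period.
    x = np.linspace(0, len(breakpoints) - 1, samples_per_cycle)
    wave = np.interp(x, np.arange(len(breakpoints)), breakpoints)
    return breakpoints, wave

rng = np.random.default_rng(0)
bp = np.zeros(8)                      # eight waveform breakpoints
cycles = []
for _ in range(100):                  # 100 periods of evolving waveform
    bp, wave = ddss_cycle(bp, drift=0.01, sigma=0.2, dt=1.0, rng=rng)
    cycles.append(wave)
signal = np.concatenate(cycles)
```

With a positive drift, the breakpoints wander toward the upper barrier over successive cycles, which is the qualitative difference from driftless Brownian motion in classical DSS.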
Journal Articles
Computer Music Journal (2014) 38 (4): 5–23.
Published: 01 December 2014
Abstract
This article describes methods of sound synthesis based on auditory distortion products, often called combination tones. In 1856, Helmholtz was the first to identify sum and difference tones as products of auditory distortion. Today this phenomenon is well studied in the context of otoacoustic emissions, and the “distortion” is understood as a product of what is termed the cochlear amplifier. These tones have had a rich history in the music of improvisers and drone artists. Until now, the use of distortion tones in technological music has largely been rudimentary and dependent on very high amplitudes in order for the distortion products to be heard by audiences. Discussed here are synthesis methods to render these tones more easily audible and lend them the dynamic properties of traditional acoustic sound, thus making auditory distortion a practical domain for sound synthesis. An adaptation of single-sideband synthesis is particularly effective for capturing the dynamic properties of audio inputs in real time. Also presented is an analytic solution for matching up to four harmonics of a target spectrum. Most interestingly, the spatial imagery produced by these techniques is very distinctive, and over loudspeakers the normal assumptions of spatial hearing do not apply. Audio examples are provided that illustrate the discussion.
Includes: Multimedia, Supplementary data
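The mechanism behind these combination tones can be illustrated numerically: passing two pure tones through a memoryless quadratic nonlinearity (a crude stand-in for the cochlear amplifier, not the article's synthesis method) creates a spectral component at the difference frequency f2 - f1 that is absent from the linear signal. The tone frequencies and the distortion coefficient below are illustrative choices.

```python
import numpy as np

fs, dur = 8000, 1.0
t = np.arange(int(fs * dur)) / fs
f1, f2 = 1000.0, 1200.0                 # two primary tones
s = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Quadratic distortion: the cross term 2*sin(a)*sin(b) contains
# cos(a - b), i.e. energy at the difference frequency f2 - f1.
distorted = s + 0.5 * s**2

mag_lin = np.abs(np.fft.rfft(s))
mag_dist = np.abs(np.fft.rfft(distorted))
bin_diff = int(f2 - f1)                  # 1-Hz bin spacing at this length
```

Inspecting `mag_dist[bin_diff]` versus `mag_lin[bin_diff]` shows the 200-Hz difference tone appearing only after the nonlinearity, which is why listeners perceive a tone the loudspeakers never emit.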
Journal Articles
Computer Music Journal (2014) 38 (4): 53–67.
Published: 01 December 2014
Abstract
This article describes a synthesis technique based on the sonification of the dynamic behavior of a quantum particle enclosed in an infinite square well. More specifically, we sonify the momentum distribution of a one-dimensional Gaussian bouncing wave packet model. We have chosen this particular case because of its relative simplicity and interesting dynamic behavior, which makes it suitable for a novel sonification mapping that can be applied to standard synthesis techniques, resulting in the generation of appealing sounds. In addition, this sonification might provide useful insight into the behavior of the quantum particle. In particular, this model exhibits quantum revivals, minimizes uncertainty, and exhibits similarities to the case of a classical bouncing ball. The proposed model has been implemented in real time in both the Max/MSP and the Pure Data environments. The algorithm is based on concepts of additive synthesis where each oscillator describes the eigenfunctions that characterize the state evolution of the wave packet. We also provide an analysis of the sounds produced by the model from both a physical and a perceptual point of view.
Includes: Multimedia, Supplementary data
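The additive mapping the abstract outlines can be sketched as follows: expand a Gaussian wave packet over the infinite square well's eigenfunctions, use the expansion coefficients as oscillator amplitudes, and let the n-squared eigenenergies set the partial frequencies. The packet parameters and the audio-frequency scaling f0 * n^2 are illustrative assumptions, not the paper's actual mapping.

```python
import numpy as np

L, N = 1.0, 16                          # well width, number of eigenstates
x = np.linspace(0, L, 1024)
dx = x[1] - x[0]

# Gaussian wave packet: center x0, width sigma, mean momentum k0.
x0, sigma, k0 = 0.3, 0.05, 40.0
psi0 = np.exp(-(x - x0)**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)   # normalize

# Eigenfunctions phi_n(x) = sqrt(2/L) sin(n pi x / L); energies E_n ~ n^2.
n = np.arange(1, N + 1)
phi = np.sqrt(2 / L) * np.sin(np.outer(n, np.pi * x / L))
c = (phi * psi0).sum(axis=1) * dx                # expansion coefficients

# Additive synthesis: one oscillator per eigenstate, amplitude |c_n|,
# partial frequency proportional to the eigenenergy (n^2 scaling).
fs, dur, f0 = 44100, 0.5, 55.0
t = np.arange(int(fs * dur)) / fs
audio = sum(abs(cn) * np.sin(2 * np.pi * f0 * k**2 * t) for cn, k in zip(c, n))
audio /= np.max(np.abs(audio))
```

Because the partials sit at n^2 multiples of the fundamental rather than at integer harmonics, the result is inharmonic, which is part of what gives this sonification its character.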
Journal Articles
Computer Music Journal (2006) 30 (1): 67–82.
Published: 01 March 2006