1–4 of 4 results for Dinesh K. Pai
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2007) 16 (1): 84–99.
Published: 01 February 2007
Abstract
We present a technique to facilitate the creation of constantly changing, randomized audio streams from samples of source material. A core motivation is to make it easier to quickly create soundscapes for virtual environments and other scenarios where long streams of audio are used. While mostly in the background, these streams are vital for the creation of mood and realism in these types of applications. Our approach is to extract the component parts of sampled audio signals, and use them to resynthesize a continuous audio stream of indeterminate length. An automatic segmentation algorithm involving wavelets is used to split the input signal into syllable-like audio segments that we call “natural grains.” For each grain, a table of similarity between it and all the other grains is constructed. The grains are then output in a continuous stream, with the next grain being chosen from among those other grains which best follow from it. Using this sampling-resynthesis technique, we can construct an infinite number of variations on the original signal with a minimum amount of interaction. An interface for the manipulation and playback of several of these streams is provided to facilitate building complex audio environments, and is made available for online experimentation at www.cs.ubc.ca/labs/lci/naturalgrains/.
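The pipeline this abstract describes (segment into grains, build a pairwise similarity table, then stream grains by repeatedly choosing a good successor) can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: the paper segments with a wavelet-based algorithm and uses its own similarity measure, whereas this toy uses a one-number "feature" per grain and absolute difference.

```python
import random

# Toy stand-in: one scalar "feature" per grain; the paper's similarity
# measure and wavelet-based segmentation are not reproduced here.
def build_similarity(features):
    """Table of pairwise distances between grain feature vectors."""
    n = len(features)
    return [[abs(features[i] - features[j]) for j in range(n)]
            for i in range(n)]

def stream_grains(table, start, length, k=2, rng=random):
    """Emit a grain index sequence of the requested length; each next
    grain is drawn from the k grains most similar to the current one,
    so the stream can run indefinitely while staying varied."""
    seq = [start]
    cur = start
    for _ in range(length - 1):
        candidates = sorted(
            (j for j in range(len(table)) if j != cur),
            key=lambda j: table[cur][j],
        )[:k]
        cur = rng.choice(candidates)
        seq.append(cur)
    return seq

feats = [0.0, 0.1, 0.9, 1.0, 0.5]   # hypothetical per-grain features
table = build_similarity(feats)
seq = stream_grains(table, start=0, length=10)
```

Because the successor is sampled from the best-matching grains rather than fixed, every playback of the same source material yields a different stream.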
Presence: Teleoperators and Virtual Environments (2004) 13 (1): 99–111.
Published: 01 February 2004
Abstract
We demonstrate a method for efficiently rendering the audio generated by graphical scenes with a large number of sounding objects. This is achieved by using modal synthesis for rigid bodies and rendering only those modes that we judge to be audible to a user observing the scene. We show how excitations of modes can be estimated and inaudible modes eliminated based on the masking characteristics of the human ear. We describe a novel technique for generating contact events by performing closed-form particle simulation and collision detection with the aid of programmable graphics hardware. The effectiveness of our system is shown in the context of suitably complex simulations.
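The mode-culling idea in this abstract can be sketched in a few lines. Note the hedge: the paper eliminates modes using the masking characteristics of the human ear, whereas this toy uses only a crude fixed audibility threshold relative to the loudest mode; the function name and parameter values are assumptions.

```python
import math

def audible_modes(modes, threshold_db=-60.0):
    """modes: list of (freq_hz, gain, decay_rate) triples.
    Keep only modes whose initial level is within threshold_db of the
    loudest mode -- a crude stand-in for a psychoacoustic masking model."""
    peak = max(g for _, g, _ in modes)
    kept = []
    for f, g, d in modes:
        level_db = 20.0 * math.log10(g / peak)
        if level_db >= threshold_db:
            kept.append((f, g, d))
    return kept

modes = [(440.0, 1.0, 3.0), (880.0, 0.05, 5.0), (1320.0, 1e-5, 8.0)]
kept = audible_modes(modes)   # the -100 dB mode is culled
```

Synthesizing only the surviving modes is what makes scenes with many sounding objects affordable: each culled mode is one fewer decaying sinusoid to evaluate per sample.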
Presence: Teleoperators and Virtual Environments (2000) 9 (4): 399–410.
Published: 01 August 2000
Abstract
Contact sounds can provide important perceptual cues in virtual environments. We investigated the relation between material perception and variables that govern the synthesis of contact sounds. A shape-invariant, auditory-decay parameter was a powerful determinant of the perceived material of an object. Subjects judged the similarity of synthesized sounds with respect to material (Experiments 1 and 2) or length (Experiment 3). The sounds corresponded to modal frequencies of clamped bars struck at an intermediate point, and they varied in fundamental frequency and frequency-dependent rate of decay. The latter parameter has been proposed as reflecting a shape-invariant material property: damping. Differences between sounds in both decay and frequency affected similarity judgments (magnitude of similarity and judgment duration), with decay playing a substantially larger role. Experiment 2, which varied the initial sound amplitude, showed that decay rate—rather than total energy or sound duration—was the critical factor in determining similarity. Experiment 3 demonstrated that similarity judgments in the first two studies were specific to instructions to judge material. Experiment 4, in which subjects assigned the sounds to one of four material categories, showed an influence of frequency and decay, but confirmed the greater importance of decay. Decay parameters associated with each category were estimated and found to correlate with physical measures of damping. The results support the use of a simplified model of material in virtual auditory environments.
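The class of stimuli the experiments manipulate, partials at bar-like modal frequencies, each decaying at a rate that grows with its frequency through a single material damping parameter, can be sketched as below. The partial ratios, sample rate, and parameter values are illustrative assumptions, not the study's actual stimulus parameters.

```python
import math

def bar_sound(f0, tan_phi, duration=1.0, sr=8000):
    """Sum of exponentially decaying partials.  Each partial k decays at
    rate pi * f_k * tan_phi, so higher partials die faster and the single
    damping parameter tan_phi acts as the shape-invariant 'material' knob."""
    ratios = [1.0, 2.76, 5.40, 8.93]   # illustrative bar-like partial ratios
    n = int(duration * sr)
    out = []
    for i in range(n):
        t = i / sr
        s = 0.0
        for r in ratios:
            f = f0 * r
            s += math.exp(-math.pi * f * tan_phi * t) * math.sin(2 * math.pi * f * t)
        out.append(s)
    return out

ringing = bar_sound(400.0, 0.001)   # low damping: long, glass-like ring
damped  = bar_sound(400.0, 0.02)    # high damping: fast, wood-like decay
```

Varying `tan_phi` while holding `f0` fixed (and vice versa) yields exactly the two stimulus dimensions the experiments cross, which is why the decay parameter can be isolated as the dominant material cue.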
Presence: Teleoperators and Virtual Environments (1998) 7 (4): 382–395.
Published: 01 August 1998
Abstract
We propose a general framework for the simulation of sounds produced by colliding physical objects in a virtual reality environment. The framework is based on the vibration dynamics of bodies. The computed sounds depend on the material of the body, its shape, and the location of the contact. This simulation of sounds allows the user to obtain important auditory clues about the objects in the simulation, as well as about the locations on the objects of the collisions. Specifically, we show how to compute (1) the spectral signature of each body (its natural frequencies), which depends on the material and the shape, (2) the “timbre” of the vibration (the relative amplitudes of the spectral components) generated by an impulsive force applied to the object at a grid of locations, (3) the decay rates of the various frequency components that correlate with the type of material, based on its internal friction parameter, and finally (4) the mapping of sounds onto the object's geometry for real-time rendering of the resulting sound. The framework has been implemented in a Sonic Explorer program which simulates a room with several objects such as a chair, tables, and rods. After a preprocessing stage, the user can hit the objects at different points to interactively produce realistic sounds.
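Step (2) of this framework, the contact-location-dependent "timbre", amounts to evaluating each mode shape at the struck point. A minimal sketch, using a 1-D pinned string whose mode shapes are sin(kπx/L) purely for illustration; the paper computes the analogous quantities for general 3-D bodies, and the function name is an assumption.

```python
import math

def contact_gains(x, length=1.0, n_modes=5):
    """Relative amplitude of each vibration mode for an impulse at
    position x along a pinned 1-D string: |sin(k * pi * x / L)|.
    Striking at a node of a mode leaves that mode silent."""
    return [abs(math.sin(k * math.pi * x / length))
            for k in range(1, n_modes + 1)]

mid  = contact_gains(0.5)   # hitting the middle: even modes are not excited
edge = contact_gains(0.1)   # hitting near an end: all five modes are excited
```

Precomputing these gains over a grid of contact points is what lets a system like Sonic Explorer respond interactively: at runtime a hit only looks up gains and mixes the already-known decaying frequency components.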