Abstract
The great variety of functions possible for sound in virtual environments is surveyed in relation to the traditions that primarily inform them. These traditions are examined, and sound is classified into three categories: artistic expression, information transfer, and environmental sound. The potentials of, and relations between, sonification, algorithmic composition, musicogenic and sonigenic displays, virtual musical instruments, and virtual sound sources are examined, together with the practical technical limitations that govern performance control of MIDI and real-time DSP sound synthesis in coordination with visual display. The importance of music-theoretic and psychological research is emphasized. The issues and categorizations developed are then applied to a case study: the examination of a specific virtual environment performance by a team of workers in Australia in which the author worked as composer/performer/programmer.