Sounds are often the result of motions of virtual objects in a virtual environment. Therefore, sounds and the motions that caused them should be treated in an integrated way. When sounds and motions lack the proper correspondence, the resulting confusion can lessen the effects of each. In this paper, we present an integrated system for modeling, synchronizing, and rendering sounds for virtual environments. The key idea of the system is a functional representation of sounds, called timbre trees. This representation is used to model parameterizable sounds. These parameters can then be mapped to the parameters associated with the motions of objects in the environment, establishing the correspondence between motions and sounds. Representing arbitrary sounds as timbre trees is a difficult process that we do not address in this paper. We describe approaches for creating some timbre trees, including the use of genetic algorithms. Rendering the sounds in an aural environment is achieved by attaching to the timbre trees special environmental nodes that represent attenuation and delay as well as listener effects. These trees are then evaluated to generate the sounds. The system we describe runs in parallel in real time on an eight-processor SGI Onyx. We see the main contribution of the present system as a conceptual framework for treating sound and motion in a virtual environment in an integrated way.
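To make the idea concrete, the following is a minimal sketch of a timbre tree: a functional expression tree whose leaves can be driven by motion parameters and whose root is wrapped in an environmental node. All node classes, names, and parameter mappings here are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import math

# Hypothetical timbre-tree nodes; each node evaluates to a sample value
# at time t, so the whole tree is a function of time.

class Const:
    """Leaf node holding a constant value."""
    def __init__(self, value):
        self.value = value
    def eval(self, t):
        return self.value

class Param:
    """Leaf node driven by a motion parameter (e.g. an object's speed),
    supplied as a function of time by the animation system."""
    def __init__(self, fn):
        self.fn = fn
    def eval(self, t):
        return self.fn(t)

class Sine:
    """Sine oscillator whose frequency is itself a subtree, so motion
    parameters can modulate the sound."""
    def __init__(self, freq):
        self.freq = freq
    def eval(self, t):
        return math.sin(2.0 * math.pi * self.freq.eval(t) * t)

class Attenuate:
    """Environmental node: scales its child by a simple distance-based
    gain, standing in for the paper's attenuation/delay/listener nodes."""
    def __init__(self, child, distance):
        self.child = child
        self.distance = distance
    def eval(self, t):
        return self.child.eval(t) / (1.0 + self.distance.eval(t))

# Map motion parameters to sound parameters: pitch rises with speed,
# gain falls as the object moves away from the listener (both invented).
pitch = Param(lambda t: 220.0 + 20.0 * t)
distance = Param(lambda t: 2.0 + t)
tree = Attenuate(Sine(pitch), distance)

# Evaluating the tree at sample times generates the sound signal.
rate = 8000
samples = [tree.eval(i / rate) for i in range(rate)]  # one second of audio
```

Because the tree is just a function of time, synchronization with motion comes for free: the same parameter functions that drive the animation drive the sound.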