This article presents a new approach to interactive spatial sonification of multidimensional data, serving as a tool for spatial sound synthesis, for composing temporal–spatial musical materials, and as an auditory display with which scientists can analyze multidimensional data sets in time and space. The approach applies parameter-mapping sonification and is currently implemented in an application called Cheddar, programmed in Max/MSP. Cheddar sonifies data in real time, allowing the user to modify a wide variety of temporal, spatial, and sonic parameters during listening and thus uncover patterns and processes in the data more easily than with non-real-time, noninteractive techniques. The design draws on existing literature on perception and acoustics, and it applies the author's practical experience in acousmatic composition, spectromorphology, and sound semantics, while addressing accuracy, flexibility, and ease of use. Although previous sonification applications have offered some degree of real-time control and spatialization, this approach integrates space and sound within a single interactive framework. Spatial information is sonified in high-order 3-D ambisonics, and the user can interactively move the virtual listening position to reveal details easily missed from fixed or noninteractive spatial views. The sounds used as input to the sonification exploit the rich spectra and extramusical attributes of acoustic sources, which, although previously theorized, are investigated here in a practical context and thoroughly tested alongside acoustic and psychoacoustic considerations. Furthermore, using Cheddar requires no specialized knowledge of programming, acoustics, or psychoacoustics. These approaches position Cheddar at the junction between science and art.
With one application serving both disciplines, the patterns and processes of science can be more fluently appropriated into music and sound art and, conversely, the techniques of music and sound art can feed back into scientific research, public science outreach, and education.
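To illustrate the general technique of parameter-mapping sonification named above, the following is a minimal sketch in Python. It is not Cheddar's implementation (which is a Max/MSP patch with many more parameters); the choice of dimensions, ranges, and mappings here is purely illustrative. One data dimension is mapped logarithmically to pitch so that equal data steps sound as equal pitch intervals, one to spatial azimuth, and one to amplitude.

```python
def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly map x from [in_lo, in_hi] to [out_lo, out_hi]."""
    t = (x - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def sonify_point(point, bounds):
    """Map one 3-D data point to (frequency, azimuth, amplitude).

    Illustrative mapping choices (not Cheddar's):
      dimension 0 -> pitch in Hz, log scale over 5 octaves from 110 Hz
      dimension 1 -> azimuth in degrees, -90 (left) to +90 (right)
      dimension 2 -> amplitude, 0.0 to 1.0
    """
    (x_lo, x_hi), (y_lo, y_hi), (z_lo, z_hi) = bounds
    x, y, z = point
    # Exponentiating a linear map yields perceptually even pitch steps.
    freq = 110.0 * 2 ** map_range(x, x_lo, x_hi, 0.0, 5.0)
    azimuth = map_range(y, y_lo, y_hi, -90.0, 90.0)
    amp = map_range(z, z_lo, z_hi, 0.0, 1.0)
    return freq, azimuth, amp

# Each axis of the (hypothetical) data set spans 0..10.
bounds = [(0.0, 10.0), (0.0, 10.0), (0.0, 10.0)]
freq, azimuth, amp = sonify_point((5.0, 0.0, 10.0), bounds)
print(freq, azimuth, amp)
```

In an interactive setting like the one the article describes, the mapping functions and ranges would be user-adjustable controls evaluated per data point in real time, and the azimuth value would feed a spatializer (e.g., an ambisonic encoder) rather than a simple stereo pan.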