Telepresence is generally described as the feeling of being immersed in a remote environment, be it virtual or real. A multimodal telepresence environment, equipped with modalities such as vision, audition, and haptics, enhances immersion and strengthens the overall sense of presence. The present work focuses on acoustic telepresence at both the teleoperator and operator sites. On the teleoperator side, we build a novel binaural sound source localizer using generic Head-Related Transfer Functions (HRTFs). Using only two microphones, the localizer estimates the direction of a single sound source in free space in terms of azimuth and elevation angles. Its algorithm is also computationally efficient compared with currently known algorithms for similar localization tasks. On the operator side, the paper addresses the problem of spatially interpolating HRTFs to obtain densely sampled data sets for high-fidelity 3D sound synthesis. In our telepresence application scenario, the synthesized 3D sound is presented to the operator over headphones and is intended to provide high-fidelity acoustic immersion. Using measured HRTF data, we create interpolated HRTFs between the existing functions using a matrix-valued interpolation function. A comparison with existing interpolation methods shows that the new method offers superior performance and achieves high-fidelity reconstructions of HRTFs.
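
The abstract does not detail the two-microphone localization algorithm itself. As a minimal illustrative sketch only, one common binaural strategy is to estimate interaural cues from the microphone pair and match them against a cue table precomputed from a generic HRTF set; the function names (`gcc_phat_itd`, `localize`), the cue weighting, and the table layout below are hypothetical, not taken from the paper:

```python
import numpy as np

def gcc_phat_itd(left, right, fs, max_itd=8e-4):
    """Estimate the interaural time difference (ITD, in seconds)
    between two microphone signals via GCC-PHAT; max_itd bounds
    the search to physically plausible delays (~0.8 ms for a head)."""
    n = len(left) + len(right)
    spec = np.fft.rfft(left, n) * np.conj(np.fft.rfft(right, n))
    cc = np.fft.irfft(spec / (np.abs(spec) + 1e-12), n)
    max_shift = int(max_itd * fs)
    # Rearrange so negative lags precede positive lags.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def localize(left, right, fs, cue_table):
    """Pick the (azimuth, elevation) whose precomputed HRTF cues
    best match the observed ITD and broadband ILD.
    cue_table: list of ((az, el), itd, ild) tuples derived offline
    from a generic HRTF database."""
    itd = gcc_phat_itd(left, right, fs)
    ild = 10 * np.log10(np.sum(left**2) / (np.sum(right**2) + 1e-12))
    # Nearest-neighbour search over the cue table; the ITD is scaled
    # to microseconds so both cues contribute comparably (arbitrary).
    best = min(cue_table,
               key=lambda c: (1e6 * (c[1] - itd))**2 + (c[2] - ild)**2)
    return best[0]
```

The matrix-valued interpolation function is likewise not specified in the abstract. As a generic stand-in, the sketch below interpolates HRTFs stored as 2xK complex matrices (left/right ear, K frequency bins) by entrywise inverse-distance weighting of the nearest measured directions on the sphere; all names and the parameters `k` and `p` are illustrative assumptions, not the authors' method:

```python
import numpy as np

def sph_to_cart(az, el):
    """Unit vector for azimuth/elevation given in radians."""
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def interpolate_hrtf(target_dir, dirs, hrtfs, k=4, p=2.0):
    """Inverse-distance-weighted HRTF interpolation on the sphere.

    dirs  : (N, 2) measured (azimuth, elevation) pairs in radians
    hrtfs : (N, 2, K) complex frequency responses, one 2xK matrix
            per measured direction
    Returns the interpolated 2xK HRTF matrix at target_dir."""
    t = sph_to_cart(*target_dir)
    # Great-circle distance from the target to each measured direction.
    cart = np.stack([sph_to_cart(az, el) for az, el in dirs])
    d = np.arccos(np.clip(cart @ t, -1.0, 1.0))
    if np.min(d) < 1e-9:                 # exact measurement available
        return hrtfs[np.argmin(d)]
    idx = np.argsort(d)[:k]              # k nearest measured directions
    w = 1.0 / d[idx]**p
    w /= w.sum()
    # Entrywise weighted sum of the neighbouring HRTF matrices.
    return np.tensordot(w, hrtfs[idx], axes=1)
```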
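Note that directly averaging complex responses, as in this simplified sketch, can smear interaural delays; practical HRTF interpolation schemes therefore often handle magnitude (e.g., minimum-phase) components and time delays separately.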
