The contribution of this article lies in providing the code for a working system built from off-the-shelf hardware, rather than in advancing the theory of computer vision or graphics. The present work describes a system that uses a single RGBD camera (Microsoft Kinect v2) to capture people in a space and an AR headset (Microsoft HoloLens) to display the scene. While its fidelity is low relative to systems that use multiple cameras (e.g., Orts-Escolano et al., 2016), it runs at a high frame rate, has low latency, and is mobile in that it requires no separate render computer.
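The core operation in a capture pipeline of this kind is back-projecting each depth pixel into a 3-D point that the headset can render. A minimal sketch of that step is shown below using the standard pinhole-camera model; the intrinsic parameters (`FX`, `FY`, `CX`, `CY`) are illustrative placeholders, not the Kinect v2's actual calibration values, and this is not the article's own implementation.

```python
import numpy as np

# Assumed pinhole intrinsics for a 512x424 depth sensor (placeholders).
FX, FY = 365.0, 365.0   # focal lengths in pixels
CX, CY = 256.0, 212.0   # principal point

def depth_to_points(depth_m: np.ndarray) -> np.ndarray:
    """Back-project an HxW depth map (metres) to an HxWx3 array of XYZ points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=-1)

# Example: a flat surface 2 m in front of the camera.
depth = np.full((424, 512), 2.0)
points = depth_to_points(depth)
print(points.shape)      # (424, 512, 3)
# The pixel at the principal point projects to [0, 0, 2].
```

In a live system this conversion runs per frame, and the resulting points (or a mesh built from them) are streamed to the headset for display.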

Maimone and colleagues (2013) previously explored pairing RGBD cameras with AR headsets. Their system used two Kinect devices to scan and render a space in an AR headset for avatar-based telepresence, and...
