Abstract
Immersive, multiprojector systems are a compelling alternative to traditional head-mounted displays and have been growing steadily in popularity. However, the vast majority of these systems have been confined to laboratories or other special-purpose facilities and have had little impact on general human-computer and human-human communication models. Cost, infrastructure requirements, and maintenance are all obstacles to the widespread deployment of immersive displays. We address these issues in the design and implementation of the Metaverse. The Metaverse system centers on a scalable, multiprojector display framework that automatically detects devices as they are added to or removed from the display environment. Multiple cameras support calibration over wide fields of view for immersive applications with little or no input from the user.
The approach is demonstrated on a 24-projector display environment that can be scaled on the fly, reconfigured, and redeployed according to user needs. Our method achieves subpixel calibration accuracy with little or no user input. Because installing or reconfiguring the projectors requires little effort from the user, rapid deployment of large, immersive displays in relatively unconstrained environments is feasible.