Abstract
In two experiments, subjects traveled through virtual mazes, encountering target objects along the way. Their task was to indicate the direction to these target objects from a terminal location in the maze (from which the objects could no longer be seen). Subjects controlled their motion through the mazes using three locomotion modes. In the Walk mode, subjects walked normally in the experimental room. Each subject's body position and heading were tracked, and the tracking information was used to continuously update the visual imagery presented to the subject on a head-mounted display, creating the impression of immersion in the experimental maze. In the Visual Turn mode, subjects moved through the environment using a joystick to control their turning; the only sensory information they received about rotation and translation was that provided by the computer-generated imagery. The Real Turn mode was intermediate between the other two, in that subjects physically turned in place to steer while translating through the virtual maze; thus translation was signaled only by the computer-generated imagery, whereas rotation was signaled by the imagery as well as by proprioceptive and vestibular information. The dependent measure was the absolute error of the subject's directional estimate to each target from the terminal location. Performance in the Walk mode was significantly better than in the Visual Turn mode, but the other trends were not significant. A secondary finding was that the degree of motion sickness depended on locomotion mode, with the lowest incidence occurring in the Walk mode. Both findings suggest that, in tasks involving spatial orientation, it is advisable to have subjects explore virtual environments using real rotations and translations.
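
As a minimal illustrative sketch (not part of the original abstract, and assuming bearings are expressed in degrees), the absolute error of a directional estimate can be computed as the smallest angular difference between the estimated and true bearings, wrapping at 360° so that errors never exceed 180°:

```python
import numpy as np

def absolute_directional_error(estimated_deg, true_deg):
    """Smallest angular difference (0-180 deg) between the estimated
    and true bearings to a target, handling wraparound at 360 deg."""
    diff = np.abs(np.asarray(estimated_deg, dtype=float) - np.asarray(true_deg, dtype=float)) % 360.0
    return np.minimum(diff, 360.0 - diff)

# Example: a pointing response of 350 deg to a target whose true bearing
# is 10 deg is a 20-deg absolute error, not 340 deg.
print(absolute_directional_error(350, 10))           # 20.0
print(absolute_directional_error([90, 200], [45, 170]))  # [45. 30.]
```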