Mental rotation is the capacity to predict the orientation of an object or the layout of a scene after a change in viewpoint. Previous studies have shown that the cognitive cost of a mental rotation is reduced when the viewpoint change results from the observer's motion rather than from a rotation of the object or spatial layout. The classical interpretation of these findings invokes automatic updating mechanisms triggered during self-motion. Nevertheless, little is known about how this process is triggered, and in particular about how sensory cues combine to facilitate mental rotations. Previously existing setups, whether real or virtual, did not allow the different sensory contributions to be disentangled, which motivated the development of a new high-end virtual reality platform that overcomes these technical limitations.
In the present paper we will start with a didactic review of the literature on mental rotations and an overview of the current technical limitations. We will then fully describe the experimental platform developed at the Max Planck Institute for Biological Cybernetics in Tübingen. The setup consisted of a cabin mounted on top of a six degree-of-freedom Stewart platform; inside the cabin were an adjustable seat, a physical table with an embedded screen, and a large projection screen. A five-PC cluster running Virtools drove the platform and rendered the two passive-stereo scenes displayed on the table and background screens. Finally, we will present the experiment that used this setup and replicated the classical advantage found for a moving observer, thereby validating our setup. We will conclude by discussing the experimental validation and the advantages of such a platform.
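For readers unfamiliar with hexapod motion bases, the core computation behind a six degree-of-freedom Stewart platform is its inverse kinematics: mapping a desired cabin pose (translation plus orientation) to the six actuator leg lengths. The sketch below illustrates this standard mapping in Python with NumPy; the geometry (attachment-point radii and angles) is purely hypothetical and does not reflect the actual dimensions or control software of the Tübingen platform.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """ZYX Euler rotation matrix (angles in radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def leg_lengths(base_pts, plat_pts, translation, rpy):
    """Inverse kinematics of a hexapod.

    Each leg connects base_pts[i] (base frame) to plat_pts[i]
    (platform frame); its required length is the norm of
    t + R @ p_i - b_i for the commanded pose (t, R).
    """
    R = rotation_matrix(*rpy)
    legs = translation + plat_pts @ R.T - base_pts
    return np.linalg.norm(legs, axis=1)

# Hypothetical geometry (metres): six joints on each of two circles.
ang_b = np.deg2rad([-10, 10, 110, 130, 230, 250])   # base joints
ang_p = np.deg2rad([50, 70, 170, 190, 290, 310])    # platform joints
base_pts = np.c_[1.2 * np.cos(ang_b), 1.2 * np.sin(ang_b), np.zeros(6)]
plat_pts = np.c_[0.8 * np.cos(ang_p), 0.8 * np.sin(ang_p), np.zeros(6)]

# Neutral pose 1.5 m above the base, then a small 5-degree yaw.
home = leg_lengths(base_pts, plat_pts, np.array([0, 0, 1.5]), (0, 0, 0))
yawed = leg_lengths(base_pts, plat_pts, np.array([0, 0, 1.5]),
                    (0, 0, np.deg2rad(5)))
```

With this symmetric geometry all six legs share the same length in the neutral pose, and any commanded rotation or translation changes the lengths; a real motion cueing system would additionally clip the result to each actuator's stroke limits.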