Heinrich H. Bülthoff (1-4 of 4)
Presence: Teleoperators and Virtual Environments (2012) 21 (3): 281–294.
Published: 01 August 2012
Abstract
Theories of social interaction (e.g., common coding theory) suggest that visual information about the interaction partner is critical for successful interpersonal action coordination. Seeing the interaction partner allows an observer to understand and predict that partner's behavior. However, it is unknown which of the many sources of visual information about an interaction partner (e.g., body, end effectors, and/or interaction objects) are used for action understanding and thus for the control of movements in response to observed actions. We used a novel immersive virtual environment to investigate this further. Specifically, we asked participants to perform table tennis strokes in response to table tennis balls stroked by a virtual table tennis player. We tested the effect of the visibility of the ball, the paddle, and the body of the virtual player on task performance and movement kinematics. Task performance was measured as the minimum distance between the center of the paddle and the center of the ball (radial error). Movement kinematics was measured as variability in the paddle speed of repeatedly executed table tennis strokes (stroke speed variability). We found that radial error was reduced when the ball was visible compared to when it was invisible. However, seeing the body and/or the paddle of the virtual player only reduced radial error when the ball was invisible. There was no influence of seeing the ball on stroke speed variability. However, we found that stroke speed variability was reduced when either the body or the paddle of the virtual player was visible. Importantly, the differences in stroke speed variability were largest at the moment when the virtual player hit the ball. This suggests that seeing the virtual player's body or paddle was important for preparing the stroke response. These results demonstrate for the first time that the online control of arm movements is coupled with visual body information about an opponent.
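The abstract defines both measures only verbally. As a point of reference, here is a minimal sketch of how they could be computed from motion-tracked data, assuming 3D positions sampled at a fixed rate and strokes resampled to a common time base; all function names and array layouts are illustrative, not taken from the paper.

```python
import numpy as np

def radial_error(paddle_pos, ball_pos):
    """Minimum Euclidean distance between paddle center and ball center
    over one stroke; both inputs are (T, 3) arrays of 3D positions."""
    return np.min(np.linalg.norm(paddle_pos - ball_pos, axis=1))

def stroke_speed_variability(paddle_trials, dt):
    """Across-trial standard deviation of paddle speed at each time step.

    paddle_trials: (N, T, 3) array of N repeated strokes resampled to a
    common time base of T samples; dt: sampling interval in seconds.
    Returns a (T - 1,) variability profile over the stroke.
    """
    velocities = np.diff(paddle_trials, axis=1) / dt   # (N, T-1, 3)
    speeds = np.linalg.norm(velocities, axis=2)        # (N, T-1)
    return speeds.std(axis=0)
```

On this reading, the paper's key comparison is the variability profile around the moment the virtual player hits the ball, evaluated across visibility conditions.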
Presence: Teleoperators and Virtual Environments (2010) 19 (3): 230–242.
Published: 01 June 2010
Abstract
Few HMD-based virtual environment systems display a rendering of the user's own body. Subjectively, this often leads to a sense of disembodiment in the virtual world. We explore the effect of being able to see one's own body in such systems on an objective measure of the accuracy of one form of space perception. Using an action-based response measure, we found that participants who explored near space while seeing a fully articulated and tracked visual representation of themselves subsequently made more accurate judgments of absolute egocentric distance to locations ranging from 4 m to 6 m away from where they were standing than did participants who saw no avatar. A static, nonanimated avatar also improved distance judgments, but by a lesser amount. Participants who viewed either animated or static avatars positioned 3 m in front of their own position made subsequent distance judgments with similar accuracy to the participants who viewed the equivalent animated or static avatar positioned at their own location. We discuss the implications of these results for theories of embodied perception in virtual environments.
Presence: Teleoperators and Virtual Environments (2008) 17 (4): 365–375.
Published: 01 August 2008
Abstract
Mental rotation is the capacity to predict the orientation of an object or the layout of a scene after a change in viewpoint. Previous studies have shown that the cognitive cost of mental rotations is reduced when the viewpoint change results from the observer's motion rather than from a rotation of the object or spatial layout. The classical interpretation of these findings involves automatic updating mechanisms triggered during self-motion. Nevertheless, little is known about how this process is triggered, and particularly about how sensory cues combine to facilitate mental rotations. Previously existing setups, whether real or virtual, did not allow the different sensory contributions to be disentangled, which motivated the development of a new high-end virtual reality platform that overcomes these technical limitations. In the present paper, we begin with a didactic review of the mental rotation literature and a description of the current technical limitations. We then fully describe the experimental platform developed at the Max Planck Institute for Biological Cybernetics in Tübingen. The setup consists of a cabin mounted on top of a six degree-of-freedom Stewart platform; inside the cabin are an adjustable seat, a physical table with an embedded screen, and a large projection screen. A 5-PC cluster running Virtools drives the platform and renders the two passive stereo scenes displayed on the table and background screens. Finally, we present an experiment that used this setup to replicate the classical advantage found for a moving observer, thereby validating the platform. We conclude by discussing the experimental validation and the advantages of such a setup.
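The abstract does not spell out the rendering mathematics. Rendering a perspective-correct image on a fixed physical display, such as the screen embedded in the table, is conventionally done with an off-axis (generalized) perspective projection in the style of Kooima; the numpy sketch below illustrates that standard technique under OpenGL conventions and is not taken from the paper.

```python
import numpy as np

def off_axis_projection(eye, pa, pb, pc, near, far):
    """Off-axis projection for a fixed physical screen viewed from an
    arbitrary eye position (Kooima-style generalized projection).

    eye: 3-vector eye position; pa, pb, pc: screen corners (lower-left,
    lower-right, upper-left), all in one world frame. Returns the 4x4
    combined projection*view matrix for column vectors.
    """
    vr = (pb - pa) / np.linalg.norm(pb - pa)      # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)      # screen up axis
    vn = np.cross(vr, vu)
    vn /= np.linalg.norm(vn)                      # screen normal (toward eye)

    va, vb, vc = pa - eye, pb - eye, pc - eye     # eye-to-corner vectors
    d = -(va @ vn)                                # eye-to-screen distance
    l = (vr @ va) * near / d                      # frustum extents on the
    r = (vr @ vb) * near / d                      # near plane
    b = (vu @ va) * near / d
    t = (vu @ vc) * near / d

    P = np.array([[2*near/(r-l), 0.0, (r+l)/(r-l), 0.0],
                  [0.0, 2*near/(t-b), (t+b)/(t-b), 0.0],
                  [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0.0, 0.0, -1.0, 0.0]])
    M = np.eye(4)
    M[:3, :3] = np.stack([vr, vu, vn])            # world -> screen basis
    T = np.eye(4)
    T[:3, 3] = -eye                               # move eye to the origin
    return P @ M @ T
```

For passive stereo as described in the abstract, the scene would be rendered twice per screen, once per eye, with the eye position offset by half the interocular distance.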
Presence: Teleoperators and Virtual Environments (2002) 11 (5): 443–473.
Published: 01 October 2002
Abstract
The literature often suggests that proprioceptive and especially vestibular cues are required for navigation and spatial orientation tasks involving rotations of the observer. To test this notion, we conducted a set of experiments in virtual environments in which only visual cues were provided. Participants had to execute turns, reproduce distances, or perform triangle completion tasks. Most experiments were performed in a simulated 3D field of blobs, thus restricting navigation strategies to path integration based on optic flow. For our experimental setup (a half-cylindrical 180° projection screen), optic flow information alone proved sufficient for untrained participants to perform turns and reproduce distances with negligible systematic errors, irrespective of movement velocity. Path integration by optic flow was sufficient for homing by triangle completion, but homing distances were biased toward the mean response. Additional landmarks that were only temporarily available did not improve homing performance. However, navigation by stable, reliable landmarks led to almost perfect homing performance. Mental spatial ability test scores correlated positively with homing performance, especially for the more complex triangle completion tasks, suggesting that mental spatial abilities might be a determining factor in navigation performance. In summary, visual path integration without any vestibular or kinesthetic cues can be sufficient for elementary navigation tasks such as rotations, translations, and triangle completion.
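For reference, the geometry of a triangle-completion trial fully determines the correct homing response, which is what the biased homing distances are measured against. Here is a minimal sketch, assuming a planar outbound path of two straight legs separated by a single turn; names and sign conventions are illustrative, not from the paper.

```python
import numpy as np

def triangle_completion(leg1, turn_deg, leg2):
    """Correct homing response for a triangle-completion trial.

    The navigator walks leg1 meters, turns by turn_deg (positive =
    counterclockwise), walks leg2 meters, and must then return straight
    to the start. Returns (turn_to_home_deg, homing_distance_m), the
    turn being relative to the current heading.
    """
    heading = np.radians(turn_deg)                   # heading after the turn
    pos = np.array([leg1 + leg2 * np.cos(heading),   # first leg along x
                    leg2 * np.sin(heading)])
    to_start = -pos
    bearing = np.arctan2(to_start[1], to_start[0])   # world-frame direction home
    turn_to_home = np.degrees(bearing - heading)
    turn_to_home = (turn_to_home + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return turn_to_home, float(np.linalg.norm(to_start))

# Example: 4 m, 90 degree turn, 4 m -> turn 135 degrees, walk ~5.66 m home.
print(triangle_completion(4.0, 90.0, 4.0))
```

The reported bias toward the mean response would then appear as a systematic compression of participants' homing distances relative to the returned correct values across trials.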