Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2010) 19 (6): 544–556.
Published: 01 December 2010
Abstract
Human postural control is a multimodal process involving visual and vestibular information. The aim of the present study was to measure individual differences in the contributions of vision and the vestibular sense to postural control, and to investigate whether the individual weights could be modulated by long-term adaptation to visual motion or galvanic vestibular stimulation (GVS). Since GVS is less expensive than a motion platform and can be made wearable, it is a promising virtual reality (VR) technology. We measured the postural sway induced in observers by visual motion or GVS before and after a 7-day adaptation task. Participants were divided into four groups. In the visual adaptation groups, visual motion was presented either to enhance voluntary body movement (enhancing vision group) or to inhibit it (inhibiting vision group). In the GVS adaptation groups, GVS was applied either to enhance voluntary body movement (enhancing GVS group) or to inhibit it (inhibiting GVS group). Adaptation to enhancing body-movement-yoked visual motion decreased the GVS-induced postural sway at a low motion frequency. Adaptation to the enhancing GVS slightly increased the GVS-induced postural sway and decreased the visually induced sway at a low motion frequency. Adaptation to the inhibiting GVS increased the GVS-induced postural sway and decreased the visually induced sway at a high motion frequency. These data suggest that long-term adaptation can modify the weights given to visual and vestibular information in postural control. These findings can be applied to training or rehabilitation systems for postural control, and also to adaptive virtual reality systems.
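The abstract does not spell out how the visual and vestibular contributions are quantified. As a minimal illustration only, the sketch below expresses the two weights as normalized sway gains, assuming RMS sway amplitudes have already been measured separately under visual-motion and GVS stimulation; the function and the example values are hypothetical, not the study's analysis pipeline.

```python
# Hypothetical sketch: expressing visual and vestibular "weights" as
# normalized sway gains from RMS sway amplitudes measured under
# visual-motion and GVS stimulation. The paper's actual frequency-resolved,
# pre/post-adaptation analysis is not reproduced here.

def sensory_weights(visual_sway_rms: float, gvs_sway_rms: float) -> tuple[float, float]:
    """Return (visual_weight, vestibular_weight), normalized to sum to 1."""
    total = visual_sway_rms + gvs_sway_rms
    if total == 0:
        return 0.5, 0.5  # no measurable response; split evenly by convention
    return visual_sway_rms / total, gvs_sway_rms / total

# Example: pre- vs. post-adaptation comparison for one hypothetical observer
pre = sensory_weights(visual_sway_rms=1.2, gvs_sway_rms=0.8)
post = sensory_weights(visual_sway_rms=0.9, gvs_sway_rms=1.1)
print(f"visual weight pre/post: {pre[0]:.2f} -> {post[0]:.2f}")
```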
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2010) 19 (6): 513–526.
Published: 01 December 2010
Abstract
This paper describes some fluid dynamic considerations for attaining realistic odor presentation with an olfactory display. Molecular diffusion is an extremely slow process; odor molecules released from their source spread mainly by being carried off by airflow. We therefore propose using a computational fluid dynamics (CFD) simulation in conjunction with the olfactory display. The CFD solver calculates the turbulent airflow field in the given environment and the dispersal of odor molecules from their source, and the simulation result is used to reproduce, at the nose, the realistic change in odor concentration over time and space. However, our initial sensory test evaluating the proposed method was not completely successful, and we also found some discrepancies between real-life olfactory sensation and the experience of the CFD-based olfactory display. Here we report some insights into overcoming these problems. In the initial sensory test, a nontrivial portion of the subjects did not properly recognize the spatial variation in odor intensity. The result of our recent sensory test is presented in this paper to show that better contrast in perceived odor intensity can be provided when the concentration range of the released odor is adjusted for the variation in olfactory sensitivity across individual subjects. We also noted that olfactory adaptation occurred more quickly in the initial sensory test of the CFD-based olfactory display than in real environments. In this paper, we show that olfactory adaptation can be alleviated by modulating the odor concentration randomly to mimic the random fluctuations of the turbulent flow fields in real environments. Finally, our initial sensory test sometimes revealed discrepancies between olfactory sensation in real environments and the simulated odor distribution. We show that this discrepancy can be attributed to convection caused by human body heat, which carries odor vapor drifting around our feet up to our noses.
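As an illustrative sketch only (not the authors' implementation), the snippet below shows the idea of superimposing random fluctuations on a CFD-derived mean odor concentration so the presented intensity varies the way a turbulent plume does in real environments; the fluctuation amplitude and update scheme are assumptions.

```python
import random

# Illustrative sketch: modulate a CFD-derived mean odor concentration with a
# random factor each update, mimicking the fluctuations of turbulent flow so
# that olfactory adaptation is less likely to set in.

def modulated_concentration(mean_concentration: float,
                            fluctuation: float = 0.3) -> float:
    """Scale the CFD mean by a random factor in [1 - fluctuation, 1 + fluctuation]."""
    factor = 1.0 + random.uniform(-fluctuation, fluctuation)
    return max(0.0, mean_concentration * factor)

# Example: emit a new target concentration for the olfactory display each frame
cfd_mean = 0.05  # arbitrary units from the airflow/dispersal simulation
targets = [modulated_concentration(cfd_mean) for _ in range(10)]
print(targets)
```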
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2010) 19 (6): iii.
Published: 01 December 2010
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2010) 19 (6): 527–543.
Published: 01 December 2010
Abstract
Researchers have proposed that immersion could have advantages for tasks involving abstract mental activities, such as conceptual learning; however, few empirical results support this idea. We hypothesized that higher levels of immersion would benefit such tasks if the mental activity could be mapped to objects or locations in a 3D environment. To investigate this hypothesis, we performed an experiment in which participants memorized procedures in a virtual environment and then attempted to recall those procedures. We aimed to understand the effects of three components of immersion on performance. The results demonstrate that a matched software field of view (SFOV), a higher physical field of view (FOV), and a higher field of regard (FOR) all contributed to more effective memorization. The best performance was achieved with a matched SFOV and either a high FOV, a high FOR, or both. In addition, our experiment demonstrated that memorization in a virtual environment could be transferred to the real world. The results suggest that, for procedure memorization tasks, increasing the level of immersion even to moderate levels, such as those found in head-mounted displays (HMDs) and display walls, can improve performance significantly compared to lower levels of immersion. Hypothesizing that the performance improvements provided by higher levels of immersion can be attributed to enhanced spatial cues, we discuss the value and limitations of supplementing conceptual information with spatial information in educational VR.
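For readers unfamiliar with the "matched SFOV" condition, the sketch below shows the underlying geometry: the software (rendering) field of view is set equal to the angle the physical display subtends at the viewer's eye. The screen size and viewing distance used here are hypothetical, not values from the study.

```python
import math

# Sketch of the "matched SFOV" condition: the rendering frustum angle equals
# the physical field of view subtended by the display.

def physical_fov_deg(screen_width_m: float, viewing_distance_m: float) -> float:
    """Horizontal angle subtended by a flat screen at the viewer's eye."""
    return math.degrees(2.0 * math.atan((screen_width_m / 2.0) / viewing_distance_m))

display_fov = physical_fov_deg(screen_width_m=1.0, viewing_distance_m=1.2)
software_fov = display_fov  # matched condition: render with the same angle
print(f"matched SFOV ≈ {software_fov:.1f} degrees")
```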
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2010) 19 (6): 499–512.
Published: 01 December 2010
Abstract
The world-in-miniature (WIM) metaphor allows users to select, manipulate, and navigate efficiently in virtual environments. In addition to the first-person perspective offered by typical virtual reality (VR) applications, the WIM offers a second, dynamic viewpoint through a hand-held miniature copy of the environment. In this paper we explore different strategies for letting the user interact with the miniature replica at multiple levels of scale. Unlike competing approaches, we support complex indoor environments by explicitly handling occlusion. We discuss algorithms for selecting the part of the scene to be included in the replica and for providing a clear view of the region of interest. Key elements of our approach include an algorithm to recompute the active region from a subdivision of the scene into cells, and a view-dependent algorithm to cull occluding geometry. Our cutaway algorithm is based on a small set of slicing planes roughly oriented along the main occluding surfaces, along with depth-based revealing for nonplanar geometry. We present the results of a user study showing that our technique clearly outperforms competing approaches on spatial tasks performed in densely occluded scenes.
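The slicing-plane cutaway can be illustrated with a toy test: geometry on the occluding side of any cutting plane is culled so the region of interest stays visible in the replica. This is a hypothetical sketch only; the paper's plane selection along occluding surfaces and its depth-based revealing for nonplanar geometry are not reproduced.

```python
from dataclasses import dataclass

# Hypothetical sketch of a slicing-plane cutaway test for a WIM replica:
# anything on the occluding (negative) side of a cutting plane is culled.

@dataclass
class Plane:
    # plane n . p + d = 0, with the normal pointing toward the region of interest
    normal: tuple[float, float, float]
    d: float

def signed_distance(plane: Plane, point: tuple[float, float, float]) -> float:
    nx, ny, nz = plane.normal
    px, py, pz = point
    return nx * px + ny * py + nz * pz + plane.d

def is_culled(point: tuple[float, float, float], planes: list[Plane]) -> bool:
    """Cull a point if it lies on the occluding side of any slicing plane."""
    return any(signed_distance(p, point) < 0.0 for p in planes)

# Example: one plane roughly aligned with an occluding wall
planes = [Plane(normal=(0.0, 0.0, 1.0), d=-2.0)]   # keep geometry with z >= 2
print(is_culled((0.0, 0.0, 1.0), planes))  # True: in front of the wall, culled
print(is_culled((0.0, 0.0, 3.0), planes))  # False: inside the region of interest
```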
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2010) 19 (5): iii–iv.
Published: 01 October 2010
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2010) 19 (4): iii.
Published: 01 August 2010