Jannick P. Rolland
1–6 of 6
Presence: Teleoperators and Virtual Environments (2005) 14 (5): 528–549.
Published: 01 October 2005
Abstract
Distributed systems technologies supporting 3D visualization and social collaboration are increasing in frequency and type. An emerging type of head-mounted display known as the head-mounted projection display (HMPD) was recently developed; it requires only ultralight optics (i.e., less than 8 g per eye) and enables immersive multiuser, mobile augmented reality 3D visualization, as well as remote 3D collaboration. In this paper a review of the development of lightweight HMPD technology is provided, together with insight into what makes this technology timely and unique. Two novel emerging HMPD-based technologies are then described: a teleportal HMPD (T-HMPD) enabling face-to-face communication and visualization of shared 3D virtual objects, and a mobile HMPD (M-HMPD) designed for outdoor wearable visualization and communication. Finally, the use of HMPDs in medical visualization and training, as well as in infospaces, two applications developed in the ODA and MIND labs respectively, is discussed.
Presence: Teleoperators and Virtual Environments (2004) 13 (3): 315–327.
Published: 01 June 2004
Abstract
Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. One of the challenges in networked virtual environments is maintaining a consistent view of the shared state in the presence of inevitable network latency and jitter. A consistent view of a shared scene may significantly increase the sense of presence among participants and facilitate their interactivity. The dynamic shared state is directly affected by the frequency of actions applied to the objects in the scene. Mixed Reality (MR) and Virtual Reality (VR) environments contain several types of action producers, including human users, a wide range of electronic motion sensors, and haptic devices. In this paper, we propose a novel criterion for categorizing distributed MR/VR systems and present an adaptive synchronization algorithm for distributed MR/VR collaborative environments. Results show that, in spite of significant network latency, the dynamic shared state can be kept consistent at multiple remotely located sites at low update frequencies.
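The abstract does not spell out the adaptive synchronization algorithm itself, but the general class of technique it belongs to can be sketched: each site extrapolates remote object state to mask latency, and a sender transmits an update only when the receivers' prediction would drift beyond a threshold. The sketch below is illustrative only; SharedObject, AdaptiveSender, and the single-axis state are hypothetical simplifications, not the paper's method.

```python
import time

class SharedObject:
    """Replica of a remote object's state (position and velocity on one
    axis, for illustration), timestamped at the last received update."""
    def __init__(self, pos=0.0, vel=0.0, stamp=None):
        self.pos, self.vel = pos, vel
        self.stamp = stamp if stamp is not None else time.time()

    def extrapolate(self, now):
        # Dead-reckoning prediction: advance the last known state forward
        # in time so rendering does not stall on network latency.
        return self.pos + self.vel * (now - self.stamp)

class AdaptiveSender:
    """Transmits an update only when remote predictions would diverge
    from the true state beyond a threshold, so even low update
    frequencies can keep the shared state consistent across sites."""
    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.last_sent = SharedObject()

    def maybe_send(self, true_pos, vel, now, send):
        predicted = self.last_sent.extrapolate(now)
        if abs(true_pos - predicted) > self.threshold:
            self.last_sent = SharedObject(true_pos, vel, stamp=now)
            send(true_pos, vel, now)  # network send callback
```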
Presence: Teleoperators and Virtual Environments (2000) 9 (3): 223–235.
Published: 01 June 2000
Abstract
This paper presents a method and algorithms for the automatic modeling of anatomical joint motion. The method relies on collision detection to find stable positions and orientations of the knee joint by evaluating the relative motion of the tibia with respect to the femur (for example, flexion-extension). The stable positions then become the basis for a look-up table employed in the animation of the joint, as sketched below. The strength of this method lies in its robustness: it can animate any normal anatomical joint, and it extends to other anatomical joints given a set of kinematic constraints for the joint type as well as a high-resolution, static, 3-D model of the joint. The demonstration could be patient-specific if a person's real anatomical data were obtained from a medical imaging modality such as computed tomography or magnetic resonance imaging; otherwise, it requires the scaling of a generic joint based on patient characteristics. Compared with current teaching strategies, this Virtual Reality Dynamic Anatomy (VRDA) tool aims to greatly enhance students' understanding of 3-D human anatomy and joint motion. A preliminary demonstration of the optical superimposition of a generic knee joint on a leg model is shown.
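As a rough illustration of the look-up-table idea, the sketch below precomputes stable tibia poses at sampled flexion angles and interpolates between them at animation time. solve_stable_pose stands in for the collision-detection search the abstract describes and is hypothetical, as is the tuple-of-floats pose representation.

```python
import bisect

def build_pose_table(flexion_angles, solve_stable_pose):
    """Precompute stable poses of the tibia relative to the femur at
    sampled flexion angles; solve_stable_pose is a placeholder for the
    collision-detection search described in the abstract."""
    return sorted((a, solve_stable_pose(a)) for a in flexion_angles)

def animate(table, angle):
    """Look up, and linearly interpolate between, precomputed stable
    poses for a given flexion angle at animation time."""
    angles = [a for a, _ in table]
    i = bisect.bisect_left(angles, angle)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (a0, p0), (a1, p1) = table[i - 1], table[i]
    t = (angle - a0) / (a1 - a0)
    return tuple(x0 + t * (x1 - x0) for x0, x1 in zip(p0, p1))
```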
Presence: Teleoperators and Virtual Environments (2000) 9 (3): 287–309.
Published: 01 June 2000
Abstract
We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide context for discussing the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and a human-factors point of view. Finally, we point to potentially promising future developments of such devices, including eye tracking and multifocal-plane capabilities, as well as hybrid optical/video technology.
Presence: Teleoperators and Virtual Environments (1995) 4 (1): 24–49.
Published: 01 February 1995
Abstract
With the rapid advance of real-time computer graphics, head-mounted displays (HMDs) have become popular tools for 3D visualization. One of the most promising and challenging future uses of HMDs, however, is in applications where virtual environments enhance rather than replace real environments. In such applications, a virtual image is superimposed on a real image. The unique problem raised by this superimposition is the difficulty that the human visual system may have in integrating information from these two environments. As a starting point to studying the problem of information integration in see-through environments, we investigate the quantification of depth and size perception of virtual objects relative to real objects in combined real and virtual environments. This starting point leads directly to the important issue of system calibration, which must be completed before perceived depth and sizes are measured. Finally, preliminary experimental results on the perceived depth of spatially nonoverlapping real and virtual objects are presented.
Presence: Teleoperators and Virtual Environments (1992) 1 (1): 45–62.
Published: 01 February 1992
Abstract
For stereoscopic photography or telepresence, orthostereoscopy occurs when the perceived size, shape, and relative position of objects in the three-dimensional scene being viewed match those of the physical objects in front of the camera. In virtual reality, the simulated scene has no physical counterpart, so orthostereoscopy must be defined in this case as constancy, as the head moves around, of the perceived size, shape, and relative positions of the simulated objects. Achieving this constancy requires that the computational model used to generate the graphics matches the physical geometry of the head-mounted display being used. This geometry includes the optics used to image the displays and the placement of the displays with respect to the eyes. The model may fail to match the geometry because model parameters are difficult to measure accurately, or because the model itself is in error. Two common modeling errors are ignoring the distortion caused by the optics and ignoring the variation in interpupillary distance across different users. A computational model for the geometry of a head-mounted display is presented, and the parameters of this model for the VPL EyePhone are calculated.
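The abstract names two common modeling errors: ignoring optical distortion and ignoring per-user interpupillary distance. The sketch below shows, in minimal form, how a rendering model can parameterize both; the pinhole-plus-first-order-radial-distortion form and the names eye_view_matrix, undistort, and k1 are assumptions for illustration, not the paper's actual model of the VPL EyePhone.

```python
import numpy as np

def eye_view_matrix(head_pose, ipd, eye):
    """Per-eye view transform: each eye is offset from the head frame by
    half the user's interpupillary distance, a parameter the abstract
    notes varies across users and should not be treated as fixed."""
    offset = np.eye(4)
    offset[0, 3] = -ipd / 2.0 if eye == "left" else ipd / 2.0
    world_from_eye = head_pose @ offset  # 4x4 world-from-head pose
    return np.linalg.inv(world_from_eye)

def undistort(x, y, k1):
    """First-order radial correction for the distortion introduced by
    the HMD optics (k1 is an illustrative coefficient; the paper derives
    the actual optical model)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale
```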