Warren Robinett
1-5 of 5
Presence: Teleoperators and Virtual Environments (2016) 25 (4): 325–329.
Published: 22 December 2016
Presence: Teleoperators and Virtual Environments (1995) 4 (1): 1–23.
Published: 01 February 1995
Abstract
The visual display transformation for virtual reality (VR) systems is typically much more complex than the standard viewing transformation discussed in the literature for conventional computer graphics. The process can be represented as a series of transformations, some of which contain parameters that must match the physical configuration of the system hardware and the user's body. Because of the number and complexity of the transformations, a systematic approach and a thorough understanding of the mathematical models involved are essential. This paper presents a complete model for the visual display transformation for a VR system; that is, the series of transformations used to map points from object coordinates to screen coordinates. Virtual objects are typically defined in an object-centered coordinate system (CS), but must be displayed using the screen-centered CSs of the two screens of a head-mounted display (HMD). This particular algorithm for the VR display computation allows multiple users to independently change position, orientation, and scale within the virtual world, allows users to pick up and move virtual objects, uses the measurements from a head tracker to immerse the user in the virtual world, provides an adjustable eye separation for generating two stereoscopic images, uses the off-center perspective projection required by many HMDs, and compensates for the optical distortion introduced by the lenses in an HMD. The implementation of this framework as the core of the UNC VR software is described, and the values of the UNC display parameters are given. We also introduce the vector-quaternion-scalar (VQS) representation for transformations between 3D coordinate systems, which is specifically tailored to the needs of a VR system. The transformations and CSs presented comprise a complete framework for generating the computer-graphic imagery required in a typical VR system. The model presented here is deliberately abstract in order to be general purpose; thus, issues of system design and visual perception are not addressed. While the mathematical techniques involved are already well known, there are enough parameters and pitfalls that a detailed description of the entire process should be a useful tool for someone interested in implementing a VR system.
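The VQS representation introduced in this abstract lends itself to a short illustration. The following is a minimal sketch of a VQS transform (translation vector v, unit rotation quaternion q, uniform scale s) and its composition rule; the class and function names, the (w, x, y, z) quaternion ordering, and the Hamilton-product convention are assumptions made here, not the paper's actual code.

```python
# Minimal VQS (vector-quaternion-scalar) transform sketch.
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def quat_rotate(q, p):
    """Rotate 3D vector p by unit quaternion q: q * p * q^-1."""
    pq = np.concatenate(([0.0], p))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, pq), q_conj)[1:]

class VQS:
    """Transform between 3D coordinate systems: scale, rotate, translate."""
    def __init__(self, v, q, s):
        self.v = np.asarray(v, float)   # translation vector
        self.q = np.asarray(q, float)   # unit quaternion (w, x, y, z)
        self.s = float(s)               # uniform scale factor

    def apply(self, p):
        """Map a point from the source CS into the destination CS."""
        return quat_rotate(self.q, self.s * np.asarray(p, float)) + self.v

    def compose(self, other):
        """self o other: apply `other` first, then `self`."""
        return VQS(
            v=quat_rotate(self.q, self.s * other.v) + self.v,
            q=quat_mul(self.q, other.q),
            s=self.s * other.s,
        )
```

Composing object-to-world, world-to-head, and head-to-eye transforms in this way mirrors the abstract's series of transformations from object coordinates toward screen coordinates; the off-center projection and optical-distortion correction it mentions would follow as separate, non-VQS steps.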
Presence: Teleoperators and Virtual Environments (1993) 2 (3): 171–184.
Published: 01 August 1993
Abstract
Technologies applicable to a display system in which a laser is raster scanned on the viewer's retina are reviewed. The properties of laser beam propagation and the inherent resolution of a laser scanning system are discussed. Scanning techniques employing rotating mirrors, galvanometer scanners, acousto-optic deflectors, and piezoelectric deflectors are described. Resolution, speed, deflection range, and physical size are strongly coupled properties of these technologies. A radiometric analysis indicates that eye safety would not be a problem in a retina-scanning system. For head-mounted display applications, a monochromatic system employing a laser diode source with acousto-optic and galvanometer scanners is deemed most practical at the present time. A resolution of 1000 × 1000 pixels at 60 frames per second should be possible with such a monochromatic system using currently available off-the-shelf components. A full-color scanned-laser display suitable for head-mounted display use is not judged feasible to build at this time with off-the-shelf components.
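The 1000 × 1000 at 60 frames per second figure implies specific scan rates, which this back-of-envelope sketch makes explicit; the variable names are ours, and the assignment of axes to scanner types is an inference from the abstract's pairing of acousto-optic and galvanometer scanners.

```python
# Scan rates implied by a 1000 x 1000 pixel, 60 Hz raster-scanned display.
pixels_per_line = 1000
lines_per_frame = 1000
frames_per_sec = 60

line_rate_hz = lines_per_frame * frames_per_sec   # fast-axis sweep rate
pixel_rate_hz = pixels_per_line * line_rate_hz    # beam modulation bandwidth

print(f"line rate:  {line_rate_hz / 1e3:.0f} kHz")   # -> 60 kHz
print(f"pixel rate: {pixel_rate_hz / 1e6:.0f} MHz")  # -> 60 MHz
```

A 60 kHz fast-axis rate is well beyond purely mechanical scanners, which is consistent with using an acousto-optic deflector for the fast axis and a galvanometer for the 60 Hz slow (frame) axis.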
Presence: Teleoperators and Virtual Environments (1992) 1 (2): 229–247.
Published: 01 May 1992
Abstract
A taxonomy is proposed to classify all varieties of technologically mediated experience, including virtual reality and teleoperation as well as earlier devices such as the microscope and telephone. The model of mediated interaction assumes a sensor-display link from the world to the human and an action-actuator link going back from the human to the world, with the mediating technology transforming the transmitted experience in some way. The taxonomy is used to classify a number of example systems, and two taxonomies proposed earlier are compared with the ideas presented in this paper. The long-term prospects of the field are then considered, ignoring constraints of cost, effort, and development time. Finally, the ultimate limits of synthetic experience, which derive from properties of the physical universe and the human neural apparatus, are discussed.
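The sensor-display and action-actuator links described in the abstract can be pictured as a pair of transformable channels. The following is a minimal sketch of that structure only; every name is hypothetical, the Signal type is a placeholder, and none of this reflects the paper's actual taxonomy dimensions.

```python
# Sketch of the mediated-interaction model: two directed links between
# world and human, each of which the mediating technology may transform.
from dataclasses import dataclass
from typing import Callable

Signal = dict  # placeholder for whatever the channel carries

@dataclass
class MediatedSystem:
    # world -> human: sense the world, transform, present on a display
    sensor: Callable[[], Signal]
    display: Callable[[Signal], None]
    # human -> world: read the user's action, transform, drive an actuator
    action: Callable[[], Signal]
    actuator: Callable[[Signal], None]
    # identity transforms model an (ideally) transparent medium
    transform_out: Callable[[Signal], Signal] = lambda s: s
    transform_in: Callable[[Signal], Signal] = lambda s: s

    def step(self):
        """One round trip: present the mediated world, enact the user's action."""
        self.display(self.transform_out(self.sensor()))
        self.actuator(self.transform_in(self.action()))
```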
Presence: Teleoperators and Virtual Environments (1992) 1 (1): 45–62.
Published: 01 February 1992
Abstract
For stereoscopic photography or telepresence, orthostereoscopy occurs when the perceived size, shape, and relative position of objects in the three-dimensional scene being viewed match those of the physical objects in front of the camera. In virtual reality, the simulated scene has no physical counterpart, so orthostereoscopy must in this case be defined as constancy of the perceived size, shape, and relative positions of the simulated objects as the head moves around. Achieving this constancy requires that the computational model used to generate the graphics match the physical geometry of the head-mounted display being used. This geometry includes the optics used to image the displays and the placement of the displays with respect to the eyes. The model may fail to match the geometry because model parameters are difficult to measure accurately, or because the model itself is in error. Two common modeling errors are ignoring the distortion caused by the optics and ignoring the variation in interpupillary distance across different users. A computational model for the geometry of a head-mounted display is presented, and the parameters of this model for the VPL EyePhone are calculated.
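Because each display sits off-axis relative to its eye, the geometry described here calls for an off-center (asymmetric-frustum) perspective projection per eye rather than a symmetric one. Below is a minimal sketch of such a projection matrix in the standard OpenGL glFrustum convention; the function name and the example numbers are illustrative and are not the VPL EyePhone parameters the paper calculates.

```python
# Off-center perspective projection for one eye of an HMD.
import numpy as np

def off_center_projection(left, right, bottom, top, near, far):
    """Projection matrix for a frustum whose axis need not pass through
    the screen center, as when the eye sits off-axis behind an HMD lens.
    Extents are measured at the near plane, in eye coordinates."""
    return np.array([
        [2*near/(right-left), 0.0, (right+left)/(right-left), 0.0],
        [0.0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0.0],
        [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Example: a screen whose center is shifted nasally relative to the eye,
# so the left and right extents at the near plane are unequal.
P = off_center_projection(left=-0.8, right=1.2, bottom=-1.0, top=1.0,
                          near=1.0, far=100.0)
```

In such a model, measured interpupillary distance and the lens distortion correction would adjust the per-eye offsets and frustum extents fed into this matrix.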