Greg Welch
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2016) 25 (1): 33–46.
Published: 01 July 2016
Abstract
We demonstrate a generalizable method for unified multitouch detection and response on various nonparametric and parametric surfaces to support interactive physical-virtual experiences. The method employs multiple infrared (IR) cameras, one or more projectors, IR light sources, and a rear-projection surface. IR light reflected off human fingers is captured by cameras with matched IR pass filters, allowing for the detection and localization of multiple simultaneous finger-touch events. The processing of these events is tightly coupled with the rendering system to produce auditory and visual responses displayed on the surface using the projector(s) to achieve a responsive, interactive, physical-virtual experience. We demonstrate the method on two nonparametric face-shaped surfaces and a planar surface. We also illustrate the approach's applicability in an interactive medical training scenario using one of the head surfaces to support hands-on, touch-sensitive medical training with dynamic physical-virtual patient behavior.
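The core detection step described above (bright IR reflections from fingers, localized as discrete touch events) can be sketched as simple blob extraction. This is an illustrative sketch only; the threshold, minimum area, and 4-connected flood fill are assumptions, not the authors' actual pipeline.

```python
import numpy as np

def detect_touches(ir_frame, threshold=200, min_area=4):
    """Return (x, y) centroids of bright IR blobs (candidate finger touches).

    Hypothetical sketch: threshold the IR image, gather connected
    components of bright pixels, and report each blob's centroid.
    """
    mask = ir_frame >= threshold
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    touches = []
    for y in range(h):
        for x in range(w):
            if not mask[y, x] or seen[y, x]:
                continue
            # Flood fill to collect one connected component of bright pixels.
            stack, pixels = [(y, x)], []
            seen[y, x] = True
            while stack:
                cy, cx = stack.pop()
                pixels.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        stack.append((ny, nx))
            if len(pixels) >= min_area:  # reject single-pixel sensor noise
                ys, xs = zip(*pixels)
                touches.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return touches
```

Each returned centroid would then be handed to the rendering system to trigger the projected audiovisual response at that surface location.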
Presence: Teleoperators and Virtual Environments (2014) 23 (2): 109–132.
Published: 01 August 2014
Abstract
This paper presents a framework to interactively control avatars in remote environments. The system, called AMITIES, serves as the central component that connects people controlling avatars (inhabiters), various manifestations of these avatars (surrogates), and people interacting with these avatars (participants). A multiserver–client architecture, based on a low-demand network protocol, connects the participant environment(s), the inhabiter station(s), and the avatars. A human-in-the-loop metaphor provides an interface for remote operation, with support for multiple inhabiters, multiple avatars, and multiple participants. Custom animation blending routines and a gesture-based interface provide inhabiters with an intuitive avatar control paradigm. This gesture control is enhanced by genres of program-controlled behaviors that can be triggered by events or inhabiter choices for individual or groups of avatars. This mixed (agency and gesture-based) control paradigm reduces the cognitive and physical loads on the inhabiter while supporting natural bidirectional conversation between participants and the virtual characters or avatar counterparts, including ones with physical manifestations, for example, robotic surrogates. The associated system affords the delivery of personalized experiences that adapt to the actions and interactions of individual users, while staying true to each virtual character's personality and backstory. In addition to its avatar control paradigm, AMITIES provides processes for character and scenario development, testing, and refinement. It also has integrated capabilities for session recording and event tagging, along with automated tools for reflection and after-action review. We demonstrate effectiveness by describing an instantiation of AMITIES, called TeachLivE, that is widely used by colleges of education to prepare new teachers and provide continuing professional development to existing teachers. Finally, we show the system's flexibility by describing a number of other diverse applications and by presenting plans to enhance capabilities and application areas.
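The mixed agency/gesture control described in the abstract, where program-controlled behaviors are blended with the inhabiter's gesture-driven pose, can be pictured as a weighted pose blend. This is a hypothetical sketch; the function and parameter names are assumptions, not AMITIES's actual animation blending routines.

```python
def blend_pose(gesture_pose, behavior_pose, agency_weight):
    """Linearly blend inhabiter-driven and program-driven joint angles.

    Hypothetical sketch of mixed agency/gesture control:
    agency_weight = 0.0 gives full inhabiter (gesture) control,
    agency_weight = 1.0 gives full program-controlled behavior.
    Poses are dicts mapping joint name -> angle in degrees.
    """
    w = max(0.0, min(1.0, agency_weight))  # clamp to [0, 1]
    return {joint: (1.0 - w) * gesture_pose[joint] + w * behavior_pose[joint]
            for joint in gesture_pose}
```

In such a scheme, a triggered program behavior could ramp `agency_weight` up and back down over a few frames, relieving the inhabiter of moment-to-moment control without a visible discontinuity.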
Presence: Teleoperators and Virtual Environments (2004) 13 (2): 128–145.
Published: 01 April 2004
Abstract
We introduce and present preliminary results for a hybrid display system combining head-mounted and projector-based displays. Our work is motivated by a surgical training application where it is necessary to simultaneously provide both a high-fidelity view of a central close-up task (the surgery) and visual awareness of objects and events in the surrounding environment. In this article, we motivate the use of a hybrid display system, discuss previous work, describe a prototype along with methods for geometric calibration, and present results from a controlled human subject experiment. This article is an invited resubmission of work presented at IEEE Virtual Reality 2003. The article has been updated and expanded to include (among other things) additional related work and more details about the calibration process.
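Geometric calibration of a projector-based display typically involves estimating a planar mapping between projector pixels and surface (or camera) coordinates. A minimal sketch of that idea, assuming a direct linear transform (DLT) over point correspondences; this is illustrative only, not the paper's specific calibration procedure:

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H mapping src_pts -> dst_pts via DLT.

    Illustrative of planar projector calibration; expects at least
    four (x, y) correspondences, no three of which are collinear.
    """
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null-space vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1
```

Once H is known, any projector pixel can be warped into surface coordinates (and vice versa), which is the basic building block for registering projected imagery with the physical scene.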
Presence: Teleoperators and Virtual Environments (2001) 10 (1): 1–21.
Published: 01 February 2001
Abstract
Since the early 1980s, the Tracker Project at the University of North Carolina at Chapel Hill has been working on wide-area head tracking for virtual and augmented environments. Our long-term goal has been to achieve the high performance required for accurate visual simulation throughout our entire laboratory, beyond into the hallways, and eventually even outdoors. In this article, we present results and a complete description of our most recent electro-optical system, the HiBall Tracking System. In particular, we discuss motivation for the geometric configuration and describe the novel optical, mechanical, electronic, and algorithmic aspects that enable unprecedented speed, resolution, accuracy, robustness, and flexibility.
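Trackers of this class fuse a stream of noisy optical measurements into a smooth, high-rate pose estimate, commonly with Kalman-style predict/correct filtering. As a minimal illustration of that cycle (a scalar filter, not the HiBall's actual multidimensional estimator):

```python
def kalman_step(x, p, z, q, r):
    """One predict/correct cycle of a scalar Kalman filter.

    Illustrative sketch only (not the HiBall's estimator):
    x, p -- prior state estimate and its variance
    z    -- new sensor measurement
    q, r -- process and measurement noise variances
    """
    # Predict: the state carries over; uncertainty grows by process noise.
    x_pred, p_pred = x, p + q
    # Correct: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

Running one such update per individual sighting, rather than waiting to batch a full set of observations, is what lets an electro-optical tracker achieve very high update rates and low latency.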