Michael Cohen
Presence: Teleoperators and Virtual Environments (2015) 24 (3): 220–242.
Published: 01 July 2015
Abstract
We describe a musical cyberworld, Folkways in Wonderland, in which avatar-represented users can find and listen to selections from the Smithsonian Folkways world music collection. When audition is disturbed by the cacophony of nearby tracks or avatar conversations, one's soundscape can be refined, since the system supports narrowcasting, a technique that allows information streams to be filtered. Our system supports two different kinds of sound sources: musical selections and avatar conversation (voice chat). Narrowcasting for music enables aesthetic focus; narrowcasting for talk enables cognitive focus. The former is required for dense presentation of musical sound, the latter for virtual worlds in which many avatars are expected to be able to interact. An active listener can fork self-identified avatars using a novel multipresence technique, locating representatives at locations of interest, each clone capturing its respective soundscape, controlled using narrowcasting functions {self, non-self} × {select (solo), mute, deafen, attend}. Likewise, one can participate in a conference and at the same time join a global tour of music. Our music browser is architected to use MX: IEEE 1599, a comprehensive, multilayered music description standard. Using our cyberworld as a virtual laboratory, we evaluated the effectiveness of narrowcasting when auditioning music and conferencing. Experimental results suggest that narrowcasting and multipresence techniques are useful for collaborative music exploration and improve user experience. We also received positive feedback from participants regarding the narrowcasting representations, variously based on colors, symbols, and icons.
Includes: Multimedia, Supplementary data
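The multipresence scheme described above lends itself to a compact model: each forked clone is a sink parked at a location of interest, and the {deafen, attend} half of the narrowcasting matrix determines which clones' captured soundscapes are superposed. The following Python sketch illustrates that selection rule under our reading of the abstract; the Clone fields and the capture callback are hypothetical illustrations, not the paper's API.

```python
from dataclasses import dataclass

@dataclass
class Clone:
    """One forked self-representative, parked at a location of interest."""
    location: tuple
    attend: bool = False   # concentrate exclusively on this clone's soundscape
    deafen: bool = False   # exclude this clone's soundscape

def audible_clones(clones):
    # Deafened clones never contribute; if any clone is attended,
    # only attended (and undeafened) clones contribute.
    attending = [c for c in clones if c.attend and not c.deafen]
    return attending or [c for c in clones if not c.deafen]

def composite_soundscape(clones, capture):
    """Superpose the soundscapes captured by each audible clone.

    `capture` is a hypothetical callback mapping a clone to its local
    mix, e.g. nearby Folkways tracks plus voice-chat of nearby avatars.
    """
    mix = []
    for clone in audible_clones(clones):
        mix.extend(capture(clone))
    return mix
```

Under this rule, a user with one clone on a global music tour and another in a conference can set attend on the touring clone to confine the composite mix to the tour, without withdrawing the conferencing avatar from the shared world.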
Presence: Teleoperators and Virtual Environments (2007) 16 (4): 414–438.
Published: 01 August 2007
Abstract
To support multiperspective and stereographic image display systems intended for multiuser applications, we have developed two integrated multiuser multiperspective stereographic browsers, respectively featuring IBR-generated egocentric and CG exocentric perspectives. The first, "VR 4 U 2 C" ("virtual reality for you to see"), uses Apple's QuickTime VR technology and the Java programming language, together with the QuickTime for Java library. This unique QTVR browser allows coordinated display of multiple views of a scene or object, limited only by the size and number of monitors or projectors assembled around or among users (for panoramas or turnoramas) in various viewing locations. The browser also provides a novel solution to limitations associated with the display of QTVR imagery: its multinode feature provides interactive stereographic QTVR (dubbed SQTVR), displaying dynamically selected pairs of images that exhibit binocular parallax, the stereoscopic depth percept enhanced by motion parallax from displacement of the viewpoint through space coupled with rotation of the view through a 360° horizontal panorama. This navigable approach to SQTVR allows proper occlusion and disocclusion as the virtual standpoint shifts, as well as natural looming of closer objects relative to more distant ones. We have integrated this stereographic panoramic browsing application, in a client/server architecture, with a sibling client named "Just Look at Yourself!", which is built with Java3D and allows real-time visualization of the dollying and viewpoint adjustment, as well as juxtaposition and combination of stereographic CG and IBR displays. "Just Look at Yourself!" visualizes and emulates VR 4 U 2 C, embedding avatars, associated with cylinder pairs that are wrapped around the stereo standpoints and texture-mapped with a set of panoramic scenes, into a 3D CG model of the same space captured by the panoramas. The transparency of the 3D CG polygon space and of the photorealistic stereographic 360° scenes, as well as the size of the stereo goggles through which the CG space is conceptually viewed and upon which the 360° scenes are texture-mapped, can be adjusted at runtime to clarify the relationship between the spaces.
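As we read it, the multinode SQTVR idea amounts to choosing, for each eye, the captured panorama node nearest that eye's displaced standpoint; moving the standpoint then changes the selected pair, yielding motion parallax on top of binocular parallax. A minimal Python sketch of one such selection rule follows; the node layout, the half-interocular offset, and the function names are illustrative assumptions, not the browser's actual algorithm.

```python
import math

def stereo_pair(nodes, standpoint, view_azimuth, half_ipd=0.032):
    """Pick the panorama nodes nearest each eye's displaced position.

    `nodes` lists the (x, y) capture positions of the panoramas;
    `standpoint` is the virtual viewpoint; each eye is displaced half
    an interocular distance perpendicular to the gaze direction.
    """
    # Unit vector 90° counterclockwise from the gaze: the viewer's left.
    px, py = -math.sin(view_azimuth), math.cos(view_azimuth)
    left_eye  = (standpoint[0] + half_ipd * px, standpoint[1] + half_ipd * py)
    right_eye = (standpoint[0] - half_ipd * px, standpoint[1] - half_ipd * py)

    def nearest(point):
        return min(nodes, key=lambda n: (n[0] - point[0])**2 + (n[1] - point[1])**2)

    return nearest(left_eye), nearest(right_eye)
```

Because the selected pair changes as the standpoint dollies past nodes, nearer objects occlude, disocclude, and loom naturally relative to farther ones, as the abstract describes.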
Presence: Teleoperators and Virtual Environments (2000) 9 (1): 84–96.
Published: 01 February 2000
Abstract
Non-immersive perspectives in virtual environments enable flexible paradigms of perception, especially in the context of frames of reference for conferencing and musical audition. Traditional mixing idioms for enabling and disabling various audio sources employ mute and solo functions, which, along with cue, selectively disable or focus on respective channels. Exocentric interfaces that explicitly model not only sources but also sinks motivate the generalization of mute and solo (or cue) to exclude and include, manifested for sinks as deafen and attend (confide and harken). Such functions, which narrow stimuli by explicitly blocking out and/or concentrating on selected entities, can be applied not only to other users' sinks for privacy, but also to one's own sinks for selective attendance or presence. Multiple sinks are useful in groupware, where a common environment implies social inhibitions against rearranging shared sources like musical voices or conferees, as well as in individual sessions in which the spatial arrangement of sources, like the configuration of a concert orchestra, has mnemonic value. A taxonomy of modal narrowcasting functions is proposed, and an audibility protocol is described, comprising revoke, renounce, grant, and claim methods, invocable by these narrowcasting commands to control the superposition of soundscapes.
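The exclude/include semantics can be summarized as a pair of dual predicates: an excluded entity never passes, and once anything is explicitly included, only included entities pass. Here is a minimal Python sketch of that reading; the revoke, renounce, grant, and claim methods would then be the state changes that set or clear these flags for non-self and self entities respectively, which is our assumption for illustration rather than the protocol's definition.

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    mute: bool = False    # exclude
    solo: bool = False    # include (a.k.a. select or cue)

@dataclass
class Sink:
    name: str
    deafen: bool = False  # exclude
    attend: bool = False  # include (a.k.a. confide or harken)

def source_sounds(src, sources):
    # Mute always silences; if any source is soloed, only soloed sources sound.
    return not src.mute and (src.solo or not any(s.solo for s in sources))

def sink_hears(snk, sinks):
    # The dual rule for sinks: deafen excludes, attend narrows attention.
    return not snk.deafen and (snk.attend or not any(k.attend for k in sinks))

def audible(snk, src, sinks, sources):
    """A (sink, source) pair contributes to the superposed soundscape
    only if both sides pass their narrowcasting filters."""
    return sink_hears(snk, sinks) and source_sounds(src, sources)
```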
Presence: Teleoperators and Virtual Environments (1998) 7 (1): 53–66.
Published: 01 February 1998
Abstract
Audio windowing is a front-end, or user interface, to an audio system with a real-time spatial sound back end. Complementing directionalization by a digital signal processor (DSP), gain adjustment is used to control the volume of the various mixels ([sound] mixing elements). Virtual gain can be synthesized from components derived from collective iconic size, mutual distance, orientation and directivity, and selectively enabled according to room-wise partitioning of sources across sinks. This paper describes a derivation of virtual gain, and outlines the deployment of these expressions in an audio windowing system.
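As a concrete illustration of how such components might compose, the sketch below multiplies factors for collective iconic size, inverse-distance attenuation, a cosine directivity lobe, and room-wise gating. This is one plausible form consistent with the abstract, not the paper's actual derivation; all parameter names are ours.

```python
import math

def virtual_gain(source, sink, same_room=True, rolloff=1.0, ref_distance=1.0):
    """Compose a virtual gain from size, distance, directivity, and rooms.

    `source` and `sink` are dicts with 'pos' (x, y), 'size', and
    'azimuth' (facing direction, radians); all fields illustrative.
    """
    if not same_room:                # room-wise partitioning of sources across sinks
        return 0.0

    dx = sink["pos"][0] - source["pos"][0]
    dy = sink["pos"][1] - source["pos"][1]
    d = max(math.hypot(dx, dy), ref_distance)

    size = source["size"] * sink["size"]          # collective iconic size
    attenuation = (ref_distance / d) ** rolloff   # mutual distance
    # Directivity: how squarely the source faces the sink (cosine lobe);
    # a matching sensitivity factor for the sink could be composed the same way.
    bearing = math.atan2(dy, dx)
    directivity = max(math.cos(bearing - source["azimuth"]), 0.0)

    return size * attenuation * directivity
```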
Presence: Teleoperators and Virtual Environments (1994) 3 (1): 87–93.
Published: 01 February 1994
Abstract
As in the Bauhaus movement of the 1930s, artists and engineers are working together on commercial industrial (hardware) and postindustrial (software) design. Japan, a world leader in research and development areas like display technology and robotics, is a fertile environment in which VR [known here sometimes as AR (for artificial reality)] can flourish, both in labs and studios, and as consumer products and services: a confluence of theme parks, amusement centers, retail outlets, and home computer and media centers. Emphasizing the capture, transmission, and reproduction of experience, (literally) sensational VR is upon us, to simulate and stimulate. If it is hyped, or hyper, it is happening around Tokyo. Here is a selective guide to meta-holo-attractions open to the public in "The Big Orange."
Presence: Teleoperators and Virtual Environments (1994) 3 (1): 60–72.
Published: 01 February 1994
Abstract
View article
PDF
A technique is presented for dynamically invoking a set of head-related transfer functions (HRTFs) and scaling gain, driven by a dynamic map in a graphic window. With such an interface, users may configure a virtual conferencing environment, manipulating virtual positions of teleconferees. The design of a personal headphone teleconferencing prototype is proposed, integrating spatialized sound presentation with individualized HRTF measurement using a bifunctional transducer. According to judgment tests, the use of individualized HRTFs instead of dummy-head HRTFs can reduce front-back sound image confusion.
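The described pipeline, map position → direction → HRTF pair → gain-scaled binaural output, can be sketched compactly. In the sketch below, the HRTF bank keyed by measured azimuth, the inverse-distance gain, and the direct-form convolution are all illustrative stand-ins for the prototype's actual DSP.

```python
import math

def convolve(x, h):
    """Direct-form FIR convolution (a DSP library would do this in practice)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def render_teleconferee(signal, listener_pos, talker_pos, hrtf_bank):
    """Spatialize one teleconferee from map positions (names hypothetical).

    `hrtf_bank` maps measured azimuths in degrees (e.g. a 30° grid)
    to (left_ir, right_ir) impulse-response pairs.
    """
    dx = talker_pos[0] - listener_pos[0]
    dy = talker_pos[1] - listener_pos[1]
    azimuth = math.degrees(math.atan2(dy, dx)) % 360.0

    # Dynamically invoke the nearest measured HRTF pair (wrapping at 360°).
    nearest = min(hrtf_bank, key=lambda a: min(abs(a - azimuth), 360.0 - abs(a - azimuth)))
    left_ir, right_ir = hrtf_bank[nearest]

    gain = 1.0 / max(math.hypot(dx, dy), 1.0)   # simple distance scaling
    left = [gain * v for v in convolve(signal, left_ir)]
    right = [gain * v for v in convolve(signal, right_ir)]
    return left, right
```

Dragging a teleconferee's icon on the map changes the azimuth and distance, so the invoked HRTF pair and gain update accordingly.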
Presence: Teleoperators and Virtual Environments (1992) 1 (4): 468–481.
Published: 01 November 1992
Abstract
It is important to exploit sound as a vital communication channel for computer-human interfaces. Developing this potential motivates both devising expressive models unique to audio and exploring analogues to visual modes of representation. This paper elaborates an organization of presentation and control that implements a flexible sound management system called "audio windows." After reviewing audio imaging, spatial sound, and the relevant underlying technology, an audio windowing prototype is described, implementing an extended model of free-field, two-dimensional spatial sound control. The system, "Maw" (acronymic for multidimensional audio windows), is a GUI (graphic user interface) integrating a graphic editor with a multidimensional spatial sound engine. Standard idioms for WIMP (window, icon, menu, pointing device) systems are reinterpreted for audio window applications, including provisions for directionalized and nonatomic spatial sound objects. Unique features include draggably rotating icons; clusters (dynamically collapsible hierarchical groups of spatial sound objects); and an autofocus mode that is used to disambiguate multiple presence.
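Maw's clusters suggest a composite structure: a collapsed group presents as a single nonatomic source at its centroid, while an expanded group spatializes its members individually. The Python sketch below captures that reading; the class names and the centroid rule are our assumptions, not Maw's implementation.

```python
class SpatialSoundObject:
    """Leaf: an atomic spatial sound source at a 2D position."""
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

    def sources(self):
        yield self

class Cluster:
    """A dynamically collapsible hierarchical group of spatial sound objects."""
    def __init__(self, name, members, collapsed=False):
        self.name, self.members, self.collapsed = name, members, collapsed

    @property
    def x(self):  # centroid of the (possibly nested) members
        return sum(m.x for m in self.members) / len(self.members)

    @property
    def y(self):
        return sum(m.y for m in self.members) / len(self.members)

    def sources(self):
        if self.collapsed:
            yield self                    # one nonatomic source at the centroid
        else:
            for member in self.members:   # recurse through nested groups
                yield from member.sources()
```

Toggling collapsed on a nested group then changes which positions the spatial sound engine renders, without rearranging the underlying objects.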