Search results for author: William Steptoe (1–2 of 2)
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2012) 21 (4): 388–405.
Published: 01 November 2012
Abstract
Users of immersive virtual reality (VR) are often observed to act realistically on social, behavioral, physiological, and subjective levels. However, experimental studies in the field typically collect and analyze metrics independently, which fails to consider the synchronous and multimodal nature of the original human activity. This paper concerns multimodal data capture and analysis in immersive collaborative virtual environments (ICVEs) in order to enable a holistic and rich analysis based on techniques from interaction analysis. A reference architecture for collecting multimodal data specifically for immersive VR is presented. It collates multiple components of a user's nonverbal and verbal behavior in a single log file, thereby preserving the temporal relationships between cues. Two case studies describing sequences of immersive avatar-mediated communication (AMC) demonstrate the ability of multimodal data to preserve a rich description of the original mediated social interaction. Analyses of the sequences using techniques from interaction analysis emphasize the causal interrelationships between the captured components of human behavior, leading to a deeper understanding of how and why the communication may have unfolded. In presenting our logging architecture, we hope to initiate a discussion of a logging standard that can be built by the community, so that practitioners can share data and build better tools to analyze the utility of VR.
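The abstract's central design point is that all components of verbal and nonverbal behavior share one timestamped log file, so the temporal relationships between cues survive for later interaction analysis. A minimal sketch of what such a unified log record might look like is given below; the field names (`head_pose`, `gaze_target`, `speaking`, `gesture`) are illustrative assumptions, not the paper's actual schema.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional, Tuple

# Hypothetical unified log record: every modality is stamped with the same
# session clock and appended to a single file, preserving cue ordering.
@dataclass
class MultimodalLogEntry:
    timestamp: float                  # seconds since session start
    user_id: str
    head_pose: Tuple[float, ...]      # (x, y, z, yaw, pitch, roll)
    gaze_target: Optional[str]        # id of the object/avatar being looked at
    speaking: bool                    # voice-activity flag for the audio channel
    gesture: Optional[str]            # e.g. "point", "nod"; None if idle

def append_entry(log_path: str, entry: MultimodalLogEntry) -> None:
    """Append one entry as a JSON line, keeping all modalities in one file."""
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

entry = MultimodalLogEntry(
    timestamp=12.437, user_id="A",
    head_pose=(0.1, 1.6, -0.3, 15.0, -5.0, 0.0),
    gaze_target="avatar_B", speaking=True, gesture="nod",
)
append_entry("session.log", entry)
```

Because each line is a self-describing JSON record on a common timeline, downstream tools can reconstruct the synchrony between, say, a gaze shift and the onset of speech without merging separately captured streams.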
Presence: Teleoperators and Virtual Environments (2012) 21 (4): 406–422.
Published: 01 November 2012
Abstract
This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the "destination-visitor" paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites, and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful.
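The "destination-visitor" paradigm described above designates one site as the shared spatial frame of reference and has the remaining, technologically asymmetric sites join it as visitors. A minimal sketch of that topology follows; the site names, roles, and capability fields are hypothetical placeholders, not the paper's actual configuration.

```python
from typing import Dict, List

# Hypothetical site descriptions: exactly one site acts as the "destination"
# (hosting the shared spatial frame of reference); all others are "visitors",
# each with its own capture and display technology.
def shared_frame_ok(sites: Dict[str, dict]) -> bool:
    """True if exactly one site is the destination and the rest are visitors."""
    roles: List[str] = [s["role"] for s in sites.values()]
    return roles.count("destination") == 1 and all(
        r in ("destination", "visitor") for r in roles
    )

sites = {
    "site_a": {"role": "destination", "display": "immersive_projection"},
    "site_b": {"role": "visitor", "display": "head_mounted_display"},
    "site_c": {"role": "visitor", "display": "desktop_video"},
}
print(shared_frame_ok(sites))  # → True
```

The single-destination invariant is what gives all three parties the common spatial frame of reference that the paper found so conducive to blocking and directing, even though each site's capture and display hardware differs.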