Daniel Thalmann
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2014) 23 (2): 133–154.
Published: 01 August 2014
Abstract
In this paper, a human–robot interaction (HRI) system based on a novel combination of sensors is proposed. It allows one person to interact with a humanoid social robot using natural body language. The robot understands the meaning of human upper-body gestures and expresses itself by using a combination of body movements, facial expressions, and verbal language. A set of 12 upper-body gestures, including gestures with human–object interactions, is used for communication. The gestures are characterized by head, arm, and hand posture information. The wearable Immersion CyberGlove II is employed to capture the hand posture, and this information is combined with the head and arm posture captured from a Microsoft Kinect; this is a new sensor solution for human-gesture capture. Based on the posture data from the CyberGlove II and Kinect, an effective real-time human gesture recognition method is proposed. The gesture understanding approach based on an innovative combination of sensors is the main contribution of this paper. To verify the effectiveness of the proposed gesture recognition method, a human body gesture data set is built. The experimental results demonstrate that our approach can recognize the upper-body gestures with high accuracy in real time. In addition, for robot motion generation and control, a novel online motion planning method is proposed. In order to generate appropriate dynamic motion, a quadratic programming (QP)-based dual-arm kinematic motion generation scheme is proposed, and a simplified recurrent neural network is employed to solve the QP problem. The integration of a handshake within the HRI system illustrates the effectiveness of the proposed online generation method.
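The abstract does not spell out the QP formulation, but a velocity-level quadratic program is a common way to generate arm motion kinematically. The sketch below is an illustrative assumption, not the paper's method: a minimum-norm joint-velocity QP with an end-effector velocity equality constraint, solved directly via its KKT system rather than with the simplified recurrent neural network the authors employ. The Jacobian values are made up.

```python
# Illustrative sketch only: a velocity-level QP of the kind commonly used for
# kinematic motion generation (minimize the joint-velocity norm subject to the
# end-effector velocity constraint J q_dot = x_dot). Solved here through the
# KKT system; the paper instead uses a simplified recurrent neural network.
import numpy as np

def min_norm_joint_velocities(J, x_dot):
    """Solve min 0.5*||q_dot||^2  s.t.  J @ q_dot = x_dot via the KKT system."""
    n = J.shape[1]                       # number of joints
    m = J.shape[0]                       # task-space dimension
    kkt = np.block([[np.eye(n), J.T],
                    [J, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), x_dot])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]                       # joint velocities q_dot

# Toy example: a 7-DOF arm tracking a 6-D end-effector velocity.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))          # hypothetical arm Jacobian
x_dot = np.array([0.05, 0.0, 0.02, 0.0, 0.0, 0.1])
q_dot = min_norm_joint_velocities(J, x_dot)
print(np.allclose(J @ q_dot, x_dot))     # True: constraint satisfied
```

In a full dual-arm scheme, joint-limit and velocity-bound inequalities would typically be added, which is where a dedicated QP solver (or, as in the paper, a recurrent network) becomes necessary.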
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2014) 23 (2): iii–iv.
Published: 01 August 2014
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2005) 14 (2): 234–246.
Published: 01 April 2005
Abstract
The ability to build and control exposure through a seamless synergy of interaction and narration is a key requirement for a new type of immersive-VR training and therapy system. This paper presents a practical approach for immersive-VR training and therapy applications based on interactive storytelling. It provides a detailed description of a working implementation of the Interactive Narration Space (INS); this approach combines and satisfies both interaction and narration requirements through the use of high-level social interaction. By introducing the Social Channel, we aim at minimizing the contradiction between the control over the story required by the trainer/therapist and the interaction required by the trainee/patient. These concepts and their practical realization have been investigated in the context of emergency-situation training and psychotherapeutic exposure, and serve to validate the usability of mediated interaction with a virtual assistant.
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2005) 14 (2): iii–iv.
Published: 01 April 2005
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2001) 10 (6): 632–646.
Published: 01 December 2001
Abstract
In the past decade, networked virtual environments (NVEs) have been an increasingly active area of research, with the first commercial systems emerging recently. Graphical and behavioral representation of users within such systems is a particularly important issue that has lagged in development behind other issues such as network architectures and space structuring. In this paper, we highlight the importance of using virtual humans within these systems and provide a brief overview of several virtual-human technologies, used in particular for the simulation of crowds. As the main technical contribution, the paper presents the integration of these technologies with the COVEN-DIVE platform, the extension of the DIVE system developed within the COVEN project. In conjunction with this, we present our contributions through the COVEN project to the MPEG-4 standard concerning the representation of virtual humans.
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (1999) 8 (2): 140–156.
Published: 01 April 1999
Abstract
In this paper we present a virtual tennis game. We describe the creation and modeling of the virtual humans and body deformations, also showing the real-time animation and rendering aspects of the avatars. We focus on the animation of the virtual tennis ball and the behavior of a synthetic, autonomous referee who judges the tennis games. The networked, collaborative, virtual environment system is described with special reference to its interfaces to driver programs. We also mention the virtual reality (VR) devices that are used to merge the interactive players into the virtual tennis environment, together with the equipment and technologies employed for this exciting experience. We conclude with remarks on personal experiences during the game and on future research topics to improve parts of the presented system.
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (1999) 8 (2): 218–236.
Published: 01 April 1999
Abstract
COVEN (Collaborative Virtual Environments) is a European project that seeks to develop a comprehensive approach to the issues in the development of collaborative virtual environment (CVE) technology. COVEN brings together twelve academic and industrial partners with a wide range of expertise in CSCW, networked VR, computer graphics, human factors, HCI, and telecommunications infrastructures. After two years of work, we are presenting the main features of our approach and results, our driving applications, the main components of our technical investigations, and our experimental activities. With different citizen and professional application scenarios as driving forces, COVEN is exploring the requirements and supporting techniques for collaborative interaction in scalable CVEs. Technical results are being integrated in an enriched networked VR platform based on the dVS and DIVE systems. Taking advantage of a dedicated Europe-wide ISDN and ATM network infrastructure, a large component of the project is a trial and experimentation activity that should allow a comprehensive understanding of the network requirements of these systems as well as their usability issues and human factors aspects.
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (1999) 8 (2): 187–203.
Published: 01 April 1999
Abstract
Motion-capture techniques are rarely based on orientation measurements, for two main reasons: (1) optical motion-capture systems are designed to track object positions rather than orientations (which must be deduced from several trackers), and (2) known animation techniques, such as inverse kinematics or geometric algorithms, constantly require position targets but need orientation inputs only occasionally. We propose a complete human motion-capture technique based essentially on orientation measurements; position measurement is used only to recover the global position of the performer. This method allows fast tracking of human gestures for interactive applications as well as high-rate recording. Several motion-capture optimizations, including the multijoint technique, improve posture realism. The approach is well suited to magnetic systems, whose orientation registration is (in our environment) more reliable than their position measurements, which necessitate difficult system calibration.
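As a hedged illustration of the orientation-driven approach described above (not the authors' implementation), the following Python sketch poses a skeleton from per-segment orientation sensors: each joint's local rotation is obtained by removing the parent segment's global orientation from the child segment's, and a single position measurement places the root. Sensor readings and the skeleton hierarchy are hypothetical.

```python
# Minimal sketch of driving a skeleton from orientation sensors: each joint's
# local rotation is the parent sensor's orientation "undone" from the child
# sensor's orientation, and only the root uses a position measurement.
# Sensor readings are hypothetical 3x3 rotation matrices in a common global frame.
import numpy as np

def local_joint_rotation(R_parent_global, R_child_global):
    """Local rotation of a joint given global orientations of its two segments."""
    return R_parent_global.T @ R_child_global   # R_parent^-1 * R_child

def pose_skeleton(root_position, sensor_rotations, hierarchy):
    """hierarchy maps joint name -> (parent segment, child segment)."""
    pose = {"root_position": root_position}
    for joint, (parent, child) in hierarchy.items():
        pose[joint] = local_joint_rotation(sensor_rotations[parent],
                                           sensor_rotations[child])
    return pose

# Toy example with two segments and one elbow-like joint.
Rz90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
sensors = {"upper_arm": np.eye(3), "forearm": Rz90}
pose = pose_skeleton(np.zeros(3), sensors, {"elbow": ("upper_arm", "forearm")})
print(pose["elbow"])   # 90-degree rotation about z, expressed in the parent frame
```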
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (1997) 6 (6): 676–686.
Published: 01 December 1997
Abstract
In order to feel a sense of presence in a virtual environment, it is important for the participants to become a part of this environment and to interact with it through natural behaviors. This interaction is even more important in networked collaborative virtual environments, in which the participants need to see and interact with each other. We present the Virtual Life Network (VLNET), a joint research effort in the field of networked collaborative virtual environments between MIRALab of the University of Geneva and the Computer Graphics Laboratory of the Swiss Federal Institute of Technology, Lausanne. In VLNET, each participant is represented by a virtual human actor with realistic appearance and movements similar to the actual body. Interacting with the environment through his virtual body, the participant is perceived by himself and others in a natural way. Since it is generally not possible to track all degrees of freedom of the human body in order to reproduce realistic body motion, we introduce motor functions that generate natural motion for standard tasks, such as walking and arm motion, from limited tracked information (hand and head positions). By using the same virtual human representation, but with the addition of high-level control, autonomous virtual actors can be introduced into the environment to perform useful tasks or simply to make the environment more appealing. To further enhance the realistic feel of the virtual environment and to simplify object manipulation, we provide the facility to define object behaviors by attaching motor functions to the objects.
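The abstract describes motor functions only at a high level; the following Python sketch is a hypothetical illustration of the idea, not VLNET's API: a motor function maps limited tracked inputs (head and hand positions) to posture parameters, and the same callables can be attached to scene objects to define their behaviors. All names and signatures are invented for illustration.

```python
# Illustrative sketch of the "motor function" idea: a small callable that turns
# limited tracked inputs (e.g., head and hand positions) into posture updates,
# and that can also be attached to an object to give it a behavior.
# Names and structure are hypothetical, not VLNET's actual interface.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

TrackedInput = Dict[str, tuple]          # e.g. {"head": (x, y, z), "hand": (x, y, z)}
MotorFunction = Callable[[TrackedInput, float], Dict[str, float]]  # -> joint angles

def simple_arm_motor(inputs: TrackedInput, dt: float) -> Dict[str, float]:
    """Very rough arm motor: raise the shoulder in proportion to hand height."""
    hand_height = inputs.get("hand", (0.0, 0.0, 0.0))[1]
    return {"shoulder_pitch": min(1.5, max(0.0, hand_height))}

@dataclass
class SceneObject:
    name: str
    motors: List[MotorFunction] = field(default_factory=list)
    state: Dict[str, float] = field(default_factory=dict)

    def update(self, inputs: TrackedInput, dt: float) -> None:
        # Each attached motor function contributes part of the object's motion.
        for motor in self.motors:
            self.state.update(motor(inputs, dt))

avatar = SceneObject("participant_body", motors=[simple_arm_motor])
avatar.update({"head": (0.0, 1.7, 0.0), "hand": (0.3, 1.2, 0.4)}, dt=0.033)
print(avatar.state)   # {'shoulder_pitch': 1.2}
```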