Junsong Yuan
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2014) 23 (2): 133–154.
Published: 01 August 2014
Abstract
In this paper, a human–robot interaction (HRI) system based on a novel combination of sensors is proposed. It allows one person to interact with a humanoid social robot through natural body language. The robot understands the meaning of human upper-body gestures and expresses itself through a combination of body movements, facial expressions, and verbal language. A set of 12 upper-body gestures, including gestures that involve human–object interactions, is used for communication. The gestures are characterized by head, arm, and hand posture information: hand posture is captured by the wearable Immersion CyberGlove II, and head and arm posture are captured by a Microsoft Kinect, a new sensor combination for human gesture capture. Based on the posture data from the CyberGlove II and the Kinect, an effective real-time human gesture recognition method is proposed. This gesture understanding approach, built on an innovative combination of sensors, is the main contribution of the paper. To verify the effectiveness of the proposed gesture recognition method, a human body gesture data set is built. The experimental results demonstrate that our approach can recognize the upper-body gestures with high accuracy in real time. In addition, for robot motion generation and control, a novel online motion planning method is proposed: a quadratic programming (QP)-based dual-arm kinematic motion generation scheme, in which a simplified recurrent neural network solves the QP problem. The integration of a handshake within the HRI system illustrates the effectiveness of the proposed online generation method.
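The QP-based motion generation named in this abstract admits a compact illustration. Below is a minimal sketch, not the paper's implementation: a velocity-level inverse-kinematics QP (minimize ||dq||^2 subject to J dq = v) solved by a simple recurrent saddle-point iteration on the Lagrangian, standing in for the simplified recurrent neural network the abstract mentions. The planar two-link Jacobian, the gain lr, and the iteration count are illustrative assumptions, not the paper's dual-arm model.

# Minimal sketch (assumed toy model, not the authors' implementation) of
# velocity-level QP-based kinematic motion generation, solved with a
# recurrent saddle-point iteration rather than an off-the-shelf QP solver.
import numpy as np

def jacobian_planar_2dof(q, l1=1.0, l2=1.0):
    """Jacobian of a planar 2-link arm's end-effector position."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def qp_velocity_ik(J, v_des, n_iters=2000, lr=0.05):
    """Solve  min 0.5*||dq||^2  s.t.  J dq = v_des  via recurrent
    saddle-point dynamics on the Lagrangian:
        dq  <- dq  - lr * (dq - J^T lam)    (primal descent)
        lam <- lam - lr * (J dq - v_des)    (dual ascent)
    """
    m, n = J.shape
    dq, lam = np.zeros(n), np.zeros(m)
    for _ in range(n_iters):
        dq  -= lr * (dq - J.T @ lam)
        lam -= lr * (J @ dq - v_des)
    return dq

q = np.array([0.3, 0.8])            # current joint angles (rad)
v_des = np.array([0.1, -0.05])      # desired end-effector velocity (m/s)
J = jacobian_planar_2dof(q)
dq = qp_velocity_ik(J, v_des)
print("joint velocities:", dq, "residual:", J @ dq - v_des)

Each iteration is a cheap matrix-vector update, which is what makes a recurrent formulation attractive for the online, real-time motion generation the abstract describes.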
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2014) 23 (2): 172–190.
Published: 01 August 2014
Abstract
3D virtual humans and physical human-like robots can be used to interact with people at a remote location in order to increase the feeling of presence. In a telepresence setup, their behaviors are driven by real participants. We envision that when the real users are absent, because they have to leave or do not want to perform a repetitive task, control of the robots can be handed to an artificial intelligence component that sustains the ongoing interaction; when human-mediated interaction is required again, control can be returned to the real users. One of the main challenges in telepresence research is adapting the 3D position and orientation of the remote participants to the actual physical environment so as to achieve appropriate eye contact and gesture awareness in a group conversation. If the human behind the robot and/or virtual human leaves, multi-party interaction should likewise be handed to an artificial intelligence component. In this paper, we discuss the challenges in autonomous multi-party interaction among virtual characters, human-like robots, and real participants, and describe a prototype system built to study these challenges.
Includes: Multimedia, Supplementary data
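The human/AI control hand-over this abstract envisions can be pictured as a small state machine: a human operator drives the embodiment while present, and an autonomous component takes over to sustain the conversation when the operator leaves. The sketch below is hypothetical, under that reading; the class, method names, and behavior strings are illustrative and do not come from the paper's prototype.

# Hypothetical sketch of the human/AI control hand-over for one
# robot or virtual human; not the paper's prototype system.
from enum import Enum, auto

class Controller(Enum):
    HUMAN = auto()
    AI = auto()

class EmbodimentControl:
    """Routes behavior commands for one telepresence embodiment."""
    def __init__(self):
        self.controller = Controller.HUMAN

    def operator_left(self):
        # The real user leaves or delegates a repetitive task:
        # the AI component sustains the ongoing interaction.
        self.controller = Controller.AI

    def operator_returned(self):
        # Human-mediated interaction is required again.
        self.controller = Controller.HUMAN

    def next_behavior(self, operator_input, ai_policy):
        if self.controller is Controller.HUMAN and operator_input is not None:
            return operator_input        # mirror the real participant
        return ai_policy()               # autonomous multi-party behavior

# Usage: the AI fills in while the operator is away.
ctrl = EmbodimentControl()
ctrl.operator_left()
print(ctrl.next_behavior(None, lambda: "nod-and-listen"))  # nod-and-listen
ctrl.operator_returned()
print(ctrl.next_behavior("wave", lambda: "idle"))          # wave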