Emmanuelle Richard
Presence: Teleoperators and Virtual Environments (2012) 21 (3): 321–337.
Published: 01 August 2012
Abstract
Virtual reality (VR) is a technology with a wide range of applications, including sports and video games. In both gaming and sporting VR applications, interaction techniques involve specific gestures such as catching or striking. However, such dynamic gestures are not currently recognized as elementary task primitives and have therefore not been investigated as such. In this paper, we propose a framework for the analysis of interaction in dynamic virtual environments (DVEs). This framework is based on three dynamic interaction primitives (DIPs) that are common to many sporting activities: catching, throwing, and striking. For each of these primitives, an original modeling approach is proposed. Furthermore, we introduce and formalize the concept of dynamic virtual fixtures (DVFs), which aim to assist the user in tasks involving interaction with moving objects, or with objects to be set in motion. Two experiments were carried out to investigate the influence of different DVFs on human performance in ball catching and archery. The results reveal a significant positive effect of the DVFs, and suggest that DVFs can be classified as either “performance-assisted” or “learning-assisted.”
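To make the idea concrete, the sketch below shows one plausible form of dynamic virtual fixture for ball catching: the user's virtual hand is gently attracted toward the predicted interception point of a ballistic trajectory. The ballistic model, function names, and the stiffness gain are illustrative assumptions, not the fixtures actually defined in the paper.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravitational acceleration (m/s^2)

def predict_catch_point(p0, v0, catch_height=1.2):
    """Predict where a ball launched at position p0 with velocity v0
    crosses the horizontal plane z = catch_height (no air drag)."""
    a = 0.5 * G[2]
    b = v0[2]
    c = p0[2] - catch_height
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the ball never reaches the catch plane
    t = (-b - np.sqrt(disc)) / (2.0 * a)  # later root: descending pass
    if t <= 0:
        return None
    return p0 + v0 * t + 0.5 * G * t * t

def apply_fixture(hand_pos, ball_pos, ball_vel, stiffness=0.15):
    """One simulation step of an attractive fixture: nudge the hand a
    fraction of the way toward the predicted interception point."""
    target = predict_catch_point(ball_pos, ball_vel)
    if target is None:
        return hand_pos
    return hand_pos + stiffness * (target - hand_pos)
```

A strongly attractive fixture of this kind would mainly boost immediate task success, which is one way a fixture could end up on the “performance-assisted” rather than the “learning-assisted” side of the paper's classification.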
Presence: Teleoperators and Virtual Environments (2012) 21 (1): 43–57.
Published: 01 February 2012
Abstract
Everyday action impairment is one of the diagnostic criteria of Alzheimer's disease and is associated with many serious consequences, including loss of functional autonomy and independence. It has been shown that the (re)learning of everyday activities is possible in Alzheimer's disease when error-reduction teaching approaches are used in naturalistic clinical settings. The purpose of this study is to develop a dual-modal virtual reality platform for training everyday cooking activities in Alzheimer's disease and to establish its value as a training tool for these patients. Two everyday tasks and two error-reduction learning methods were implemented within a virtual kitchen. Two patients with Alzheimer's disease and two healthy elderly controls were tested. All subjects were trained in two learning sessions on two comparable cooking tasks, and within each group (i.e., patients and controls) the order of the training methods was counterbalanced. Repeated-measures analyses were performed before and after learning. A presence questionnaire and a verbal interview were used to obtain the participants' subjective responses to the VR experience. The results, in terms of errors, omissions, and perseverations (i.e., repetitive behaviors), indicate that the patients performed worse than the controls before learning, but reached a level of performance similar to that of the controls after a short learning session, regardless of the learning method employed. This finding provides preliminary support for the value of the dual-modal virtual reality platform for training everyday cooking activities in Alzheimer's disease; however, further work is needed before it is ready for clinical application.
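As a hedged sketch of how the three reported measures might be computed, the code below scores an observed action sequence against a reference task script: actions outside the script count as errors, script steps never performed count as omissions, and immediate repetitions count as perseverations. The scoring rules and step names are assumptions for illustration, not the authors' actual protocol.

```python
def score_performance(reference_steps, observed_actions):
    """Count errors, omissions, and perseverations in a task attempt."""
    errors = sum(1 for a in observed_actions if a not in reference_steps)
    omissions = sum(1 for s in reference_steps if s not in observed_actions)
    # Perseveration here: the same action repeated immediately.
    perseverations = sum(
        1 for prev, cur in zip(observed_actions, observed_actions[1:])
        if prev == cur
    )
    return {"errors": errors, "omissions": omissions,
            "perseverations": perseverations}

# Example: one omitted step ("add water") and one repeated action.
ref = ["fill kettle", "add water", "boil", "pour"]
obs = ["fill kettle", "boil", "boil", "pour"]
print(score_performance(ref, obs))
# {'errors': 0, 'omissions': 1, 'perseverations': 1}
```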
Presence: Teleoperators and Virtual Environments (2011) 20 (3): 241–253.
Published: 01 June 2011
Abstract
Humans can perform complex manipulations without consciously forming detailed motion plans. For techniques such as learning by imitation and programming by demonstration, which require large numbers of trials and tests, virtual reality provides an effective approach: virtual environments can be built economically and quickly, and can be reinitialized automatically, which is now commonplace in both robotics and virtual reality. Rather than imitating human actions, our focus is to develop an intuitive and interactive method, based on user demonstrations, for creating humanlike, autonomous behavior in a virtual character or robot. First, a virtual character is built via real-time simulation in which the user demonstrates the task by controlling the virtual agent. The data necessary to accomplish the task (position, speed, etc.) are acquired in Cartesian space during the demonstration session. These data are then generalized off-line using a neural network trained with a back-propagation algorithm. The objective is to model a function that represents the studied task and thereby enable the agent to handle new cases. In this study, the virtual agent is a 6-DOF arm manipulator, a Kuka Kr6, and the task is to grasp a ball thrown into its workspace. Our approach seeks the minimum number of demonstrations needed while maintaining adequate task efficiency. We also study how the dimensionality of the estimated function relates to the number of human trials as the learning system evolves.
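A minimal sketch of the off-line generalization step described above, assuming demonstration pairs that map a ball's Cartesian launch state to a recorded 6-DOF joint command, fitted with a back-propagation network (here scikit-learn's MLPRegressor). The data below are synthetic placeholders, and the feature layout and network size are assumptions; the paper's actual setup may differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder demonstration data: each row of X is a ball's launch
# state (x, y, z position and velocity); each row of Y is the 6-DOF
# joint command the user demonstrated for that throw.
X = rng.uniform(-1.0, 1.0, size=(200, 6))    # ball states
Y = rng.uniform(-3.14, 3.14, size=(200, 6))  # recorded joint angles

# Back-propagation network fitted off-line on the demonstrations.
model = MLPRegressor(hidden_layer_sizes=(32, 32), solver="adam",
                     max_iter=2000, random_state=0)
model.fit(X, Y)

# Generalize to a throw the user never demonstrated.
new_throw = rng.uniform(-1.0, 1.0, size=(1, 6))
joint_command = model.predict(new_throw)
```

Under this framing, the study's question about the minimum number of demonstrations amounts to asking how few rows of X and Y still yield an adequate grasp success rate.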