Humans possess the ability to perform complex manipulations without consciously perceiving detailed motion plans. Techniques such as learning by imitation and programming by demonstration typically require a large number of trials and tests, and virtual reality offers an effective way to carry them out: virtual environments can be built quickly and economically, and can be reinitialized automatically. Such environments are now commonplace in robotics and virtual reality research. Rather than imitating human actions directly, our focus is to develop an intuitive, interactive method based on user demonstrations that creates humanlike, autonomous behavior for a virtual character or robot. First, a virtual character is built in a real-time virtual simulation in which the user demonstrates the task by controlling the virtual agent. During the demonstration session, the data needed to accomplish the task (position, velocity, etc.) are acquired in Cartesian space. These data are then generalized off-line by a neural network trained with the back-propagation algorithm. The objective is to model a function that represents the studied task and, in doing so, to enable the agent to handle new cases. In this study, the virtual agent is a 6-DOF arm manipulator, a KUKA KR6, and the task is to grasp a ball thrown into its workspace. Our approach seeks the minimum number of demonstrations needed while maintaining adequate task efficiency. Moreover, we study how the number of dimensions of the estimated function relates to the number of human trials required as the learning system evolves.
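
The following is a minimal sketch, not the authors' implementation, of the off-line generalization step described above: a small feed-forward network trained with back-propagation maps a ball state recorded in Cartesian space during demonstrations to a grasp target for the arm. The input/output dimensions, layer size, learning rate, and the synthetic stand-in data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration data: each row pairs the ball's Cartesian
# position and velocity (6 values) with the demonstrated grasp target
# (x, y, z of the end effector). Real data would come from the VR session.
X = rng.uniform(-1.0, 1.0, size=(20, 6))   # ball position + velocity
Y = rng.uniform(-1.0, 1.0, size=(20, 3))   # demonstrated grasp position

# One hidden layer with tanh activation; linear output layer.
W1 = rng.normal(0.0, 0.5, size=(6, 16))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 3))
b2 = np.zeros(3)
lr = 0.05

for epoch in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2

    # Error between predicted and demonstrated grasp targets.
    err = pred - Y

    # Backward pass: gradients of the mean-squared error.
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    grad_h = (err @ W2.T) * (1.0 - h ** 2)
    grad_W1 = X.T @ grad_h / len(X)
    grad_b1 = grad_h.mean(axis=0)

    # Gradient-descent update.
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1

# After training, the network generalizes: given a new ball state,
# it predicts a grasp target the manipulator's controller can reach for.
new_ball_state = rng.uniform(-1.0, 1.0, size=(1, 6))
grasp_target = np.tanh(new_ball_state @ W1 + b1) @ W2 + b2
print(grasp_target)
```

In this sketch the number of demonstrations corresponds to the number of training rows, and the dimensionality of the estimated function corresponds to the input/output sizes, which is where the trade-off studied in the paper would appear.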
