Bruno Arnaldi
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2015) 24 (3): 265–277.
Published: 01 July 2015
Abstract
The sense of touch provides a distinctive mode of access to our environment, enabling a tangible relation with it. In the particular case of cultural heritage, touching the past, apart from being a universal dream, can provide essential information to analyze, understand, or restore artifacts. However, archaeological objects do not always allow tangible access, either because they have been destroyed or are too damaged, or because they are part of a larger assembly. In other cases, it is the context of use that has become inaccessible, as it relates to an outdated activity. We propose a workflow based on a combination of computed tomography, 3D imaging, and 3D printing to provide concrete access to cultural heritage, and we illustrate this workflow in different contexts of inaccessibility. These technologies are already used in cultural heritage, but they are seldom combined and are most often reserved for exceptional artifacts. We propose to combine them in case studies corresponding to common archaeological situations.
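The segmentation step of such a CT-to-print workflow can be sketched in a few lines: threshold the scanned density volume to isolate the artifact, then find its surface voxels, which are the input to a meshing stage (e.g., marching cubes) before printing. The volume below is synthetic and the threshold illustrative; the paper's actual pipeline is not specified here, and real pipelines use tomographic reconstructions and dedicated meshing libraries.

```python
# Sketch of the segmentation step in a CT-to-3D-print workflow
# (synthetic data; illustrative threshold).

def segment(volume, threshold):
    """Binary occupancy grid: True where density exceeds the threshold."""
    return [[[v > threshold for v in row] for row in plane] for plane in volume]

def surface_voxels(occ):
    """Occupied voxels with at least one empty 6-neighbor: the artifact's surface."""
    nz, ny, nx = len(occ), len(occ[0]), len(occ[0][0])

    def filled(z, y, x):
        return 0 <= z < nz and 0 <= y < ny and 0 <= x < nx and occ[z][y][x]

    neighbors = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))
    return [
        (z, y, x)
        for z in range(nz)
        for y in range(ny)
        for x in range(nx)
        if occ[z][y][x]
        and not all(filled(z + dz, y + dy, x + dx) for dz, dy, dx in neighbors)
    ]

# Tiny synthetic "scan": a 3x3x3 block of dense material inside a 5x5x5 volume.
volume = [[[1.0 if 1 <= z <= 3 and 1 <= y <= 3 and 1 <= x <= 3 else 0.0
            for x in range(5)] for y in range(5)] for z in range(5)]
occ = segment(volume, 0.5)
print(len(surface_voxels(occ)))  # 26: every block voxel except the hidden center
```

Only the surface matters for printing; the one interior voxel of the block is correctly excluded.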
Presence: Teleoperators and Virtual Environments (2010) 19 (1): 54–70.
Published: 01 February 2010
Abstract
Brain–computer interfaces (BCI) are interaction devices that enable users to send commands to a computer by using brain activity only. In this paper, we propose a new interaction technique that enables users to perform complex interaction tasks and to navigate within large virtual environments (VE) by using only a BCI based on imagined movements (motor imagery). This technique lets the user send high-level mental commands, leaving the application in charge of most of the complex and tedious details of the interaction task. More precisely, it is based on points of interest and requires subjects to send only a few commands to the application in order to navigate from one point of interest to another. Interestingly, the points of interest for a given VE can be generated automatically by processing the VE's geometry. As the navigation between two points of interest is also automatic, the proposed technique can be used to navigate efficiently by thought within any VE. The input of this interaction technique is a newly designed self-paced BCI that enables the user to send three different commands based on motor imagery. This BCI is based on a fuzzy inference system with reject options. To evaluate the efficiency of the proposed interaction technique, we compared it with the state-of-the-art method during a task of virtual museum exploration. The state-of-the-art method uses low-level commands, meaning that each mental state of the user is associated with a simple command such as turning left or moving forward in the VE. In contrast, our method based on high-level commands enables the user to simply select a destination, leaving the application to perform the movements needed to reach it.
Our results showed that with our interaction technique, users can navigate within a virtual museum almost twice as fast as with low-level commands, and with nearly half as many commands, which means less stress and more comfort for the user. This suggests that our technique makes efficient use of the limited capacity of current motor imagery-based BCI to perform complex interaction tasks in VE, opening the way to promising new applications.
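The two ideas in this abstract can be sketched together: (1) a self-paced classifier with a reject option, which emits a command only when the strongest class membership is confident enough and stays silent otherwise, and (2) high-level navigation that maps those few commands onto a list of points of interest instead of low-level turn/move steps. The membership values, acceptance threshold, and museum POI list below are illustrative placeholders, not the paper's actual fuzzy inference system.

```python
# Sketch of a reject-option classifier feeding point-of-interest navigation
# (illustrative memberships and threshold; not the paper's fuzzy system).

def classify(memberships, accept=0.6):
    """Return the best-matching command, or None (reject) below the acceptance threshold."""
    best = max(memberships, key=memberships.get)
    return best if memberships[best] >= accept else None

def navigate(points_of_interest, commands):
    """Cycle through POIs with 'next'/'prev'; 'select' confirms the destination."""
    idx = 0
    for cmd in commands:
        if cmd == "next":
            idx = (idx + 1) % len(points_of_interest)
        elif cmd == "prev":
            idx = (idx - 1) % len(points_of_interest)
        elif cmd == "select":
            return points_of_interest[idx]  # the application then moves there automatically
    return None

pois = ["entrance", "statue room", "paintings", "exit"]
# Ambiguous brain activity is rejected; confident readings become commands.
stream = [{"next": 0.8, "prev": 0.1, "select": 0.1},
          {"next": 0.4, "prev": 0.35, "select": 0.25},   # rejected: no clear winner
          {"next": 0.7, "prev": 0.2, "select": 0.1},
          {"next": 0.1, "prev": 0.1, "select": 0.8}]
commands = [c for c in (classify(m) for m in stream) if c is not None]
print(navigate(pois, commands))  # -> paintings
```

The reject option is what makes the BCI self-paced: when no motor-imagery class dominates, no command is issued, so resting brain activity does not move the user.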
Presence: Teleoperators and Virtual Environments (2003) 12 (4): 411–421.
Published: 01 August 2003
Abstract
Virtual reality offers new tools for human motion understanding. Several applications have been widely used in teleoperation, military training, driving and flying simulators, and so forth. We propose to test whether virtual reality is a valid training tool for the game of handball. We focused on the duel between a handball goalkeeper and a thrower. To this end, we defined a pilot experiment divided into two steps: an experiment with real throwers and another with virtual throwers. The throwers' motions were captured in order to animate their avatars in a reality center. In this paper, we focused on the evaluation of presence when a goalkeeper confronts these avatars. To this end, we compared the goalkeeper's gestures in the real and the virtual experiments to determine whether virtual reality engendered the same movements for the same throw. Our results show that gestures did not differ between the real and virtual environments. We can therefore say that the virtual environment offered enough realism to initiate natural gestures. Moreover, as in real games, we observed the goalkeeper's anticipation, which will allow us in future work to use virtual reality to understand goalkeeper-thrower interactions. The main originality of this work was to measure presence in a sporting application with new evaluation methods based on motion capture.
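The core of such a motion-capture-based evaluation is a way to quantify how similar a gesture is across the real and virtual conditions. One minimal sketch, under the assumption of time-aligned recordings, is the mean per-frame Euclidean distance between corresponding joint positions; the trajectories and the similarity threshold below are hypothetical, and the paper does not specify this exact metric.

```python
# Sketch of comparing a goalkeeper's gesture across real and virtual
# conditions via time-aligned joint trajectories (hypothetical data).
import math

def mean_joint_distance(traj_a, traj_b):
    """Average Euclidean distance between matching frames of two 3D joint trajectories."""
    assert len(traj_a) == len(traj_b), "trajectories must be time-aligned"
    return sum(math.dist(p, q) for p, q in zip(traj_a, traj_b)) / len(traj_a)

# Hypothetical wrist positions (meters) over three frames of the same save attempt.
real_env    = [(0.0, 1.2, 0.5), (0.1, 1.3, 0.6), (0.3, 1.5, 0.7)]
virtual_env = [(0.0, 1.2, 0.5), (0.1, 1.3, 0.7), (0.3, 1.4, 0.7)]

d = mean_joint_distance(real_env, virtual_env)
print("gestures comparable" if d < 0.15 else "gestures differ")  # illustrative threshold
```

In practice the two recordings would first need temporal alignment (e.g., synchronized on the moment of ball release) before a per-frame comparison is meaningful.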