Anatole Lécuyer: 1–7 of 7 results
Presence: Teleoperators and Virtual Environments (2016) 25 (1): 1–16.
Published: 01 July 2016
Abstract
When walking within a CAVE-like system, accommodation distance, parallax, and angular resolution vary according to the distance between the user and the projection walls, which can alter spatial perception. As these systems get bigger, there is a need to assess the main factors influencing spatial perception in order to better design immersive projection systems and virtual reality applications. In this paper, we present two experiments that analyze distance perception when considering the distance toward the projection screens and parallax as main factors. Both experiments were conducted in a large immersive projection system with an interaction space of up to 10 meters. The first experiment showed that both the screen distance and parallax have a strong asymmetric effect on distance judgments. We observed increased underestimation for positive parallax conditions and slight distance overestimation for negative and zero parallax conditions. The second experiment further analyzed the factors contributing to these effects and confirmed the effects observed in the first experiment with a high-resolution projection setup providing twice the angular resolution and improved accommodative stimuli. In conclusion, our results suggest that the space around the user is the most important characteristic for distance perception, with about 6 to 7 meters of distance around the user being optimal, and that virtual objects with high demands on accurate spatial perception should be displayed at zero or negative parallax.
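The parallax conditions above follow directly from stereo projection geometry. A minimal sketch in Python (the function and parameter names are ours, for illustration only, not from the paper) of the signed on-screen parallax of a point at distance d_object from the viewer, displayed on a screen wall at distance d_screen:

def screen_parallax(d_object, d_screen, ipd=0.065):
    # Signed separation (m) between the left- and right-eye projections
    # of a point on the viewer's midline, for interocular distance ipd.
    # > 0: object behind the screen (positive parallax)
    # = 0: object on the screen plane (zero parallax)
    # < 0: object in front of the screen (negative parallax)
    return ipd * (d_object - d_screen) / d_object

# An object 5 m away seen on a wall 3 m away -> positive parallax:
print(screen_parallax(d_object=5.0, d_screen=3.0))  # 0.026
# The same object with the wall 7 m away -> negative parallax:
print(screen_parallax(d_object=5.0, d_screen=7.0))  # -0.026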
Presence: Teleoperators and Virtual Environments (2016) 25 (1): 17–32.
Published: 01 July 2016
Abstract
Haptic feedback is known to improve 3D interaction in virtual environments, but current haptic interfaces remain complex and tailored to desktop interaction. In this paper, we describe an alternative approach, called "Elastic-Arm," for incorporating haptic feedback in immersive virtual environments in a simple and cost-effective way. The Elastic-Arm is based on a body-mounted elastic armature that links the user's hand to the body and generates a progressive egocentric force when the arm is extended. A variety of designs can be proposed, with multiple links attached to various locations on the body, in order to simulate different haptic properties and sensations such as different levels of stiffness, weight lifting, and bimanual interaction. Our passive haptic approach can be combined with various 3D interaction techniques, and we illustrate the possibilities offered by the Elastic-Arm through several use cases based on well-known techniques such as the Bubble technique, redirected touching, and pseudo-haptics. A user study showed the effectiveness of our pseudo-haptic technique as well as the general appreciation of the Elastic-Arm. We believe that the Elastic-Arm could be used in various VR applications that call for mobile haptic feedback or human-scale haptic sensations.
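To make the "progressive egocentric force" concrete, here is a minimal sketch in Python (our own simplified model assuming a single linear elastic link; the stiffness and rest-length values are hypothetical, not measured from the device):

import numpy as np

def elastic_force(hand_pos, anchor_pos, rest_length=0.3, stiffness=40.0):
    # Restoring force (N) of a body-mounted elastic link, modeled as a
    # linear spring that only pulls once stretched past its rest length.
    offset = np.asarray(anchor_pos, float) - np.asarray(hand_pos, float)
    distance = np.linalg.norm(offset)
    if distance <= rest_length:
        return np.zeros(3)               # elastic is slack: no force
    direction = offset / distance        # unit vector toward the anchor
    return stiffness * (distance - rest_length) * direction

# Hand extended 0.6 m in front of a chest-mounted anchor at the origin:
print(elastic_force([0.6, 0.0, 0.0], [0.0, 0.0, 0.0]))
# -> [-12.  0.  0.], a 12 N pull that grows as the arm extends further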
Presence: Teleoperators and Virtual Environments (2011) 20 (6): 529–544.
Published: 01 December 2011
Abstract
An open research question is the usability of brain–computer interfaces (BCIs) conceived to extend human interaction capabilities within a virtual environment. Several paradigms are used for BCIs, but the steady-state visual-evoked potential (SSVEP) stands out, as it provides a higher information transfer rate while requiring less training. It is an electroencephalographic response detectable when the user looks at a flickering visual stimulus. This research proposes a novel approach to SSVEP-based BCI control, used here for navigation within a 3D virtual environment. For the first time, the flickering stimuli were integrated into virtual objects as part of the virtual scene, in a more transparent and ecological way. For example, when navigating inside a virtual natural outdoor scene, we could embed the SSVEP flashes in the wings of virtual butterflies surrounding the user. We could also introduce animated and moving stimuli for SSVEP-based BCIs, as the virtual butterflies were free to move and fly in front of the user. Moreover, users received real-time feedback on their mental activity and were thus directly and continuously aware of their detected SSVEP. An experiment was conducted to assess the influence of both the feedback and the integrated controller on navigation performance and subjective preference. We found that a controller integrated within the virtual scene, together with the feedback, seems to improve subjective preference and the feeling of presence, despite reduced performance in terms of speed. This suggests that SSVEP-based BCIs for virtual environments could move beyond static targets and use integrated, animated stimuli presented in an ecological way, for control in systems where performance demands can be relaxed in favor of more natural interaction.
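For readers unfamiliar with SSVEP detection, here is a minimal sketch in Python (a generic frequency-power detector for illustration; this is not the classifier used in this work, and all parameters are hypothetical): the flicker frequency whose spectral power dominates the EEG epoch is taken as the attended target.

import numpy as np

def detect_ssvep(eeg, fs, flicker_freqs, harmonics=2):
    # eeg: 1D signal from an occipital channel; fs: sampling rate in Hz.
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f in flicker_freqs:
        # Sum power at the stimulation frequency and its harmonics.
        score = sum(spectrum[np.argmin(np.abs(freqs - h * f))]
                    for h in range(1, harmonics + 1))
        scores.append(score)
    return flicker_freqs[int(np.argmax(scores))]

# Synthetic check: a 12 Hz response should be detected among three targets.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
print(detect_ssvep(eeg, fs, flicker_freqs=[10.0, 12.0, 15.0]))  # -> 12.0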
Presence: Teleoperators and Virtual Environments (2010) 19 (1): 35–53.
Published: 01 February 2010
Abstract
This paper describes the OpenViBE software platform, which enables researchers to design, test, and use brain–computer interfaces (BCIs). BCIs are communication systems that enable users to send commands to computers solely by means of brain activity. BCIs are gaining interest in the virtual reality (VR) community, since they have emerged as promising interaction devices for virtual environments (VEs). The key features of the platform are (1) high modularity, (2) embedded tools for visualization and feedback based on VR and 3D displays, (3) BCI design made accessible to non-programmers through visual programming, and (4) various tools offered to the different types of users. The platform's features are illustrated in this paper with two entertaining VR applications based on a BCI. In the first one, users can move a virtual ball by imagining hand movements, while in the second one, they can control a virtual spaceship using real or imagined foot movements. Online experiments with these applications, together with an evaluation of the platform's computational performance, showed its suitability for the design of VR applications controlled with a BCI. OpenViBE is free software distributed under an open-source license.
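To give an idea of the kind of processing chain that an OpenViBE scenario wires together graphically, here is a minimal sketch in generic Python (this is not OpenViBE's actual API; the boxes, band limits, and threshold are purely illustrative): signal-processing "boxes" are chained from acquisition to a classification decision.

import numpy as np

def bandpass(low, high, fs):
    # Crude FFT-based band-pass "filter box" (illustration only).
    def box(signal):
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        spectrum[(freqs < low) | (freqs > high)] = 0.0
        return np.fft.irfft(spectrum, n=len(signal))
    return box

def log_band_power(signal):
    # "Feature extraction box": log power of the filtered signal.
    return np.log(np.mean(signal ** 2) + 1e-12)

def classify(feature, threshold=-2.0):
    # "Classifier box": map the feature to an application command.
    return "move_ball" if feature > threshold else "rest"

# Chain the boxes, as a visual scenario would connect them:
fs, eeg = 256, np.random.randn(512)
print(classify(log_band_power(bandpass(8, 12, fs)(eeg))))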
Presence: Teleoperators and Virtual Environments (2010) 19 (1): 54–70.
Published: 01 February 2010
Abstract
Brain–computer interfaces (BCIs) are interaction devices that enable users to send commands to a computer by using brain activity only. In this paper, we propose a new interaction technique that enables users to perform complex interaction tasks and to navigate within large virtual environments (VEs) by using only a BCI based on imagined movements (motor imagery). This technique enables the user to send high-level mental commands, leaving the application in charge of most of the complex and tedious details of the interaction task. More precisely, it is based on points of interest and enables subjects to send only a few commands to the application in order to navigate from one point of interest to another. Interestingly, the points of interest for a given VE can be generated automatically by processing the VE's geometry. As the navigation between two points of interest is also automatic, the proposed technique can be used to navigate efficiently by thought within any VE. The input of this interaction technique is a newly designed self-paced BCI that enables the user to send three different commands based on motor imagery. This BCI is based on a fuzzy inference system with reject options. In order to evaluate the efficiency of the proposed interaction technique, we compared it with the state-of-the-art method during a virtual museum exploration task. The state-of-the-art method uses low-level commands, meaning that each mental state of the user is associated with a simple command such as turning left or moving forward in the VE. In contrast, our method, based on high-level commands, enables the user to simply select a destination, leaving the application to perform the movements needed to reach it. Our results showed that with our interaction technique, users can navigate within a virtual museum almost twice as fast as with low-level commands, and with nearly half as many commands, meaning less stress and more comfort for the user. This suggests that our technique makes efficient use of the limited capacity of current motor imagery-based BCIs to perform complex interaction tasks in VEs, opening the way to promising new applications.
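As a rough illustration of the high-level navigation idea, here is a minimal sketch in Python (our own toy example, not the authors' implementation; the museum layout and POI names are hypothetical): the user selects only the destination POI, and the application plans the route automatically over the POI graph.

from collections import deque

def plan_route(poi_graph, start, goal):
    # Breadth-first search over the POI adjacency graph: returns the
    # sequence of POIs the application will traverse automatically.
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in poi_graph[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Hypothetical museum layout: rooms as POIs, doorways as edges.
museum = {"entrance": ["hall"],
          "hall": ["entrance", "paintings", "statues"],
          "paintings": ["hall"],
          "statues": ["hall"]}
print(plan_route(museum, "entrance", "paintings"))
# -> ['entrance', 'hall', 'paintings']; one mental selection replaces
#    many low-level turn/forward commands.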
Presence: Teleoperators and Virtual Environments (2009) 18 (1): 39–53.
Published: 01 February 2009
Abstract
This paper presents a survey of the main results obtained in the field of "pseudo-haptic feedback": a technique meant to simulate haptic sensations in virtual environments using visual feedback and the properties of human visuo-haptic perception. Pseudo-haptic feedback uses vision to distort haptic perception and verges on haptic illusions. Pseudo-haptic feedback has been used to simulate various haptic properties such as the stiffness of a virtual spring, the texture of an image, or the mass of a virtual object. This paper describes several experiments in which these haptic properties were simulated. It assesses the definition and the properties of pseudo-haptic feedback. It also describes several virtual reality applications in which pseudo-haptic feedback has been successfully implemented, such as a virtual environment for vocational training in milling machine operations, or a medical simulator for training in regional anesthesia procedures.
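The virtual-spring example can be made concrete with a minimal sketch in Python (an illustration of the commonly used control/display-ratio manipulation; the names and values are hypothetical, not taken from any specific experiment in the survey):

def displayed_compression(input_displacement, cd_ratio):
    # A control/display ratio < 1 shrinks the visually displayed motion
    # of the virtual spring relative to the user's real input motion,
    # which is perceived as a stiffer spring, using vision alone.
    return input_displacement * cd_ratio

press = 0.04  # the user presses 4 cm on the input device
print(displayed_compression(press, cd_ratio=1.0))  # 0.04 m shown: baseline
print(displayed_compression(press, cd_ratio=0.5))  # 0.02 m shown: the spring
# visually yields less for the same press, so it feels stiffer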
Presence: Teleoperators and Virtual Environments (2006) 15 (3): 353–357.
Published: 01 June 2006
Abstract
How do we perceive objects when what we see and what we touch are not in the same place? In a virtual environment, we observed that spatial delocation promotes visual dominance when judging the rotation angle of a hand-operated handle. Thus, the delocation of perceptual information appears to considerably increase the weight of the dominant sense at the expense of the other. We relate this result to the design of teleoperation and virtual reality systems, in which the visual and haptic sensory information typically originates from spatially distinct devices.
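One standard way to reason about such sensory dominance (a common model from the cue-combination literature, not a model proposed in this paper) is maximum-likelihood cue integration, where each sense is weighted by its reliability. A minimal sketch in Python with hypothetical numbers:

def combine_cues(visual_angle, visual_var, haptic_angle, haptic_var):
    # Each estimate is weighted by its inverse variance (reliability),
    # so the more reliable sense dominates the combined percept.
    w_visual = (1.0 / visual_var) / (1.0 / visual_var + 1.0 / haptic_var)
    return w_visual * visual_angle + (1.0 - w_visual) * haptic_angle

# Vision reports 30 deg, haptics 40 deg; if vision is four times more
# reliable, the combined percept lands much closer to vision:
print(combine_cues(30.0, visual_var=1.0, haptic_angle=40.0, haptic_var=4.0))
# -> 32.0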