Cagatay Basdogan
Presence: Teleoperators and Virtual Environments (2008) 17 (4): 344–364.
Published: 01 August 2008
Abstract
Using optical tweezers (OT) and a haptic device, microspheres with diameters of 3 to 4 μm, floating in a fluid solution, are manipulated to form patterns of coupled optical microresonators by assembling the spheres via chemical binding. For this purpose, biotin-coated microspheres trapped by a laser beam are steered one by one, using an xyz piezo scanner controlled by a haptic device, and chemically attached to an immobilized streptavidin-coated sphere (i.e., the anchor sphere). The positions of all spheres in the scene are detected using a CCD camera, and a collision-free path for each manipulated sphere is generated using the potential field approach. The forces acting on the manipulated particle due to the viscosity of the fluid and the artificial potential field are scaled and displayed to the user through the haptic device for better guidance and control during steering. In addition, a virtual fixture is implemented such that the desired angle of approach and strength are achieved during the binding phase. Our experimental studies in virtual and real environments with eight human subjects show that haptic feedback significantly improves user performance by reducing the task completion time, the number of undesired collisions during steering, and the positional errors during binding. To our knowledge, this is the first time a haptic device has been coupled with OT to guide the user during an optical manipulation task involving the steering and assembly of microspheres to construct a coupled microresonator.
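The potential field approach mentioned in the abstract is a standard path-planning technique: the steered sphere is pulled toward its goal by an attractive potential and pushed away from obstacle spheres by bounded-range repulsive potentials. Below is a minimal sketch of such a force computation, assuming 2D sphere positions taken from the camera image; the gains `k_att`, `k_rep`, and the cutoff `influence` are illustrative placeholders, not values from the paper.

```python
import numpy as np

def potential_field_force(pos, goal, obstacles,
                          k_att=1.0, k_rep=0.5, influence=10.0):
    """Net artificial-potential-field force on the steered microsphere.

    pos, goal : 2D positions (e.g., in camera-image coordinates)
    obstacles : (N, 2) array of obstacle sphere centers
    influence : cutoff distance beyond which obstacles exert no force
    """
    # Attractive term: linear pull toward the goal position.
    force = k_att * (goal - pos)

    # Repulsive term: pushes away from each obstacle within range.
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0.0 < d < influence:
            force += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    return force

# Example: one steering update along the force direction.
pos = np.array([12.0, 40.0])
goal = np.array([55.0, 48.0])
obstacles = np.array([[30.0, 44.0], [42.0, 39.0]])
pos = pos + 0.1 * potential_field_force(pos, goal, obstacles)
```

In a setup like the one described, a scaled version of this force (plus the viscous drag term) would be what is rendered to the user through the haptic device.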
Presence: Teleoperators and Virtual Environments (2008) 17 (1): 73–90.
Published: 01 February 2008
Abstract
Many biological activities take place through the physicochemical interaction of two molecules. This interaction occurs when one of the molecules finds a suitable location on the surface of the other for binding. This process is known as molecular docking, and it has applications in drug design. If we can determine which drug molecule binds to a particular protein, and how the protein interacts with the bound molecule, we can possibly enhance or inhibit its activities. This information, in turn, can be used to develop new drugs that are more effective against diseases. In this paper, we propose a new approach based on a human-computer interaction paradigm for the solution of the rigid-body molecular docking problem. In our approach, a rigid ligand molecule (i.e., the drug) manipulated by the user is inserted into the cavities of a rigid protein molecule to search for the binding cavity, while the molecular interaction forces are conveyed to the user via a haptic device for guidance. We developed a new visualization concept, the Active Haptic Workspace (AHW), for the efficient exploration of the large protein surface in high resolution using a haptic device having a small workspace. After the user discovers the true binding site and roughly aligns the ligand molecule inside the cavity, its final configuration is calculated off-line through time-stepping molecular dynamics (MD) simulations. At each time step, the optimum rigid-body transformation of the ligand molecule is calculated using a new approach that minimizes the distance error between the previous rigid-body coordinates of its atoms and their new coordinates calculated by the MD simulation. The simulations continue until the ligand molecule arrives at the lowest-energy configuration. Our experimental studies, conducted with six human subjects testing six different molecular complexes, demonstrate that, given a ligand molecule and five potential binding sites on a protein surface, the subjects can successfully identify the true binding site using visual and haptic cues. Moreover, they can roughly align the ligand molecule inside the binding cavity such that its final configuration can be determined via the proposed MD simulations.
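The per-time-step rigid-body fit described above is, in essence, an orthogonal Procrustes problem: find the rotation and translation that best map one set of atom coordinates onto another in the least-squares sense. Below is a minimal sketch of one standard solution, the Kabsch/SVD method, under the assumption that the "distance error" in the abstract is the usual least-squares point-correspondence error; the authors' exact formulation may differ.

```python
import numpy as np

def optimal_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    P, Q : (N, 3) arrays of corresponding atom coordinates.
    Minimizes sum_i ||R @ P[i] + t - Q[i]||^2 via the Kabsch/SVD method.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)    # centroids
    H = (P - cP).T @ (Q - cQ)                  # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Example: fit the previous rigid ligand coordinates onto the
# MD-updated atom positions, then apply the transform rigidly.
# R, t = optimal_rigid_transform(ligand_prev, ligand_md)
# ligand_new = (R @ ligand_prev.T).T + t
```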
Presence: Teleoperators and Virtual Environments (2007) 16 (1): 1–15.
Published: 01 February 2007
Abstract
A planetary rover acquires a large collection of images while exploring its surrounding environment. For example, 2D stereo images of the Martian surface captured by the lander and the Sojourner rover during the Mars Pathfinder mission in 1997 were transmitted to Earth for scientific analysis and navigation planning. Due to the limited memory and computational power of the Sojourner rover, most of the images were captured by the lander and transmitted directly to Earth for processing. If these images were instead merged at the rover site to reconstruct a 3D representation of the rover's environment using its on-board resources, more information could potentially be transmitted to Earth in a compact manner. However, constructing a 3D model from multiple views is a highly challenging task, even for the new-generation rovers (Spirit and Opportunity) operating on the Martian surface at the time this article was written. Moreover, the low transmission rates and limited communication intervals between Earth and Mars make transmitting any data more difficult. We propose a robust and computationally efficient method for the progressive transmission of multiresolution 3D models of Martian rocks and soil reconstructed from a series of stereo images. For visualization of these models on Earth, we have developed a new multimodal visualization setup that integrates vision and touch. Our scheme for 3D reconstruction of Martian rocks from 2D images involves four main steps: (a) acquisition of scans: depth maps are generated from stereo images; (b) integration of scans: the scans are positioned and oriented with respect to each other and fused into a 3D volumetric representation of the rocks using an octree; (c) transmission: the volumetric data is encoded and progressively transmitted to Earth; and (d) visualization: a surface model is reconstructed from the transmitted data on Earth and displayed to the user through a new autostereoscopic visualization table and a haptic device providing touch feedback. To test the practical utility of our approach, we first captured a sequence of stereo images of a rock surface from various viewpoints in the JPL MarsYard using a mobile cart and then performed a series of 3D reconstruction experiments. In this paper, we discuss the steps of our reconstruction process, our multimodal visualization system, and the tradeoffs that must be made to transmit multiresolution 3D models to Earth efficiently under the constraints of limited computational resources, low transmission rates, and the communication interval between Earth and Mars.
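Steps (b) and (c) hinge on the octree representation: because an octree refines space level by level, its occupancy can be serialized breadth-first so that coarse geometry arrives before fine detail, which is exactly what progressive transmission needs. The sketch below illustrates that idea with a simplified node layout and a one-byte-per-node occupancy code; the paper's actual data structure and codec are not specified here.

```python
from collections import deque

class OctreeNode:
    """Minimal octree node: eight optional children, one per octant."""
    __slots__ = ("children",)
    def __init__(self):
        self.children = [None] * 8

def encode_octree_progressive(root):
    """Breadth-first bitstream of octree occupancy.

    Each node emits one byte whose bit i is set if child i exists.
    Breadth-first order means all nodes at depth k are sent before
    any node at depth k+1, so the receiver can reconstruct a coarse
    model early and refine it as more bytes arrive.
    """
    stream = bytearray()
    queue = deque([root])
    while queue:
        node = queue.popleft()
        mask = 0
        for i, child in enumerate(node.children):
            if child is not None:
                mask |= 1 << i
                queue.append(child)    # children sent at the next level
        stream.append(mask)
    return bytes(stream)
```

A receiver can decode the stream with the same breadth-first discipline, growing the tree (and the reconstructed surface) one resolution level at a time.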
Presence: Teleoperators and Virtual Environments (1999) 8 (5): 477–491.
Published: 01 October 1999
Abstract
Computer haptics, an emerging field of research that is analogous to computer graphics, is concerned with the generation and rendering of haptic virtual objects. In this paper, we propose an efficient haptic rendering method for displaying the feel of 3-D polyhedral objects in virtual environments (VEs). Using this method and a haptic interface device, users can manually explore and feel the shape and surface details of virtual objects. The main component of our rendering method is the “neighborhood watch” algorithm, which takes advantage of precomputed connectivity information for detecting collisions between the end effector of a force-reflecting robot and polyhedral objects in VEs. We use a hierarchical database, multithreading techniques, and efficient search procedures to reduce the computational time such that the haptic servo rate after the first contact is essentially independent of the number of polygons that represent the object. We also propose efficient methods for displaying surface properties of objects such as haptic texture and friction. Our haptic-texturing techniques and friction model can add surface details onto convex or concave 3-D polygonal surfaces. These haptic-rendering techniques can be extended to display the dynamics of rigid and deformable objects.
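The locality idea behind exploiting precomputed connectivity can be sketched as a greedy walk over the mesh: once the probe has made contact, the closest polygon at the next servo tick is found by testing only the previous contact polygon and its neighbors, rather than the whole model. The sketch below is a hypothetical illustration of that idea, not the paper's exact algorithm; `neighbors` is assumed to be a precomputed adjacency map and `distance` a point-to-polygon distance routine supplied by the caller.

```python
def update_contact(probe_pos, prev_polygon, neighbors, distance):
    """Local contact update using precomputed mesh connectivity.

    probe_pos    : current end-effector position
    prev_polygon : polygon in contact at the previous servo tick
    neighbors    : dict mapping a polygon to its adjacent polygons
    distance     : callable (pos, polygon) -> point-to-polygon distance

    Because the probe moves very little between 1 kHz servo ticks,
    a greedy walk over adjacent polygons finds the new closest
    polygon in a few steps, independent of total polygon count.
    """
    best = prev_polygon
    best_d = distance(probe_pos, best)
    improved = True
    while improved:
        improved = False
        for p in neighbors[best]:
            d = distance(probe_pos, p)
            if d < best_d:           # step to a strictly closer neighbor
                best, best_d = p, d
                improved = True
    return best, best_d              # new contact polygon and its gap
```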
Presence: Teleoperators and Virtual Environments (1997) 6 (2): 147–159.
Published: 01 April 1997
Abstract
The current methods of training medical personnel to provide emergency medical care have several important shortcomings. For example, in the training of wound debridement techniques, animal models are used to gain experience treating traumatic injuries. We propose an alternative approach by creating a three-dimensional, interactive computer model of the human body that can be used within a virtual environment to learn and practice wound debridement techniques and Advanced Trauma Life Support (ATLS) procedures. As a first step, we have developed a computer model that represents the anatomy and physiology of a normal and injured lower limb. When visualized and manipulated in a virtual environment, this computer model will reduce the need for animals in the training of trauma management and potentially provide a superior training experience. This article describes the development choices that were made in implementing the preliminary system and the challenges that must be met to create an effective medical training environment.