One of the improvements virtual reality offers over traditional human-computer interfaces is that it enables the user to interact with virtual objects using gestures. Natural hand gestures provide opportunities for direct manipulation in computing environments, but not without challenges. Mapping a human gesture onto a particular system function is not nearly as straightforward as mapping keyboard or mouse input. Reasons for this difficulty include individual variation in the exact movement of a gesture, the problem of knowing when a gesture starts and ends, and variation in the relative positions of other body parts that might help identify a gesture but are not measured. A further difficulty stems from limits on the number of gestures a person can reliably remember and reproduce. This paper describes work on the statistical recognition of gestures based on a sum-of-squares measure. A DataGlove™ was used to measure finger positions and to “train” software to recognize the letters and numbers of the American Sign Language (ASL) manual alphabet. This technique is more effective than methods commonly employed in VR applications in that it can distinguish dozens of gestures and is not tied to the input of a particular user. The work described here is limited to static gestures, i.e., those that do not vary over time. Applications for ASL users and for VR are discussed, and future directions for gesture-recognition research are introduced, including the addition of a motion tracker and the potential for recognizing dynamic gestures.
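The abstract does not spell out the classifier, but a sum-of-squares recognizer of this kind is commonly realized as a nearest-template scheme: average the training readings for each gesture, then label a new glove reading with the gesture whose template minimizes the sum of squared differences. The sketch below is an illustration under that assumption; the sensor count, function names, and toy values are hypothetical, not taken from the paper.

```python
import numpy as np

def train_templates(samples_by_gesture):
    """Average the training readings for each gesture label.
    Each reading is assumed to be a vector of finger-joint flex
    values from a DataGlove-style device (sensor count is illustrative)."""
    return {label: np.mean(readings, axis=0)
            for label, readings in samples_by_gesture.items()}

def classify(reading, templates):
    """Return the label whose template minimizes the sum of
    squared differences from the measured reading."""
    return min(templates,
               key=lambda label: np.sum((reading - templates[label]) ** 2))

# Toy example: 3-sensor readings for two manual-alphabet letters.
training = {
    "A": [np.array([0.90, 0.80, 0.85]), np.array([0.95, 0.75, 0.90])],
    "B": [np.array([0.10, 0.15, 0.05]), np.array([0.05, 0.20, 0.10])],
}
templates = train_templates(training)
print(classify(np.array([0.92, 0.80, 0.88]), templates))  # → A
```

Because each gesture is summarized by a single template vector, the scheme scales naturally to dozens of gestures and, when templates are averaged over several signers, is not tied to one user's input.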
