Norman I. Badler
Journal Articles
Publisher: Journals Gateway
Presence: Teleoperators and Virtual Environments (2015) 24 (3): 187–200.
Published: 01 July 2015
Abstract
Virtual heritage architectural and cultural reconstructions may be enhanced by populating the environment with simulated people. There are a number of important human modeling issues to address, such as situationally appropriate clothing, occupations, and behaviors. Our interest here is focused on how people interact with portable items in their environment: namely, whether they are carrying items and what those items are. With an end goal of enabling lifelike, data-driven, agent-based populace simulations, we conducted an informal but systematic ethnographic observational study of the items carried by more than 3,000 people in two different urban community environments: an indoor market and an outdoor city plaza. We recorded the number and types of items carried by each person, along with their gender, estimated age category, and whether they were alone or in a group. We performed a basic statistical analysis of the results. There were two highly significant findings: (1) a strong and similar majority of all people carry at least one item (76.63% in the indoor setting and 79.79% in the outdoor setting); and (2) the types and amounts of items carried were highly consistent across the two different environments, implying that the data may be applicable in a wide range of scenarios.
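As a sketch of how such observational data could seed an agent-based populace, the snippet below samples per-agent carry counts from a hypothetical distribution. Only the "at least one item" mass (about 0.78) is anchored to the study's reported 76.63% and 79.79%; the split across individual counts is invented for illustration.

```python
import random

# Hypothetical distribution of how many items an agent carries.
# Only the total "carrying at least one item" probability (~0.78)
# reflects the study; the per-count weights are invented.
CARRY_COUNT_WEIGHTS = {0: 0.22, 1: 0.48, 2: 0.22, 3: 0.08}

def sample_carry_count(rng):
    """Draw the number of carried items for one simulated agent."""
    counts = list(CARRY_COUNT_WEIGHTS)
    weights = list(CARRY_COUNT_WEIGHTS.values())
    return rng.choices(counts, weights=weights, k=1)[0]

rng = random.Random(0)
population = [sample_carry_count(rng) for _ in range(10_000)]
carrying_fraction = sum(n > 0 for n in population) / len(population)
```

With a seeded generator the carrying fraction lands near the anchored 0.78, so a populated scene would match the observed majority-carrying pattern.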
Presence: Teleoperators and Virtual Environments (2014) 23 (2): 191–208.
Published: 01 August 2014
Abstract
Due to varied personal, social, or even cultural situations, people sometimes conceal or mask their true emotions. These suppressed emotions can be expressed in a very subtle way by brief movements called microexpressions. We investigate human subjects' perception of hidden emotions in virtual faces, inspired by recent psychological experiments. We created animations of virtual faces showing facial expressions and inserted brief secondary expressions into some sequences, in order to convey a subtle second emotion in the character. Our evaluation methodology consists of two sets of experiments with three different sets of questions. The first experiment verifies that the accuracy and concordance of the participants' responses to synthetic faces match the empirical results obtained with photos of real people by X.-b. Shen, Q. Wu, and X.-l. Fu (2012, "Effects of the duration of expressions on the recognition of microexpressions," Journal of Zhejiang University Science B, 13(3), 221–230). The second experiment verifies whether participants could perceive and identify primary and secondary emotions in virtual faces. The third experiment evaluates the participants' perception of realism, deceit, and valence of the emotions. Our results show that most participants recognized the foreground (macro) emotion, and most of the time they perceived the presence of the second (micro) emotion in the animations, although in some samples they did not identify it correctly. These experiments demonstrate the benefits of conveying microexpressions in computer graphics characters: subliminal microexpression cues may visually enhance a character's emotional depth and consequently increase its perceived social complexity and believability.
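The secondary-expression insertion described above can be sketched as splicing a brief micro-expression frame run into a longer primary-expression sequence. The frame counts, frame rate, and emotion labels below are illustrative, not the study's actual stimulus parameters.

```python
def insert_micro(primary_frames, micro_frames, start):
    """Overlay a brief micro-expression onto a primary-expression
    frame sequence, starting at frame index `start`."""
    if start + len(micro_frames) > len(primary_frames):
        raise ValueError("micro-expression does not fit in the sequence")
    out = list(primary_frames)
    out[start:start + len(micro_frames)] = micro_frames
    return out

# 30 frames of "happy" with a 3-frame "contempt" micro-expression
# spliced in at frame 12 (a ~100 ms flash at 30 fps).
sequence = insert_micro(["happy"] * 30, ["contempt"] * 3, start=12)
```

The overlay replaces frames rather than lengthening the sequence, so the primary expression's overall timing is preserved around the brief flash.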
Presence: Teleoperators and Virtual Environments (2008) 17 (2): 103–120.
Published: 01 April 2008
Abstract
What is often missing from many virtual worlds and training simulations is a physical sense of the confinement and constraint of the virtual environment. We present a method for providing localized cutaneous vibratory feedback to the user's right arm. We created a sleeve of tactors linked to a real-time human model; the tactors activate to apply sensation to the corresponding body area. The hypothesis is that vibrotactile feedback to body areas provides the wearer sufficient guidance to assume correct body configurations and ascertain the existence and physical realism of access paths. We present the results of human subject experiments that study both explicit and implicit training of skills using vibrotactile arrays. Implicitly, collision awareness is achieved by activating the appropriate tactor when a body part collides with the scene; thus, the user will attempt to correct his or her body configuration. Explicitly, we use the tactors to guide the body into the proper configuration. The results of human subject experiments clearly show that the use of full arm vibrotactile feedback improves performance over purely visual feedback for navigating the virtual environment, as well as allowing easy acquisition of new skills. These results validate the empirical performance of this concept.
Includes: Supplementary data
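The implicit collision-awareness loop can be sketched as a mapping from colliding body segments to tactor indices. The segment names and the four-tactor layout below are assumptions for illustration, not the paper's actual sleeve hardware or interface.

```python
# Assumed tactor layout along the right arm; the real sleeve's
# tactor count and placement may differ.
TACTOR_MAP = {"upper_arm": 0, "forearm": 1, "wrist": 2, "hand": 3}

def tactors_to_fire(colliding_segments):
    """Return the tactor indices to activate for body segments
    currently intersecting the virtual scene."""
    return sorted(TACTOR_MAP[s] for s in colliding_segments
                  if s in TACTOR_MAP)

# Forearm and hand are in collision: tactors 1 and 3 vibrate,
# prompting the wearer to correct the arm's configuration.
active = tactors_to_fire({"forearm", "hand"})
```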
Presence: Teleoperators and Virtual Environments (1996) 5 (4): 416–430.
Published: 01 November 1996
Abstract
Simulating a human figure performing a manual task requires that the agent interact with objects in the environment in a realistic manner. Graphic or programming interfaces to control human figure animation, however, do not allow the animator to instruct the system with concise "high-level" commands. Instructions coming from a high-level planner cannot be directly given to a synthetic agent because they do not specify such details as which end-effector to use or where on the object to grasp. Because current animation systems require joint angle displacement descriptions of motion—even for motions that incorporate upwards of 15 joints—an efficient connection between high-level specifications and low-level hand joint motion is required. In this paper we describe a system that directs task-level, general-purpose, object grasping for a simulated human agent. The Object-Specific Reasoner (OSR) is a reasoning module that uses knowledge of the object of the underspecified action to generate values for missing parameters. The Grasp Behavior manages simultaneous motions of the joints in the hand, wrist, and arm, and provides a programmer with a high-level description of the desired action. When composed hierarchically, the OSR and the Grasp Behavior interpret task-level commands and direct specific motions to the animation system. These modules are implemented as part of the Jack system at the University of Pennsylvania.
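The parameter-completion idea behind the OSR can be sketched as filling in the fields a task-level command omits, using per-object knowledge. The object database, field names, and defaults below are invented for illustration; the real OSR's object representation is richer.

```python
# Invented per-object grasp knowledge, standing in for the OSR's
# object database.
OBJECT_KNOWLEDGE = {
    "mug": {"grasp_site": "handle", "approach": "side"},
    "ball": {"grasp_site": "surface", "approach": "top"},
}

def complete_grasp(command):
    """Fill in the parameters a task-level grasp command leaves
    unspecified, using knowledge of the target object."""
    spec = dict(command)
    info = OBJECT_KNOWLEDGE.get(spec["object"], {})
    spec.setdefault("end_effector", "right_hand")  # invented default
    spec.setdefault("grasp_site", info.get("grasp_site", "center"))
    spec.setdefault("approach", info.get("approach", "top"))
    return spec

# A bare "grasp the mug" command becomes a fully specified request.
spec = complete_grasp({"object": "mug"})
```

Fields the planner did supply are left untouched, so explicit instructions always override the object defaults.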
Presence: Teleoperators and Virtual Environments (1996) 5 (4): 359–366.
Published: 01 November 1996
Abstract
We present a model for simulating casualties in virtual environments for real-time medical training. It allows a user to choose diagnostic and therapeutic actions to carry out on a simulated casualty who will manifest appropriate physiological, behavioral, and physical responses. Currently, the user or a “stealth instructor” can specify one or more injuries that the casualty has sustained. The model responds by continuously determining the state of the casualty, responding appropriately to medical assessment and treatment procedures. So far, we have modeled four medical conditions and over 20 procedures. The model has been designed to handle the addition of other injuries and medical procedures.
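The continuous state update such a casualty model implies can be sketched as a loop in which injuries drive physiological variables and treatment procedures modify them. The condition, rates, thresholds, and procedure below are invented for illustration, not the paper's four modeled conditions.

```python
def step(state, dt):
    """Advance the casualty's physiological state by dt seconds.
    An untreated hemorrhage drains blood volume at an invented rate."""
    if "hemorrhage" in state["injuries"] and not state["tourniquet"]:
        state["blood_ml"] = max(0.0, state["blood_ml"] - 5.0 * dt)
    state["conscious"] = state["blood_ml"] > 3000.0  # invented threshold
    return state

def apply_procedure(state, procedure):
    """Apply a treatment procedure; only a tourniquet is modeled here."""
    if procedure == "tourniquet":
        state["tourniquet"] = True
    return state

casualty = {"injuries": {"hemorrhage"}, "blood_ml": 5000.0,
            "tourniquet": False, "conscious": True}
for _ in range(60):              # one minute untreated
    step(casualty, 1.0)
apply_procedure(casualty, "tourniquet")
step(casualty, 1.0)              # bleeding has stopped
```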
Presence: Teleoperators and Virtual Environments (1996) 5 (4): 393–401.
Published: 01 November 1996
Abstract
A simple inverse kinematics procedure is proposed for a seven-degree-of-freedom model of the human arm. Two schemes are used to provide an additional constraint, leading to closed-form analytical equations with an upper bound of two or four solutions. Multiple solutions can be evaluated on the basis of their proximity to the rest angles or to the previous configuration of the arm. Empirical results demonstrate that the procedure is well suited for real-time applications.
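The disambiguation step among the closed-form candidates can be sketched as choosing the joint configuration nearest a reference posture, either the rest angles or the arm's previous configuration. The candidate angle values below are invented.

```python
import math

def pick_solution(candidates, reference):
    """Choose the closed-form IK solution (a tuple of joint angles)
    closest to a reference posture, by joint-space Euclidean distance."""
    def dist(sol):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(sol, reference)))
    return min(candidates, key=dist)

# Two hypothetical elbow-up / elbow-down configurations; the one
# nearer the arm's previous configuration wins, avoiding a sudden flip.
solutions = [(0.1, 1.2, -0.4), (0.1, -1.2, 0.4)]
previous = (0.0, 1.0, -0.3)
best = pick_solution(solutions, previous)
```

Preferring the nearest solution keeps the animated arm temporally coherent frame to frame, which matters for the real-time use the abstract targets.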
Presence: Teleoperators and Virtual Environments (1995) 4 (3): 286–296.
Published: 01 August 1995
Abstract
We present a new framework for rendering virtual environments. This framework is proposed as a complete scene description, which embodies the space of all possible renderings, under all possible lighting scenarios of the given scene. In effect, this hypothetical rendering space includes all possible light sources as part of the geometric model. While it would be impractical to implement the general framework, this approach does allow us to look at the rendering problem in a new way. Thus, we propose new representations that are subspaces of the entire rendering space. Some of these subspaces are computationally tractable and may be carefully chosen to serve a particular application. The approach is useful both for real and virtual scenes. The framework includes methods for rendering environments which are illuminated by artificial light, natural light, or a combination of the two models.
Presence: Teleoperators and Virtual Environments (1995) 4 (1): 81–96.
Published: 01 February 1995
Abstract
The overall goals of the Center for Human Modeling and Simulation are the investigation of computer graphics modeling, animation, and rendering techniques. Major foci are behavior-based animation of human movement, modeling through physics-based techniques, applications of control theory to dynamic models, illumination models for image synthesis, and understanding the relationship between human movement, natural language, and communication.
Presence: Teleoperators and Virtual Environments (1993) 2 (1): 82–86.
Published: 01 February 1993
Abstract
We track, in real time, the position and posture of a human body, using a minimal number of 6-DOF sensors to capture full-body standing postures. We use four sensors to create a good approximation of a human operator's position and posture, and map it onto our articulated computer graphics human model. The unsensed joints are positioned by a fast inverse kinematics algorithm. Our goal is to realistically recreate human postures while minimally encumbering the operator.
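The sensed/unsensed split described above can be sketched as a partition of the skeleton's joints: sensor-driven segments are posed directly from sensor readings, and the rest are handed to the inverse kinematics pass. The sensor placement and joint list below are assumptions; the abstract does not name the exact sensor sites.

```python
# Assumed placement of the four 6-DOF sensors.
SENSED = {"head", "pelvis", "left_hand", "right_hand"}

JOINTS = ["head", "neck", "spine", "pelvis",
          "left_shoulder", "left_elbow", "left_hand",
          "right_shoulder", "right_elbow", "right_hand"]

def partition(joints, sensed):
    """Split joints into sensor-driven ones (posed directly from
    sensor readings) and those left to inverse kinematics."""
    driven = [j for j in joints if j in sensed]
    unsensed = [j for j in joints if j not in sensed]
    return driven, unsensed

driven, unsensed = partition(JOINTS, SENSED)
```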