This work explored how graphical information about self-movement affected reach-to-grasp movements in an augmented environment. Twelve subjects reached to grasp objects that were either passed by a partner or resting on a table surface. Graphical feedback about self-movement was available for half the trials and removed for the other half. Results indicated that removing visual feedback about self-movement in an object-passing task dramatically affected both the receiver's movement to grasp the object and the time to transfer the object between partners; in particular, the receiver's deceleration time and the temporal and spatial aspects of grasp formation showed significant effects. Results also indicated that a graphic representation of self-movement affected the kinematics of reaching to grasp a stationary object on a table in much the same way as it affected reaching to grasp an object held by a stationary or moving partner. These results suggest that performance of goal-directed movements, whether toward a stationary object on a table surface or toward an object being passed by a stationary or moving partner, benefits from even a crude graphical representation of the finger pads. The role of graphic feedback about self-movement in tasks requiring precision is discussed, as are implications for the use of kinematic measures in the field of Human-Computer Interaction (HCI).