Abstract

As an observer moves and explores the environment, the visual stimulation reaching the eye is constantly changing. Somehow the observer is able to perceive the spatial layout of the scene and to discern his movement through space. Computational vision researchers have been trying to solve this problem for a number of years with only limited success. It is a difficult problem to solve because the relationship between the optical-flow field, the 3D motion parameters, and depth is nonlinear. We have come to understand that this nonlinear equation describing the optical-flow field can be split by an exact algebraic manipulation to yield an equation that relates the image velocities to the translational component of the 3D motion alone. Thus, the depth and the rotational velocity need not be known or estimated prior to solving for the translational velocity. The algorithm applies to the general case of arbitrary motion with respect to an arbitrary scene. It is simple to compute and it is plausible biologically.
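The algebraic split described above can be illustrated with a small numerical sketch. This is not the paper's implementation; it assumes the standard perspective flow model in which the flow at each image point is a depth-scaled translational term plus a depth-independent rotational term, so that for any fixed candidate translation the flow is linear in the unknown inverse depths and rotation. Projecting the measured flow onto the orthogonal complement of that linear subspace yields a residual that depends on the translation alone. All symbol names (`A`, `B`, `C`, `residual`) are illustrative, not the paper's notation.

```python
import numpy as np

f = 1.0  # focal length (assumed, in normalized image units)

def A(x, y):
    # flow component due to translation; it enters scaled by inverse depth p = 1/Z
    return np.array([[-f, 0.0, x],
                     [0.0, -f, y]])

def B(x, y):
    # flow component due to rotation; independent of depth
    return np.array([[x * y / f, -(f + x * x / f), y],
                     [f + y * y / f, -x * y / f, -x]])

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(8, 2))        # image points
p = rng.uniform(0.5, 2.0, size=8)            # inverse depths (unknown to the solver)
T_true = np.array([0.6, 0.2, 0.75])
T_true /= np.linalg.norm(T_true)             # translation direction (unit vector)
Om = np.array([0.01, -0.02, 0.005])          # rotational velocity

# synthetic measured flow: v_i = p_i * A_i @ T + B_i @ Om, stacked into one vector
v = np.concatenate([p[i] * A(*pts[i]) @ T_true + B(*pts[i]) @ Om
                    for i in range(len(pts))])

def residual(T):
    # For a fixed candidate T, the flow is LINEAR in (p_1..p_N, Om).
    # Build C(T), whose columns span every flow consistent with T, and
    # measure the part of v that lies outside range(C(T)).
    N = len(pts)
    C = np.zeros((2 * N, N + 3))
    for i in range(N):
        C[2 * i:2 * i + 2, i] = A(*pts[i]) @ T   # column for unknown p_i
        C[2 * i:2 * i + 2, N:] = B(*pts[i])      # columns for unknown Om
    r = v - C @ np.linalg.lstsq(C, v, rcond=None)[0]
    return float(r @ r)

# The residual vanishes at the true translation direction and is positive
# at a wrong one, so T can be recovered without knowing depth or rotation.
print(residual(T_true), residual(np.array([1.0, 0.0, 0.0])))
```

Minimizing this residual over candidate unit translation vectors recovers the translational velocity first; depth and rotation can then be solved for afterward, consistent with the abstract's claim that they need not be estimated beforehand.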


Author notes

*Current address: NASA-Ames Research Center, mail stop 262-2, Moffett Field, CA 94035 USA.