A parallel algorithm operating on the units (“neurons”) of an artificial retina is proposed to recover depth information in a visual scene from the radial flow fields induced by ego-motion along a given axis. The system consists of up to 600 radii with fewer than 65 radially arranged neurons on each radius. Neurons are connected only to their nearest neighbors, and a neuron is excited as soon as a sufficiently strong gray-level change occurs at its retinal position. The time difference between two successively activated neurons is then used by the last-excited neuron to compute the depth information. All algorithmic calculations remain strictly local, and information is exchanged only between adjacent active neurons (except for the final read-out), which in principle permits a parallel implementation. Furthermore, it is demonstrated that the calculation of the object coordinates requires only a single multiplication by a constant that depends only on the retinal position of the active neuron. The initial restriction to local operations makes the algorithm very noise sensitive. To solve this problem, a prediction mechanism is introduced: after an object coordinate has been determined, the active neuron computes the time at which the next neuronal excitation should take place. This estimated time is transferred to the respective next neuron, which will accept an excitation only within a certain time window around it. If the excitation fails to arrive within this window, the previously computed object coordinate is regarded as noisy and discarded. We show that this predictive mechanism also requires only a single multiplication by a second neuron-dependent constant. Thus, computational complexity remains low, and noisy depth coordinates are efficiently eliminated. As a result, the algorithm is very fast and operates in real time on 128×128 images even in a serial implementation on a relatively slow computer.
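The timing-to-depth arithmetic described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a pinhole projection r = f·R/Z, a constant ego-motion speed v toward the scene, and illustrative parameter values. Under these assumptions, the depth at the moment neuron i+1 fires reduces to a single multiplication of the measured inter-neuron time difference by the position-dependent constant v·r_i/(r_{i+1}−r_i), and the predicted next firing interval to a second such multiplication:

```python
import numpy as np

# Illustrative parameters (not from the paper): focal length f, ego-motion
# speed v, and a scene point at lateral distance R from the motion axis.
f, v, R = 1.0, 1.0, 2.0

# Retinal positions (eccentricities) of the neurons along one radius.
r = np.linspace(0.1, 1.0, 65)

# Under pinhole projection r = f*R/Z, neuron i is excited when the point's
# depth reaches Z_i = f*R/r_i; with Z(t) = Z0 - v*t the excitation times are:
Z0 = 25.0
t = (Z0 - f * R / r) / v

# Depth recovery: neuron i+1 multiplies the measured time difference dt by a
# constant that depends only on retinal position (and the known speed v):
#   c_{i+1} = v * r_i / (r_{i+1} - r_i)
dt = np.diff(t)
c = v * r[:-1] / np.diff(r)
Z_est = c * dt                      # depth at the moment neuron i+1 fires

# Sanity check against the ground-truth depth at those moments:
assert np.allclose(Z_est, f * R / r[1:])

# Prediction for noise rejection: from the same timing, neuron i+1 predicts
# the interval until neuron i+2 fires via a second position-dependent
# constant; excitations outside a window around this prediction are noise.
p = r[:-2] * np.diff(r)[1:] / (np.diff(r)[:-1] * r[2:])
dt_pred = p * dt[:-1]
assert np.allclose(dt_pred, dt[1:])
```

In this noise-free sketch both assertions hold exactly; in practice the predicted interval would only define the center of the acceptance window.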
The algorithm is tested on scenes of increasing complexity, and a detailed error analysis shows that the depth error remains very low in most cases. A comparison with standard flow-field analysis shows that our algorithm outperforms the older method by far. The analysis also shows that the algorithm is generally applicable despite its restrictions, because it is fast and accurate enough that a complete depth percept can be composed from radial flow-field segments. Finally, we suggest how to generalize the algorithm by lifting the restriction to radial flow.