Apparent motion of the surroundings on an agent's retina can be used to navigate through cluttered environments, avoid collisions with obstacles, or track targets of interest. The pattern of apparent motion of objects (i.e., the optic flow) contains spatial information about the surrounding environment. For small, fast-moving agents, such as those used in search and rescue missions, it is crucial to quickly estimate the distance to nearby objects in order to avoid collisions. This estimation cannot be done with conventional methods, such as frame-based optic flow estimation, given the size, power, and latency constraints of the necessary hardware. A practical alternative is to use event-based vision sensors. In contrast to frame-based cameras, these sensors produce so-called events only when there are changes in the visual scene.

We propose a novel asynchronous circuit, the spiking elementary motion detector (sEMD), composed of a single silicon neuron and synapse, to detect elementary motion from an event-based vision sensor. The sEMD encodes the time an object's image needs to travel across the retina into a burst of spikes. The number of spikes within the burst is proportional to the speed of events across the retina. A fast but imprecise estimate of the time-to-travel can already be obtained from the first two spikes of a burst and refined by subsequent interspike intervals. The latter encoding scheme is possible due to an adaptive nonlinear synaptic efficacy scaling.
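To make the burst-based encoding concrete, the following is a minimal, illustrative Python sketch, not the authors' circuit-level implementation: a facilitatory event starts an exponentially decaying gain trace, the later trigger event gates a leaky integrate-and-fire neuron with whatever efficacy remains, and the resulting burst therefore contains fewer spikes as the time-to-travel grows (i.e., more spikes at higher speed). All parameter values (time constants, threshold, drive duration) are assumptions chosen only for illustration.

```python
import numpy as np

def semd_burst(time_to_travel_ms,
               tau_fac_ms=25.0,   # decay of the facilitatory gain trace (assumed)
               tau_mem_ms=5.0,    # membrane time constant of the LIF neuron (assumed)
               w_max=3.0,         # peak synaptic efficacy (assumed)
               v_thresh=1.0,      # spike threshold (assumed)
               drive_ms=20.0,     # duration of the triggered synaptic drive (assumed)
               dt_ms=0.1):
    """Toy sEMD: an event at one pixel starts a decaying gain trace; when the
    stimulus reaches the neighboring pixel after `time_to_travel_ms`, the
    trigger input is scaled by the remaining gain, so the burst shortens as
    the time-to-travel grows."""
    # Efficacy left when the trigger event arrives.
    gain = w_max * np.exp(-time_to_travel_ms / tau_fac_ms)
    v, spikes = 0.0, 0
    for _ in range(int(drive_ms / dt_ms)):
        v += dt_ms * (-v / tau_mem_ms + gain)  # leaky integration of the gated drive
        if v >= v_thresh:
            spikes += 1
            v = 0.0                            # reset after each spike in the burst
    return spikes

for ttt in (2.0, 5.0, 10.0, 20.0, 40.0):       # ms between the two pixels' events
    print(f"time-to-travel {ttt:5.1f} ms -> burst of {semd_burst(ttt)} spikes")
```

In this toy model the first interspike interval already reflects the gated efficacy, and later intervals carry the same information redundantly, loosely mirroring the fast-then-refined readout described above; a real adaptive nonlinear efficacy scaling is omitted here.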

We show that the sEMD can be used to compute a collision-avoidance direction in the context of robotic navigation in a cluttered outdoor environment, and we compare this direction to the output of a frame-based algorithm. The proposed computational principle constitutes a generic spiking temporal correlation detector that can be applied to other sensory modalities (e.g., sound localization), and it provides a novel perspective on gating information in spiking neural networks.
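As an illustration of how per-direction sEMD activity could be turned into a steering command, the sketch below weights candidate headings by the inverse of their local burst counts (high apparent speed suggesting close clutter during forward translation) and averages the headings on the unit circle. This is an assumed, simplified scheme for illustration only, not the controller evaluated against the frame-based baseline.

```python
import numpy as np

def avoidance_direction(azimuths_deg, burst_counts):
    """Pick a heading from per-direction burst counts: directions with little
    apparent motion (few spikes, presumably distant obstacles) get high weight,
    and the weighted headings are averaged on the unit circle so the result
    behaves correctly across the +/-180 degree wrap-around."""
    az = np.radians(np.asarray(azimuths_deg, dtype=float))
    counts = np.asarray(burst_counts, dtype=float)
    weights = 1.0 / (1.0 + counts)            # prefer low-flow directions
    x = np.sum(weights * np.cos(az))
    y = np.sum(weights * np.sin(az))
    return np.degrees(np.arctan2(y, x))

# Toy example: strong flow (close clutter) on the left, weak flow on the right,
# so the suggested heading turns to the right.
azimuths = [-60, -30, 0, 30, 60]
counts = [40, 25, 10, 4, 2]
print(f"steer towards {avoidance_direction(azimuths, counts):+.1f} degrees")
```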
