Abstract

Motion perception is a critical capability underpinning many aspects of an insect's life, including predator avoidance and foraging. A good number of motion detectors have been identified in insect visual pathways. Computational modeling of these motion detectors has not only provided effective solutions for artificial intelligence, but has also advanced our understanding of complex biological visual systems. These biological mechanisms, honed over millions of years of evolution, can serve as solid building blocks for the dynamic vision systems of future intelligent machines. This article reviews computational motion perception models in the literature that originate from biological research on insect visual systems. These motion perception models or neural networks include the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction-selective neurons (DSNs) in fruit flies, bees, and locusts, and the small-target motion detectors (STMDs) in dragonflies and hoverflies. We also review the applications of these models to robots and vehicles. Through these modeling studies, we summarize the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple-system integration and hardware realization of these bio-inspired motion perception models.

1 Introduction

Motion perception is critically important for a variety of daily tasks in animals and humans. Insects, in particular, are “experts” in motion perception, even though they have tiny brains and far fewer visual neurons than vertebrates. Ample evidence has demonstrated their remarkable ability to deal with visual motion and to interact with dynamic and cluttered scenes, enabling quick and flexible reactions like collision avoidance and target tracking and following, in some respects even outperforming vertebrates, including humans [26, 28, 30, 31, 59, 66, 193].

Insects have compact visual systems that can extract meaningful motion cues and distinguish different motion patterns to drive appropriate behavioral responses. For example, locusts can fly for hundreds of miles in dense swarms free of collision; honeybees show a centering response when flying through a tunnel; praying mantises can monitor small moving prey in visual clutter. Such abilities draw attention not only from biologists but also from computational modelers and engineers. In biology, the underlying circuits and mechanisms of insect visual processing systems remain largely unknown to date [59, 66, 193]. While the biological substrates are elusive, computational modeling and applications to machine vision are particularly useful for understanding neural characteristics and demonstrating the functionality of visual circuits or pathways [31, 59, 193]. In addition, these models can be ideal modules for designing dynamic vision systems or sensors that give future intelligent machines, like robots and vehicles, motion perception in a low-energy, fast, and reliable manner.

1.1 Related Survey of Research on Biological Visual Systems

The past several decades have witnessed much progress in our understanding of the cellular and subcellular mechanisms of these elusive biological visual systems. Specific visual neurons or pathways have been identified in insects: various kinds of flies [12, 17, 21, 121, 125, 136, 209, 212, 213, 224, 248, 250], locusts [157, 175, 176, 189, 200, 210, 227, 231], and bees [13, 14, 154], as well as ants [245] and mantises [6, 232]. Two studies have reviewed fundamental mechanisms in insect visual motion detection, covering classic models and functions [29, 37]. From the earliest stages, fly visual systems have been seminal models for studying animals' motion-detecting strategies [27]. Borst et al. thoroughly reviewed the successive physiological findings on fly visual systems and summarized visual course control, including behaviors, algorithms, and circuits [26, 28, 29, 30, 49]. Importantly, they also pointed out the commonality in design of fly and mammalian motion vision circuits [31]. In contrast to correlation-based elementary motion detection, which is velocity-dependent, Aptekar briefly reviewed higher-order figure detection in flies based on non-Fourier or statistical features, which correlates with human vision [3]. In addition, Rind et al. investigated the underlying structures and mechanisms of locust visual systems to uncover looming perception and collision avoidance schemes [110, 168, 170–176, 199, 201, 210, 227]. On the behavioral level, one investigation reviewed visually elicited escape behaviors in insects and, moreover, demonstrated the complexity of both visual and escape circuits [33].

1.2 Related Survey of Bio-inspired Models and Applications

These naturally evolved vision systems provide a rich source of inspiration for developing artificial visual systems for motion perception. As hardware has rapidly developed, these bio-inspired models have become applicable to robotics, and a good number of surveys have demonstrated how machine vision benefits from computational modeling of insect vision. Iida reviewed models motivated by flying insects and their applications to robotics [105]. Floreano et al. proposed applying bio-inspired control methodology to vision-based wheeled and flying robots [56]. Srinivasan et al. rigorously studied the models and control methods inspired by the visual systems of flying insects like honeybees for visually guided flight control and navigation [204, 205, 206]. Huber presented models of visuomotor control in flies and visual-behavior-based models of control and design for robotics [103]. Franceschini systematically surveyed biological research and computational models based on a scheme of fly elementary motion detectors (EMDs) with a view to bio-robotics applications [59]. Recently, Serres and Ruffier reviewed the applications of fly optic-flow-based strategies to UAVs and MAVs for multiple visually guided behaviors like collision avoidance, terrain following, landing, and tunnel crossing [193]. More generically, Desouza and Kak surveyed vision techniques for mobile robot navigation, ranging from traditional computer vision methods to insect optic-flow strategies [44]. On the other hand, Webb considered the influence of robot-based research, including bio-inspired vision, on investigations of biological animal behavior [225, 226]; importantly, these seminal works revealed that bio-robotic studies can be good paradigms for studying biological behaviors.

1.3 Taxonomy of This Review

Over the past decades, many publications have addressed computational modeling of motion perception neural systems originating from research on animals' visual systems, shedding light on significant breakthroughs in bio-inspired artificial visual systems for future robotics and autonomous vehicles. These publications cover computational modeling of different motion patterns, such as looming, translation, small-target motion, and rotation, as well as various applications. However, there is no systematic review of this promising research field, though some of the topics have been touched upon in passing in different articles.

To the best of our knowledge, this survey is the first to cover computational models sensitive to different motion patterns, including looming, translation, and small-target motion, that originate from research on several kinds of insects' visual systems. These insects include locusts, fruit flies (Drosophila), dragonflies, hoverflies, and bees (bumblebees and honeybees). The vast majority of existing biological and computational studies have focused on the various kinds of flies and locusts. Although there have been a few reviews of biological and computational models, as well as applications, of fly visual systems (e.g., [26, 59, 193]), no systematic survey has included the looming-sensitive neuron models inspired by locust visual systems or the small-target motion-sensitive neuron models inspired by dragonflies and hoverflies, together with their applications.

In the real world, the diversity of motion patterns can be categorized into a few types, involving expansion and contraction of objects, translation, rotational or spiral motion, and the like. Depending on the distance between moving objects and the observer, one can also define more specific motion patterns such as small-target movements. Perceiving and recognizing these different motion patterns in a visually cluttered and dynamic environment is critically important for the survival of insects. Given the diversity of visual neurons that possess specific sensitivity to different motion patterns, this article reviews bio-inspired motion perception models and applications in the literature according to different kinds of direction and size selectivity, as illustrated in Figure 1. These models exhibit distinct direction selectivity (DS) to looming and translation in visual neurons of locusts, fruit flies, and bees, as well as specific size selectivity to small-target motion in visual neurons of dragonflies and hoverflies.

Figure 1. 

The taxonomy proposed in this review.

This review also makes several additional contributions regarding computational models and applications of insect visual systems. It summarizes the similarities in computational modeling of different visual neurons. It demonstrates the key methods for generation of both direction and size selectivity in motion perception models. It also suggests that the computational modeling of similar motion-sensitive neurons in other insects like mantises and arthropods like crabs may benefit from the existing models. Further, it discusses the integration of multiple neural systems to handle more complex visual tasks, and describes the hardware realization of these models.

The taxonomy proposed in this review is given in Figure 1. Table 1 lists the nomenclature used in this review. The rest of this article is organized as follows: First, we review the looming-sensitive neuronal models and applications of two locust lobula giant movement detectors (LGMDs), LGMD1 and LGMD2, in Section 2. In Section 3, we introduce translation-sensitive neural systems and their wide applications in flying robots, including UAVs and MAVs; we also present the cutting-edge research on on and off pathways underlying motion perception in biological visual systems. In Section 4, we survey a specific group of visual neural networks for sensing small-target motion. In Section 5, we summarize the commonality in computational modeling of different insect visual systems, and we demonstrate the generation of different direction and size selectivity in these models. We also present further discussion about multiple systems integration and potential hardware implementations of these motion perception models. Finally, in Section 6 we summarize this review article.

Table 1. 
Nomenclature in this review.
LGMD(s) Lobula giant movement detector(s) 
DCMD Descending contralateral movement detector 
DSN(s) Direction-selective neuron(s) 
STMD(s) Small-target motion detector(s) 
SFS Small-field system 
EMD(s) Elementary motion detector(s) 
LPTC(s) Lobula plate tangential cell(s) 
LGN Lobula giant neuron 
OF Optic flow 
HR(C) Hassenstein-Reichardt (correlator) 
FDN(s) Figure detection neuron(s) 
PD(s) Preferred direction(s) 
ND(s) Null or non-preferred direction(s) 
DS Direction selectivity 
VLSI Very large-scale integration 
FPGA Field-programmable gate array 
UAV Unmanned aerial vehicle 
MAV Micro air vehicle 

2 Neuron Models of Looming Perception

This section reviews looming-sensitive neuronal models as collision-detecting systems and applications inspired by the locust visual systems. These include two neuronal models of LGMD1 and LGMD2. In addition, this section introduces different methods to shape the looming selectivity in computational structures. This section also covers existing applications of these looming detectors in mobile ground robots, UAVs, and ground vehicles.

Looming stimuli signal the movement in depth of objects that threaten collision, and they are very frequent visual challenges to animals. Recognizing looming objects in a timely and accurate manner is crucial for animals' survival, underpinning a variety of visually guided behaviors like avoiding predators. A few looming-sensitive neurons have been identified in insects like locusts [152] and flies [42], and in arthropods like crabs [150]. Among these animals, locusts have been studied in a good number of works demonstrating looming perception schemes (e.g., [175, 176, 189]), which have been adopted to design artificial collision-sensitive models and sensors (e.g., [8, 23, 43, 72, 237]). As a result of millions of years of evolutionary development, locusts are “experts” in collision detection and avoidance, and can fly in dense swarms for hundreds of miles free of collision. Replicating this robust ability is essential for future intelligent machines like autonomous robots and vehicles interacting with dynamic and complex environments.

2.1 LGMD1-Based Neuron Models and Applications

2.1.1 Biological Research Background

As early as the 1970s, biologists had anatomically identified a group of large interneurons in the lobula neuropile layer of the locust's visual brain. These neurons were called lobula giant movement detectors (LGMDs) [151, 152]. LGMD1 was first identified as a movement detector and gradually recognized as a looming-object detector (e.g., [152, 168, 173]). LGMD2 was likewise identified as a looming detector, but with unique characteristics that differ from those of LGMD1 [199]. Both LGMD1 and LGMD2 respond more strongly to approaching objects than to other kinds of movement like recession and translation [170, 175, 176].

The vast majority of studies have concentrated on LGMD1. This neuron has been shown to play a dominant role in locusts capable of flight [170, 171, 175, 211, 227, 231]. In terms of neuromorphology, Figure 2 illustrates an LGMD1 neuron and both its presynaptic and postsynaptic neural structures. Generally speaking, LGMD1 integrates visual signals from different dendritic areas; these give rise to two kinds of flows, excitations and inhibitions, and the neural processing within the circuit is a competition between the two [76, 170]. More precisely, activation of LGMD1 requires the excitatory flow to win this competition. In addition, the descending contralateral movement detector (DCMD) is a postsynaptic target neuron connected one-to-one with the LGMD1 [85, 110, 173, 201]; it conveys the spikes generated by the LGMD1 to downstream motion-control neural systems that produce avoidance behaviors [201].

Figure 2. 

Illustrations of the LGMD1 morphology: (a) Presynaptic neuropile layers of the LGMD1 neuron and the postsynaptic one-to-one target DCMD neuron, adapted from [175], (b) LGMD1's large dendritic fan (A) and two additional dendritic fields (B, C) that receive distinct synaptic inputs, adapted from [78].

So, what does the LGMD1's neural response look like? Figure 3 demonstrates the responses to visual stimuli produced by approaching and receding objects (Figure 3(a)), as well as to translation stimuli of dark objects of various sizes and speeds (Figure 3(b)). It can be clearly seen from Figure 3 that the LGMD1 neuron responds most strongly to approaching (looming) objects, yielding the highest firing rates. It is only briefly excited by an object moving away. Translating stimuli elicit only short-lived and weak responses of LGMD1, regardless of the size, direction, and speed of objects.

Figure 3. 

(a) LGMD1 neural response to approaching and receding stimuli, adapted from [175]; arrows indicate a hyperpolarization response of strong inhibition after activation. (b) LGMD1 neural response to approaching and translating stimuli from a variety of sizes and speeds of moving objects, adapted from [155].

More recently, Yakubowski et al. characterized the neural response of locust LGMD1 against visually cluttered or dynamic backgrounds and to a richer set of visual stimuli, including objects that deviate from a collision course [231]. As can be seen from Figure 4, the LGMD1 responds vigorously to a variety of oncoming threats and discriminates well between colliding and receding objects. This response is also affected by the complexity of background motion, such as that due to dynamic visual clutter.

Figure 4. 

The LGMD1-DCMD pathway in locusts responds to a variety of courses, including collision and deviations from collision, against either a simple background or dynamic clutter; adapted from [231]. Red solid vertical lines indicate the time of collision. Red dashed vertical lines represent the time of transition. Asterisks indicate the time of a local valley or peak in response to a transition.

2.1.2 Computational Models and Applications

Computational modeling of looming-sensitive neurons has been pursued since the 1990s. Seminal work was performed by Rind and Bramwell to model an LGMD1-based neural network [170], as illustrated in Figure 5. In this research, they looked deeper into the presynaptic signal-processing mechanism in the looming-sensitive neural network and proposed an explanation of how lateral inhibitions play a crucial role in cutting down the motion-dependent excitations both spatially and temporally; importantly, this mechanism effectively shapes the LGMD1's looming selectivity so that it responds most powerfully to approaching objects. Also importantly, this research highlighted that the visual information sensed by the first layer of photoreceptors is divided into two kinds of signals within the presynaptic structure, namely, excitations and lateral inhibitions. The lateral inhibitions are temporally delayed relative to the excitations and spread out to neighboring cells, symmetrically in space and decaying in time. The interaction between these two types of signals determines the specific looming selectivity of the LGMD1 for approaching rather than receding or translating movements. In addition, a feedforward inhibition can suppress the LGMD1 neuron directly; it can also mediate the LGMD1's response at critical moments like the end of approach and the start of recession.
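To make the signal flow concrete, the following is a minimal sketch of such a four-layer (P-E-I-S) LGMD1-style network in Python/NumPy; it is not the implementation of [170], and the kernel weights, gain, and threshold are illustrative assumptions. Luminance change forms the excitation, a one-frame-delayed and spatially spread copy forms the lateral inhibition, and the rectified residual is summed and squashed into a membrane potential.

```python
import numpy as np
from scipy.ndimage import convolve

# Illustrative parameters; not taken from the original papers
W_I = 0.6                                   # lateral inhibition weight
GAIN = 20.0                                 # gain of the sigmoid membrane potential
THRESHOLD = 0.88                            # "spiking" threshold on the membrane potential
KERNEL = np.array([[0.125, 0.25, 0.125],
                   [0.25,  0.0,  0.25],
                   [0.125, 0.25, 0.125]])   # symmetric spread of inhibition to neighbours

def lgmd1_step(frame, prev_frame, prev_excitation):
    """One time step of a P-E-I-S style network on grayscale frames in [0, 1]."""
    # P layer: absolute luminance change between consecutive frames
    p = np.abs(frame - prev_frame)
    # E layer: excitation passed on retinotopically
    e = p
    # I layer: the previous step's excitation, delayed by one frame and spread
    # symmetrically to neighbouring cells
    i = convolve(prev_excitation, KERNEL, mode="constant")
    # S layer: excitation minus weighted inhibition, half-wave rectified
    s = np.maximum(e - W_I * i, 0.0)
    # LGMD1 cell: normalised sum squashed by a sigmoid, then thresholded
    k = s.sum() / s.size
    membrane = 1.0 / (1.0 + np.exp(-GAIN * k))
    return membrane, membrane > THRESHOLD, e   # e becomes the next step's delayed excitation
```

Because the delayed inhibition always lags the excitation, a slowly translating edge is largely cancelled, whereas the accelerating expansion of edges during approach keeps outrunning the inhibition, which is the intuition behind the looming selectivity described above.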

Figure 5. 

Schematic of LGMD1 visual neural network proposed by Rind, adapted from [175]. This network consists of four layers: photoreceptors (P), excitations (E), lateral inhibitions (I), and summation cells (S), as well as two LGMD1 cells and feedforward inhibition (F).

Based on this LGMD1 modeling theory, a good number of models have been produced during the past two decades; these works have not only extended and consolidated the LGMD1's original functionality for looming perception, but also investigated possible applications to mobile machines like robots and vehicles. More specifically, in two seminal works [22, 23], the LGMD1 neuronal model proposed by Rind [170] was for the first time implemented in a mobile ground robot for collision detection. Rind further demonstrated the usefulness of the LGMD for guiding flying-robot behavior and pointed out that another group of directionally selective neurons sensing ego motion could be integrated with the LGMD for better collision-detecting performance [169]. In addition, Yue et al. applied the LGMD1 model as an optimized collision-detecting solution for cars; a genetic algorithm was for the first time incorporated in this neuronal model to improve collision detection performance in driving scenes [240]. After that, Yue and Rind developed new mechanisms in the LGMD1-based visual neural network to enhance the extraction of looming features from complex and dynamic environments and to adapt to different illuminations [235], as illustrated in Figure 6. This method was verified on a vision-based mobile ground robot [194, 235] with improved collision detection performance. Compared with previous bio-robotic studies, the robot agent could recognize potential collisions at different speeds and under different lighting conditions, as shown in Figure 7. With similar ideas, Yue and Rind continued exploring the potential of the LGMD1 model in robotic applications such as near-range path navigation; these works include the development of visually guided control with a bilateral pair of LGMD1-DCMD neurons for a reactive collision avoidance strategy [237, 241] (Figure 8). Hu et al. applied a similar LGMD1 visual neural network as an embedded vision system in an autonomous micro-robot to demonstrate its computational simplicity for in-chip visual processing [101, 102]. To verify its validity and reliability, the miniaturized robot with on-board LGMD1 processing was tested in an arena containing many obstacles, with the results shown in Figure 9. The results demonstrated a very high success rate in collision detection across different robot speeds and obstacle densities. Very recently, a similar approach has been implemented in a hexapod walking robot [35] and in a small quadcopter for collision avoidance in short-range navigation [249].
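As a rough illustration of the bilateral reactive strategy of [237, 241] (Figure 8), the sketch below assumes one LGMD1-DCMD unit per visual hemifield and maps their spike counts onto differential wheel speeds; the function name, gains, and threshold are hypothetical choices, not values from those papers.

```python
def avoidance_command(spikes_left, spikes_right,
                      cruise_speed=0.2, gain=0.05, spike_threshold=4):
    """Map bilateral LGMD1-DCMD spike counts (per time window) to wheel speeds.

    Below threshold the robot simply cruises forward; otherwise it steers away
    from the hemifield with the stronger looming response.
    """
    if max(spikes_left, spikes_right) < spike_threshold:
        return cruise_speed, cruise_speed            # no imminent collision detected
    turn = gain * (spikes_left - spikes_right)
    left_wheel = cruise_speed + turn                 # a left-side threat speeds up the left wheel,
    right_wheel = cruise_speed - turn                # turning the robot to the right, and vice versa
    return left_wheel, right_wheel
```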

Figure 6. 

Schematic of an LGMD1-based visual neural network using only 6 cells as an example (a), with a G (grouping) layer (b), and with an FFM (feedforward mediation) mechanism (c). Adapted from [235].

Figure 7. 

The results of robot collision detection with normalized neural responses (blue lines) and bursts of spikes (red dashed lines), tested at different speeds from slow (a) to fast (c), as well as under lighting conditions from dim (d) to extremely bright (f). The trajectory over time is shown for each result. (a)–(c) are adapted from [194], and (d)–(f) are adapted from [235].

Figure 8. 

A reactive collision avoidance strategy via integrating a bilateral pair of LGMD1 neuronal models to control the left and right wheels of the robot, respectively. Adapted from [241]. (a) The control strategy of the robot, (b) the bilateral LGMD1-DCMD visual neural networks.

Figure 9. 

Micro-robot arena tests implemented with an LGMD1 model [101] as the only collision-detecting sensor. The agent was tested at different speeds and with varied densities of obstacles in an arena. The green line indicates the robot trajectory over time, and the red circles denote the obstacles. The experimental data is adapted from [101].

Moreover, there are many derivatives of the LGMD1-based neural network proposed by Rind [170], as illustrated in Figure 10. These computational models include new methods to enhance the collision selectivity with respect to approaching objects [133], new layers to reduce environmental noise [197, 198], and so on. There are also studies on corresponding applications for cars [91, 124] and mobile robots [43], as well as implementations in hardware such as field-programmable gate arrays (FPGAs) [132].

Figure 10. 

Schematics of a variety of LGMD1-based visual neural networks: (a) an LGMD1 model encoding onset and offset responses by luminance increments and decrements, adapted from [114]; (b) a modified LGMD1 model for detection of multiple looming objects, adapted from [233]; (c) a simplified LGMD1 model for collision avoidance of a UAV, adapted from [188]; (d) a modified LGMD1 model with enhancement of collision selectivity, adapted from [132, 133]; (e) a modified LGMD1 model with a new layer for noise reduction and spiking-threshold mediation, adapted from [197, 198]; (f) an LGMD1 neural network based on the modeling of elementary motion detectors for collision detection in ground vehicle scenarios, adapted from [91].

Interestingly, Gabbiani has pointed out that there are many ways to build looming-sensitive neuronal models like the locust LGMD [80]. For example, another important theory emphasizes nonlinearity in the modeling of looming-sensitive neurons; namely, the LGMD1 would implement a highly nonlinear computation consisting of competition between the inhibitory and excitatory flows, as proposed by Gabbiani et al. [77, 78, 79]. They have also demonstrated the correspondence of the feedforward excitation and inhibition to the angular speed and angular size, respectively, of looming objects within the field of view. Here, the feedforward inhibition plays a critical role in shaping the collision selectivity of LGMD1 [73]. Owing to the nonlinear interactions between the excitations and inhibitions, the LGMD1 neuronal model can exhibit a biologically plausible invariance to the shape, texture, gray level, and approach angle of looming patterns [76, 81].
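A widely cited formalization of this multiplicative account, in the spirit of [77, 78], describes the approach of an object of half-size l at constant speed v (collision at t = 0) and writes the firing rate as the product of the delayed angular edge velocity and a negative exponential of the delayed angular size:

```latex
% Angular size of the approaching object (t < 0 during approach):
\theta(t) = 2\arctan\!\big(l / (v\,|t|)\big), \qquad
% Multiplicative (eta-function) description of the firing rate:
f(t) \;\propto\; \dot{\theta}(t-\delta)\,\exp\!\big(-\alpha\,\theta(t-\delta)\big)
```

where δ is a fixed neural delay and α sets the preferred angular size. A well-known consequence of this form is that the response peaks a fixed delay after the object subtends the threshold angle 2 arctan(1/α), independently of the ratio l/v, which is consistent with the invariance properties mentioned above.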

Based on this nonlinear theory of modeling the LGMD, Keil provided insight into the mathematical explanations for the generation of nonlinearity in the LGMD neuronal model [112, 113]. Badia et al. incorporated nonlinear (multiplicative) elementary motion detectors (EMDs) to construct an LGMD1 for sensing and avoiding potential collisions [19]. Stafford et al. applied similar strategies to model the LGMD1 for looming perception in driving scenarios [207]. In addition, as illustrated in Figure 11, a nonlinear LGMD1 visual neural network was proposed by Badia et al. [8]; the functionality of this model fits well the nonlinear properties of the LGMD1 neuron described by Gabbiani [78], and it displays invariant collision detection for looming stimuli of varied shapes, textures, and approach angles [76]. Importantly, Badia et al. demonstrated that the LGMD1 model can encode onset and offset responses depending on luminance increments and decrements, which yield differently delayed excitations and inhibitions, following a seminal work on on and off mechanisms proposed in [114]. Moreover, this model has been successfully implemented in a mobile ground robot performing well in an arena for collision detection in near-range navigation.

Figure 11. 

Schematic of a nonlinear and multi-layered LGMD1 visual neural network, adapted from [8]. This modeling study emphasizes the nonlinear interactions between the feedforward excitation and the feedforward inhibition.

2.2 LGMD2-Based Neuron Models and Applications

2.2.1 Characterization

LGMD2 is a neighboring partner of LGMD1 and is also a looming detector; it has similar characteristics but different selectivity from LGMD1 [110, 166, 168, 171, 176, 199, 210]. With respect to neuromorphology, as illustrated in Figure 12, LGMD2 also has large fan-shaped dendritic trees within its presynaptic area (Figure 12(a)); however, in contrast with LGMD1, the lateral fields (B, C in Figure 12(b)) that convey feedforward inhibition are not found in LGMD2, and the postsynaptic target neuron of the LGMD2 has not been identified so far. Importantly, a physiological study has documented the development of both neurons in locusts from adolescence to adulthood: The LGMD2 matures earlier, in juvenile locusts that lack wings and live mainly on the ground [210]. As a result, the LGMD2 plays a crucial role in juvenile locusts for perceiving predators, likely triggering hiding from looming stimuli [72, 210].

Figure 12. 

Neuromorphology of LGMD1 and LGMD2: (a) 3D reconstruction of dendritic trees of LGMD1 and LGMD2 indicated by white and green arrows, respectively, adapted from [210]; (b) a schematic diagram of both the presynaptic and the postsynaptic areas of LGMD1 (red) and LGMD2 (gray), adapted from [68].

More precisely, the specific living environment of young locusts endows the LGMD2 with a particular neural characteristic: It is sensitive only to dark objects looming against a bright background and does not respond to white or light objects approaching against a dark background, representing a preference for light-to-dark luminance change. Moreover, the biological functions of LGMD2 differ from those of LGMD1 in a number of ways. First, LGMD2 is not sensitive to a light or white looming object, whereas LGMD1 is. Second, LGMD2 does not respond at all to dark objects that recede, while LGMD1 is often excited, though very briefly [199]. Both LGMD1 and LGMD2 respond only briefly to translation, regardless of the size and direction of moving objects [199]. Neither is sensitive to wide-field luminance changes or grating movements [199]. Figure 13 clearly illustrates these features. Furthermore, to investigate where the looming selectivity of such neurons arises, Rind et al. recently looked into the presynaptic medulla neuropile layer in the locust's visual brain. This study proposed that the specific looming selectivity of both LGMDs may be generated within the presynaptic fields [176], with a lateral-and-self inhibition mechanism working effectively to produce the selectivity.

Figure 13. 

Biological data on LGMD2 neuron responses to (a) dark and light objects looming and (b) translating stimuli; adapted from [199].

2.2.2 Computational Models and Applications

Only a handful of studies have computationally modeled the LGMD2 so far. A seminal work appeared in 2015: Fu and Yue proposed an LGMD2-based visual neural network, implemented in a vision-based micro-robot, with similar selectivity to the light-to-dark luminance change achieved via the modeling of on and off mechanisms [69]. This model separates luminance change into parallel channels and encodes the excitations and inhibitions via spatiotemporal computation similar to the LGMD1 model [101], but with differently delayed information, as illustrated in Figure 14. More precisely, in order to achieve the LGMD2's specific looming selectivity to dark objects only, the on channels are rigorously sieved; the on and off mechanisms also introduce different temporal delays in each separate pathway. The effectiveness and flexibility of this LGMD2-based visual neural network have been validated by arena tests of an autonomous micro-robot [72], as illustrated in Figure 15. For the first time, the specific functionality of an LGMD2 neuron revealed by biologists was realized in a computational structure. In contrast to all the aforementioned LGMD1 models, it responds only to dark looming objects and briefly to the recession of light objects, expressing a preference for light-to-dark luminance change, as depicted in Figure 16(a).
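The core of this ON/OFF separation can be sketched as follows; this is a simplified reading of [69, 72], with the suppression factor, inhibition weight, and filter size chosen arbitrarily for illustration. Luminance increments and decrements are half-wave rectified into parallel channels, the ON channel is strongly attenuated, and the two channels carry differently delayed excitation and inhibition before recombination.

```python
import numpy as np
from scipy.ndimage import uniform_filter

ON_SUPPRESSION = 0.1   # illustrative: the ON channel is strongly attenuated ("sieved")
W_I = 0.5              # illustrative inhibition weight

def split_on_off(frame, prev_frame):
    """Half-wave rectify the luminance change into parallel ON and OFF channels."""
    diff = frame - prev_frame
    on = np.maximum(diff, 0.0) * ON_SUPPRESSION   # brightness increments, attenuated
    off = np.maximum(-diff, 0.0)                  # brightness decrements, kept intact
    return on, off

def lgmd2_summation(on, off, delayed_on, delayed_off):
    """Recombine the two channels with channel-specific delays.

    Following the description above, the excitation is the delayed signal in
    the OFF pathway, while the inhibition is the delayed signal in the ON
    pathway; lateral inhibition is approximated by a local average.
    """
    s_off = np.maximum(delayed_off - W_I * uniform_filter(off, size=3), 0.0)
    s_on = np.maximum(on - W_I * uniform_filter(delayed_on, size=3), 0.0)
    return s_on + s_off   # passed on to the LGMD2 cell for summation and thresholding
```

With the ON channel suppressed in this way, a dark object expanding against a bright background (dominant OFF input) drives the model strongly, while a light looming object (dominant ON input) does not, reproducing the preference for light-to-dark luminance change.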

Figure 14. 

Schematic of a seminal LGMD2-based visual neural network, adapted from [69, 72]: The model processes visual information with on and off mechanisms that encode brightness increments and decrements separately. The on channels are rigorously suppressed to realize the LGMD2's specific looming selectivity to dark objects. In this model, the excitations are delayed in the off pathway and the inhibitions are delayed in the on pathway, due to the on (onset) and off (offset) mechanisms.

Figure 15. 

Micro-robot arena tests implemented with an LGMD2 model. The robot agent was tested with different layouts and densities of obstacles in an arena. The black lines indicate robot trajectories over time, and the red circles denote the obstacles. The blue circles indicate the start position of the robot agent. The experimental data are adapted from [72].

Figure 16. 

Results of shaping the selectivity between the LGMD1 and the LGMD2 neuronal models via the modeling of on and off pathways and spike frequency adaptation: (a) The LGMD2's response to dark and light approach and recession movements compared with that of an LGMD1 model [235]; adapted from [68]. (b) The effects of shaping the selectivity through blocking either the on or the off pathways; adapted from [68]. The horizontal dashed lines indicate the spiking threshold. The image size change within the field of view is depicted below each.

Further investigating the different looming selectivity of LGMD1 and LGMD2, Fu et al. proposed a hybrid visual neural model, which was implemented in an autonomous micro-robot for collision detection in an environment containing multiple dynamic robot agents [67] (Figure 17). In this research, both LGMDs were implemented in the robot agents and tested in both dark and bright environments. Each looming-sensitive neuron handles one half of the field of view for bilateral control of the robot's reactive avoidance behaviors [67]. This study verified the effectiveness of such a strategy for guiding timely collision avoidance of mobile robots and, more importantly, demonstrated that the on and off pathways may be a crucial structure for distinguishing the different looming selectivities of LGMD1 and LGMD2. Although there is little physiological evidence for the existence of such polarity pathways in the locust's visual systems [151, 227], the proposed computational models could point to similar mechanisms in looming-sensitive neural circuits or pathways [68].

Figure 17. 

Schematic of a general LGMD model; adapted from [67]. In this visual neural network, the functionality of on and off pathways and a spike frequency adaptation mechanism are modeled. This model can realize the characteristics of both the LGMD1 and the LGMD2 with different bias in the partial neural networks.

2.3 Different Mechanisms to Mediate Looming Selectivity

The selectivity for looming objects over other kinds of visual challenges has developed very effectively in locusts through millions of years of evolution. As a result, the locust LGMDs spike most strongly in response to looming stimuli. From the perspective of computational modelers, however, it remains an open challenge to shape this selectivity in looming-perception neuron models. Though state-of-the-art models can be applied as quick and efficient collision detectors, their performance, compared with the locusts', is far from satisfactory, since they are greatly affected by irrelevant motion patterns such as recession and translation. They are also restricted by the complexity of real-world visual clutter in the background. In the future, artificial machines should possess an ability similar to the locusts', with efficient and robust collision-free systems to better serve human society.

A few methodologies have been proposed to mediate looming selectivity. Rind et al. demonstrated that two kinds of inhibition, presynaptic lateral inhibition and feedforward inhibition, can cooperatively mediate looming selectivity by cutting down the excitation caused by receding and translating stimuli [170]. Gabbiani et al. proposed that nonlinear computations enable the neuron to differentiate looming from receding stimuli [76]. In addition, they revealed an intrinsic neural property of such looming-sensitive neurons, namely, spike frequency adaptation (SFA), which enables the LGMD to discriminate objects that approach from those that recede or translate [75, 155, 156]. As mentioned above, these methods have been implemented in mobile ground robots for collision detection in near-range navigation (e.g., [8, 101, 194, 235, 237, 241]). However, these theories have not been rigorously validated in more complex real-world scenarios, though some of them have been touched upon in passing [124, 207, 240].

In addition, when computationally modeling these mechanisms there is a tradeoff between algorithmic efficiency and efficacy; computational modelers must balance the two in order to achieve reliable and efficient performance in intelligent machines like mobile robots and vehicles. Yue and Rind proposed a hybrid neural system incorporating a translation-sensitive neural network, in order to extract collision information from mixed motion cues [234]. This method is effective in some driving scenarios; nevertheless, it costs more computational power than an LGMD model alone. Meng et al. designed an organization of the LGMD1's postsynaptic field to monitor the gradient change of the model output, discriminating approach from recession [133] (Figure 10(d)). This method was implemented smoothly on an FPGA [132]; however, such a structure is not biologically plausible. Moreover, a neural network of directional motion-detecting neurons in locusts was integrated with the LGMD1 neural network to ensure recognition of imminent collision in some driving scenarios [246]; in this research, the field of view was divided into different regions processed separately by specialized neurons. More recently, Fu et al. demonstrated the efficacy of combining two biologically plausible mechanisms, on and off pathways and spike frequency adaptation, to enhance the required selectivity of both the LGMD1 and LGMD2 models [67, 68]. These models have been validated by bio-robotic tests on embedded systems. Some example results are illustrated in Figure 16(b), clarifying the effect of separating visual processing into parallel pathways within a computational structure to achieve different looming selectivities for dark and light objects. Notably, the neuron model with the on or the off pathway blocked is only briefly activated by a light or a dark object moving away, respectively.
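Of these mechanisms, spike frequency adaptation is the easiest to state compactly. The sketch below shows one common way to model it, as an adaptation variable that tracks the membrane potential and is subtracted from it; the time constant and strength are illustrative and are not taken from [67, 68].

```python
def sfa_step(membrane, adaptation, tau_a=10.0, strength=1.5, dt=1.0):
    """One step of a simple spike-frequency-adaptation mechanism.

    The adaptation variable relaxes towards the current membrane potential
    with time constant tau_a and is subtracted from the output. A looming
    stimulus, whose excitation keeps growing, stays ahead of the adaptation;
    translating or receding stimuli, whose excitation is flat or falling, are
    largely cancelled.
    """
    adaptation += (membrane - adaptation) * dt / tau_a
    adapted_output = max(membrane - strength * adaptation, 0.0)
    return adapted_output, adaptation
```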

2.4 Further Discussion

In this section, we have reviewed the computational models and applications originating from research on locust visual systems over the past several decades. These computational models have demonstrated effectiveness and flexibility for collision detection in mobile machines, which sheds light on building robust collision-detecting neuromorphic sensors that give future intelligent machines cheap and reliable collision detection.

Biologists have also explored similar looming-sensitive visual neurons in other animals, including fruit flies (Drosophila) [42, 45, 138, 214] and arthropods like crabs [131]. For instance, the lobula giant neurons (LGNs) in crabs have been identified as looming detectors located in the lobula layer and have been linked to reactive collision avoidance behaviors [18, 149, 211]. The possible computational roles of such visual neurons have also been proposed in [150]. However, there are no systematic studies on the modeling and application of these interesting looming detectors in crabs. Though the LGNs have a different neuromorphology from the LGMDs, computational modeling of LGNs may benefit from practical experience with existing LGMD models.

3 Neural Systems for Translation Perception

This section reviews computational models and applications of translation-sensitive motion detectors and neural networks inspired by insect visual neurons and pathways. First, the modeling of directionally selective motion-detecting neurons in locusts, namely the locust direction-selective neurons (DSNs), is introduced in Section 3.1. Then, in Section 3.2, we review a classic model category of fly elementary motion detectors (EMDs) and corresponding applications to robotics. After that, cutting-edge biological findings and computational models of fly on and off pathways and lobula plate tangential cells (LPTCs), namely the fly DSNs, are presented in Section 3.3.

Compared to nondirectional neurons such as the looming-sensitive LGMD1 and LGMD2, research on DSNs in animals has a much longer history, dating back more than a century. Franceschini pointed out that an initial idea of “directionally selective motion sensitive cells” was proposed by Exner in 1894 [61]. The past several decades have witnessed much physiological progress in our understanding of the cellular mechanisms underlying DS. More specifically, DSNs have been found in many animal species, including invertebrates like flies [29] and locusts [166], as well as vertebrates like rabbits [15] and mice [31]. Borst demonstrated the similarities of circuits and algorithms in the design of insect and vertebrate motion detection systems for translational motion perception [31]. Generally speaking, this field of research has been attracting increasing attention across disciplines. In this article, we focus on presenting some milestone biological theories on translation perception neurons and pathways, and corresponding models with bio-robotic applications.

3.1 Computational Models of Locust DSNs

For locusts, Rind explored the characteristics of DSNs in 1989, in both physiology and morphology [166, 167]. These neurons respond to horizontal motion: They are strongly excited by translation in a preferred direction (PD) and inhibited by translation in non-preferred directions (NDs).

On the basis of these biological findings, a few computational models have been proposed by Yue and Rind [236, 238]. Generally speaking, these translation-sensitive visual neural networks have been modeled with an exquisite organization of different postsynaptic DSNs for collision recognition, especially in driving scenarios. Interestingly, as illustrated in Figure 18, these locust DSN models derive from the LGMD1-based visual neural networks: The computational structure of the DSN looks similar to that of LGMD1, but the two possess different lateral-inhibition mechanisms. More concretely, in the LGMD1 neural networks, the inhibitions spread out symmetrically to the surrounding areas of a corresponding excitatory cell (Figure 6); on the contrary, in the DSN neural networks, the inhibitions spread out asymmetrically and directionally, as shown in Figure 18(a) and 18(c). Therefore, the specific DS of locust DSNs may be shaped by such a directional lateral-inhibition mechanism that cuts down local excitations caused by nearby translation. For example, with a strong, temporally delayed inhibition from the right side, the excitation caused by left-moving edges can be eliminated or attenuated (Figure 18(b)). Likewise, each direction-selective cell is inhibited by motion in its non-preferred direction through the same mechanism. With the postsynaptic architecture illustrated in Figure 18(c) and 18(d), this model can detect looming objects and, moreover, recognize the direction of impending collision via activation of a specific DSN. Furthermore, Yue and Rind extended the functionality of the DSN visual neural network to sense eight directions of motion with similar methods, as depicted in Figure 18(d) [238]; in this research, they also investigated the effects of different postsynaptic organizations on collision detection via evolutionary computation.
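A minimal way to express this directional lateral inhibition is shown below; it is a one-dimensional sketch, not the implementation of [236, 238], and the inhibition weight is an arbitrary choice. Each cell receives a one-frame-delayed inhibition only from its right-hand neighbour, so the cell array prefers rightward motion.

```python
import numpy as np

def dsn_rightward(excitation_now, excitation_prev, w_i=1.0):
    """1-D sketch of a rightward-preferring DSN cell array.

    A leftward-moving edge excites the right-hand neighbour first, so that
    neighbour's delayed inhibition arrives in time to cancel the excitation;
    a rightward-moving edge reaches each cell before the inhibition does.
    """
    inhibition = np.roll(excitation_prev, -1)   # delayed signal from the right neighbour
    inhibition[-1] = 0.0                        # no neighbour beyond the right edge
    return np.maximum(excitation_now - w_i * inhibition, 0.0)
```

Mirroring the shift direction yields a leftward-preferring cell, and combining several such arrays with different shift directions gives the multi-directional DSN organization described above.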

Figure 18. 

Schematics of the locust DSN-based visual neural networks with both presynaptic and postsynaptic structures, adapted from [236, 238].

It appears that the locust DSN and LGMD models are both effective solutions to the problem of collision detection. A question then arises as to which type of visual neuron in locusts may play the dominant role in collision recognition. To address this, Yue and Rind designed a hybrid visual neural network integrating the functionality of both neuron models [239], as illustrated in Figure 19. In this research, the two individual neural networks competed with the hybrid neural system via a switch gene and evolutionary computation. As a result, the LGMD model alone outperformed the other candidates for collision recognition, owing to its computational simplicity and robustness.

Figure 19. 

Schematics of a hybrid visual neural network with three submodels—the LGMD alone, the DSNs alone, and the hybrid—competing for the collision recognition role via genetic algorithms. Adapted from [239].

3.2 Fly EMDs Models and OF-based Strategy

3.2.1 Background

Fly visual systems have been investigated in a significant number of studies over more than a century, since the first appearance of such research in the 1890s [61]. Since those beginnings, fly visual systems have become prominent model paradigms for studying biological visual processing and for designing artificial motion perception vision systems [27, 29, 30, 49, 92]. With the development of biological techniques, the fundamental structures of the neuropile layers and their cellular implementation within the fly's preliminary visual pathways have become better understood by biologists, as reported in numerous articles such as [21, 61, 84, 115, 121, 212, 224, 250]. Researchers have continually attempted to understand the mechanisms underlying motion perception, from fly compound eyes (e.g., [57, 58, 98]) to internal pathways and neurons (e.g., [4, 21, 87, 125, 209]). Figure 20 illustrates fly compound eyes and the underlying neuropile layers of the preliminary visual pathways for motion perception.

Figure 20. 

Illustrations of fly compound eyes (a) and underlying neuropile layers (b) for motion perception, adapted from [28, 61].

3.2.2 Fly Motion Detectors

To define the computational roles of fly motion detection, a few theories have been proposed over the past half century, as summarized in [4] and illustrated in Figure 21. A classic and elegant mathematical model was proposed by Hassenstein and Reichardt to account for biological motion perception [92]. It was named the HR detector or HRC (Hassenstein-Reichardt correlator), but has become commonly referred to as the Reichardt detector or simply the EMD [27, 64, 89, 163]. As illustrated in Figure 22, it explains motion perception through the correlated activity of neighboring photoreceptors in the field of view. From these works, we can summarize the following characteristics of EMD models for translational motion detection (a minimal numerical sketch is given after the list):

  • 1. 

    The direction of motion can be recognized by comparing the activity of at least two correlated photoreceptors in space.

  • 2. 

    The EMD cannot tell the true velocity of a translating pattern; it is sensitive to the spatiotemporal frequencies of stripes that pass over the stationary detectors.

  • 3. 

    It is also affected by the contrast between a moving pattern and its background, namely, the model responds more strongly to higher-contrast translating objects at an identical speed.

  • 4. 

    There are two paramount parameters in the EMD—the spacing between a pair of detectors and the temporal delays for both detectors—which can significantly influence the detection of motion direction and intensity.
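The full HR correlator of Figure 22 can be written in a few lines. The sketch below assumes two photoreceptor time series sampled at discrete steps and realizes the delay as a first-order low-pass filter; the time constant is an illustrative choice.

```python
import numpy as np

def hassenstein_reichardt(left, right, tau=5.0, dt=1.0):
    """Full HR correlator output for two photoreceptor time series.

    Each signal is low-pass filtered (acting as a temporal delay) and
    multiplied with the undelayed signal of the neighbouring photoreceptor;
    the two mirror-symmetric half-detectors are then subtracted, so motion in
    the preferred direction gives positive output and motion in the null
    direction gives negative output.
    """
    alpha = dt / (tau + dt)
    lp_left, lp_right = 0.0, 0.0
    output = np.zeros(len(left))
    for t in range(len(left)):
        lp_left += alpha * (left[t] - lp_left)      # delayed left signal
        lp_right += alpha * (right[t] - lp_right)   # delayed right signal
        output[t] = lp_left * right[t] - lp_right * left[t]
    return output
```

Feeding it a brightness step that reaches the left photoreceptor a few frames before the right one yields a positive response, and reversing the order flips the sign, reproducing point 1 above; points 2-4 follow because the product of filtered signals depends on temporal frequency, contrast, receptor spacing, and delay rather than on true velocity.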

Figure 21. 

Schematics of a variety of classic fly motion detectors, adapted from [4].

Figure 22. 

Schematic of an EMD process in four steps. d and M indicate a time delay and a multiplication process on correlated signals from two neighboring sensitive cells (red). Adapted from [64].

Recently, two studies on fly motion detectors [4, 87] brought together previously well-known algorithmic models from the literature: the HR half-detector (Figure 21(a)), which enhances motion in the PD; the Barlow-Levick (BL) detector (Figure 21(b)), which suppresses motion in the ND; and the full HR detector (Figure 21(c)), which maps PD and ND motion onto positive and negative outputs. More importantly, both articles proposed that the HR and BL mechanisms may operate in different subregions of the fly receptive field [4, 87]. It also appears that visual motion detection in flies might rely on the processing of three input channels, integrating an HR detector with a BL detector [4], as illustrated in Figure 21(d).
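In the same discrete-time notation, the three classic variants can be contrasted as follows; `delayed_a` stands for the low-pass-delayed version of photoreceptor signal `a` as in the previous sketch, and the divisive form of the BL veto is only one common reading of its AND-NOT suppression.

```python
def hr_half(delayed_a, b):
    """HR half-detector: enhances motion in the preferred direction (A -> B)."""
    return delayed_a * b

def barlow_levick(a, delayed_b):
    """Barlow-Levick detector: suppresses motion in the null direction (B -> A).

    The undelayed signal is vetoed by delayed inhibition from its neighbour;
    the constant 1.0 avoids division by zero in this divisive formulation.
    """
    return a / (1.0 + delayed_b)

def hr_full(a, delayed_a, b, delayed_b):
    """Full HR detector: the difference of two mirror-symmetric half-detectors."""
    return delayed_a * b - delayed_b * a
```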

3.2.3 EMD Models and OF-Based Applications to Robotics

A huge number of computational models and applications arise from the EMD theories. A main use of the EMDs is to mimic the optic flow (OF) strategy of flies and bees within the field of view (e.g., [24, 32, 51, 111, 121, 122]). As illustrated in Figure 23(a), the OF can be defined as a flow vector field perceived by the visual modality of either animals or machines [121, 193]; this field is generated by the apparent motion and the relative motion between observer and scene. The OF comprises two subfields, the translational flow and the rotational flow; both depend on the structure of the environment, including its texture and brightness. Most importantly, such a visual strategy can be used to drive various forms of insect behavior such as landing (e.g., [13, 184, 214]), terrain following (e.g., [53, 141, 184, 193]), tunnel crossing or traveling (e.g., [14, 161]), corridor centering response (e.g., [5, 41, 192]), collision or obstacle avoidance (e.g., [20, 130, 139, 159, 248]), target tracking (e.g., [136, 186]), and fixation (e.g., [12]).
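For a spherical eye translating at speed v and rotating at rate ω, the magnitude of the OF experienced at azimuth θ relative to the heading is often written as the sum of these two subfields, with D the distance to the surface seen in that direction (the symbols here are introduced for illustration, and sign conventions vary between papers):

```latex
\omega_{\mathrm{OF}}(\theta) \;=\; \underbrace{\frac{v}{D}\,\sin\theta}_{\text{translational flow}} \;-\; \underbrace{\omega}_{\text{rotational flow}}
```

The translational component carries depth information, growing as D shrinks, whereas the rotational component is independent of the scene; this is why many of the behaviors listed above rely on cancelling or minimizing rotation before reading distance out of the flow field.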

Figure 23. 

Illustrations of fly OF strategy based on the EMDs: (a) a classic method for the integration of local optic flow vectors for the estimation of self-motion, adapted from [121]; (b) a collision avoidance strategy based on the OF of a dragonfly, adapted from [86].

A well-known type of velocity-tuned EMD was proposed by Franceschini et al. in 1992 [60]. In contrast with the classic HRC, the output of the velocity-tuned EMD depends on the ratio between the interreceptor angle and the time an edge takes to travel between a pair of contrast-detecting photoreceptors. This model was initially tested on a mobile robot [60]. Subsequently, it has also been called the “time of travel” [137] or “facilitate and sample” scheme and has been implemented in pulse-based analogue VLSI velocity sensors [120].
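The underlying computation of such a “time of travel” scheme can be sketched in a few lines: the angular velocity is estimated as the inter-receptor angle divided by the time a contrast edge takes to travel from one photoreceptor to the next. The function below is a hypothetical illustration; the default angle is a placeholder, not a value from [60, 137].

```python
def time_of_travel_velocity(t_cross_a, t_cross_b, delta_phi_deg=3.0):
    """Estimate angular velocity from the travel time of a contrast edge.

    t_cross_a and t_cross_b are the times (in seconds) at which the same edge
    crosses a detection threshold on two neighbouring photoreceptors separated
    by delta_phi_deg degrees. Returns degrees per second, or None when the
    ordering is ambiguous (edge seen simultaneously or in the wrong order).
    """
    dt = t_cross_b - t_cross_a
    if dt <= 0:
        return None
    return delta_phi_deg / dt
```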

Furthermore, Iida proposed a method to integrate each pairwise local EMD in a spatiotemporal manner to compute a visual odometer over time (Figure 24(a)); this approach was validated by the navigation of a flying robot [104, 106]. Snippe and Koenderink demonstrated possible solutions for extracting optical velocity through the design of ensembles of HR detectors [202]. Zanker et al. investigated the speed tuning and estimation of EMDs [242, 243]. Subsequently, they analyzed video sequences of outdoor scenes from a panoramic camera and used two-dimensional EMDs to derive motion signal maps from optic flow [244]. Rajesh et al. modified the traditional HR detector to improve its velocity sensitivity and reduce its dependence on contrast and image structure; this work matched the neurobiological finding that an adaptive feedback mechanism can normalize the contrast of input signals to improve the reliability of velocity estimation [162]. In addition, Bahl et al. incorporated into the EMDs a parallel position-sensitive pathway to track a translating stripe against a simple background and mimic the Drosophila behavioral response of fixation, as shown in Figure 24(b); this work agreed well with electrophysiological recordings from motion-blind flies [12].

Figure 24. 

A variety of EMD models: (a) This model integrates multiple pairwise EMDs in a two-dimensional form to compute the visual odometer; adapted from [106]. (b) This model is used to simulate fly fixation behavior by combining the EMD with an individual location-sensitive pathway; adapted from [12]. (c), (d) These are models for estimating angular velocity in the bee's brain, adapted from [41, 218], respectively.

As a variation of the EMD, a few angular velocity estimation models were proposed to account for corridor-centering behaviors of bees (e.g., [41, 218]), as illustrated in Figure 24(c) and 24(d). Similarly to [106], the integrated response can be used as a visual odometer over time. Moreover, there are also studies on the temporal adaptation of EMDs [38, 89], the contrast sensitivity of EMDs [178], and an EMD-based algorithm for global motion estimation [140], as well as a nondirectional HR detector model for simulating speed-dependent insect behavior [96], and so on.

The many computational studies mentioned above have demonstrated the effectiveness and computational simplicity of the fly EMD models and OF-based control strategies. As a result, these models have guided the design of small and efficient artificial translation perception sensors, such as a silicon retina mimicking the structure of the fly compound eye, shown in Figure 25(g), which was fabricated at a very small size and used for on-board autopilots [215]. Another example is CurvACE, a miniature curved artificial compound eye shown in Figure 25(h), which achieves optical spatial filtering through the bell-shaped angular sensitivity created on each photoreceptor [55]. Yet another is the sensor implementation using thresholding-based digital processing on the MAV [179, 185]. In addition, as illustrated in Figure 25, these models have been widely used in bio-robotic applications, and the resulting bio-inspired robots can show visually guided behaviors like those of insects. More concretely, many ground and flying robots benefit from OF-based sensors for autopilot guidance (e.g., [62, 63, 161, 183, 187]); collision avoidance (Figure 25(a), 25(c)) (e.g., [59, 86, 158, 193]); tunnel crossing (Figure 25(a), 25(c)) (e.g., [180, 181, 193]); terrain following (e.g., [53, 186]); takeoff and landing behaviors (Figure 25(d), 25(e), 25(f)) (e.g., [59]); and indoor and outdoor visual odometry on a carlike robot [128]. Also, on aerial robots, one study has compared autoadaptive retina-based implementations using thresholding-based digital processing with those using cross-correlation digital processing [215]. In this field, two prominent review articles [59, 193] introduce, more systematically, the relevant bio-robotic approaches and applications of the fly EMDs and OF.

Figure 25. 

A few instances of bio-robotic applications of the fly EMDs and OF-based control strategy: (a) is adapted from [86], (b) is adapted from [181], (c)–(f) are adapted from [59], (g) is adapted from [215], (h) is adapted from [55].

3.2.4 Further Discussion

A shortcoming, or unsolved problem, of the fly EMD-based models for translation perception is the velocity tuning of motion detection. In other words, a biological motion-detecting circuit may not report the true velocity of the stimuli [64]. The reason is that each such delay-and-correlate motion detector requires choosing both the spacing between its paired detectors and the time span of the delay feeding the subsequent nonlinear (multiplicative) computation, and both factors affect the model's performance for sensing translation [243]. For example, perceiving faster movements requires a larger spatial span between detectors if one fixes the temporal delay, but a shorter time delay if the spacing is unchanged.
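
As a rough, illustrative relation (our own back-of-the-envelope reading of this trade-off, not a result from [243]): a delay-and-correlate pair with sampling base Δφ and delay τ tends to respond most strongly when the stimulus crosses the base in about the delay time, that is, at an optimal angular velocity of roughly v_opt ≈ Δφ / τ. Tuning the same pair to a stimulus twice as fast therefore means roughly doubling Δφ or halving τ, which is why a single fixed detector pair cannot report true velocity over a wide range.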

Another shortcoming is that state-of-the-art models or strategies for motion perception still lack robust mechanisms for filtering irrelevant motion out of highly complex, dynamic visual clutter in the real world, so they remain susceptible to environmental noise such as windblown vegetation and shifts of the background or surroundings caused by ego-motion. From a computational modeler's perspective, it is still a major challenge to make motion perception visual systems robust by separating relevant from irrelevant motion in complex and dynamic scenes such as driving scenarios.

3.3 Modeling of Fly on and off Pathways and LPTCs

Within this subsection, we will continue to present the cutting-edge biological research on fly visual systems. We will focus on the understanding of underlying circuits or pathways in fly visual brains for preliminary motion detection. We have now understood from Section 3.2 that visual neurons compute the direction of motion conforming to the HR or related theories: Both flies and bees apply the EMDs to sense optical flows, which is very effective for dealing with a variety of visually guided insect behaviors. We have also seen that the optical flow is sensed by the fly compound eye after spatial filtering of motion information. However, a few questions still exist: Where does the specific DS occur to perceive translation stimuli within the internal structure of fly visual pathways? Which neurons carry out the corresponding neural computation? And how does the neural response connect to visuomotor control?

3.3.1 Biological Exploration

To address these questions, biologists have demonstrated that the fly on and off parallel visual pathways and the LPTCs are the proper places to implement directionally selective signal processing, as illustrated in Figure 26(a) [26, 28, 30, 31]. A seminal work by Franceschini et al., as early as 1989, proposed a transient on-off nature of the EMD response in the housefly. In this research, a splitting of an EMD into an on-EMD and an off-EMD was presented to sense light-edge and dark-edge translation separately [61]. Though many parts of the underlying neural mechanisms in the fly visual systems remain unknown today, many efforts have been made by biologists to explore the internal structures, including cellular functionality, underlying directional motion perception, particularly in the last two decades [2, 4, 17, 21, 54, 87, 125, 177, 195, 196, 208, 209, 212, 217]. More concretely, a notable advance was made by Joesch et al. in 2010. They looked into the downstream processing of photoreceptors and found that the visual signals are split into two parallel polarity pathways by the L1 (on) and L2 (off) interneurons in the neuropile lamina layer [108], as illustrated in Figure 26(a). After that, Maisak et al. revealed the characteristics of T4 and T5 cells in the neuropile medulla and lobula layers [129], as shown in Figure 26(b). Subsequently, a group of LPTCs in the neuropile lobula plate layer were also identified as wide-field detectors that integrate upstream visual signals [74, 84, 129, 191] (Figures 26(a) and 26(b)). Importantly, the LPTCs have been demonstrated to process the optic flow field sensed by the photoreceptors at a higher level corresponding to the control of the visual flight course [193]. Furthermore, as shown in Figure 26(c), the horizontal-translation-sensitive LPTCs are strongly activated by motion in the PD, yet inhibited by motion in the ND, underlying the specific DS of the fly DSNs.

Figure 26. 

Schematic diagrams of fly preliminary motion-detecting neuropile layers: (a) The underlying on and off pathways with interneurons and LPTCs perceive visual motion stimuli and generate the DS to four cardinal directional motions. (b) The neuromorphology of fly visual circuits—the LPTCs pool directionally specific motion signals individually into each sublayer. (c) The LPTCs respond to directional translating stimuli by movements of dark and light bars. (a), (b) are adapted from [87], and (c) is adapted from [28].

On the basis of these biological findings, we can draw a directional tuning map of fly preliminary visual processing through layers to demonstrate the following steps for translational motion perception, according to Figure 26(a):

  • 1. 

    The motion detection starts with splitting visual signals captured by photoreceptors (R1–6) in the first retina layer into two parallel on and off pathways; this is done by the large monopolar cells (LMCs) L1 and L2 in the second lamina layer.

  • 2. 

    The L1 interneurons convey the onset response elicited by luminance increments to neurons in the medulla layer, while the L2 interneurons relay the offset response elicited by luminance decrements to neurons in the lobula layer.

  • 3. 

    The EMDs are assigned to the T4 cells in the medulla layer in the on pathway and to the T5 cells in the lobula layer in the off pathway, in order to generate directionally specific motion signals. It is worth emphasizing that the four cardinal directions are determined separately in different groups of T4 and T5 cells.

  • 4. 

    Finally, the LPTCs integrate signals from on and off channels in the lobula plate; motion information with the same direction congregates at the same sublayer and jointly flows downstream to circuits such as motion-driven neural systems.

  • 5. 

    There is another pathway, namely the L3–Tm9–T5 pathway (Figure 26(a)), which provides locational information on objects to the off pathway and is independent of the direction of motion [54]; this pathway cooperates effectively with the on and off pathways to produce fly fixation behavior [12, 70].

3.3.2 on and off Motion Detectors

Since the cellular implementation of the EMDs has been shown to reside in the on and off pathways, a few fly motion detectors with on and off mechanisms have been proposed, as illustrated in Figure 27. The fundamental computation conforms to the full HR detectors (Figure 22(c)). More precisely, a four-quadrant (4-Q) detector encodes input combinations of on-on, off-off, on-off, and off-on cells, which mathematically conform to the traditional EMDs (Figure 27(a)). Eichner et al. proposed a two-quadrant (2-Q) motion detector, as illustrated in Figure 27(b); this model processes input combinations of same-sign signals only, that is, on-on in the on pathway and off-off in the off pathway [52]. In addition, they showed that a small fraction of the original signals can pass through into the motion-detecting circuits, which demonstrates that not only the transient luminance change but also the permanent brightness can be encoded by motion-sensitive neurons. Moreover, Clark et al. presented a six-quadrant (6-Q) detector through behavioral experiments on fruit flies, as illustrated in Figure 27(c); this model was constructed to match the behavioral observations in L1- and L2-blocked flies [36]. In this computational theory, pathways of either polarity convey positive and negative contrast changes with edge selectivity inside the motion detection circuits. To decide among these alternative motion detection strategies in flies, a case study genetically silenced either the on or the off pathway and suggested the existence of two- rather than six-quadrant detectors [109].
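
As a concrete, heavily simplified illustration of the 2-Q idea (our own sketch following the processing steps of Section 3.3.1, not the model of [52]), the following Python fragment half-wave rectifies temporal luminance changes into on and off channels and then applies an HR-type correlation within each same-sign channel only.

import numpy as np

def lowpass(x, tau=3.0):
    # First-order temporal low-pass filter used as the delay element (assumption).
    y = np.zeros_like(x, dtype=float)
    for t in range(1, len(x)):
        y[t] = y[t - 1] + (x[t] - y[t - 1]) / tau
    return y

def two_quadrant_detector(frames, tau=3.0):
    # 2-Q detector on a (time, space) luminance array. Output is a (time, space-1)
    # map whose sign encodes direction: positive for rightward, negative for leftward.
    change = np.diff(frames, axis=0, prepend=frames[:1])   # temporal change per pixel
    on = np.maximum(change, 0.0)                           # ON channel: brightness increments
    off = np.maximum(-change, 0.0)                         # OFF channel: brightness decrements
    out = np.zeros((frames.shape[0], frames.shape[1] - 1))
    for ch in (on, off):                                   # correlate same-sign signals only
        d = np.apply_along_axis(lowpass, 0, ch, tau)       # delayed copy of each pixel signal
        out += d[:, :-1] * ch[:, 1:] - ch[:, :-1] * d[:, 1:]   # full HR within the channel
    return out

# Toy stimulus: a bright bar drifting rightward across 20 pixels over 40 frames.
frames = np.zeros((40, 20))
for t in range(40):
    frames[t, (t // 2) % 20] = 1.0
print(two_quadrant_detector(frames).sum() > 0)   # rightward drift -> net positive output

A 4-Q variant would additionally correlate the mixed on-off and off-on combinations, and a 6-Q variant would further distinguish edge polarity, but the basic delay-and-correlate core stays the same.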

Figure 27. 

Schematics of classic fly motion detectors with different combinations of on and off detectors: (a) three basic types—a pairwise EMD (A), a 4-Q model (B) and a 2-Q model (C), adapted from [52]; (b) the 2-Q detector with input combinations of the same-sign polarity detectors, adapted from [52]; (c) a 6-Q detector processing on and off edge information in both pathways, adapted from [36].

3.3.3 Computational Models

Following previous EMD and OF strategies, these different on and off combinations of fly motion detectors have given us further understanding of the complex fly visual processing circuits; however, there have been very few modeling studies systematically testing these polarity detectors and LPTCs in real-world scenes with dynamic visual clutter. To fill this gap, Fu and Yue recently proposed a fly DSN-based visual neural network with ensembles of 2-Q detectors to extract translational motion cues from a cluttered real physical background [71]. Subsequently, they extended this model to reproduce a fixation-like behavioral response mimicking the fly, by incorporating a newly modeled position-sensing system based on the L3-Tm9-T5 pathway in Figure 26(a) [70]. This model was then successfully implemented on the embedded system of an autonomous micro-robot [65]. Likewise, Wang et al. estimated the background motion from the LPTC responses to shifting of visually cluttered scenes. In this modeling study, a maximization operation was proposed to simulate the functionality of the wide-field Tm9 neurons [220], which effectively improves the performance in perceiving wide-field background translation.

4 Small-Target Motion Perception Models

In previous sections, we have reviewed motion perception models that originate from insect visual systems and possess specific DS to looming and translation visual stimuli, along with corresponding applications to artificial machines. This section continues to review computational models of a specific group of visual neurons that are sensitive only to moving objects of small size: the small-target motion detector (STMD) in Section 4.2 and the figure detection neuron (FDN) in Section 4.3, which have particular size selectivity for small-target motion. We also summarize the relevant biological findings on the STMD and the FDN in insect visual systems.

4.1 Background

Due to the long viewing distances involved, targets such as mates or prey often appear as small dim speckles whose size may vary from one pixel to a few pixels in the field of view. Being able to perceive such small targets, at a distance and early, endows insects with a competitive advantage for survival. From a modeler's perspective, however, small-target motion detection against naturally cluttered backgrounds has been considered a challenging problem for the following reasons: (1) small targets cannot provide sufficient visual features, such as shape, color, and texture, for target detection; (2) small targets are often buried in cluttered backgrounds and are difficult to separate from background noise; (3) ego-motion of the insect may add further difficulties to small-target motion detection. Insects have solved these challenges well through long-term evolutionary development, and their small-target motion detection visual systems are efficient and reliable.

4.2 Small-Target Motion Detectors

4.2.1 Biological Research

In the insect visual systems, a class of specific small-field motion-sensitive neurons, called the small-target motion detectors (STMDs), is characterized by their exquisite sensitivity to small-target motion. STMD neurons have been observed in several insect groups, including hawk moths [39], hoverflies [40], and dragonflies [146, 147, 148]. In the past two decades, the anatomy and physiology of STMD neurons have been further investigated in a good number of studies [16, 25, 83, 142, 143, 144, 145].

The most significant difference between the STMD and other wide-field motion-sensitive neurons, like the LGMD, the DSN, and the LPTC, is that the STMD has specific size selectivity for small-field movements. More precisely, the STMD exhibits peak responses to targets subtending 1–3 deg of the field of view, yet has no response to larger bars (typically > 10°) or to wide-field grating stimuli [142, 143]. To clearly demonstrate the size selectivity of STMD neurons, the response of an STMD neuron to targets of varying heights is shown in Figure 28. From Figure 28(a), we can see that the two smallest targets, whose heights equal 0.8° and 3°, respectively, elicit strong neural responses from the STMD neuron. However, the response to the largest target, whose height equals 15°, is much weaker and indistinguishable from spontaneous activity. The selectivity of the STMD for target height can be clearly seen in Figure 28(b). These results demonstrate that the STMD has an optimal target size that elicits the strongest neural response; when the target height is larger or smaller than this optimum, the neural response decreases significantly.

Figure 28. 

STMD neuronal raw responses: (a) Neuronal responses to motion of three different-size targets (0.8°, 3°, or 15° high by 0.8° wide) drifted against bright backgrounds; the horizontal bars indicate the movement duration, and the arrows denote the direction of target motion. Adapted from [143]. (b) The response of an STMD to targets of varying height. Adapted from [142].

Some STMD neurons have also demonstrated DS [143, 145]. These directionally selective STMD neurons respond strongly to small-target motion oriented along the PD, but show weaker, no, or even fully opponent responses to ND motion. Figure 29(a) illustrates raw responses of a directionally selective STMD neuron that prefers target motion to the left, stimulated by three different-size targets; it shows that the larger target, whose height equals 15°, cannot activate the STMD neuron even by PD motion. For the smaller targets, whose heights equal 0.8° and 3°, the STMD neuron responds strongly to PD motion. On the other hand, when the smaller targets move in the ND, the response of the STMD neuron is not significantly different from spontaneous activity. In further research [16, 143], biologists have shown that both the size and the direction selectivity of an STMD are independent of background motion. More concretely, an STMD responds robustly to small-target motion against a visually cluttered background, regardless of background motion direction and velocity. In Figure 29(b), we can see that the STMD neuron shows a strong response to a small target moving along the PD (downward), but a much weaker response to one moving along the ND (upward). Besides, the response to small-target motion is quite robust regardless of either the direction or the velocity of background motion. In other words, the STMD can recognize small-target motion even without relative motion between the moving object and the background.

Figure 29. 

(a) Raw responses of a directionally selective STMD neuron that prefers target motion to the left, tested by motion of three different-size targets (0.8°, 3°, or 15° high by 0.8° wide) drifted against bright backgrounds. The horizontal bars indicate the stimulus duration, and the arrows denote the direction of target motion. Adapted from [143]. (b) Responses of an STMD neuron that prefers target motion downward, to targets drifted against cluttered backgrounds. Adapted from [143].

4.2.2 Computational Models and Applications

On the basis of these biological findings, a few computational models have been put forward in the past decade to simulate the STMD. Wiederman et al. proposed a seminal model of an elementary small-target motion detector (ESTMD in Figure 30) to account for the specific size selectivity of the STMD [229]. However, the ESTMD model is unable to realize the DS of the STMD revealed by biologists. To implement the DS, two hybrid models (i.e., the ESTMD-EMD and the EMD-ESTMD) were proposed [228]. More specifically, in the ESTMD-EMD the ESTMD is cascaded with the EMD, while in the EMD-ESTMD the EMD is cascaded with the ESTMD. These two hybrid models have been successfully used for target tracking against cluttered backgrounds on an autonomous mobile ground robot [9, 10, 11]. Another directionally selective STMD model, the directionally selective small-target motion detector (DSTMD in Figure 30), was very recently proposed by Wang et al. [222]. Compared to other STMD models, the DSTMD provides unified and rigorous mathematical descriptions. Besides, the DS of the DSTMD has been systematically studied, and the motion direction of small targets can be estimated with it [222], as illustrated in Figure 31. Comparing Figure 31(b) and 31(c), the most significant difference between the DSTMD and the ESTMD is that the former generates DS to small-target motion. More precisely, in Figure 31(c), the DSTMD has eight outputs corresponding to eight PDs θ, θ ∈ {0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, 7π/4}. In Figure 31(b), by contrast, the ESTMD produces only a single response without direction information. To clearly show the DS, the DSTMD responses to a small target are also shown in polar coordinates in Figure 31(c): The DSTMD exhibits the strongest response along the direction θ = π, which is consistent with the motion direction of the detected small-target translation. The other seven directional outputs of the DSTMD decrease as the corresponding direction θ deviates from the target's motion direction.
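
To illustrate the computational core of these models, the following Python sketch is our own much-reduced reading of the ESTMD scheme in [229] (the DSTMD of [222] additionally correlates such signals between pairs of positions along eight PDs, which is omitted here). The delay filter, kernel sizes, and the box-shaped centre-surround step are illustrative assumptions.

import numpy as np

def lowpass(x, tau=5.0):
    # First-order temporal low-pass filter used as the delay element (assumption).
    y = np.zeros_like(x, dtype=float)
    for t in range(1, len(x)):
        y[t] = y[t - 1] + (x[t] - y[t - 1]) / tau
    return y

def estmd_like(frames, tau=5.0):
    # Reduced ESTMD-style detector on a (time, space) luminance array. A dark target on
    # a bright background produces an OFF change at its leading edge and an ON change at
    # its trailing edge at the same location, so correlating the ON signal with a delayed
    # OFF signal at each position favours small moving targets; a centre-surround step
    # (surround subtraction) then supplies the size selectivity.
    change = np.diff(frames, axis=0, prepend=frames[:1])
    on = np.maximum(change, 0.0)
    off = np.maximum(-change, 0.0)
    raw = on * np.apply_along_axis(lowpass, 0, off, tau)     # same-position correlation
    centre = np.apply_along_axis(np.convolve, 1, raw, np.ones(3) / 3.0, 'same')
    surround = np.apply_along_axis(np.convolve, 1, raw, np.ones(9) / 9.0, 'same')
    return np.maximum(centre - surround, 0.0)

# Toy test: a one-pixel dark target drifting over a bright background.
frames = np.ones((60, 40))
for t in range(60):
    frames[t, (t // 2) % 40] = 0.0
print(estmd_like(frames).max() > 0)   # the small moving target elicits a response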

Figure 30. 

Schematics of an ESTMD (adapted from [229]) and a DSTMD (adapted from [222]) computational model for the detection of small-target motion. The ESTMD integrates signals from each single position, while the DSTMD has correlations between every two positions.

Figure 31. 

The input signal (a) and the model outputs of the ESTMD (b) and the DSTMD (c). The DSTMD has eight outputs corresponding to eight PDs θ represented by eight colors in polar coordinates. The angular coordinate describes the PD motion θ, while the radial coordinate describes the strength of neural response along this PD.

4.3 Figure Detection Neurons

4.3.1 Biological Research

One class of LPTCs, namely the figure detection neurons (FDNs), has also been demonstrated to show a preference for small targets [47, 48, 82, 117, 118]. Although both the FDN and the STMD exhibit size selectivity to moving targets, they differ in the preferred size. More specifically, the STMD shows the strongest response to targets of size 1–3 deg [142, 143], whereas the FDN responds optimally to targets whose size is in the range of 6–12 deg [48, 95]. Figure 32(a) presents responses of the FDN to targets of varying widths; there we can see that the optimal width for the FDN is 6 deg, which is larger than that of the STMD (1–3 deg). Another difference between these two types of small-target motion-sensitive neurons lies in the underlying mechanisms for size selectivity. To be more precise, the STMD does not receive inhibition from wide-field neurons [143], and its size selectivity results from a second-order lateral inhibition mechanism [25]. However, the size selectivity of an FDN is assumed to be the result of inhibition from wide-field neurons [95, 223]. Figure 32(b) shows a wiring sketch of an FDN input circuit; the FDN is inhibited by the vCH cell [223], which receives excitatory and inhibitory inputs from other motion-sensitive LPTCs, including the HSE, HSS, H1, H2, and Hu cells [46, 123, 203].

Figure 32. 

(a) The response of an FDN to targets of various widths. The horizontal axis denotes the target width (degrees), while the vertical axis denotes the neural response (spikes/cycle). Adapted from [48]. (b) Wiring sketch of the FD1 cell input circuit, adapted from [93]. The FD1 cell is a thoroughly analyzed FDN.

4.3.2 Computational Models and Applications

To model the FDN computationally, a few models called small-field systems (SFSs) have been proposed to account for the specific size selectivity of FDNs [48, 164, 165]. These SFSs are quite similar; an instance is shown schematically in Figure 33. This SFS is composed of an output neuron, the FDN (x_i), which receives retinotopic inputs (V_i^+, V_i^−) as well as inputs from inhibitory neurons (P^CW_right, P^CCW_right). The retinotopic inputs (V_i^+, V_i^−) denote motion information detected by the EMDs. In [93], these SFSs were modified to allow a simulated fly to track a small moving target in a virtual environment. Although the size selectivity of the FDN results from inhibition by wide-field neurons, the wiring scheme between the wide-field neurons and the FDNs is still unclear. In [95], Hennig et al. analyzed three kinds of wiring schemes between the wide-field neurons and the FDN, based on new anatomical and electrophysiological findings [50, 88], as illustrated in Figure 34. The authors indicated that the latter two wiring schemes (i.e., direct distributed inhibition and indirect distributed inhibition in Figure 34) can account well for the size selectivity of the FDN and for the dependence of the FDN on the relative velocity between the small target and the background. In [93], Hennig et al. further improved the existing SFS by modeling the presynaptic neurons of the FDN, including the H1, Hu, HSE, and HSS cells [94], as illustrated in Figure 32(b). They integrated the responses of the presynaptic neurons into the proposed FDN circuit; this effectively matches the corresponding biological structure. In addition, whereas the existing studies on modeling FDNs all used simple synthetic stimuli, the authors applied naturalistic stimuli to test the proposed FDN circuit and demonstrated its characteristics.
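
The essential difference from the STMD models can be captured in a few lines; the sketch below is our own abstraction of pooled wide-field inhibition (closest in spirit to scheme A of Figure 34), not the SFS of [48, 164, 165]: a retinotopic excitatory motion signal is opposed by inhibition pooled over the whole field, so wide-field motion largely cancels itself while a small target survives.

import numpy as np

def fdn_like_response(local_motion, w_inhibition=1.0):
    # Toy FDN-style output for one frame of local motion energy (1D retinotopic array).
    # Excitation is retinotopic; inhibition is the spatially pooled (wide-field) signal,
    # standing in for the CH/HS-mediated inhibition of the FDN.
    inhibition = w_inhibition * local_motion.mean()          # wide-field pool
    return np.maximum(local_motion - inhibition, 0.0).sum()

small = np.zeros(100); small[40:43] = 1.0    # a small target: few active units
wide = np.ones(100)                          # wide-field motion: all units active
print(fdn_like_response(small), fdn_like_response(wide))    # about 2.9 versus 0.0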

Figure 33. 

Schematic of an SFS based on the FDN. Excitatory and inhibitory synapses are indicated by black and white triangles, respectively. Shunting inhibition is indicated by gray triangles. Responses from neighboring photoreceptors (PR) are input to EMDs. The EMD outputs are split into positive and negative components. These components are aggregated into directionally selective monocular pool cells (P+, P−) and then into CW and CCW binocular pool cell responses (P^CW_right, P^CCW_right). These directionally selective binocular pool cells interact via shunting inhibition with the individual motion detector output channels prior to their combination by the unit x_i. For simplicity, only the right-side computation is diagrammed. This figure is adapted from [97].

Figure 34. 

Schematics of possible circuits for the input organization of an FDN. The small-field selective FDN receives excitatory retinotopic input from motion-sensitive elements. Inhibitory input of the FDN is mediated by the CH cell via HS cells. For simplicity, only one of the two HS cells that provide input to the CH neuron is shown in this sketch. (A) The CH inhibits the FDN after spatial pooling (direct pooled inhibition). (B) The CH inhibits the FDN in a directly distributed way. (C) The CH inhibits the retinotopic input elements of the FDN in an indirectly distributed way. This figure is adapted from [95].

5 Discussion

Throughout the above survey, we have introduced motion perception visual models inspired by insect visual systems that possess different direction and size selectivity, and we have provided the background biological research and corresponding applications in artificial mobile machines like robots, MAVs, UAVs, and ground vehicles for motion perception. These models can sense different motion patterns including looming, directional translation, and small-target motion. This section further discusses the similarities in the modeling of different insect motion detectors, summarizes how both direction and size selectivity are generated computationally with respect to different motion patterns, and points out existing and possible hardware implementations.

5.1 Similarities in Different Motion Perception Models

Though these motion detectors display different direction or size selectivity, there are similarities that can be summarized across these computational studies. Taking the fly- and locust-inspired visual neural networks or models as examples, the great majority of these models focus on structural modeling of the internal circuits or pathways of insect visual systems. These models share some similar visual processing methods:

  • 1. 

    In most insect species, vision is mediated by the compound eyes, whose first layer, the retina [34, 98], consists of photoreceptors capturing grayscale or single-channel (normally green-channel) image sequences. This neuropile layer retrieves motion information by spatial filtering of input signals at the ommatidia level [98], as shown in the LGMD models in Figures 5, 6, 10, 11, 14, and 16, the fly LPTC models in Figures 26 and 27, and the STMD models in Figure 30.

  • 2. 

    With respect to the biological findings of on and off pathways in many animals, including various kinds of flies, the second, lamina layer consists of rectifying transient cells separating visual signals into parallel channels. Although this structure has not yet been found in locusts, recent studies on the LGMD1 and the LGMD2 point to similar on (onset) and off (offset) mechanisms in the locust visual systems [67, 68, 69, 72, 176].

  • 3. 

    Within the medulla and lobula computational layers, both the direction and the size selectivity are generated and sharpened with respect to specific motion patterns. Both the lateral inhibition mechanism and the HR-like nonlinear computation can contribute to shape the specific DS.

  • 4. 

    The modeled LGMDs, LPTCs, and DSNs are wide-field motion-sensitive visual neurons that pool the presynaptic local directional motion information over the whole field and then generate spikes for downstream sensorimotor neural systems. However, the small-field STMD models generate the DS to small-target movements in every local pixel-scale field.

  • 5. 

    We highlight the functionality of on and off visual pathways that can explain biological visual processing in insect motion-sensitive circuits. Such a structure can be modeled in different motion perception neuron models, including LGMDs, DSNs, LPTCs, and STMDs. In addition, the modeling of motion-sensitive visual systems in other animals, such as the LGNs in crabs [150], may profit from the existing models.

5.2 Realization of Direction and Size Selectivity with Respect to Different Motion Patterns

This subsection summarizes how the diversity in direction and size selectivity is generated in the preceding computational motion-perception neural networks and models. From the above reviews of looming- and translation-sensitive neural systems, we note that (1) the different direction selectivity of the various models is shaped presynaptically to the wide-field motion detectors of the DSNs, the LPTCs, and the LGMDs, that is, in the medulla or lobula neuropile layers, as illustrated in Figures 2 and 20; and (2) the spatiotemporal computations, including the lateral [170, 238] and the self [67, 176] inhibition mechanisms and the nonlinear interactions between neighboring cells [27], can well mediate the specific DS to either looming or translation motion patterns.

Specifically for locusts, as reviewed in Section 2 and Section 3.1, there are two types of motion-sensitive visual neurons: the LGMDs and the DSNs. Though both are applied as collision-detecting sensors, the LGMDs and the DSNs differ in their DS, as the schematics in Figures 6, 14, and 18 show. Firstly, to realize the specific DS of the locust LGMDs to looming stimuli only, these neuron models and neural networks rely on lateral-only or combined lateral-and-self inhibition mechanisms implemented as spatiotemporal convolution processes. More precisely, the inhibitions in the computational medulla layer are formed by convolving the surrounding, symmetrically spreading excitations with a temporal delay [170, 176, 235, 239, 240]. That is,
I(x, y, t) = ∫∫∫ E(u, v, s) W(x − u, y − v, t − s) du dv ds,
(1)
where W is a local convolution kernel, and I and E denote the inhibition and excitation cells both in a three-dimensional form. After that, the excitatory and inhibitory signals compete with each other at every local cell. That is,
S(x, y, t) = E(x, y, t) − w I(x, y, t),
(2)
where S denotes the summation cells and w is a local bias. As a result, the excitations are cut down by the inhibitions. This mechanism plays a crucial role in shaping the selectivity of the LGMD models, which respond most strongly to looming (expanding object edges) rather than to translating or receding objects. In addition, with regard to the modeling of on and off mechanisms in the LGMD models, excitations can also be formed by convolving the surrounding delayed inhibitions [67, 68, 72], a calculation similar to Equation 1.
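
For readers who prefer code to integrals, the following Python fragment is one possible discrete reading of Equations 1 and 2 (our own discretization; the kernel shape, weights, and one-frame delay are illustrative assumptions rather than values from the cited models).

import numpy as np
from scipy.ndimage import convolve

def lateral_inhibition(excitation, w=0.7, delay=1):
    # Discrete counterpart of Equations 1 and 2 on a (time, height, width) array E.
    # Inhibition I is the spatially spread, temporally delayed excitation (Equation 1);
    # the summation cells then compute S = E - w * I (Equation 2), rectified here.
    kernel = np.array([[0.125, 0.125, 0.125],
                       [0.125, 0.0,   0.125],
                       [0.125, 0.125, 0.125]])   # symmetric spreading, centre excluded
    inhibition = np.zeros_like(excitation)
    for t in range(delay, excitation.shape[0]):
        inhibition[t] = convolve(excitation[t - delay], kernel, mode='constant')
    return np.maximum(excitation - w * inhibition, 0.0)

In LGMD-style models, this delayed, spatially spread inhibition tends to cancel the excitation left behind by translating or receding edges, whereas the continuously expanding edges of a looming object keep escaping it, which is the intuition behind the selectivity described above.
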
Secondly, for the locust DSNs, each directionally specific neuron responds to motion oriented along a particular PD. Following ideas similar to the LGMD modeling, the DS can be realized by a directional convolution process that asymmetrically spreads out inhibitions with a temporal delay [236, 238, 239]. That is,
I(x, y, t) = ∫∫ E(u, y, s) W(x − u, y, t − s) du ds
(3)
in horizontal directions, and
I(x, y, t) = ∫∫ E(x, v, s) W(x, y − v, t − s) dv ds
(4)
in vertical directions. After that, there are also competitions between every local excitation and inhibition cell. The calculation conforms to Equation 2.
With regard to the fly EMD and LPTC models for translation perception as reviewed in Section 3.2 and Section 3.3, the specific DS to the four cardinal directions in the field of view (front-to-back, back-to-front, upward, and downward) is implemented by nonlinear spatiotemporal computations according to the classic HRC [27]. That is,
R(t) = X1(t − ϵ) X2(t) − X1(t) X2(t − ϵ),
(5)
where R is the output of each pairwise motion detector in space, X1 and X2 are two adjacent motion-sensitive cells, and ϵ is the temporal delay. Such a theory and its derived versions have been very widely used in a variety of fly EMD models (e.g., [106, 153, 244]), fly OF-based strategies (e.g., [59, 183, 193]), fly on-off polarity motion detectors (e.g., [36, 52, 74, 109]), fly LPTC models (e.g., [70, 71, 220]), insect directional STMD models (e.g., [219, 221, 222, 229]), and bee angular-velocity-detecting models (e.g., [41, 218]).

Regarding the computational generation of size selectivity, as reviewed in Section 4, the STMD and the FDN are small-field motion-sensitive neurons whose specific size selectivity for small-target motion differs from that of wide-field motion detectors. There have been two basic categories of STMD-based visual neural networks (i.e., the ESTMD and the DSTMD); the latter possesses direction selectivity to small-target motion, which can be achieved by methods similar to the EMDs. Wiederman et al. have proposed that the lateral inhibition mechanism plays a crucial role in adjusting the size selectivity via spatiotemporal neural computation [229, 230]. Following this theory, Wang et al. mathematically analyzed how the size selectivity is generated in the motion-sensitive visual pathways of insects; in this research, they applied a second-order lateral inhibition mechanism in the lobula computational layer, which can be represented by a difference-of-Gaussians algorithm [222].
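
The difference-of-Gaussians step mentioned above can be written compactly; this is a generic illustration of second-order lateral inhibition producing size selectivity, not the exact formulation of [222], and the kernel widths are arbitrary.

import numpy as np

def difference_of_gaussians(signal, sigma_c=1.0, sigma_s=4.0, k=1.0):
    # 1D centre-surround filtering: a narrow Gaussian minus a broader Gaussian.
    # Responses are largest for features comparable to the centre width, which is how
    # lateral inhibition can yield a preference for small targets over large bars.
    x = np.arange(-15, 16)
    gauss = lambda s: np.exp(-x**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
    kernel = gauss(sigma_c) - k * gauss(sigma_s)
    return np.convolve(signal, kernel, mode='same')

small = np.zeros(200); small[100:102] = 1.0    # a 2-sample-wide target
large = np.zeros(200); large[90:110] = 1.0     # a 20-sample-wide bar
print(difference_of_gaussians(small).max() > difference_of_gaussians(large).max())   # True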

5.3 Integration of Multiple Neural Systems

These proposed insect visual pathways or neurons are functionally specialized in recognizing different motion patterns, namely looming, directional translation, and small-target movements. In animals' visual brains, there is evidence that complex visuomotor responses are guided by the collaboration of various visual neurons or circuits, rather than by a single unit alone; however, the underlying mechanisms remain elusive. While the biological substrates are unknown, computational modeling is of particular usefulness in helping to explain the still mysterious biological visual processing. Most current state-of-the-art computational models implement a single kind of neural system. From a modeler's perspective, integrating multiple neural pathways or neurons can undoubtedly benefit motion perception within more complicated dynamic visual environments involving diverse motion patterns. In addition, it can also make intelligent machines smarter in dealing with mixed visual cues and in adopting more appropriate visually guided behaviors like those of insects.

Taking some example computational studies, we discuss the advantages of integrating multiple systems for motion perception. First, the locust DSN-based visual neural networks proposed in [236, 238, 239] are themselves paradigms of integrating multiple neural pathways, as illustrated in Figure 18, since each directional pathway is sensitive to a particular PD motion and the postsynaptic organization of multiple DSNs can match well the requirements of collision recognition in dynamic scenes. Second, combining the translation-sensitive neural pathways with the LGMD neuron model can effectively enhance the collision selectivity, especially in complex driving scenarios [234, 246], since the translation and looming perception pathways are perfectly complementary in function. Third, as mentioned in Section 2, the LGMD1 and the LGMD2 have different looming selectivity. We have suggested that combining the functionality of both neurons can enhance the collision-detecting performance in either dark or bright environments. With regard to this idea, a case study has initially demonstrated the usefulness of incorporating an LGMD2 neural pathway into the LGMD1 for collision detection in mobile ground robot scenarios under different illumination conditions [67]. In addition, Fu and Yue recently investigated a possible method of integrating multiple visual pathways in the Drosophila brain for fast motion tracking and implemented a closed-loop behavioral response of fixation [70]. This approach has also been built on an embedded system in a miniaturized mobile robot [65]. Furthermore, a visual neural network that senses rotational or spiral motion patterns integrates mechanisms of the locust DSNs and the fly EMDs in one computational structure [99, 100]; this model can reliably recognize both clockwise and anticlockwise rotations of an object against a simple background.

To sum up briefly, an artificial vision system that can robustly detect multiple motion patterns and extract richer features from visually dynamic and cluttered environments is very necessary for future intelligent machines, like self-driving cars, to better serve human society. The computational modeling and applications of insect visual systems can provide us with effective and efficient solutions.

5.4 Hardware Realization of Insect Motion Perception Models

Continuing with the surveys on computational models and applications of insect visual systems, this subsection discusses the appropriate hardware realization of these models and future trends. We propose that to achieve higher processing speed, larger scale, or real-time solutions, the implementation of neuromorphic visual models in hardware could be extremely advantageous.

From an engineering perspective, neuromorphic visual sensors are being realized along two different trends. One is single-chip solutions featuring compact size and specialized functions. The other is the use of high-performance circuits such as FPGAs.

The single-chip solutions (e.g., [1, 55, 90, 107, 190, 215]) are usually implemented by mixed-signal CMOS VLSI processes [135]. The photoreceptors of compound eyes can also be integrated inside them, as in the above-mentioned silicon retina [215] and the CurvACE sensor [55]. Taking advantage of their compact design and low power consumption, these silicon implementations could be widely deployed as individual sensors for distributed systems (e.g., [55, 120, 137, 179, 216]), or as components on size-sensitive platforms such as micro-robots (e.g., [128, 215]), MAVs (e.g., [53, 116, 182, 185]), and UAVs (e.g., [187, 188]). This kind of integrated chip can also be utilized as an optical sensor for further applications; for instance, the dynamic vision sensor (DVS) [134, 160] technology is characterized by its low latency and low data volume.

On the other hand, the high-performance solutions aim to capture images from commercial cameras with high resolution and high frame rate, and to implement the signal processing within an FPGA [119, 132, 247] or even application-specific integrated circuits (ASICs). Because data arrays can be processed in parallel, the total frame rate can reach up to 350 fps at a resolution of 256 × 256 [247], or 5 kHz with 12 photodiodes [7]. These high-performance approaches can significantly enhance a visual model's spatial sensitivity and temporal response for further designs with critical requirements.

As presented above, these bio-inspired motion perception models could be ideal choices for the design of neuromorphic vision sensors, a future trend in the hardware realization of visual processing. Furthermore, these low-energy, miniaturized visual sensing modalities could be incorporated into control systems for much broader applications in robotics, such as underactuated systems [127], and corresponding bio-inspired robot applications like the vibro-driven robot [126].

6 Conclusion

In this article, we have provided an overview of computational motion perception models originating from research on insect visual systems, as well as corresponding applications to artificial mobile machines for visual motion detection and for the control of insect-like behaviors such as obstacle avoidance, landing, tunnel crossing, terrain following, and fixation. We have reviewed these motion perception models according to their specific direction and size selectivity with respect to different motion patterns, including looming, translation, and small-target motion. To a large extent, the physiology underlying motion perception in insect visual systems is still unknown. However, the diversity in direction and size selectivity across different types of visual neurons can be realized by spatiotemporal computation within the neural circuits or pathways. We have summarized different methodologies, including lateral inhibition mechanisms and nonlinear computation, for implementing the different selectivities. In addition, both biological and modeling studies, over decades, have demonstrated the similarities between different insect motion detectors. The effectiveness and efficiency of these bio-inspired models have been validated by a variety of applications to bio-robotics and other vision-based platforms for motion perception in a low-power and fast manner. Through the existing modeling studies, we have pointed out the great potential of these dynamic vision systems for building neuromorphic sensors for volume production and use in future intelligent machines.

Acknowledgments

This research was supported by the EU Horizon 2020 projects STEP2DYNA (691154) and ULTRACEPT (778062).

Note

1 

See Table 1 for definitions of abbreviations.

References

1. Abbott, D., Moini, A., Yakovleff, A., Nguyen, X. T., Blanksby, A., Kim, G., Bouzerdoum, A., Bogner, R. E., & Eshraghian, K. (1995). New VLSI smart sensor for collision avoidance inspired by insect vision. In Proceedings of SPIE 2344, Intelligent vehicle highway systems (pp. 105–115). Bellingham, WA: SPIE.
2. Aptekar, J. W., Shoemaker, P. A., & Frye, M. A. (2012). Figure tracking by flies is supported by parallel visual streams. Current Biology, 22(6), 482–487.
3. Aptekar, J. W., & Frye, M. A. (2013). Higher-order figure discrimination in fly and human vision. Current Biology, 23(16), R694–R700.
4. Arenz, A., Drews, M. S., Richter, F. G., Ammer, G., & Borst, A. (2017). The temporal tuning of the Drosophila motion detectors is determined by the dynamics of their input elements. Current Biology, 27(7), 929–944.
5. Argyros, A. A., Tsakiris, D. P., & Groyer, C. (2004). Biomimetic centering behavior [mobile robots with panoramic sensors]. IEEE Robotics Automation Magazine, 11(4), 21–30.
6. Arkin, R. C., Ali, K., Weitzenfeld, A., & Cervantes-Pérez, F. (2000). Behavioral models of the praying mantis as a basis for robotic behavior. Robotics and Autonomous Systems, 32(1), 39–60.
7. Aubépart, F., & Franceschini, N. (2007). Bio-inspired optic flow sensors based on FPGA: Application to micro-air-vehicles. Microprocessors and Microsystems, 31(6), 408–419.
8. Badia, S. B. I., Bernardet, U., & Verschure, P. F. (2010). Non-linear neuronal responses as an emergent property of afferent networks: A case study of the locust lobula giant movement detector. PLoS Computational Biology, 6(3), e1000701.
9. Bagheri, Z. M., Cazzolato, B. S., Grainger, S., O'Carroll, D. C., & Wiederman, S. D. (2017). An autonomous robot inspired by insect neurophysiology pursues moving features in natural environments. Journal of Neural Engineering, 14(4), 046030.
10. Bagheri, Z. M., Wiederman, S. D., Cazzolato, B. S., Grainger, S., & O'Carroll, D. C. (2015). Properties of neuronal facilitation that improve target tracking in natural pursuit simulations. Journal of the Royal Society Interface, 12(108), 20150083.
11. Bagheri, Z. M., Wiederman, S. D., Cazzolato, B. S., Grainger, S., & O'Carroll, D. C. (2017). Performance of an insect-inspired target tracker in natural conditions. Bioinspiration & Biomimetics, 12(2), 025006.
12. Bahl, A., Ammer, G., Schilling, T., & Borst, A. (2013). Object tracking in motion-blind flies. Nature Neuroscience, 16(6), 730–738.
13. Baird, E., Boeddeker, N., Ibbotson, M. R., & Srinivasan, M. V. (2013). A universal strategy for visually guided landing. Proceedings of the National Academy of Sciences of the U.S.A., 110(46), 18686–18691.
14. Baird, E., Kornfeldt, T., & Dacke, M. (2010). Minimum viewing angle for visually guided ground speed control in bumblebees. Journal of Experimental Biology, 213(10), 1625–1632.
15. Barlow, H., & Levick, W. (1965). The mechanism of directionally selective units in rabbit's retina. Journal of Physiology, 178, 477–504.
16. Barnett, P. D., Nordström, K., & O'Carroll, D. C. (2007). Retinotopic organization of small-field-target-detecting neurons in the insect visual system. Current Biology, 17(7), 569–578.
17. Behnia, R., Clark, D. A., Carter, A. G., Clandinin, T. R., & Desplan, C. (2014). Processing properties of on and off pathways for Drosophila motion detection. Nature, 512(7515), 427–430.
18. Bengochea, M., & Berón de Astrada, M. (2014). Organization of columnar inputs in the third optic ganglion of a highly visual crab. Journal of Physiology Paris, 108(2–3), 61–70.
19. Bermudez i Badia, S., & Verschure, P. F. (2004). A collision avoidance model based on the lobula giant movement detector (LGMD) neuron of the locust. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IJCNN), Vol. 3 (pp. 1757–1761). New York: IEEE.
20. Bertrand, O. J. N., Lindemann, J. P., & Egelhaaf, M. (2015). A bio-inspired collision avoidance model based on spatial information derived from motion detectors leads to common routes. PLOS Computational Biology, 11(11), 1–28.
21. Biswas, T., & Lee, C. H. (2017). Visual motion: Cellular implementation of a hybrid motion detector. Current Biology, 27(7), R274–R276.
22. Blanchard, M., Rind, F. C., & Verschure, P. F. (2000). Collision avoidance using a model of the locust LGMD neuron. Robotics and Autonomous Systems, 30(1), 17–38.
23. Blanchard, M., Rind, F. C., & Verschure, P. F. (2001). How accurate need sensory coding be for behaviour? Experiments using a mobile robot. Neurocomputing, 38–40, 1113–1119.
24. Boeddeker, N., Lindemann, J. P., Egelhaaf, M., & Zeil, J. (2005). Responses of blowfly motion-sensitive neurons to reconstructed optic flow along outdoor flight paths. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology, 191, 1143–1155.
25. Bolzon, D. M., Nordström, K., & O'Carroll, D. C. (2009). Local and large-range inhibition in feature detection. Journal of Neuroscience, 29(45), 14143–14150.
26. Borst, A. (2014). Fly visual course control: Behaviour, algorithms and circuits. Nature Reviews Neuroscience, 15, 590–599.
27. Borst, A., & Egelhaaf, M. (1989). Principles of visual motion detection. Trends in Neurosciences, 12, 297–306.
28. Borst, A., & Euler, T. (2011). Seeing things in motion: Models, circuits, and mechanisms. Neuron, 71(6), 974–994.
29. Borst, A., & Haag, J. (2002). Neural networks in the cockpit of the fly. Journal of Comparative Physiology, 188(6), 419–437.
30. Borst, A., Haag, J., & Reiff, D. F. (2010). Fly motion vision. The Annual Review of Neuroscience, 33, 49–70.
31. Borst, A., & Helmstaedter, M. (2015). Common circuit design in fly and mammalian motion vision. Nature Neuroscience, 18(8), 1067–1076.
32. Browning, N. A., Grossberg, S., & Mingolla, E. (2009). A neural model of how the brain computes heading from optic flow in realistic scenes. Cognitive Psychology, 59(4), 320–356.
33. Card, G. M. (2012). Escape behaviors in insects. Current Opinion in Neurobiology, 22(2), 180–186.
34. Cheng, Y., Cao, J., Zhang, Y., & Hao, Q. (2019). Review of state-of-the-art artificial compound eye imaging systems. Bioinspiration & Biomimetics, 14(3), 031002.
35. Cizek, P., Milicka, P., & Faigl, J. (2017). Neural based obstacle avoidance with CPG controlled hexapod walking robot. In Proceedings of the 2017 IEEE International Joint Conference on Neural Networks (IJCNN) (pp. 650–656). New York: IEEE.
36. Clark, D. A., Bursztyn, L., Horowitz, M. A., Schnitzer, M. J., & Clandinin, T. R. (2011). Defining the computational structure of the motion detector in Drosophila. Neuron, 70(6), 1165–1177.
37. Clifford, C., & Ibbotson, M. (2002). Fundamental mechanisms of visual motion detection: Models, cells and functions. Progress in Neurobiology, 68(6), 409–437.
38. Clifford, C. W. G., & Langley, K. (1996). A model of temporal adaptation in fly motion vision. Vision Research, 36, 2595–2608.
39. Collett, T. (1971). Visual neurones for tracking moving targets. Nature, 232(5306), 127.
40. Collett, T., & Land, M. (1975). Visual control of flight behaviour in the hoverfly Syritta pipiens L. Journal of Comparative Physiology, 99(1), 1–66.
41. Cope, A. J., Sabo, C., Gurney, K., Vasilaki, E., & Marshall, J. A. (2016). A model for an angular velocity-tuned motion detector accounting for deviations in the corridor-centering response of the bee. PLoS Computational Biology, 12(5), 1–22.
42. De Vries, S. E. J., & Clandinin, T. R. (2012). Loom-sensitive neurons link computation to action in the Drosophila visual system. Current Biology, 22(5), 353–362.
43. Deng, M., Inoue, A., Shibata, Y., Sekiguchi, K., & Ueki, N. (2007). An obstacle avoidance method for two wheeled mobile robot. In Proceedings of the 2007 IEEE International Conference on Networking Sensing and Control (pp. 689–692). New York: IEEE.
44. DeSouza, G. N., & Kak, A. C. (2002). Vision for mobile robot navigation: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, 237–267.
45. Dewell, R. B., & Gabbiani, F. (2012). Escape behavior: Linking neural computation to action. Current Biology, 22(5), R152–R153.
46. Eckert, H., & Dvorak, D. R. (1983). The centrifugal horizontal cells in the lobula plate of the blowfly, Phaenicia sericata. Journal of Insect Physiology, 29(7), 547–560.
47. Egelhaaf, M. (1985). On the neuronal basis of figure-ground discrimination by relative motion in the visual system of the fly. 2: Figure-detection cells, a new class of visual interneurons. Biological Cybernetics, 52(2), 123–140.
48. Egelhaaf, M. (1985). On the neuronal basis of figure-ground discrimination by relative motion in the visual system of the fly. 3: Possible input circuitries and behavioural significance of the FD-cells. Biological Cybernetics, 52(4), 267–280.
49. Egelhaaf, M., & Borst, A. (1993). A look into the cockpit of the fly: Visual orientation, algorithms, and identified neurons. Journal of Neuroscience, 13(11), 4563–4574.
50. Egelhaaf, M., Borst, A., Warzecha, A.-K., Flecks, S., & Wildemann, A. (1993). Neural circuit tuning fly visual neurons to motion of small objects. II. Input organization of inhibitory circuit elements revealed by electrophysiological and optical recording techniques. Journal of Neurophysiology, 69(2), 340–351.
51. Egelhaaf, M., Grewe, J., Kern, R., & Warzecha, A. K. (2001). Outdoor performance of a motion-sensitive neuron in the blowfly. Vision Research, 41(27), 3627–3637.
52. Eichner, H., Joesch, M., Schnell, B., Reiff, D. F., & Borst, A. (2011). Internal structure of the fly elementary motion detector. Neuron, 70(6), 1155–1164.
53. Expert, F., & Ruffier, F. (2015). Flying over uneven moving terrain based on optic-flow cues without any need for reference frames or accelerometers. Bioinspiration & Biomimetics, 10, 026003.
54. Fisher, Y. E., Leong, J. C. S., Sporar, K., Ketkar, M. D., Gohl, D. M., Clandinin, T. R., & Silies, M. (2015). A class of visual neurons with wide-field properties is required for local motion detection. Current Biology, 25(24), 3178–3189.
55. Floreano, D., Pericet-Camara, R., Viollet, S., Ruffier, F., Bruckner, A., Leitel, R., Buss, W., Menouni, M., Expert, F., Juston, R., Dobrzynski, M. K., Eplattenier, G. L., Recktenwald, F., Mallot, H. A., & Franceschini, N. (2013). Miniature curved artificial compound eyes. Proceedings of the National Academy of Sciences of the U.S.A., 110(23).
56. Floreano, D., Zufferey, J.-C., & Nicoud, J.-D. (2005). From wheels to wings with evolutionary spiking circuits. Artificial Life, 11(1–2), 121–138.
57. Franceschini, N. (1975). Sampling of the visual environment by the compound eye of the fly: Fundamentals and applications (pp. 98–125). Berlin, Heidelberg: Springer.
58. Franceschini, N. (1984). Chromatic organization and sexual dimorphism of the fly retinal mosaic (pp. 319–350). Boston: Springer US.
59. Franceschini, N. (2014). Small brains, smart machines: From fly vision to robot vision and back again. Proceedings of the IEEE, 102, 751–781.
60. Franceschini, N., Pichon, J., & Blanes, C. (1992). From insect vision to robot vision. Philosophical Transactions of the Royal Society B, 337(1281), 283–294.
61. Franceschini, N., Riehle, A., & Le Nestour, A. (1989). Directionally selective motion detection by insect neurons. In D. G. Stavenga & R. C. Hardie (Eds.), Facets of vision (pp. 360–390). Berlin, Heidelberg: Springer.
62. Franceschini, N., Ruffier, F., & Serres, J. (2007). A bio-inspired flying robot sheds light on insect piloting abilities. Current Biology, 17, 329–335.
63. Franceschini, N., Ruffier, F., & Serres, J. (2010). Insect inspired autopilots. Journal of Aero Aqua Bio-mechanisms, 1(1), 2–10.
64. Frye, M. (2015). Elementary motion detectors. Current Biology, 25(6), R215–R217.
65. Fu, Q., Bellotto, N., Hu, C., & Yue, S. (2018). Performance of a visual fixation model in an autonomous micro robot inspired by Drosophila physiology. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 1802–1808). New York: IEEE.
66. Fu, Q., Hu, C., Liu, P., & Yue, S. (2018). Towards computational models of insect motion detectors for robot vision. In M. Giuliani, T. Assaf, & M. E. Giannaccini (Eds.), Towards Autonomous Robotic Systems Conference (pp. 465–467). New York: Springer International Publishing.
67. Fu, Q., Hu, C., Liu
,
T.
, &
Yue
,
S.
(
2017
).
Collision selective LGMDs neuron models research benefits from a vision-based autonomous micro robot
. In
Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
(pp.
3996
4002
).
New York
:
IEEE
.
68
Fu
,
Q.
,
Hu
,
C.
,
Peng
,
J.
, &
Yue
,
S.
(
2018
).
Shaping the collision selectivity in a looming sensitive neuron model with parallel on and off pathways and spike frequency adaptation
.
Neural Networks
,
106
,
127
143
.
69
Fu
,
Q.
, &
Yue
,
S.
(
2015
).
Modelling LGMD2 visual neuron system
. In
Proceedings of the 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing
(pp.
1
6
).
New York
:
IEEE
.
70
Fu
,
Q.
, &
Yue
,
S.
(
2017
).
Mimicking fly motion tracking and fixation behaviors with a hybrid visual neural network
. In
Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO)
(pp.
1636
1641
).
New York
:
IEEE
.
71
Fu
,
Q.
, &
Yue
,
S.
(
2017
).
Modeling direction selective visual neural network with on and off pathways for extracting motion cues from cluttered background
. In
Proceedings of the 2017 International Joint Conference on Neural Betworks (IJCNN)
(pp.
831
838
).
New York
:
IEEE
.
72
Fu
,
Q.
,
Yue
,
S.
, &
Hu
,
C.
(
2016
).
Bio-inspired collision detector with enhanced selectivity for ground robotic vision system
. In
E. R.
Hancock
,
Richard C.
Wilson
, &
W. A. P.
Smith
(Eds.),
British Machine Vision Conference
(pp.
1
13
).
Durham, U.K.
:
BMVA Press
.
73
Gabbiani
,
F.
,
Cohen
,
I.
, &
Laurent
,
G.
(
2005
).
Time-dependent activation of feed-forward inhibition in a looming-sensitive neuron
.
Journal of Neurophysiology
,
94
(
May
),
2150
2161
.
74
Gabbiani
,
F.
, &
Jones
,
P. W.
(
2011
).
A genetic push to understand motion detection
.
Neuron
,
70
(
6
),
1023
1025
.
75
Gabbiani
,
F.
, &
Krapp
,
H. G.
(
2006
).
Spike-frequency adaptation and intrinsic properties of an identified, looming-sensitive neuron
.
Journal of Neurophysiology
,
96
(
6
),
2951
2962
.
76
Gabbiani
,
F.
,
Krapp
,
H. G.
,
Hatsopoulos
,
N.
,
Mo
,
C. H.
,
Koch
,
C.
, &
Laurent
,
G.
(
2004
).
Multiplication and stimulus invariance in a looming-sensitive neuron
.
Journal of Physiology Paris
,
98
(
1–3 Spec. Iss.
),
19
34
.
77
Gabbiani
,
F.
,
Krapp
,
H. G.
,
Koch
,
C.
, &
Laurent
,
G.
(
2002
).
Multiplicative computation by a looming-sensitive neuron
. In
Proceedings of the Second Joint 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society. Engineering in Medicine and Biology
(pp.
1968
1969
).
New York
:
IEEE
.
78
Gabbiani
,
F.
,
Krapp
,
H. G.
,
Koch
,
C.
, &
Laurent
,
G.
(
2002
).
Multiplicative computation in a visual neuron sensitive to looming
.
Nature
,
420
(
6913
),
320
324
.
79
Gabbiani
,
F.
,
Krapp
,
H. G.
, &
Laurent
,
G.
(
1999
).
Computation of object approach by a wide-field, motion-sensitive neuron
.
Journal of Neuroscience
,
19
(
3
),
1122
1141
.
80
Gabbiani
,
F.
,
Laurent
,
G.
,
Hatsopoulos
,
N.
,
Krapp
,
H. G.
,
Rind
,
F. C.
, &
Simmons
,
P. J.
(
1999
).
The many ways of building collision-sensitive neurons
.
Trends in Neurosciences
,
22
(
10
),
437
438
.
81
Gabbiani
,
F.
,
Mo
,
C.
, &
Laurent
,
G.
(
2001
).
Invariance of angular threshold computation in a wide-field looming-sensitive neuron
.
The Journal of Neuroscience: The Official Journal of the Society for Neuroscience
,
21
(
1
),
314
329
.
82
Gauck
,
V.
, &
Borst
,
A.
(
1999
).
Spatial response properties of contralateral inhibited lobula plate tangential cells in the fly visual system
.
Journal of Comparative Neurology
,
406
(
1
),
51
71
.
83
Geurten
,
B. R.
,
Nordström
,
K.
,
Sprayberry
,
J. D.
,
Bolzon
,
D. M.
, &
O'Carroll
,
D. C.
(
2007
).
Neural mechanisms underlying target detection in a dragonfly centrifugal neuron
.
Journal of Experimental Biology
,
210
(
18
),
3277
3284
.
84
Gilbert
,
C.
(
2013
).
Brain connectivity: Revealing the fly visual motion circuit
.
Current Biology
,
23
(
18
),
R851
R853
.
85
Gray
,
J. R.
,
Lee
,
J. K.
, &
Robertson
,
R. M.
(
2001
).
Activity of descending contralateral movement detector neurons and collision avoidance behaviour in response to head-on visual stimuli in locusts
.
Journal of Comparative Physiology—A Sensory, Neural, and Behavioral Physiology
,
187
(
2
),
115
129
.
86
Green
,
W. E.
, &
Oh
,
P. Y.
(
2008
).
Optic-flow-based collision avoidance
.
IEEE Robotics Automation Magazine
,
15
(
1
),
96
103
.
87
Haag
,
J.
,
Arenz
,
A.
,
Serbe
,
E.
,
Gabbiani
,
F.
, &
Borst
,
A.
(
2016
).
Complementary mechanisms create direction selectivity in the fly
.
eLife
,
5
,
1
15
.
88
Haag
,
J.
, &
Borst
,
A.
(
2002
).
Dendro-dendritic interactions between motion-sensitive large-field neurons in the fly
.
Journal of Neuroscience
,
22
(
8
),
3227
3233
.
89
Harris
,
R. A.
,
O'Carroll
,
D. C.
, &
Laughlin
,
S. B.
(
1999
).
Adaptation and the temporal delay filter of fly motion detectors
.
Vision Research
,
39
(
16
),
2603
2613
.
90
Harrison
,
R. R.
(
2005
).
A biologically inspired analog IC for visual collision detection
.
IEEE Transactions on Circuits and Systems I: Regular Papers
,
52
(
11
),
2308
2318
.
91
Hartbauer
,
M.
(
2017
).
Simplified bionic solutions: A simple bio-inspired vehicle collision detection system
.
Bioinspiration & Biomimetics
,
12
(
2
),
026007
.
92
Hassenstein
,
B.
, &
Reichardt
,
W.
(
1956
).
Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus
.
Zeitschrift für Naturforschung
(pp.
513
524
).
93
Hennig
,
P.
, &
Egelhaaf
,
M.
(
2012
).
Neuronal encoding of object and distance information: A model simulation study on naturalistic optic flow processing
.
Frontiers in Neural Circuits
,
6
,
14
.
94
Hennig
,
P.
,
Kern
,
R.
, &
Egelhaaf
,
M.
(
2011
).
Binocular integration of visual information: A model study on naturalistic optic flow processing
.
Frontiers in Neural Circuits
,
5
,
4
.
95
Hennig
,
P.
,
Möller
,
R.
, &
Egelhaaf
,
M.
(
2008
).
Distributed dendritic processing facilitates object detection: A computational analysis on the visual system of the fly
.
PLoS One
,
3
(
8
),
e3092
.
96
Higgins
,
C. M.
(
2004
).
Nondirectional motion may underlie insect behavioral dependence on image speed
.
Biological Cybernetics
,
91
(
5
),
326
332
.
97
Higgins
,
C. M.
, &
Pant
,
V.
(
2004
).
An elaborated model of fly small-target tracking
.
Biological Cybernetics
,
91
(
6
),
417
428
.
98
Horridge
,
G. A.
(
1977
).
The compound eye of insects
.
Scientific American
,
237
(
1
),
108
121
.
99
Hu
,
B.
,
Yue
,
S.
, &
Zhang
,
Z.
(
2017
).
A rotational motion perception neural network based on asymmetric spatiotemporal visual information processing
.
IEEE Transactions on Neural Networks and Learning Systems
,
28
(
11
),
2803
2821
.
100
Hu
,
B.
, &
Zhang
,
Z.
(
2018
).
Bio-plausible visual neural network for spatiotemporally spiral motion perception
.
Neurocomputing
,
310
,
96
114
.
101
Hu
,
C.
,
Arvin
,
F.
,
Xiong
,
C.
, &
Yue
,
S.
(
2017
).
Bio-inspired embedded vision system for autonomous micro-robots: The LGMD case
.
IEEE Transactions on Cognitive and Developmental Systems
,
9
(
3
),
241
254
.
102
Hu
,
C.
,
Arvin
,
F.
, &
Yue
,
S.
(
2014
).
Development of a bio-inspired vision system for mobile micro-robots
. In
Proceedings of the 4th IEEE International Conference on Development and Learning and on Epigenetic Robotics
(pp.
81
86
).
New York
:
IEEE
.
103
Huber
,
S. A.
, &
Bülthoff
,
H. H.
(
2003
).
Visuomotor control in flies and behavior-based agents
. In
R. J.
Duro
,
J.
Santos
,
M.
Graña
, &
J.
Kacprzyk
(Eds.),
Biologically inspired robot behavior engineering
(pp.
89
117
).
Heidelberg
:
Physica-Verlag
.
104
Iida
,
F.
(
2003
).
Biologically inspired visual odometer for navigation of a flying robot
.
Robotics and Autonomous Systems
,
44
(
3–4
),
201
208
.
105
Iida
,
F.
(
2012
).
Book review: Flying insects and robots
.
Artificial Life
,
18
,
125
127
.
106
Iida
,
F.
, &
Lambrinos
,
D.
(
2000
).
Navigation in an autonomous flying robot by using a biologically inspired visual odometer
. In
Sensor fusion and decentralized control in robotic systems III
(pp.
86
97
).
Bellingham, WA
:
SPIE
.
107
Indiveri
,
G.
(
1998
).
Analog VLSI model of locust DCMD neuron for computation of object approach
.
Neuromorphic Systems. Engineering Silicon from Neurobiology
,
10
,
47
60
.
108
Joesch
,
M.
,
Schnell
,
B.
,
Raghu
,
S. V.
,
Reiff
,
D. F.
, &
Borst
,
A.
(
2010
).
on and off pathways in Drosophila motion vision
.
Nature
,
468
(
7321
),
300
304
.
109
Joesch
,
M.
,
Weber
,
F.
,
Eichner
,
H.
, &
Borst
,
A.
(
2013
).
Functional specialization of parallel motion detection circuits in the fly
.
Journal of Neuroscience
,
33
(
3
),
902
905
.
110
Judge
,
S.
, &
Rind
,
F.
(
1997
).
The locust DCMD, a movement-detecting neurone tightly tuned to collision trajectories
.
The Journal of Experimental Biology
,
200
,
2209
2216
.
111
Karmeier
,
K.
(
2006
).
Encoding of naturalistic optic flow by a population of blowfly motion-sensitive neurons
.
Journal of Neurophysiology
,
96
(
3
),
1602
1614
.
112
Keil
,
M. S.
(
2011
).
Emergence of multiplication in a biophysical model of a wide-field visual neuron for computing object approaches: Dynamics, peaks, and fits
. In
J.
Shawe-Taylor
,
R. S.
Zemel
,
P. L.
Bartlett
,
F.
Pereira
, &
K. Q.
Weinberger
(Eds.),
Advances in Neural Information Processing Systems 24
(pp.
469
477
).
New York
:
Curran Associates
.
113
Keil
,
M. S.
(
2015
).
Dendritic pooling of noisy threshold processes can explain many properties of a collision-sensitive visual neuron
.
PLoS Computational Biology
,
11
(
10
),
1
17
.
114
Keil
,
M. S.
,
Roca-Moreno
,
E.
, &
Rodriguez-Vazquez
,
A.
(
2004
).
A neural model of the locust visual system for detection of object approaches with real-world scenes
. In
Proceedings of the FourthIASTED International Conference on Visualization, Imaging, and Image Processing
(pp.
340
345
).
IASTED
.
115
Kele
,
M. F.
, &
Frye
,
M. A.
(
2017
).
Object-detecting neurons in Drosophila
.
Current Biology
,
27
(
5
),
680
687
.
116
Kerhuel
,
L.
,
Viollet
,
S.
, &
Franceschini
,
N.
(
2012
).
The VODKA sensor: A bio-inspired hyperacute optical position sensing device
.
IEEE Sensors Journal
,
12
(
2
),
315
324
.
117
Kimmerle
,
B.
, &
Egelhaaf
,
M.
(
2000
).
Detection of object motion by a fly neuron during simulated flight
.
Journal of Comparative Physiology A
,
186
(
1
),
21
31
.
118
Kimmerle
,
B.
,
Warzecha
,
A.-K.
, &
Egelhaaf
,
M.
(
1997
).
Object detection in the fly during simulated translatory flight
.
Journal of Comparative Physiology A
,
181
(
3
),
247
255
.
119
Köhler
,
T.
(
2015
).
Bioinspired motion detection based on an FPGA platform
(
Chap. 17
, pp.
405
424
).
Hoboken, NJ
:
Wiley
.
120
Kramer
,
J.
,
Sarpeshkar
,
R.
, &
Koch
,
C.
(
1997
).
Pulse-based analog VLSI velocity sensors
.
IEEE Transactions on Circuits and Systems—II: Analog and Digital Signal Processing
,
44
(
2
),
86
101
.
121
Krapp
,
H. G.
,
Hengstenberg
,
B.
, &
Hengstenberg
,
R.
(
1998
).
Dendritic structure and receptive-field organization of optic flow processing interneurons in the fly
.
Journal of Neurophysiology
,
79
(
4
),
1902
1917
.
122
Krapp
,
H. G.
, &
Hengstenberg
,
R.
(
1996
).
Estimation of self-motion by optic flow processing in single visual interneurons
.
Nature
,
384
,
463
466
.
123
Krapp
,
H. G.
,
Hengstenberg
,
R.
, &
Egelhaaf
,
M.
(
2001
).
Binocular contributions to optic flow processing in the fly visual system
.
Journal of Neurophysiology
,
85
(
2
),
724
734
.
124
Krejan
,
A.
, &
Trost
,
A.
(
2011
).
LGMD-based bio-inspired algorithm for detecting risk of collision of a road vehicle
. In
Proceedings of the 2011 IEEE 7th International Symposium on Image and Signal Processing and Analysis
(pp.
319
324
).
New York
:
IEEE
.
125
Leong
,
J. C. S.
,
Esch
,
J. J.
,
Poole
,
B.
,
Ganguli
,
S.
, &
Clandinin
,
T. R.
(
2016
).
Direction selectivity in Drosophila emerges from preferred-direction enhancement and null-direction suppression
.
Journal of Neuroscience
,
36
(
31
),
8078
8092
.
126
Liu
,
P.
,
Neumann
,
G.
,
Fu
,
Q.
,
Pearson
,
S.
, &
Yu
,
H.
(
2018
).
Energy-efficient design and control of a vibro-driven robot
. In
Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
(pp.
1464
1469
).
New York
:
IEEE
.
127
Liu
,
P.
,
Yu
,
H.
, &
Cang
,
S.
(
2018
).
Optimized adaptive tracking control for an underactuated vibro-driven capsule system
.
Nonlinear Dynamics
,
94
(
3
),
1803
1817
.
128
Mafrica
,
S.
,
Servel
,
A.
, &
Ruffier
,
F.
(
2016
).
Minimalistic optic flow sensors applied to indoor and outdoor visual guidance and odometry on a car-like robot
.
Bioinspiration & Biomimetics
,
11
(
6
),
066007
.
129
Maisak
,
M. S.
,
Haag
,
J.
,
Ammer
,
G.
,
Serbe
,
E.
,
Meier
,
M.
,
Leonhardt
,
A.
,
Schilling
,
T.
,
Bahl
,
A.
,
Rubin
,
G. M.
,
Nern
,
A.
,
Dickson
,
B. J.
,
Reiff
,
D. F.
,
Hopp
,
E.
, &
Borst
,
A.
(
2013
).
A directional tuning map of Drosophila elementary motion detectors
.
Nature
,
500
(
7461
),
212
216
.
130
Martin
,
N.
, &
Franceschini
,
N.
(
1994
).
Obstacle avoidance and speed control in a mobile vehicle equipped with a compound eye
. In
Proceedings of the Intelligent Vehicles '94 Symposium
(pp.
381
386
).
New York
:
IEEE
.
131
Medan
,
V.
,
Oliva
,
D.
, &
Tomsic
,
D.
(
2007
).
Characterization of lobula giant neurons responsive to visual stimuli that elicit escape behaviors in the crab Chasmagnathus
.
Journal of Neurophysiology
,
98
(
4
),
2414
2428
.
132
Meng
,
H.
,
Appiah
,
K.
,
Yue
,
S.
,
Hunter
,
A.
,
Hobden
,
M.
,
Priestley
,
N.
,
Hobden
,
P.
, &
Pettit
,
C.
(
2010
).
A modified model for the lobula giant movement detector and its FPGA implementation
.
Computer Vision and Image Understanding
,
114
,
1238
1247
.
133
Meng
,
H.
,
Yue
,
S.
,
Hunter
,
A.
,
Appiah
,
K.
,
Hobden
,
M.
,
Priestley
,
N.
,
Hobden
,
P.
, &
Pettit
,
C.
(
2009
).
A modified neural network model for lobula giant movement detector with additional depth movement feature
. In
Proceedings of the 2009 IEEE International Joint Conference on Neural Networks (IJCNN)
(pp.
2078
2083
).
New York
:
IEEE
.
134
Milde
,
M. B.
,
Bertrand
,
O. J.
,
Benosmanz
,
R.
,
Egelhaaf
,
M.
, &
Chicca
,
E.
(
2015
).
Bioinspired event-driven collision avoidance algorithm based on optic flow
. In
Event-based control, communication, and signal processing
(pp.
1
7
).
New York
:
IEEE
.
135
Milde
,
M. B.
,
Blum
,
H.
,
Dietmüller
,
A.
,
Sumislawska
,
D.
,
Conradt
,
J.
,
Indiveri
,
G.
, &
Sandamirskaya
,
Y.
(
2017
).
Obstacle avoidance and target acquisition for robot navigation using a mixed signal analog/digital neuromorphic processing system
.
Frontiers in Neurorobotics
,
11
,
1
17
.
136
Missler
,
J. M.
, &
Kamangar
,
F. A.
(
1995
).
A neural network for pursuit tracking inspired by the fly visual system
.
Neural Networks
,
8
(
3
),
463
480
.
137
Moeckel
,
R.
, &
Liu
,
S.-C.
(
2007
).
Motion detection circuits for a time-to-travel algorithm
. In
Proceedings of the 2007 IEEE International Symposium on Circuits and Systems
(pp.
3079
3082
).
New York
:
IEEE
.
138
Muijres
,
F. T.
,
Elzinga
,
M. J.
,
Melis
,
J. M.
, &
Dickinson
,
M. H.
(
2014
).
Flies evade looming targets by executing rapid visually directed banked turns
.
Science
,
344
(
6180
),
172
177
.
139
Mura
,
F.
, &
Franceschini
,
N.
(
1996
).
Obstacle avoidance in a terrestrial mobile robot provided with a scanning retina
. In
Proceedings of the 1996 IEEE Intelligent Vehicles Symposium
(pp.
47
52
).
New York
:
IEEE
.
140
Nakamura
,
E.
,
Ichimura
,
M.
, &
Sawada
,
K.
(
2002
).
Fast global motion estimation algorithm based on elementary motion detectors
. In
Proceedings of the 2002 IEEE International Conference on Image Processing
,
Vol. 2
(pp.
297
300
).
New York
:
IEEE
.
141
Netter
,
T.
, &
Franceschini
,
N.
(
2002
).
A robotic aircraft that follows terrain using a neuromorphic eye
. In
Proceedings of the 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
(pp.
129
134
).
New York
:
IEEE
.
142
Nordström
,
K.
(
2012
).
Neural specializations for small target detection in insects
.
Current Opinion in Neurobiology
,
22
(
2
),
272
278
.
143
Nordström
,
K.
,
Barnett
,
P. D.
, &
O'Carroll
,
D. C.
(
2006
).
Insect detection of small targets moving in visual clutter
.
PLoS Biology
,
4
(
3
),
e54
.
144
Nordström
,
K.
,
Bolzon
,
D. M.
, &
O'Carroll
,
D. C.
(
2011
).
Spatial facilitation by a high-performance dragonfly target-detecting neuron
.
Biology Letters
,
7
(
4
),
588
592
.
145
Nordström
,
K.
, &
O'Carroll
,
D. C.
(
2006
).
Small object detection neurons in female hoverflies
.
Proceedings of the Royal Society of London B: Biological Sciences
,
273
(
1591
),
1211
1216
.
146
O'Carroll
,
D.
(
1993
).
Feature-detecting neurons in dragonflies
.
Nature
,
362
(
6420
),
541
.
147
Olberg
,
R. M.
(
1981
).
Object- and self-movement detectors in the ventral nerve cord of the dragonfly
.
Journal of Comparative Physiology
,
141
(
3
),
327
334
.
148
Olberg
,
R. M.
(
1986
).
Identified target-selective visual interneurons descending from the dragonfly brain
.
Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology
,
159
(
6
),
827
840
.
149
Oliva
,
D.
,
Medan
,
V.
, &
Tomsic
,
D.
(
2007
).
Escape behavior and neuronal responses to looming stimuli in the crab Chasmagnathus granulatus (decapoda: Grapsidae)
.
Journal of Experimental Biology
,
210
(
5
),
865
880
.
150
Oliva
,
D.
, &
Tomsic
,
D.
(
2014
).
Computation of object approach by a system of visual motion-sensitive neurons in the crab Neohelice
.
Journal of Neurophysiology
,
112
(
6
),
1477
1490
.
151
O'Shea
,
M.
, &
Rowell
,
C. H. F.
(
1976
).
The neuronal basis of a sensory analyser, the acridid movement detector system
.
Journal of Experimental Biology
,
68
(
2
),
289
308
.
152
O'Shea
,
M.
, &
Williams
,
J. L.
(
1974
).
The anatomy and output connection of a locust visual interneurone; the lobular giant movement detector (LGMD) neurone
.
Journal of Comparative Physiology
,
91
(
3
),
257
266
.
153
Pallus
,
A.
, &
Fleishman
,
L. J.
(
2014
).
A two-dimensional visual motion detector based on biological principles
.
Available at https://muse.union.edu/visualmotion/main-article/ (accessed July 2014)
.
154
Paulk
,
A. C.
,
Dacks
,
A. M.
,
Phillips-Portillo
,
J.
,
Fellous
,
J.-M.
, &
Gronenberg
,
W.
(
2009
).
Visual processing in the central bee brain
.
Journal of Neuroscience
,
29
(
32
),
9987
9999
.
155
Peron
,
S.
, &
Gabbiani
,
F.
(
2009
).
Spike frequency adaptation mediates looming stimulus selectivity in a collision-detecting neuron
.
Nature Neuroscience
,
12
(
3
),
318
326
.
156
Peron
,
S. P.
, &
Gabbiani
,
F.
(
2009
).
Role of spike-frequency adaptation in shaping neuronal response to dynamic stimuli
.
Biological Cybernetics
,
100
(
6
),
505
520
.
157
Peron
,
S. P.
,
Jones
,
P. W.
, &
Gabbiani
,
F.
(
2009
).
Precise subcellular input retinotopy and its computational consequences in an identified visual interneuron
.
Neuron
,
63
(
6
),
830
842
.
158
Pichon
,
J.-M.
,
Blanes
,
C.
, &
Franceschini
,
N.
(
1990
).
Visual guidance of a mobile robot equipped with a network of self-motion sensors
.
Mobile Robots IV
,
1195
,
44
56
.
159
Portelli
,
G.
,
Ruffier
,
F.
, &
Franceschini
,
N.
(
2010
).
Honeybees change their height to restore their optic flow
.
Journal of Comparative Physiology A
,
196
(
4
),
307
313
.
160
Posch
,
C.
,
Matolin
,
D.
, &
Wohlgenannt
,
R.
(
2011
).
A QVGA 143 dB dynamic range frame-free PWM image sensor with lossless pixel-level video compression and time-domain CDS
.
IEEE Journal of Solid-State Circuits
,
46
(
1
),
259
275
.
161
Raharijaona
,
T.
,
Serres
,
J.
,
Vanhoutte
,
E.
, &
Ruffier
,
F.
(
2017
).
Toward an insect-inspired event-based autopilot combining both visual and control events
. In
Proceedings of the 2017 3rd International Conference on Event-based Control, Communication and Signal Processing
(pp.
1
7
).
New York
:
IEEE
.
162
Rajesh
,
S.
,
O'Carroll
,
D.
, &
Abbott
,
D.
(
2005
).
Man-made velocity estimators based on insect vision
.
Smart Materials and Structures
,
14
(
2
),
413
424
.
163. Reichardt, W. (1987). Evaluation of optical motion information by movement detectors. Journal of Comparative Physiology A, 161, 533–547.
164. Reichardt, W., Egelhaaf, M., & Guo, A.-K. (1989). Processing of figure and background motion in the visual system of the fly. Biological Cybernetics, 61(5), 327–345.
165. Reichardt, W., Poggio, T., & Hausen, K. (1983). Figure-ground discrimination by relative movement in the visual system of the fly. Part II: Towards the neural circuitry. Biological Cybernetics, 46(1), 1–30.
166. Rind, F. C. (1990). Identification of directionally selective motion-detecting neurones in the locust lobula and their synaptic connections with an identified descending neurone. Journal of Experimental Biology, 149, 21–43.
167. Rind, F. C. (1990). A directionally selective motion-detecting neurone in the brain of the locust: Physiological and morphological characterization. Journal of Experimental Biology, 149, 1–19.
168. Rind, F. C. (1996). Intracellular characterization of neurons in the locust brain signaling impending collision. Journal of Neurophysiology, 75(3), 986–995.
169. Rind, F. C. (2002). Motion detectors in the locust visual system: From biology to robot sensors. Microscopy Research and Technique, 56(4), 256–269.
170. Rind, F. C., & Bramwell, D. I. (1996). Neural network based on the input organization of an identified neuron signaling impending collision. Journal of Neurophysiology, 75(3), 967–985.
171. Rind, F. C., & Leitinger, G. (2000). Immunocytochemical evidence that collision sensing neurons in the locust visual system contain acetylcholine. Journal of Comparative Neurology, 423(3), 389–401.
172. Rind, F. C., Santer, R. D., & Wright, G. A. (2008). Arousal facilitates collision avoidance mediated by a looming sensitive visual neuron in a flying locust. Journal of Neurophysiology, 100, 670–680.
173. Rind, F. C., & Simmons, P. J. (1992). Orthopteran DCMD neuron: A reevaluation of responses to moving objects. I. Selective responses to approaching objects. Journal of Neurophysiology, 68(5), 1654–1666.
174. Rind, F. C., & Simmons, P. J. (1998). Local circuit for the computation of object approach by an identified visual neuron in the locust. Journal of Comparative Neurology, 395(3), 405–415.
175. Rind, F. C., & Simmons, P. J. (1999). Seeing what is coming: Building collision-sensitive neurones. Trends in Neurosciences, 22(5), 215–220.
176. Rind, F. C., Wernitznig, S., Polt, P., Zankel, A., Gutl, D., Sztarker, J., & Leitinger, G. (2016). Two identified looming detectors in the locust: Ubiquitous lateral connections among their inputs contribute to selective responses to looming objects. Scientific Reports, 6, 35525.
177. Rister, J., Pauls, D., Schnell, B., Ting, C.-Y., Lee, C.-H., Sinakevitch, I., Morante, J., Strausfeld, N. J., Ito, K., & Heisenberg, M. (2007). Dissection of the peripheral motion channel in the visual system of Drosophila melanogaster. Neuron, 56(1), 155–170.
178. Rivera-Alvidrez, Z., & Higgins, C. M. (2005). Contrast saturation in a neuronally-based model of elementary motion detection. Neurocomputing, 65–66(Spec. Iss.), 173–179.
179. Roubieu, F., Expert, F., Sabiron, G., & Ruffier, F. (2013). A two-directional 1-gram visual motion sensor inspired by the fly's eye. IEEE Sensors Journal, 13(3), 1025–1035.
180. Roubieu, F. L., Serres, J., Franceschini, N., Ruffier, F., & Viollet, S. (2012). A fully-autonomous hovercraft inspired by bees: Wall following and speed control in straight and tapered corridors. In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 1311–1318). New York: IEEE.
181. Roubieu, F. L., Serres, J. R., Colonnier, F., Franceschini, N., Viollet, S., & Ruffier, F. (2014). A biomimetic vision-based hovercraft accounts for bees' complex behaviour in various corridors. Bioinspiration & Biomimetics, 9(3), 036003.
182. Ruffier, F., & Franceschini, N. (2004). Visually guided micro-aerial vehicle: Automatic takeoff, terrain following, landing and wind reaction. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation (ICRA) (pp. 2339–2346). New York: IEEE.
183. Ruffier, F., & Franceschini, N. (2005). Optic flow regulation: The key to aircraft automatic guidance. Robotics and Autonomous Systems, 50(4), 177–194.
184. Ruffier, F., & Franceschini, N. (2015). Optic flow regulation in unsteady environments: A tethered MAV achieves terrain following and targeted landing over a moving platform. Journal of Intelligent & Robotic Systems, 79(2), 275–293.
185. Ruffier, F., Viollet, S., Amic, S., & Franceschini, N. (2003). Bio-inspired optical flow circuits for the visual guidance of micro air vehicles. In Proceedings of the 2003 IEEE International Symposium on Circuits and Systems, Vol. 3 (pp. 846–849). New York: IEEE.
186. Ruffier, F., Viollet, S., & Franceschini, N. (2003). OSCAR and OCTAVE: Two bio-inspired visually guided aerial micro-robots. In Proceedings of the 11th International Conference on Advanced Robotics (pp. 726–732). New York: IEEE.
187. Sabiron, G., Chavent, P., Raharijaona, T., Fabiani, P., & Ruffier, F. (2013). Low-speed optic-flow sensor onboard an unmanned helicopter flying outside over fields. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (pp. 1742–1749). New York: IEEE.
188. Salt, L., Indiveri, G., & Sandamirskaya, Y. (2017). Obstacle avoidance with LGMD neuron: Towards a neuromorphic UAV implementation. In Proceedings of the 2017 IEEE International Symposium on Circuits and Systems (ISCAS) (pp. 1–4). New York: IEEE.
189. Santer, R. D., Rind, F. C., & Simmons, P. J. (2012). Predator versus prey: Locust looming-detector neuron and behavioural responses to stimuli representing attacking bird predators. PLoS One, 7(11), 1–11.
190. Sarkar, M., Bello, D. S. S., van Hoof, C., & Theuwissen, A. J. (2013). Biologically inspired CMOS image sensor for fast motion and polarization detection. IEEE Sensors Journal, 13(3), 1065–1073.
191. Schnell, B., Raghu, S. V., Nern, A., & Borst, A. (2012). Columnar cells necessary for motion responses of wide-field visual interneurons in Drosophila. Journal of Comparative Physiology, 198, 389–395.
192. Serres, J. R., Masson, G. P., Ruffier, F., & Franceschini, N. (2008). A bee in the corridor: Centering and wall-following. Naturwissenschaften, 95(12), 1181–1187.
193. Serres, J. R., & Ruffier, F. (2017). Optic flow-based collision-free strategies: From insects to robots. Arthropod Structure and Development, 46(5), 703–717.
194. Shigang, Y., & Rind, F. C. (2005). A collision detection system for a mobile robot inspired by the locust visual system. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA) (pp. 3832–3837). New York: IEEE.
195. Shinomiya, K., Karuppudurai, T., Lin, T. Y., Lu, Z., Lee, C. H., & Meinertzhagen, I. A. (2014). Candidate neural substrates for off-edge motion detection in Drosophila. Current Biology, 24(10), 1062–1070.
196. Shinomiya, K., Takemura, S.-Y., Rivlin, P. K., Plaza, S. M., Scheffer, L., & Meinertzhagen, I. A. (2015). A common evolutionary origin for the on- and off-edge motion detection pathways of the Drosophila visual system. Frontiers in Neural Circuits, 9, 33.
197. Silva, A., & Santos, C. (2013). Computational model of the LGMD neuron for automatic collision detection. In Proceedings of the 2013 IEEE 3rd Portuguese Meeting in Bioengineering (pp. 1–4). New York: IEEE.
198. Silva, A., & Santos, C. P. (2013). Modeling disinhibition within a layered structure of the LGMD neuron. In Proceedings of the 2013 IEEE International Joint Conference on Neural Networks (IJCNN) (pp. 1–7). New York: IEEE.
199. Simmons, P. J., & Rind, F. C. (1997). Responses to object approach by a wide field visual neurone, the LGMD2 of the locust: Characterization and image cues. Journal of Comparative Physiology A: Sensory, Neural, and Behavioral Physiology, 180(3), 203–214.
200. Simmons, P. J., Rind, F. C., & Santer, R. D. (2010). Escapes with and without preparation: The neuroethology of visual startle in locusts. Journal of Insect Physiology, 56(8), 876–883.
201. Simmons, P. J., Sztarker, J., & Rind, F. C. (2013). Looming detection by identified visual interneurons during larval development of the locust Locusta migratoria. Journal of Experimental Biology, 216(Pt 12), 2266–2275.
202. Snippe, H. P., & Koenderink, J. J. (1994). Extraction of optical velocity by use of multi-input Reichardt detectors. Journal of the Optical Society of America, 11(4), 1222–1236.
203. Spalthoff, C., Egelhaaf, M., Tinnefeld, P., & Kurtz, R. (2010). Localized direction selective responses in the dendrites of visual interneurons of the fly. BMC Biology, 8(1), 36.
204. Srinivasan, M., Chahl, J., Weber, K., Venkatesh, S., Nagle, M., & Zhang, S. (1999). Robot navigation inspired by principles of insect vision. Robotics and Autonomous Systems, 26(2–3), 203–216.
205. Srinivasan, M. V. (2011). Honeybees as a model for the study of visually guided flight, navigation, and biologically inspired robotics. Physiological Reviews, 91, 413–460.
206. Srinivasan, M. V. (2011). Visual control of navigation in insects and its relevance for robotics. Current Opinion in Neurobiology, 21(4), 535–543.
207. Stafford, R., Santer, R. D., & Rind, F. C. (2007). A bio-inspired visual collision detection mechanism for cars: Combining insect inspired neurons to create a robust system. Biosystems, 87(2–3), 164–171.
208. Strother, J. A., Nern, A., & Reiser, M. B. (2014). Direct observation of ON and OFF pathways in the Drosophila visual system. Current Biology, 24(9), 976–983.
209. Strother, J. A., Wu, S. T., Wong, A. M., Nern, A., Rogers, E. M., Le, J. Q., Rubin, G. M., & Reiser, M. B. (2017). The emergence of directional selectivity in the visual motion pathway of Drosophila. Neuron, 94(1), 168–182.
210. Sztarker, J., & Rind, F. C. (2014). A look into the cockpit of the developing locust: Looming detectors and predator avoidance. Developmental Neurobiology, 74(11), 1078–1095.
211. Sztarker, J., & Tomsic, D. (2008). Neuronal correlates of the visually elicited escape response of the crab Chasmagnathus upon seasonal variations, stimuli changes and perceptual alterations. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology, 194(6), 587–596.
212. Takemura, S.-Y., Bharioke, A., Lu, Z., Nern, A., Vitaladevuni, S., Rivlin, P. K., Katz, W. T., Olbris, D. J., Plaza, S. M., Winston, P., Zhao, T., Horne, J. A., Fetter, R. D., Takemura, S., Blazek, K., Chang, L.-A., Ogundeyi, O., Saunders, M. A., Shapiro, V., Sigmund, C., Rubin, G. M., Scheffer, L. K., Meinertzhagen, I. A., & Chklovskii, D. B. (2013). A visual motion detection circuit suggested by Drosophila connectomics. Nature, 500(7461), 175–181.
213. Tammero, L. F. (2004). Spatial organization of visuomotor reflexes in Drosophila. Journal of Experimental Biology, 207(1), 113–122.
214. Tammero, L. F., & Dickinson, M. H. (2002). Collision-avoidance and landing responses are mediated by separate pathways in the fruit fly, Drosophila melanogaster. Journal of Experimental Biology, 205, 2785–2798.
215. Vanhoutte, E., Mafrica, S., Ruffier, F., Bootsma, R. J., & Serres, J. (2017). Time-of-travel methods for measuring optical flow on board a micro flying robot. Sensors, 17(3), 571.
216. Viollet, S., Ray, T., Ruffier, F., Menouni, M., Kerhuel, F. A. L., & Franceschini, N. (2010). Characteristics of three miniature bio-inspired optic flow sensors in natural environments. In Proceedings of the 2010 IEEE International Conference on Sensor Technologies and Applications (pp. 51–55). New York: IEEE.
217. Vogt, N., & Desplan, C. (2007). The first steps in Drosophila motion detection. Neuron, 56(1), 5–7.
218. Wang, H., Peng, J., Baxter, P., Zhang, C., Wang, Z., & Yue, S. (2018). A model for detection of angular velocity of image motion based on the temporal tuning of the Drosophila. In V. Kůrková, Y. Manolopoulos, B. Hammer, L. Iliadis, & I. Maglogiannis (Eds.), Artificial neural networks and machine learning—ICANN 2018 (pp. 37–46). New York: Springer International Publishing.
219. Wang, H., Peng, J., & Yue, S. (2016). Bio-inspired small target motion detector with a new lateral inhibition mechanism. In Proceedings of the 2016 IEEE International Joint Conference on Neural Networks (IJCNN) (pp. 4751–4758). New York: IEEE.
220. Wang, H., Peng, J., & Yue, S. (2017). An improved LPTC neural model for background motion direction estimation. In Proceedings of the 7th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (pp. 47–52). New York: IEEE.
221. Wang, H., Peng, J., & Yue, S. (2018). A feedback neural network for small target motion detection in cluttered backgrounds. In V. Kůrková, Y. Manolopoulos, B. Hammer, L. Iliadis, & I. Maglogiannis (Eds.), Artificial neural networks and machine learning—ICANN 2018 (pp. 728–737). New York: Springer International Publishing.
222. Wang, H., Peng, J., & Yue, S. (in press). A directionally selective small target motion detecting visual neural network in cluttered backgrounds. IEEE Transactions on Cybernetics.
223. Warzecha, A.-K., Egelhaaf, M., & Borst, A. (1993). Neural circuit tuning fly visual interneurons to motion of small objects. I. Dissection of the circuit by pharmacological and photoinactivation techniques. Journal of Neurophysiology, 69(2), 329–339.
224. Warzecha, A.-K., Rosner, R., & Grewe, J. (2013). Impact and sources of neuronal variability in the fly's motion vision pathway. Journal of Physiology Paris, 107(1–2), 26–40.
225. Webb, B. (2000). What does robotics offer animal behaviour? Animal Behaviour, 60(5), 545–558.
226. Webb, B. (2001). Can robots make good models of biological behaviour? Behavioral and Brain Sciences, 24(6), 1033–1050.
227. Wernitznig, S., Rind, F. C., Polt, P., Zankel, A., Pritz, E., Kolb, D., Bock, E., & Leitinger, G. (2015). Synaptic connections of first-stage visual neurons in the locust Schistocerca gregaria extend evolution of tetrad synapses back 200 million years. Journal of Comparative Neurology, 523(2), 298–312.
228. Wiederman, S. D., & O'Carroll, D. C. (2013). Biologically inspired feature detection using cascaded correlations of OFF and ON channels. Journal of Artificial Intelligence and Soft Computing Research, 3(1), 5–14.
229. Wiederman, S. D., Shoemaker, P. A., & O'Carroll, D. C. (2008). A model for the detection of moving targets in visual clutter inspired by insect physiology. PLoS One, 3(7), 1–11.
230. Wiederman, S. D., Shoemaker, P. A., & O'Carroll, D. C. (2013). Correlation between OFF and ON channels underlies dark target selectivity in an insect visual system. Journal of Neuroscience, 33(32), 13225–13232.
231. Yakubowski, J. M., McMillan, G. A., & Gray, J. R. (2016). Background visual motion affects responses of an insect motion-sensitive neuron to objects deviating from a collision course. Physiological Reports, 4(10), e12801.
232. Yamawaki, Y. (2011). Defence behaviours of the praying mantis Tenodera aridifolia in response to looming objects. Journal of Insect Physiology, 57(11), 1510–1517.
233. Yang, T., Wu, S., & Guo, D. (2012). Dynamic range enhance of visual sensor circuits and application for multi-object motion detection. In Proceedings of the 2012 IEEE International Conference on Intelligent Computation Technology and Automation (pp. 151–154). New York: IEEE.
234. Yue, S., & Rind, F. C. (2006). Visual motion pattern extraction and fusion for collision detection in complex dynamic scenes. Computer Vision and Image Understanding, 104(1), 48–60.
235. Yue, S., & Rind, F. C. (2006). Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement. IEEE Transactions on Neural Networks, 17(3), 705–716.
236. Yue, S., & Rind, F. C. (2007). A synthetic vision system using directionally selective motion detectors to recognize collision. Artificial Life, 13(2), 93–122.
237. Yue, S., & Rind, F. C. (2009). Near range path navigation using LGMD visual neural networks. In Proceedings of the 2009 2nd IEEE International Conference on Computer Science and Information Technology (pp. 105–109). New York: IEEE.
238. Yue, S., & Rind, F. C. (2013). Postsynaptic organizations of directional selective visual neural networks for collision detection. Neurocomputing, 103, 50–62.
239. Yue, S., & Rind, F. C. (2013). Redundant neural vision systems competing for collision recognition roles. IEEE Transactions on Autonomous Mental Development, 5(2), 173–186.
240. Yue, S., Rind, F. C., Keil, M. S., Cuadri, J., & Stafford, R. (2006). A bio-inspired visual collision detection mechanism for cars: Optimisation of a model of a locust neuron to a novel environment. Neurocomputing, 69(13–15), 1591–1598.
241. Yue, S., Santer, R. D., Yamawaki, Y., & Rind, F. C. (2010). Reactive direction control for a mobile robot: A locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated. Autonomous Robots, 28(2), 151–167.
242. Zanker, J. M., & Braddick, O. J. (1999). How does noise influence the estimation of speed? Vision Research, 39(14), 2411–2420.
243. Zanker, J. M., Srinivasan, M. V., & Egelhaaf, M. (1999). Speed tuning in elementary motion detectors of the correlation type. Biological Cybernetics, 80(2), 109–116.
244. Zanker, J. M., & Zeil, J. (2005). Movement-induced motion signal distributions in outdoor scenes. Network: Computation in Neural Systems, 16(4), 357–376.
245. Zeil, J. (2012). Visual homing: An insect perspective. Current Opinion in Neurobiology, 22(2), 285–293.
246. Zhang, G., Zhang, C., & Yue, S. (2016). LGMD and DSNs neural networks integration for collision predication. In Proceedings of the 2016 IEEE International Joint Conference on Neural Networks (IJCNN) (pp. 1174–1179). New York: IEEE.
247. Zhang, T., Wu, H., Borst, A., Kuhnlenz, K., & Buss, M. (2008). An FPGA implementation of insect-inspired motion detector for high-speed vision systems. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation (ICRA) (pp. 335–340). New York: IEEE.
248. Zhang, Z., Yue, S., & Zhang, G. (2015). Fly visual system inspired artificial neural network for collision detection. Neurocomputing, 153, 221–234.
249. Zhao, J., Hu, C., Zhang, C., Wang, Z., & Yue, S. (2018). A bio-inspired collision detector for small quadcopter. In Proceedings of the 2018 IEEE International Joint Conference on Neural Networks (IJCNN) (pp. 1–7). New York: IEEE.
250. Zheng, L., de Polavieja, G. G., Wolfram, V., Asyali, M. H., Hardie, R. C., & Juusola, M. (2006). Feedback network controls photoreceptor output at the layer of first visual synapses in Drosophila. Journal of General Physiology, 127(5), 495–510.