Florian Raudies
1–3 of 3 results
Journal Articles
Neural Computation (2014) 26 (11): 2652–2668.
Published: 01 November 2014
Abstract
Visual motion direction ambiguities due to edge-aperture interaction might be resolved by speed priors, but scant empirical data support this hypothesis. We measured optic flow and gaze positions of walking mothers and the infants they carried. Empirically derived motion priors for infants are vertically elongated and shifted upward relative to mothers. Skewed normal distributions fitted to estimated retinal speeds peak at values above 20°/sec.
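The peak of a fitted skewed normal is the distribution's mode. A minimal sketch of that fitting step with SciPy's skewnorm, using synthetic speed samples in place of the paper's measured retinal speeds (all numbers here are illustrative assumptions, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-in for the estimated retinal speeds (deg/sec);
# the shape, location, and scale below are illustrative only.
speeds = stats.skewnorm.rvs(a=4.0, loc=10.0, scale=25.0,
                            size=5000, random_state=rng)
speeds = speeds[speeds >= 0]  # retinal speeds are nonnegative

# Fit a skewed normal and locate its peak (mode) on a grid.
a, loc, scale = stats.skewnorm.fit(speeds)
grid = np.linspace(0.0, speeds.max(), 2000)
mode = grid[np.argmax(stats.skewnorm.pdf(grid, a, loc, scale))]
print(f"fitted peak ≈ {mode:.1f} deg/sec")
```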
Journal Articles
Neural Computation (2013) 25 (9): 2421–2449.
Published: 01 September 2013
Abstract
Visual navigation requires the estimation of self-motion as well as the segmentation of objects from the background. We suggest a definition of local velocity gradients to compute types of self-motion, segment objects, and compute local properties of optical flow fields, such as divergence, curl, and shear. Such velocity gradients are computed as velocity differences measured locally tangent and normal to the direction of flow. These differences are then rotated according to the local direction of flow to achieve independence of that direction. We propose a bio-inspired model for the computation of these velocity gradients for video sequences. Simulation results show that local gradients encode ordinal surface depth, assuming self-motion in a rigid scene or object motions in a nonrigid scene. For translational self-motion, velocity gradients can be used to distinguish between static and moving objects. The information about ordinal surface depth and self-motion can aid steering control for visual navigation.
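The abstract describes the computation concisely: take velocity differences along the local tangent and normal of the flow, then rotate them into flow-aligned coordinates. A minimal NumPy sketch of that idea on a dense flow field (u, v); the nearest-neighbor sampling and the specific combinations into divergence-, curl-, and shear-like terms are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

def local_velocity_gradients(u, v, step=1.0):
    """Flow-aligned velocity differences for a dense flow field (u, v)."""
    h, w = u.shape
    theta = np.arctan2(v, u)              # local flow direction
    ct, st = np.cos(theta), np.sin(theta)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)

    def sample(f, dx, dy):
        # nearest-neighbor lookup a small step away, clamped at borders
        xi = np.clip(np.rint(xs + dx).astype(int), 0, w - 1)
        yi = np.clip(np.rint(ys + dy).astype(int), 0, h - 1)
        return f[yi, xi]

    # velocity differences along the tangent (cos θ, sin θ)
    # and along the normal (-sin θ, cos θ)
    du_t = sample(u, step * ct, step * st) - u
    dv_t = sample(v, step * ct, step * st) - v
    du_n = sample(u, -step * st, step * ct) - u
    dv_n = sample(v, -step * st, step * ct) - v

    # rotate each difference vector by -θ so the result is
    # independent of the local flow direction
    t_par, t_perp = ct * du_t + st * dv_t, -st * du_t + ct * dv_t
    n_par, n_perp = ct * du_n + st * dv_n, -st * du_n + ct * dv_n

    div = t_par + n_perp                       # divergence-like term
    curl = t_perp - n_par                      # curl-like term
    shear = (t_par - n_perp, t_perp + n_par)   # shear-like terms
    return div, curl, shear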
Journal Articles
Neural Computation (2011) 23 (11): 2868–2914.
Published: 01 November 2011
Abstract
Motion transparency occurs when multiple coherent motions are perceived at one spatial location. Imagine, for instance, looking out of the window of a bus on a bright day: the world outside the window passes by while the movements of passengers inside the bus are reflected in the window. The overlay of both motions at the window leads to motion transparency, which is challenging to process. Noisy and ambiguous motion signals can be reduced using a competition mechanism among all encoded motions at one spatial location. Such a competition, however, leads to the suppression of multiple peak responses that encode different motions, as only the strongest response tends to survive. As a solution, we suggest a local center-surround competition for population-encoded motion directions and speeds. Similar motions support one another, while dissimilar motions are separated and represented as multiple activations, as occurs in the case of motion transparency. Psychophysical findings, such as motion attraction and repulsion in motion transparency displays, can be explained by this local competition. Besides this local competition mechanism, we show that feedback signals improve the processing of motion transparency. A discrimination task for transparent versus opaque motion is simulated, where motion transparency is generated by superimposing large-field motion patterns of either varying size or varying coherence of motion. The model’s perceptual thresholds with and without feedback are calculated. We demonstrate that initially weak peak responses can be enhanced and stabilized through modulatory feedback signals from higher stages of processing.
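A minimal sketch of such a center-surround competition over a population code of motion directions: a narrow excitatory kernel supports similar directions while a broader inhibitory surround separates dissimilar ones, so two superimposed motions can survive as two peaks. The kernel widths, update rule, and normalization are illustrative assumptions, not the paper's model (which also covers speed and feedback):

```python
import numpy as np

def center_surround_competition(act, bins, sigma_e=15.0, sigma_i=60.0,
                                w_i=0.5, gain=0.2, n_iter=20):
    # circular distance between direction bins (degrees)
    diff = np.abs(bins[:, None] - bins[None, :])
    dist = np.minimum(diff, 360.0 - diff)
    # difference-of-Gaussians kernel: narrow excitation, broad inhibition
    kernel = (np.exp(-dist**2 / (2 * sigma_e**2))
              - w_i * np.exp(-dist**2 / (2 * sigma_i**2)))
    a = act.copy()
    for _ in range(n_iter):
        a = np.maximum(0.0, a + gain * (kernel @ a))  # rectified update
        a /= a.max() + 1e-12                          # keep activity bounded
    return a

def bump(bins, center, width=20.0):
    d = np.abs(bins - center)
    d = np.minimum(d, 360.0 - d)
    return np.exp(-d**2 / (2 * width**2))

bins = np.arange(0.0, 360.0, 5.0)
# two superimposed motion directions plus a noise floor:
# after competition, two separate peaks should remain
out = center_surround_competition(bump(bins, 90) + bump(bins, 210) + 0.1,
                                  bins)
```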