Bruno Lara (1–2 of 2 results)
Proceedings Papers
ALIFE 2019: The 2019 Conference on Artificial Life, pp. 567–573 (July 29–August 2, 2019). DOI: 10.1162/isal_a_00222
Abstract
Echolocation is the process by which an animal produces a sound and recognises characteristics of its surroundings, for instance the location of surfaces, objects, or prey, by listening to the echoes reflected by the environment. Studies on robot echolocation can be found in the literature. Such works adopt active sensors to emit sounds, and the echoes reflected from the environment are then analysed to build up a representation of the robot's surroundings. In this work, we address the use of robot ego-noise for echolocation. By ego-noise, we mean the auditory noise (sound) that the robot itself produces while moving, due to friction in its gears and actuators. Ego-noise is a result not only of the morphological properties of the robot, but also of its interaction with the environment. We adopt a developmental approach, allowing a wheeled robot to learn how to anticipate characteristics of the environment before actually perceiving them. We programmed the robot to explore the environment in order to acquire the sensorimotor information needed to learn the mapping between ego-noise, motor, and proximity data. Forward models trained with these data are used to anticipate proximity information and thus to classify whether a specific ego-noise results from the robot being close to or distant from a wall. This experiment shows another promising application of predictive processes, namely echolocation in mobile robots.
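To make the forward-model idea concrete, here is a minimal sketch, not the authors' code: the feature dimensions, the MLP architecture, and the decision threshold are all illustrative assumptions. It trains a regressor that maps ego-noise features and motor commands to a predicted proximity value, then thresholds that prediction to label a sample as near or far from a wall.

```python
# Sketch of a forward model for ego-noise echolocation (assumed setup,
# not the paper's implementation). Inputs: 13 ego-noise features plus
# 2 wheel-motor commands; output: predicted proximity to a wall.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical sensorimotor data gathered during the exploration phase.
n_samples = 1000
ego_noise = rng.normal(size=(n_samples, 13))          # e.g. MFCC-like features
motor = rng.uniform(-1.0, 1.0, size=(n_samples, 2))   # wheel commands
X = np.hstack([ego_noise, motor])

# Proximity target (metres to the nearest wall), synthesized here so it
# correlates with the first ego-noise feature; in the real experiment this
# would come from the robot's proximity sensors.
proximity = 0.5 + 0.4 * ego_noise[:, 0] + 0.05 * rng.normal(size=n_samples)

# Forward model: sensorimotor state -> anticipated proximity.
forward_model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=0)
forward_model.fit(X, proximity)

# Classify a new ego-noise/motor sample by thresholding the prediction.
NEAR_WALL_THRESHOLD = 0.5  # metres; an assumed decision boundary
x_new = np.hstack([rng.normal(size=13), rng.uniform(-1, 1, size=2)])
predicted = forward_model.predict(x_new.reshape(1, -1))[0]
label = "near wall" if predicted < NEAR_WALL_THRESHOLD else "far from wall"
print(f"predicted proximity: {predicted:.2f} m -> {label}")
```

In the paper's setting, the synthetic arrays would be replaced by logged microphone features, wheel commands, and proximity-sensor readings collected while the robot explores.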
Proceedings Papers
ALIFE 2016: The Fifteenth International Conference on the Synthesis and Simulation of Living Systems, pp. 390–397 (July 4–6, 2016). DOI: 10.1162/978-0-262-33936-0-ch065
Abstract
We present an implementation of a biologically inspired model for learning multimodal body representations in artificial agents, in the context of learning and predicting robot ego-noise. We demonstrate the predictive capabilities of the proposed model in two experiments: a simple ego-noise classification task, where we also show the capability of the model to produce predictions in the absence of input modalities; and an ego-noise suppression experiment, where we show the effects on ego-noise suppression performance of coherent and incoherent proprioceptive and motor information passed as inputs to the predictive process implemented by a forward model. In line with what has been proposed by several behavioural and neuroscience studies, our experiments show that ego-noise attenuation is more pronounced when the robot is the owner of the action. When this is not the case, sensory attenuation is worse, as the incongruence of the proprioceptive and motor information with the perceived ego-noise generates larger prediction errors, which may constitute an element of surprise for the agent and allow it to distinguish between self-generated actions and those generated by other individuals. We argue that these phenomena can serve as cues for a sense of agency in artificial agents.
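As a rough illustration of the suppression-and-agency mechanism described in the abstract, the following sketch is an assumption-laden stand-in, with a simple linear map in place of whatever forward model the authors trained. It predicts an ego-noise spectrum from motor and proprioceptive inputs, subtracts the prediction from the perceived spectrum, and uses the size of the residual prediction error to decide whether the action was self-generated.

```python
# Sketch of forward-model ego-noise suppression and an agency cue
# (illustrative assumptions throughout, not the authors' implementation).
import numpy as np

rng = np.random.default_rng(1)
N_BINS = 64  # size of the hypothetical ego-noise magnitude spectrum


def forward_model(motor, proprio, W):
    """Linear stand-in for a trained forward model mapping motor and
    proprioceptive input to a predicted ego-noise spectrum."""
    x = np.concatenate([motor, proprio])
    return np.maximum(W @ x, 0.0)  # magnitude spectra are non-negative


W = rng.normal(scale=0.5, size=(N_BINS, 4))  # pretend-trained weights

# The ego-noise the microphone actually picks up, produced by the robot's
# own movement (motor command and proprioception agree), plus sensor noise.
motor_actual = np.array([0.6, 0.6])
perceived = forward_model(motor_actual, motor_actual, W) \
    + 0.02 * rng.random(N_BINS)

cases = {
    "self  (congruent inputs)":   (motor_actual, motor_actual),
    "other (incongruent inputs)": (np.array([-0.4, 0.9]),
                                   np.array([0.1, -0.7])),
}

AGENCY_THRESHOLD = 0.05  # assumed mean-squared-error decision boundary

for name, (motor, proprio) in cases.items():
    predicted = forward_model(motor, proprio, W)
    suppressed = np.maximum(perceived - predicted, 0.0)  # spectral subtraction
    error = np.mean((perceived - predicted) ** 2)        # prediction error
    agency = "self-generated" if error < AGENCY_THRESHOLD \
        else "externally generated"
    print(f"{name}: residual energy={suppressed.sum():.3f}, "
          f"prediction error={error:.4f} -> {agency}")
```

With congruent inputs the prediction cancels most of the perceived ego-noise and the error stays small; incongruent inputs leave a large residual, which is the surprise signal the abstract interprets as a cue for distinguishing self-generated from externally generated actions.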