Wataru Noguchi
Proceedings Papers
ALIFE 2019: The 2019 Conference on Artificial Life, 531–532 (July 29–August 2, 2019). doi: 10.1162/isal_a_00216
Proceedings Papers
ALIFE 2018: The 2018 Conference on Artificial Life, 147–154 (July 23–27, 2018). doi: 10.1162/isal_a_00035
Abstract
Animals develop spatial recognition through integrated visuomotor experiences; in nature, their behavior changes over the course of development as spatial recognition emerges. Although the developmental process of spatial recognition has been studied previously, it remains unclear how behavior during development affects that process. To investigate the effect of movement patterns (behavior) on spatial recognition, we simulated its development under controlled behaviors. Hierarchical recurrent neural networks (HRNNs) with multiple time scales were trained to predict the visuomotor sequences of a simulated mobile agent, and the spatial recognition developed by the HRNNs was compared across different degrees of randomness in the agent’s movement. The experimental results show that spatial recognition failed to develop when the movement randomness was too low or too high, and developed only at intermediate levels of randomness.
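
The abstract describes the architecture only at a high level; a minimal PyTorch sketch of such a multiple-timescale hierarchical RNN, trained for next-step visuomotor prediction, might look as follows. The layer sizes, time constants, leaky-integrator update rule, and 18-dimensional visuomotor input (16 visual features plus 2 motor values) are illustrative assumptions, not details taken from the paper.

    # A minimal sketch of a hierarchical multiple-timescale RNN (PyTorch).
    # Layer sizes, time constants, and input dimensions are illustrative
    # assumptions, not values from the paper.
    import torch
    import torch.nn as nn

    class MultipleTimescaleRNN(nn.Module):
        """Stacked leaky-integrator RNN layers with fast-to-slow time constants.

        Each layer's internal state u follows
            u_t = (1 - 1/tau) * u_{t-1} + (1/tau) * (W_in x_t + W_rec tanh(u_{t-1})),
        so a larger time constant tau gives a slower-changing state.
        """
        def __init__(self, in_dim=18, sizes=(64, 32, 16), taus=(2.0, 8.0, 32.0)):
            super().__init__()
            self.taus = taus
            dims = [in_dim] + list(sizes)
            self.in_w = nn.ModuleList(nn.Linear(dims[i], dims[i + 1])
                                      for i in range(len(sizes)))
            self.rec_w = nn.ModuleList(nn.Linear(s, s, bias=False) for s in sizes)

        def init_states(self, batch):
            return [torch.zeros(batch, w.out_features) for w in self.rec_w]

        def forward(self, x, states):
            hs, new_states, h = [], [], x
            for tau, w_in, w_rec, u in zip(self.taus, self.in_w, self.rec_w, states):
                u = (1 - 1 / tau) * u + (1 / tau) * (w_in(h) + w_rec(torch.tanh(u)))
                h = torch.tanh(u)           # layer output, fed to the next layer
                hs.append(h)
                new_states.append(u)
            return hs, new_states

    # Next-step visuomotor prediction: the fast layer's output predicts the
    # next input (16 visual features + 2 motor values, dummy data here).
    rnn = MultipleTimescaleRNN()
    head = nn.Linear(64, 18)
    opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)

    seq = torch.rand(50, 8, 18)             # (time, batch, visuomotor dim)
    states = rnn.init_states(batch=8)
    loss = torch.tensor(0.0)
    for t in range(seq.size(0) - 1):
        hs, states = rnn(seq[t], states)
        loss = loss + nn.functional.mse_loss(head(hs[0]), seq[t + 1])
    opt.zero_grad()
    loss.backward()
    opt.step()

Reading the prediction from the fast layer while higher layers integrate over longer horizons reflects the multiple-timescale idea: slowly changing units are free to encode slowly varying quantities such as the agent's position, and the randomness of the training sequences (the paper's controlled behaviors) can be varied without changing this training setup.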
Proceedings Papers
ECAL 2017: The Fourteenth European Conference on Artificial Life, 324–331 (September 4–8, 2017). doi: 10.1162/isal_a_055
Abstract
Spatial recognition is the ability to recognize the environment and generate goal-directed behaviors such as navigation. Animals develop spatial recognition by integrating their subjective visual and motion experiences. We propose a model consisting of hierarchical recurrent neural networks with multiple time scales (fast, medium, and slow) that shows how spatial recognition can be obtained from visual and motion experiences alone. To handle high-dimensional visual sequences, a convolutional neural network (CNN) was used to recognize and generate vision. Applied to a simulated mobile agent, the model was trained to predict future visual and motion experiences and to generate goal-directed sequences toward destinations indicated by photographs. Through this training, the model achieved spatial recognition, predicted future experiences, and generated goal-directed sequences by integrating subjective visual and motion experiences. An analysis of the internal states showed that the states of the slow recurrent networks self-organized according to the agent’s position. Furthermore, this representation was obtained efficiently, as it was independent of the prediction and generation processes.
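
As with the entry above, no implementation details are given beyond the abstract. The following self-contained PyTorch sketch illustrates the kind of convolutional encoder/decoder that could recognize and generate vision, compressing camera images into low-dimensional features for the recurrent layers; the 3x32x32 image size, channel counts, and 16-dimensional feature space are illustrative assumptions.

    # A minimal sketch of a convolutional vision encoder/decoder (PyTorch).
    # Image size (3x32x32), channel counts, and the 16-dimensional feature
    # space are illustrative assumptions, not values from the paper.
    import torch
    import torch.nn as nn

    class VisionAutoencoder(nn.Module):
        def __init__(self, feat_dim=16):
            super().__init__()
            self.encoder = nn.Sequential(   # 3x32x32 image -> feat_dim vector
                nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16x16x16
                nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32x8x8
                nn.Flatten(),
                nn.Linear(32 * 8 * 8, feat_dim))
            self.decoder = nn.Sequential(   # feat_dim vector -> 3x32x32 image
                nn.Linear(feat_dim, 32 * 8 * 8), nn.ReLU(),
                nn.Unflatten(1, (32, 8, 8)),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

        def forward(self, img):
            feat = self.encoder(img)
            return self.decoder(feat), feat

    # Reconstruction training on dummy frames; the learned features would be
    # concatenated with motion values as input to the recurrent network.
    ae = VisionAutoencoder()
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    frames = torch.rand(8, 3, 32, 32)       # dummy batch of camera images
    recon, feat = ae(frames)
    loss = nn.functional.mse_loss(recon, frames)
    opt.zero_grad()
    loss.backward()
    opt.step()

Under this reading, goal-directed generation as described in the abstract would encode a destination photograph into the same feature space and generate motor outputs that drive the predicted visual features toward that target.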