Hiroyuki Iizuka
Proceedings Papers
ALIFE 2019: The 2019 Conference on Artificial Life, pp. 50–51 (July 29–August 2, 2019). doi: 10.1162/isal_a_00139
Proceedings Papers
ALIFE 2019: The 2019 Conference on Artificial Life, pp. 531–532 (July 29–August 2, 2019). doi: 10.1162/isal_a_00216
Proceedings Papers
ALIFE 2018: The 2018 Conference on Artificial Life, pp. 147–154 (July 23–27, 2018). doi: 10.1162/isal_a_00035
Abstract
Animals develop spatial recognition through integrated visuomotor experiences. In nature, animals change their behavior during development, and this behavior shapes the spatial recognition they acquire. Although the developmental process of spatial recognition has been studied previously, it remains unclear how behavior during development affects it. To investigate the effect of movement patterns (behavior) on spatial recognition, we simulated the development of spatial recognition under controlled behaviors. Hierarchical recurrent neural networks (HRNNs) with multiple time scales were trained to predict the visuomotor sequences of a simulated mobile agent, and the spatial recognition developed by the HRNNs was compared across different degrees of randomness in the agent's movement. The experimental results show that spatial recognition did not develop for movements whose randomness was too small or too large, but only for movements with intermediate randomness.
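The multiple-timescale idea behind HRNNs can be illustrated with a minimal sketch: stacked leaky-integrator layers whose time constants set how quickly each layer's state changes. The layer sizes, time constants, couplings, and input dimensionality below are illustrative assumptions, not the paper's actual parameters, and training is omitted.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_f, n_m, n_s = 8, 40, 20, 10             # visuomotor input and layer sizes (hypothetical)
tau = {"fast": 2.0, "med": 10.0, "slow": 70.0}  # larger tau -> slower dynamics

W_in = rng.normal(0, 0.1, (n_f, n_in))
W_ff = rng.normal(0, 0.1, (n_f, n_f))
W_fm = rng.normal(0, 0.1, (n_f, n_m))   # medium -> fast
W_mf = rng.normal(0, 0.1, (n_m, n_f))   # fast -> medium
W_mm = rng.normal(0, 0.1, (n_m, n_m))
W_ms = rng.normal(0, 0.1, (n_m, n_s))   # slow -> medium
W_sm = rng.normal(0, 0.1, (n_s, n_m))
W_ss = rng.normal(0, 0.1, (n_s, n_s))
W_out = rng.normal(0, 0.1, (n_in, n_f)) # predict the next visuomotor frame

def step(h_f, h_m, h_s, x):
    # Leaky-integrator update: each layer mixes its previous state with
    # new input at a rate set by its time constant.
    u_f = W_in @ x + W_ff @ np.tanh(h_f) + W_fm @ np.tanh(h_m)
    u_m = W_mf @ np.tanh(h_f) + W_mm @ np.tanh(h_m) + W_ms @ np.tanh(h_s)
    u_s = W_sm @ np.tanh(h_m) + W_ss @ np.tanh(h_s)
    h_f = (1 - 1 / tau["fast"]) * h_f + u_f / tau["fast"]
    h_m = (1 - 1 / tau["med"]) * h_m + u_m / tau["med"]
    h_s = (1 - 1 / tau["slow"]) * h_s + u_s / tau["slow"]
    return h_f, h_m, h_s, W_out @ np.tanh(h_f)  # prediction of the next input

h = [np.zeros(n_f), np.zeros(n_m), np.zeros(n_s)]
x = rng.normal(size=n_in)                        # stand-in visuomotor frame
for _ in range(100):
    *h, x_pred = step(*h, x)
    x = x_pred                                   # closed-loop prediction

The slow layer's large time constant is what lets it integrate over long stretches of experience, which is where position-like representations would be expected to self-organize.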
Proceedings Papers
ALIFE 2018: The 2018 Conference on Artificial Life, pp. 659–664 (July 23–27, 2018). doi: 10.1162/isal_a_00120
Abstract
Bird song is a phenomenon whose complexity increases through evolution. Complex songs are known to confer a survival advantage, and birds are known to learn songs from one another. From these facts, we hypothesize that adversarial imitation learning plays a major role in the evolution of complex songs. A previous study demonstrated the growth in complexity of a bird song time series by modeling adversarial imitation learning with a logistic map. Real bird songs, however, show much greater variety and temporal dependencies, such as grammar. In this study, we therefore model adversarial imitation learning with an artificial neural network, which can approximate any function, trained by gradient descent. With these changes, our results show that the generated bird songs still evolve toward chaos through adversarial imitation learning, as observed in the previous model.
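The adversarial dynamic described here can be sketched minimally: one network (the learner) descends the imitation error while the other (the tutor) ascends it. The tiny architecture, learning rates, and song encoding below are assumptions for illustration, not the paper's model.

import torch
import torch.nn as nn

torch.manual_seed(0)

def make_bird():
    # A tiny map from the current note to the next note (illustrative choice).
    return nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1), nn.Tanh())

tutor, learner = make_bird(), make_bird()
opt_t = torch.optim.SGD(tutor.parameters(), lr=0.05)
opt_l = torch.optim.SGD(learner.parameters(), lr=0.05)

for _ in range(500):
    x = torch.rand(64, 1) * 2 - 1                 # shared song states in [-1, 1]
    # Learner imitates: minimize squared distance to the tutor's next note.
    loss_l = ((learner(x) - tutor(x).detach()) ** 2).mean()
    opt_l.zero_grad(); loss_l.backward(); opt_l.step()
    # Tutor evades: maximize the learner's imitation error (gradient ascent
    # via a negated loss), pushing its song map toward harder-to-copy dynamics.
    loss_t = -((learner(x).detach() - tutor(x)) ** 2).mean()
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()

# Generate a song by iterating the tutor's learned map from a seed note.
x = torch.tensor([[0.1]])
song = [x.item()]
with torch.no_grad():
    for _ in range(50):
        x = tutor(x)
        song.append(x.item())

Under this pressure the evading map tends to become increasingly irregular, which is the mechanism by which the arms race can drive the generated series toward chaos.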
Proceedings Papers
ALIFE 2018: The 2018 Conference on Artificial Life, pp. 179–185 (July 23–27, 2018). doi: 10.1162/isal_a_00039
Abstract
Swarms of birds and fish produce well-organized behaviors even though each individual interacts only with its neighbors. Previous studies attempted to derive individual interaction rules from captured animal data using heuristic assumptions. We propose a machine learning method that obtains the sensorimotor mapping of individuals directly from captured data. We recorded swarm behaviors of fish, determined individual positions, and estimated each individual's sensory inputs and motor outputs to use as training data. A simple feedforward neural network was trained to learn the individuals' sensorimotor mapping, and the trained network was then implemented in a simulated environment so that the resulting swarm behaviors could be investigated. Our trained neural network reproduced the swarm behavior better than the Boids model. The reproduced swarm behaviors are evaluated with three different measures, and the differences from the Boids model are discussed.
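A minimal sketch of this pipeline follows. The feature encoding (egocentric offsets of the k nearest neighbors as input, heading change and speed as output), network size, and training setup are assumptions standing in for the paper's actual choices, and random tensors stand in for the tracked data.

import torch
import torch.nn as nn

torch.manual_seed(0)
k = 4  # number of nearest neighbors sensed (hypothetical)

# Feedforward sensorimotor mapping: neighbor offsets -> motor command.
net = nn.Sequential(nn.Linear(2 * k, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training pairs; in practice these are extracted per video frame
# from the tracked trajectories of each focal fish.
inputs = torch.randn(1024, 2 * k)   # (x, y) offsets of k neighbors, egocentric
targets = torch.randn(1024, 2)      # (heading change, speed) of the focal fish

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(net(inputs), targets)
    loss.backward()
    opt.step()

# In simulation, every agent applies the trained mapping at each timestep:
# motor = net(egocentric_neighbor_offsets), then heading and position update.

The contrast with Boids is that nothing here hand-specifies separation, alignment, or cohesion rules; whatever interaction structure exists in the data is absorbed into the learned mapping.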
Proceedings Papers
ALIFE 2018: The 2018 Conference on Artificial Life, pp. 1–4 (July 23–27, 2018). doi: 10.1162/isal_e_00002
Proceedings Papers
ALIFE 2018: The 2018 Conference on Artificial Life, pp. ix–xvii (July 23–27, 2018). doi: 10.1162/isal_e_00001
Proceedings Papers
ALIFE 2018: The 2018 Conference on Artificial Life, pp. i–672 (July 23–27, 2018). doi: 10.1162/isal_a_00122
Proceedings Papers
ECAL 2017: The Fourteenth European Conference on Artificial Life, pp. 324–331 (September 4–8, 2017). doi: 10.1162/isal_a_055
Abstract
Spatial recognition is the ability to recognize the environment and to generate goal-directed behaviors such as navigation. Animals develop spatial recognition by integrating their subjective visual and motion experiences. We propose a model consisting of hierarchical recurrent neural networks with multiple time scales (fast, medium, and slow) that shows how spatial recognition can be obtained from visual and motion experiences alone. For high-dimensional visual sequences, a convolutional neural network (CNN) was used to recognize and generate vision. Applied to a simulated mobile agent, the model was trained to predict future visual and motion experiences and to generate goal-directed sequences toward destinations indicated by photographs. Through this training, the model achieved spatial recognition: it predicted future experiences and generated goal-directed sequences by integrating subjective visual and motion experiences. An analysis of the internal states showed that the states of the slow recurrent networks self-organized according to the agent's position. Furthermore, this internal representation was obtained efficiently, as it was independent of the prediction and generation processes.
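The shape of this pipeline (a CNN compressing each camera image to a feature vector, and stacked recurrent layers with different update rates predicting the next visual feature and motor command) can be sketched as below. The image size, layer widths, GRU cells, and the crude slow-layer clock are all assumptions for illustration; the paper's goal conditioning via a destination photograph is noted only in a comment.

import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(32 * 16 * 16, feat))
    def forward(self, img):
        return self.conv(img)

class HierarchicalRNN(nn.Module):
    def __init__(self, feat=32, motor=2):
        super().__init__()
        self.fast = nn.GRUCell(feat + motor, 64)
        self.slow = nn.GRUCell(64, 16)   # updated less often -> slower time scale
        self.head = nn.Linear(64 + 16, feat + motor)
    def forward(self, x, h_fast, h_slow, t):
        h_fast = self.fast(x, h_fast)
        if t % 10 == 0:                  # crude stand-in for the slow time scale
            h_slow = self.slow(h_fast, h_slow)
        pred = self.head(torch.cat([h_fast, h_slow], dim=-1))
        return pred, h_fast, h_slow

enc, rnn = CNNEncoder(), HierarchicalRNN()
img = torch.randn(1, 3, 64, 64)          # stand-in camera frame
motor = torch.zeros(1, 2)
h_f, h_s = torch.zeros(1, 64), torch.zeros(1, 16)
for t in range(30):
    x = torch.cat([enc(img), motor], dim=-1)
    pred, h_f, h_s = rnn(x, h_f, h_s, t)
    # pred splits into the next predicted visual feature and motor command;
    # for goal-directed generation, the encoded destination photograph would
    # constrain the slow state (omitted here).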
Proceedings Papers
ECAL 2015: The 13th European Conference on Artificial Life, pp. 264–270 (July 20–24, 2015). doi: 10.1162/978-0-262-33027-5-ch051
Proceedings Papers
ECAL 2011: The 11th European Conference on Artificial Life, p. 109 (August 8–12, 2011). doi: 10.7551/978-0-262-29714-1-ch109