1-18 of 18
Hiroyuki Iizuka
Proceedings Papers
isal2024, ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 8, (July 22–26, 2024) 10.1162/isal_a_00719
Abstract
In this study, we investigated the extent to which Vision Language Models (VLMs) possess sensibilities similar to those of humans, focusing on color impressions, which strongly influence the sensory aspects of vision, and on sound symbolism, which constitutes linguistic and auditory sensibility. For the experiments, we constructed a new evolving image generation system based on the CONRAD algorithm, which evolves images according to human evaluations; our system can also reflect the evaluations of VLMs in addition to those of humans. Using this system, we analyzed the sensibilities of VLMs. The experimental results suggest similarities between human and VLM sensibilities in both color impressions and sound symbolism. Notably, in sound symbolism, VLMs demonstrated sound-symbolic sensibilities similar to those of humans even for newly generated pseudo-words. These findings suggest that VLM evaluations and feedback may be effective, to a certain degree, in tasks that have previously required human evaluations or annotations related to sensibility.
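Concretely, a loop of this kind can be sketched as below. This is a minimal illustration only, assuming a CONRAD-style evaluate-select-mutate cycle; the scorer function stands in for either a human rating or a VLM impression query, and neither the actual CONRAD parameters nor the real VLM interface from the paper are shown.

import numpy as np

rng = np.random.default_rng(0)

def mutate(img, scale=0.05):
    # Perturb an image genome with small Gaussian noise.
    return np.clip(img + rng.normal(0.0, scale, img.shape), 0.0, 1.0)

def evolve(score_fn, pop_size=16, shape=(32, 32, 3), generations=50, k=4):
    # Evolve images toward higher scores from score_fn (human or VLM proxy).
    pop = rng.random((pop_size,) + shape)
    for _ in range(generations):
        scores = np.array([score_fn(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-k:]]                 # keep the top-k candidates
        children = [mutate(elite[i % k]) for i in range(pop_size - k)]
        pop = np.concatenate([elite, np.array(children)], axis=0)
    return pop[np.argmax([score_fn(ind) for ind in pop])]

# Toy example: a "warm colours" scorer standing in for a VLM colour-impression rating.
best = evolve(lambda img: img[..., 0].mean() - img[..., 2].mean())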
Proceedings Papers
isal2024, ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 90, (July 22–26, 2024) 10.1162/isal_a_00709
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 74, (July 24–28, 2023) 10.1162/isal_a_00687
Abstract
Body representations, which have multimodal receptive fields in the peripersonal space where individuals interact with the environment within their reach, show plasticity through tool use and are necessary for the adaptive and skillful use of external tools. In this study, we propose a neural network model that develops a plastic body representation on a multimodal, body-centered peripersonal space representation through tool use, whereas previous developmental models could explain the plastic body representation only in a non-body-centered form. The proposed model reconstructs visual and tactile sensations corresponding to proprioceptive sensations after integrating vision and touch through a Transformer based on a self-attention mechanism. By learning from the camera vision and arm touch of a simulated robot together with the proprioception of camera and arm postures, the model developed a body representation that localizes tactile sensations on a simultaneously developed peripersonal space representation. In particular, learning during tool use gives the body representation tool-induced plasticity, and the peripersonal space representation is shared by sharing part of the visual and tactile decoding modules. As a result, the model acquires a plastic body representation on a body-centered, multimodal peripersonal space representation.
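As a rough illustration of this kind of architecture (not the authors' implementation), the sketch below fuses a visual token and a tactile token with self-attention and decodes both modalities conditioned on proprioception; all module names and dimensions are assumptions.

import torch
import torch.nn as nn

class PeripersonalFusion(nn.Module):
    def __init__(self, d_model=64, n_heads=4, vis_dim=128, tac_dim=16, prop_dim=8):
        super().__init__()
        self.vis_in = nn.Linear(vis_dim, d_model)
        self.tac_in = nn.Linear(tac_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        # Decoders take the fused code plus proprioception (camera/arm postures).
        self.vis_dec = nn.Linear(d_model + prop_dim, vis_dim)
        self.tac_dec = nn.Linear(d_model + prop_dim, tac_dim)

    def forward(self, vision, touch, proprio):
        tokens = torch.stack([self.vis_in(vision), self.tac_in(touch)], dim=1)
        fused = self.fusion(tokens).mean(dim=1)              # integrate the modalities
        code = torch.cat([fused, proprio], dim=-1)
        return self.vis_dec(code), self.tac_dec(code)        # reconstruct vision and touch

model = PeripersonalFusion()
v_hat, t_hat = model(torch.randn(2, 128), torch.randn(2, 16), torch.randn(2, 8))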
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 46, (July 24–28, 2023) 10.1162/isal_a_00642
Abstract
The rubber hand illusion is a phenomenon in which a rubber hand is perceived as part of one's own body. The occurrence of this illusion is evaluated by subjective report and by proprioceptive drift, in which the perceived position of the hand shifts. Proprioceptive drift and the sense of body ownership are assumed to be related; however, some research results have cast doubt on this relationship. We built a deep neural network model that simulates the rubber hand experiment to investigate the principles behind proprioceptive drift. The model was trained on consistent multisensory data and tested with inconsistent data, as in the rubber hand illusion. The model successfully predicted proprioceptive drift, suggesting that simple predictive learning mechanisms can account for this phenomenon.
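The predictive-learning idea can be illustrated with a toy sketch like the one below: a small network is trained on consistent visuo-tactile-proprioceptive data and then probed with a mismatched, rubber-hand-like input, and the shift of its predicted hand position is read off as drift. The data, architecture, and one-dimensional setup are illustrative assumptions, not the paper's model.

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

# Consistent training data: the seen hand position matches the felt hand position.
true_pos = torch.rand(256, 1)
x_train = torch.cat([true_pos, true_pos], dim=1)     # [visual position, tactile cue position]
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x_train), true_pos)
    loss.backward()
    opt.step()

# Inconsistent probe: the visual (rubber) hand is displaced from the real hand.
real, rubber = torch.tensor([[0.3]]), torch.tensor([[0.6]])
pred = net(torch.cat([rubber, real], dim=1))
print("predicted position shift (drift):", (pred - real).item())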
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 46, (July 18–22, 2022) 10.1162/isal_a_00529
Abstract
This study examines what kinds of temporal and spatial patterns form when two individuals learn in an adversarial relationship. The model was implemented by coupling generative adversarial networks, which are well known in machine learning. For time-series learning, the resulting temporal patterns were chaotic, with a positive Lyapunov exponent; for spatial pattern learning, the model produced structured patterns with a higher fractal dimension, rather than merely more complex patterns with higher entropy.
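One possible coupling, shown below as an assumption rather than the paper's exact setup, gives each individual a generator and a discriminator: each generator is updated to fool the partner's discriminator, and the symmetric discriminator updates (telling one's own patterns from the partner's) are omitted for brevity.

import torch
import torch.nn as nn

def make_pair(dim=16):
    gen = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
    disc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
    return gen, disc

(gen_a, disc_a), (gen_b, disc_b) = make_pair(), make_pair()
bce = nn.BCEWithLogitsLoss()
z = torch.randn(32, 16)

def adversarial_step(gen_self, disc_other, opt):
    # Update one individual's generator so its output fools the partner's discriminator.
    opt.zero_grad()
    judged = disc_other(gen_self(z))
    loss = bce(judged, torch.ones_like(judged))      # wants to be judged as "real"
    loss.backward()
    opt.step()

opt_a = torch.optim.Adam(gen_a.parameters(), lr=1e-3)
opt_b = torch.optim.Adam(gen_b.parameters(), lr=1e-3)
adversarial_step(gen_a, disc_b, opt_a)               # A adapts against B
adversarial_step(gen_b, disc_a, opt_b)               # and B adapts against A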
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 62, (July 18–22, 2022) 10.1162/isal_a_00548
Abstract
Chemotaxis is a phenomenon whereby organisms such as amoebae direct their movements in response to environmental gradients, often called gradient climbing. It is considered to be the origin of the self-movement that characterizes life forms. In this work, we simulated gradient-climbing behaviour with Neural Cellular Automata (NCA), which were recently proposed as a model of morphogenesis. NCA is a cellular automata model that uses a deep network as its learnable update rule and generates a target cell pattern from a single cell through local interactions among cells. Our model, Gradient Climbing Neural Cellular Automata (GCNCA), has an additional feature that enables it to move a generated pattern in response to a gradient injected into its cell states.
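A compact sketch of this idea is given below: a tiny Neural-CA-style update rule whose last state channel is overwritten with an external gradient field at every step, so the learned local rule can read the gradient. The channel count, layer sizes, and the way the gradient is injected are assumptions for illustration, not the GCNCA implementation itself.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNCA(nn.Module):
    def __init__(self, channels=12):
        super().__init__()
        # Learnable local update rule: perceive the 3x3 neighbourhood, emit a state delta.
        self.perceive = nn.Conv2d(channels, 64, kernel_size=3, padding=1)
        self.update = nn.Conv2d(64, channels, kernel_size=1)

    def forward(self, state, gradient):
        # Inject the environmental gradient into the last state channel.
        state = torch.cat([state[:, :-1], gradient], dim=1)
        return state + self.update(F.relu(self.perceive(state)))

nca = TinyNCA()
state = torch.zeros(1, 12, 32, 32)
gradient = torch.linspace(0, 1, 32).view(1, 1, 1, 32).expand(1, 1, 32, 32)
for _ in range(8):                                   # iterate the local rule over the grid
    state = nca(state, gradient)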
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 51, (July 18–22, 2022) 10.1162/isal_a_00535
Abstract
Several studies have addressed the bottom-up acquisition of concepts from experiences in physical space, but few deal with the bidirectional interaction between symbolic operations and experiences in the physical world. Previous work showed that a shared-module neural network can generate a bottom-up spatial representation of the external world without being explicitly trained on signals describing the spatial structure; the module can then interpret an external map as a symbol grounded in that spatial representation, and top-down navigation can be performed using the map. In this study, we extended this model and proposed a simulation model that unifies the emergence of a number representation, the learning of symbol manipulation on that representation, and the top-down application of symbol manipulation to the physical world. Our results show that what is learned through symbol manipulation can be applied to prediction in the physical world, and that the proposed model succeeds in grounding symbol manipulation in physical experiences.
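The shared-module idea can be caricatured as below: a single shared encoder feeds both a physical-world prediction head and a symbol-manipulation head, so an operation learned on the internal code can be pushed back down into a prediction about the world. All names and sizes here are illustrative assumptions.

import torch
import torch.nn as nn

shared = nn.Sequential(nn.Linear(10, 32), nn.Tanh())   # shared internal representation
predict_world = nn.Linear(32, 10)                      # bottom-up head: next sensation
manipulate_symbol = nn.Linear(32, 32)                  # symbolic head: an operation on the code

obs = torch.randn(4, 10)
code = shared(obs)
next_obs = predict_world(code)                         # grounded prediction
next_obs_after_op = predict_world(manipulate_symbol(code))   # symbol operation applied to the world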
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 531-532, (July 29–August 2, 2019) 10.1162/isal_a_00216
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 50-51, (July 29–August 2, 2019) 10.1162/isal_a_00139
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 179-185, (July 23–27, 2018) 10.1162/isal_a_00039
Abstract
Swarms of birds and fish produce well-organized behaviors even though each individual interacts only with its neighbors. Previous studies attempted to derive individual interaction rules from data on captured animals using heuristic assumptions. We propose a machine learning method that obtains the sensorimotor mapping of individuals directly from captured data. Swarm behaviors of fish were recorded and individual positions were determined; the sensory inputs and motor outputs were then estimated and used as training data. A simple feedforward neural network was trained to learn the sensorimotor mapping of individuals, implemented in a simulated environment, and the resulting swarm behaviors were investigated. The trained neural network reproduced the swarm behavior better than the Boids model. The reproduced swarm behaviors are evaluated in terms of three different measures, and the differences from the Boids model are discussed.
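The learning step can be sketched as follows, assuming the sensory input is encoded as relative neighbour positions and the motor output as a heading and speed change; the placeholder tensors stand in for quantities estimated from the tracking data, and the paper's actual feature encoding is not reproduced.

import torch
import torch.nn as nn

n_neighbours = 4
sensor_dim = n_neighbours * 2                        # (dx, dy) per neighbour, fish-centred
net = nn.Sequential(nn.Linear(sensor_dim, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Placeholders for sensory inputs and motor outputs estimated from the captured video.
sensors = torch.randn(1024, sensor_dim)
motors = torch.randn(1024, 2)                        # [heading change, speed change]

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(sensors), motors)
    loss.backward()
    opt.step()

# The trained net can then drive simulated agents in place of hand-crafted Boids rules.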
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 1-4, (July 23–27, 2018) 10.1162/isal_e_00002
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, ix-xvii, (July 23–27, 2018) 10.1162/isal_e_00001
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, i-672, (July 23–27, 2018) 10.1162/isal_a_00122
Abstract
The complete Proceedings of The 2018 Conference on Artificial Life: A Hybrid of the European Conference on Artificial Life (ECAL) and the International Conference on the Synthesis and Simulation of Living Systems (ALIFE)
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 147-154, (July 23–27, 2018) 10.1162/isal_a_00035
Abstract
Animals develop spatial recognition through integrated visuomotor experiences, and in nature they change their behavior over the course of development. Although the developmental process of spatial recognition has been studied, it remains unclear how behavior during development affects it. To investigate the effect of movement patterns (behavior) on spatial recognition, we simulated the development of spatial recognition using controlled behaviors. Hierarchical recurrent neural networks (HRNNs) with multiple time scales were trained to predict the visuomotor sequences of a simulated mobile agent, and the spatial recognition developed by the HRNNs was compared across different degrees of randomness in the agent's movement. The experimental results show that spatial recognition did not develop for movements whose randomness was too small or too large, but only for movements with intermediate randomness.
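A rough two-level sketch of such a multiple-timescale network is shown below: a fast module sees the visuomotor input at every step, while a slow module is updated leakily so its state changes over a longer horizon. Layer sizes, the time constant, and the single slow level are assumptions; the paper's HRNN is not reproduced here.

import torch
import torch.nn as nn

class TwoTimescaleRNN(nn.Module):
    def __init__(self, in_dim=20, fast_dim=64, slow_dim=16, tau_slow=8.0):
        super().__init__()
        self.fast_dim, self.slow_dim, self.tau = fast_dim, slow_dim, tau_slow
        self.fast_cell = nn.GRUCell(in_dim + slow_dim, fast_dim)
        self.slow_cell = nn.GRUCell(fast_dim, slow_dim)
        self.readout = nn.Linear(fast_dim, in_dim)   # predict the next visuomotor input

    def forward(self, seq):                          # seq: (T, B, in_dim)
        B = seq.size(1)
        h_f = seq.new_zeros(B, self.fast_dim)
        h_s = seq.new_zeros(B, self.slow_dim)
        preds = []
        for x in seq:
            h_f = self.fast_cell(torch.cat([x, h_s], dim=-1), h_f)
            h_s = h_s + (self.slow_cell(h_f, h_s) - h_s) / self.tau   # slow, leaky update
            preds.append(self.readout(h_f))
        return torch.stack(preds)

model = TwoTimescaleRNN()
prediction = model(torch.randn(50, 8, 20))           # one next-step prediction per time step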
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 659-664, (July 23–27, 2018) 10.1162/isal_a_00120
Abstract
Bird song is one of the phenomena whose complexity increases through evolution. A complex song is known to be advantageous for survival, and birds are known to learn songs from one another. From these facts, we hypothesize that adversarial imitation learning plays a major role in the evolution of complex song. A previous study demonstrated the complexification of a bird song time series by modeling adversarial imitation learning with a logistic map. However, real bird songs show great variety and temporal dependencies, such as grammar. In this study, therefore, adversarial imitation learning is modeled with an artificial neural network, which can approximate any function, and the network learns adversarial imitation by gradient descent. With these changes, our results show that the generated bird songs evolve to chaos through adversarial imitation learning, as in the previous models.
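One possible reading of this setup, stated as an assumption rather than the paper's exact formulation, is sketched below: an imitator network performs gradient descent on the error of predicting the partner's song, while the singer performs gradient ascent on the same error so that its song stays hard to imitate.

import torch
import torch.nn as nn

T = 64
theta = nn.Parameter(torch.randn(T))                 # the singer's song parameters
imitator = nn.Sequential(nn.Linear(T - 1, 64), nn.Tanh(), nn.Linear(64, T - 1))
opt_imit = torch.optim.SGD(imitator.parameters(), lr=1e-2)
opt_sing = torch.optim.SGD([theta], lr=1e-2)

def prediction_error():
    song = torch.tanh(theta)                         # keep the song bounded
    return nn.functional.mse_loss(imitator(song[:-1]), song[1:])

for _ in range(200):
    opt_imit.zero_grad()
    prediction_error().backward()
    opt_imit.step()                                  # imitator: descend the imitation error
    opt_sing.zero_grad()
    (-prediction_error()).backward()
    opt_sing.step()                                  # singer: ascend the same error (escape)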
Proceedings Papers
ecal2017, ECAL 2017, the Fourteenth European Conference on Artificial Life, 324-331, (September 4–8, 2017) 10.1162/isal_a_055
Abstract
Spatial recognition is the ability to recognize the environment and generate goal-directed behaviors such as navigation. Animals develop spatial recognition by integrating their subjective visual and motion experiences. We propose a model consisting of hierarchical recurrent neural networks with fast, medium, and slow time scales that shows how spatial recognition can be obtained from visual and motion experiences alone. For high-dimensional visual sequences, a convolutional neural network (CNN) was used to recognize and generate vision. The model, applied to a simulated mobile agent, was trained to predict future visual and motion experiences and to generate goal-directed sequences toward destinations indicated by photographs. Through this training, the model achieved spatial recognition, predicted future experiences, and generated goal-directed sequences by integrating subjective visual and motion experiences. An analysis of the internal states showed that the internal states of the slow recurrent neural networks were self-organized according to the agent's position. Furthermore, such a representation of the internal states was obtained efficiently, as the representation was independent of the prediction and generation processes.
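The vision pathway assumed in this kind of model can be sketched as below: a small convolutional encoder compresses each camera frame to a code for the recurrent predictor, and the goal photograph is encoded the same way so that it can condition goal-directed generation. The sizes and the flat concatenation with motion are illustrative assumptions.

import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    def __init__(self, code_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, code_dim),
        )

    def forward(self, frames):
        return self.conv(frames)

enc = FrameEncoder()
frame_code = enc(torch.randn(1, 3, 64, 64))          # current camera view
goal_code = enc(torch.randn(1, 3, 64, 64))           # photograph of the destination
rnn_input = torch.cat([frame_code, goal_code, torch.randn(1, 4)], dim=1)   # plus motion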
Proceedings Papers
ecal2015, ECAL 2015: the 13th European Conference on Artificial Life, 264-270, (July 20–24, 2015) 10.1162/978-0-262-33027-5-ch051
Proceedings Papers
ecal2011, ECAL 2011: The 11th European Conference on Artificial Life, 109, (August 8–12, 2011) 10.7551/978-0-262-29714-1-ch109