Perception
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 477-484, (July 29–August 2, 2019) 10.1162/isal_a_00207
Abstract
An agent’s actions can be influenced by external factors, through the inputs it receives from the environment, as well as by internal factors, such as memories or intrinsic preferences. The extent to which an agent’s actions are “caused from within”, as opposed to being externally driven, should depend on its sensor capacity as well as on environmental demands for memory and context-dependent behavior. Here, we test this hypothesis using simulated agents (“animats”), equipped with small adaptive Markov Brains (MB), that evolve to solve a perceptual-categorization task under conditions that vary with regard to the agents’ sensor capacity and task difficulty. Using a novel formalism developed to identify and quantify the actual causes of occurrences (“what caused what?”) in complex networks, we evaluate the direct causes of the animats’ actions. In addition, we extend this framework to trace the causal chain (“causes of causes”) leading to an animat’s actions back in time, and compare the resulting spatio-temporal causal histories across task conditions. We found that measures quantifying the extent to which an animat’s actions are caused by internal factors (as opposed to being driven by the environment through its sensors) varied consistently with defining aspects of the task conditions the animats evolved to thrive in.
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 475-476, (July 29–August 2, 2019) 10.1162/isal_a_00206
Abstract
As water-dwelling vertebrates progressively evolved features that enabled them to survive on land, they also developed larger eyes, which would have considerably increased their range of vision above water. This increase in visual range may have facilitated their exploitation of new food sources on land and promoted increased cognitive capacity in the form of planning (MacIver et al., 2017). In this study, we use a multi-level agent-based model to attempt to replicate the dynamics of this hypothetical evolutionary scenario. To do so, we use a novel method called agent-centric Monte Carlo cognition (ACMCC) (Head and Wilensky, 2018), which allows us to represent the agents’ cognition in a quantifiable manner by performing micro-simulations in a separate agent-based model. In our simulations, we observe that as a population adapted to live on land emerges, its mean eye size and cognitive capacity increase.
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 467-474, (July 29–August 2, 2019) 10.1162/isal_a_00205
Abstract
It is well documented that cooperation may not be achieved in societies where self-interested agents engage in Prisoner’s Dilemma scenarios. In this paper we demonstrate, in contrast, that agent societies whose members use human-inspired emotions in their decision making can reach stable cooperation. Our work makes use of the Ortony, Clore, and Collins (OCC) model of emotions, and we analyse the evolutionary stability of two different implementations that employ key emotions from this model. Firstly, we consider an agent society whose agents’ decision making relies solely on this model of emotions. Secondly, we look at a model that extends the emotional agents with a representation of mood. We set out a proof showing that our emotional agents are an evolutionarily stable strategy when playing against a worst-case strategy. The proof demonstrates that our established model of emotional agents enables evolutionary stability to be achieved without modification to this model. In contrast, the model of moody agents was shown not to be an evolutionarily stable strategy. Our analysis sheds light on the nature of cooperation within agent societies and on the useful role that simulated emotions can play in the agents’ decision making and in the society as a whole.
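For readers unfamiliar with the evolutionary-stability condition the proof relies on, the sketch below checks Maynard Smith’s ESS criterion for a textbook iterated Prisoner’s Dilemma. The payoff values (T=5, R=3, P=1, S=0) and the tit-for-tat / always-defect strategies are standard illustrative choices, not the paper’s OCC-based emotional or moody agents:

```python
# Standard one-shot PD payoffs, keyed by (my move, opponent's move):
# (my payoff, opponent's payoff). Illustrative values only.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Average per-round payoff of strat_a against strat_b."""
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        total += PAYOFF[(move_a, move_b)][0]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total / rounds

def tit_for_tat(opp_history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    # A worst-case-style strategy: defect unconditionally.
    return "D"

def is_ess_against(incumbent, mutant, rounds=100):
    """Maynard Smith's condition, checked for this one mutant:
    E(I,I) > E(M,I), or E(I,I) == E(M,I) and E(I,M) > E(M,M)."""
    e_ii = play(incumbent, incumbent, rounds)
    e_mi = play(mutant, incumbent, rounds)
    if e_ii > e_mi:
        return True
    return e_ii == e_mi and play(incumbent, mutant, rounds) > play(mutant, mutant, rounds)
```

With these payoffs, a tit-for-tat population earns 3 per round against itself, while an invading defector earns 5 only on the first round and 1 thereafter, so the strict ESS inequality holds against this mutant.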
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 465-466, (July 29–August 2, 2019) 10.1162/isal_a_00204
Abstract
Recent successes in Artificial Intelligence (AI) use machine learning to produce AI agents with both hand-engineered and procedurally generated elements learned from large amounts of data. As the balance shifts toward procedural generation, how can we predict interactions between such agents and humans? We propose to use Artificial Life to study the emergence of group behaviours between procedurally generated AI agents and humans. We simulate Darwinian evolution to procedurally generate agents in a simple environment where the agents interact with human-controlled avatars. To reduce human involvement time, we machine-learn another set of AI agents that mimic human avatar behaviours and run the evolution with such human proxies instead of actual humans. This paper is an update on an ongoing project.
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 463-464, (July 29–August 2, 2019) 10.1162/isal_a_00203
Abstract
Echolocating bats can avoid obstacles in complete darkness by relying on their sonar system. Under experimental conditions, these animals can infer the 3D position of obstacles. However, in cluttered and complex environments their ability to locate obstacles is likely to be largely reduced, and they might need to rely on more robust cues that do not degrade as the complexity of the environment increases. Here, we present a robotic model of two hypothesized obstacle avoidance strategies, both modeled on observed bat behavior: a Gaze Scanning Strategy and a Fixed Head Strategy. Critically, these strategies employ only interaural level differences and do not require locating obstacles. We found that both strategies successfully avoided obstacles in cluttered environments, but that the Fixed Head Strategy performed better. This indicates that acoustic gaze scanning, observed in hunting bats, might reduce obstacle avoidance performance. We conclude that strategies based on gaze scanning should be avoided when little or no spatial information is available to the bat, in line with recent observations of bats.
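The key idea, avoiding obstacles from interaural level differences (ILD) alone without localizing them, can be sketched as a simple proportional steering rule: turn away from the louder ear. This is a generic illustration of the principle, not the paper’s robotic controller; the gain and turn-limit values are arbitrary assumptions:

```python
import math

def ild_steering(left_db, right_db, gain=0.05, max_turn=math.pi / 6):
    """Fixed-head-style avoidance sketch.

    left_db, right_db: echo levels (dB) at the two ears.
    Returns a turn command in radians; positive = turn left,
    i.e. away from an obstacle that sounds louder on the right.
    gain and max_turn are illustrative, not values from the paper.
    """
    ild = right_db - left_db            # > 0 when the right ear is louder
    turn = gain * ild                   # steer proportionally away
    return max(-max_turn, min(max_turn, turn))
```

Because the rule needs only the sign and magnitude of the level difference, it degrades gracefully in clutter where explicit 3D localization of individual echoes would fail.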
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 485-492, (July 29–August 2, 2019) 10.1162/isal_a_00208
Abstract
Although determining the similarity of genotypes is often employed in artificial life experiments to measure or control diversity, in practical applications we may be more interested in similarities of phenotypes. The latter provide information about the effective diversity in a population, and may thus be more suitable for diversity estimation and diversity-based search algorithms. A phenotype of a simulated creature can be understood as the creature’s physiology or its behavior – e.g., body kinematics, movement patterns, or gaits. In this paper, we introduce a set of efficient measures for describing the movement of simulated 3D stick creatures. We use these measures to analyze the results of evolutionary optimization of virtual creatures towards four unique behavioral goals. We show that most solutions obtained for each goal occupy distinct areas of the phenotype space. This suggests that the measures defined in this paper create a useful behavioral space for movement-related fitness functions. Finally, we use the introduced measures to visualize how the properties of movement change in populations during the course of evolution.
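The general recipe behind such behavioral phenotype spaces can be sketched with two generic movement features computed from a trajectory; these are common textbook descriptors (mean step length and path straightness), not the specific measures defined in the paper, and the 2D fixed-timestep trajectory format is an assumption:

```python
import math

def movement_descriptor(trajectory):
    """Map a 2D trajectory (list of (x, y) positions sampled at a
    fixed time step) to a point in a simple behavioral space.
    Returns (mean step length, straightness), where straightness is
    net displacement divided by total path length (1.0 = straight line)."""
    if len(trajectory) < 2:
        raise ValueError("need at least two positions")
    steps = [math.dist(a, b) for a, b in zip(trajectory, trajectory[1:])]
    path_length = sum(steps)
    net = math.dist(trajectory[0], trajectory[-1])
    mean_step = path_length / len(steps)
    straightness = net / path_length if path_length else 0.0
    return mean_step, straightness

def behavioral_distance(desc_a, desc_b):
    """Euclidean distance between two descriptors, i.e. phenotype
    similarity in this behavioral space."""
    return math.dist(desc_a, desc_b)
```

A creature that walks straight ahead and one that oscillates in place can have identical genotype-level similarity to a third creature yet land far apart in this space, which is exactly why behavioral descriptors can better capture effective diversity.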