Madhavun Candadai
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 50, (July 18–22, 2022). doi:10.1162/isal_a_00534
Abstract
What insights can statistical analysis of time series recordings of neurons and brain regions during behavior give about the neural basis of behavior? With increasing amounts of whole-brain imaging data becoming available, addressing this unanswered theoretical challenge has become increasingly urgent. We propose a computational neuroethology approach to begin addressing it. We evolve dynamical recurrent neural networks that are capable of performing multiple tasks. We then analyze the neural activity using popular network neuroscience tools, specifically functional connectivity estimated with Pearson's correlation, mutual information, and transfer entropy. We compare the results from these tools against a series of informational lesions, as a way to reveal how closely they approximate the ground truth. Our initial analysis reveals an overwhelmingly large gap between the circuit functionality inferred statistically from neural activity and the actual functionality of the circuits as revealed by mechanistic interventions.
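A minimal sketch of the statistical side of this comparison, assuming neural activity is available as a timesteps-by-neurons array (the toy data and names are illustrative, not the authors' code): pairwise functional connectivity from Pearson's correlation. Mutual information and transfer entropy would need dedicated estimators (binned or nearest-neighbor) rather than a one-line call.

    import numpy as np

    def functional_connectivity(activity: np.ndarray) -> np.ndarray:
        """Pairwise Pearson correlation between neurons.

        activity: array of shape (timesteps, n_neurons).
        Returns an (n_neurons, n_neurons) correlation matrix.
        """
        return np.corrcoef(activity, rowvar=False)

    # Toy activity standing in for recordings from an evolved recurrent network.
    rng = np.random.default_rng(0)
    toy_activity = rng.standard_normal((1000, 5))
    print(np.round(functional_connectivity(toy_activity), 2))

The lesion-based ground truth would then come from clamping individual neurons and re-measuring task performance, and the two resulting rankings of neurons would be compared.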
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, 761–767, (July 13–18, 2020). doi:10.1162/isal_a_00331
Abstract
Artificial Life has a long tradition of studying the interaction between learning and evolution. Thanks to the increased use of individual learning techniques in Artificial Intelligence, there has been a recent revival of work combining individual and evolutionary learning. Despite the breadth of work in this area, the exact trade-offs between these two forms of learning remain unclear. In this work, we systematically examine the effect of task difficulty, the individual learning approach, and the form of inheritance on the performance of the population across different combinations of learning and evolution. We analyze in depth the conditions under which hybrid strategies that combine lifetime and evolutionary learning outperform either form of learning in isolation. We also discuss the importance of these results in both a biological and an algorithmic context.
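A hedged sketch of one generation of such a hybrid strategy, under toy assumptions (a quadratic surrogate task and hill climbing as the individual learning rule; none of this is the paper's setup): every genotype undergoes a short period of lifetime learning before being scored, and a lamarckian flag selects the form of inheritance (learned weights versus the original genotype).

    import numpy as np

    rng = np.random.default_rng(1)
    TARGET = np.array([0.5, -0.2, 0.8, 0.0])  # hypothetical task optimum

    def fitness(params):
        # Surrogate for task performance: closeness to the hypothetical optimum.
        return -float(np.sum((params - TARGET) ** 2))

    def lifetime_learning(params, steps=20, sigma=0.05):
        # Individual learning as simple stochastic hill climbing.
        best = params.copy()
        for _ in range(steps):
            candidate = best + rng.normal(0, sigma, size=best.shape)
            if fitness(candidate) > fitness(best):
                best = candidate
        return best

    def one_generation(population, lamarckian=False, n_elite=5, mut_sigma=0.1):
        learned = [lifetime_learning(p) for p in population]
        order = np.argsort([fitness(p) for p in learned])[::-1][:n_elite]
        # Darwinian inheritance passes on the unlearned genotype;
        # Lamarckian inheritance passes on the weights after lifetime learning.
        parents = [learned[i] if lamarckian else population[i] for i in order]
        return [parents[i] + rng.normal(0, mut_sigma, size=parents[i].shape)
                for i in rng.integers(0, len(parents), size=len(population))]

    population = [rng.standard_normal(4) for _ in range(20)]
    for _ in range(30):
        population = one_generation(population, lamarckian=True)

In this sketch the knobs the paper varies correspond to the shape of fitness (task difficulty), the body of lifetime_learning (individual learning approach), and the lamarckian flag (form of inheritance).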
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, 441–449, (July 13–18, 2020). doi:10.1162/isal_a_00338
Abstract
Living organisms learn on multiple time scales: evolutionary as well as individual-lifetime learning. These two learning modes are complementary: the innate phenotypes developed through evolution significantly influence lifetime learning. However, it is still unclear how these two learning methods interact, and whether there is a benefit to optimizing part of the system on an evolutionary time scale with a population-based approach while the rest of it is trained within the lifetime using an individual learning algorithm. In this work, we study the benefits of such a hybrid approach using an actor-critic framework, where the critic part of an agent is optimized over evolutionary time based on its ability to train the actor part of the agent during its lifetime. Typically, critics are optimized on the same time scale as the actor, using the Bellman equation to represent long-term expected reward. We show that evolution can find a variety of different solutions that still enable an actor to learn to perform a behavior during its lifetime. We also show that although the solutions found by evolution represent different functions, they all provide similar training signals during the lifetime. This suggests that learning on multiple time scales can effectively simplify the overall optimization process in the actor-critic framework by finding one of many solutions that can train an actor just as well. Furthermore, analysis of the evolved critics can yield additional possibilities for reinforcement learning beyond the Bellman equation.
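A structural sketch, under explicit toy assumptions (linear actor and critic on a quadratic task; not the paper's code): the critic's parameters are evolved across generations, and a critic's fitness is the true task performance of the actor it trained during that actor's lifetime, rather than how well it matches a Bellman-equation value target.

    import numpy as np

    rng = np.random.default_rng(2)
    DIM = 4
    TARGET = np.array([0.5, -0.3, 0.2, 0.1])  # hypothetical optimal actor weights

    def true_performance(actor_w):
        # Ground-truth task score; seen only by evolution, never by the actor.
        return -float(np.sum((actor_w - TARGET) ** 2))

    def train_actor(critic_w, lr=0.05, steps=100):
        # Lifetime learning: the actor follows the gradient of the critic's
        # score critic_w . w - ||w||^2, whose gradient is critic_w - 2 w.
        actor_w = np.zeros(DIM)
        for _ in range(steps):
            actor_w += lr * (critic_w - 2.0 * actor_w)
        return actor_w

    def critic_fitness(critic_w):
        return true_performance(train_actor(critic_w))

    # Simple truncation-selection evolution over critic parameters.
    population = [rng.standard_normal(DIM) for _ in range(30)]
    for generation in range(50):
        parents = sorted(population, key=critic_fitness, reverse=True)[:5]
        offspring = [parents[i] + rng.normal(0, 0.1, DIM)
                     for i in rng.integers(0, len(parents), size=25)]
        population = parents + offspring
    print("best critic fitness:", critic_fitness(population[0]))

In this toy the trained actor converges to critic_w / 2, so evolution drives critic_w toward twice the target weights: the evolved critic is not a value estimate at all, yet it trains the actor well, which echoes the abstract's point about training signals beyond the Bellman equation.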
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, 210–218, (July 13–18, 2020). doi:10.1162/isal_a_00319
Abstract
Living organisms perform multiple tasks, often using the same or shared neural networks. Such multifunctional neural networks are composed of neurons that contribute to different degrees in the different behaviors. In this work, we take a computational modeling approach to evaluate the extent to which neural resources are specialized or shared across different behaviors. To this end, we develop multifunctional feed-forward neural networks that are capable of performing three control tasks: inverted pendulum, cartpole balancing, and single-legged walker. We then perform information lesions of individual neurons to determine their contribution to each task. Following that, we investigate the ability of two commonly used methods to estimate a neuron's contribution from its activity: neural variability and mutual information. Our study reveals the following: first, the same feed-forward neural network is capable of reusing its hidden-layer neurons to perform multiple behaviors; second, information lesions reveal that the same behaviors are performed with different levels of reuse in different neural networks; and finally, mutual information is a better estimator of a neuron's contribution to a task than neural variability.
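A hedged sketch of the per-neuron statistics compared here, with a tiny feed-forward network standing in for the evolved controllers (the names and the lesion convention of clamping a hidden neuron to its mean output are illustrative assumptions): the lesion effect is the change in network output after clamping one hidden neuron, and it is set alongside that neuron's mutual information with the output (histogram estimator) and its variance.

    import numpy as np

    rng = np.random.default_rng(3)

    def forward(x, w_in, w_out, clamp=None, clamp_value=0.0):
        h = np.tanh(x @ w_in)
        if clamp is not None:          # information lesion: freeze one hidden neuron
            h[:, clamp] = clamp_value
        return h, np.tanh(h @ w_out)

    def binned_mutual_information(a, b, bins=16):
        joint, _, _ = np.histogram2d(a, b, bins=bins)
        p = joint / joint.sum()
        pa, pb = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float(np.sum(p[nz] * np.log2(p[nz] / (pa @ pb)[nz])))

    # Toy network and inputs standing in for an evolved multifunctional controller.
    w_in, w_out = rng.standard_normal((3, 8)), rng.standard_normal((8, 1))
    x = rng.standard_normal((2000, 3))
    hidden, y = forward(x, w_in, w_out)

    for j in range(hidden.shape[1]):
        _, y_lesioned = forward(x, w_in, w_out, clamp=j,
                                clamp_value=float(hidden[:, j].mean()))
        lesion_effect = float(np.mean((y - y_lesioned) ** 2))
        mi = binned_mutual_information(hidden[:, j], y[:, 0])
        variability = float(hidden[:, j].var())
        print(f"neuron {j}: lesion={lesion_effect:.4f}  MI={mi:.3f}  var={variability:.3f}")

Ranking neurons by lesion effect and by each activity-based statistic, then correlating the rankings, is one simple way to ask which estimator better tracks a neuron's actual contribution to a task.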