Neural Networks (1–7 of 7)
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, pp. 461–462 (July 29–August 2, 2019). doi:10.1162/isal_a_00202
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, pp. 454–460 (July 29–August 2, 2019). doi:10.1162/isal_a_00201
Abstract
Is cognition a collection of loosely connected functions tuned to different tasks, or is it more like a general learning algorithm? If such a hypothetical general algorithm did exist, tuned to our world, could it adapt seamlessly to a world with different laws of nature? We consider the theory that predictive coding is such a general rule, and falsify it for one specific neural architecture known for high-performance prediction on natural videos and for replicating human visual illusions: PredNet. Our results show that PredNet’s high performance generalizes without retraining to a completely different natural video dataset. Yet PredNet cannot be trained to reach even mediocre accuracy on an artificial video dataset created with the rules of the Game of Life (GoL). We also find that a submodule of PredNet, a convolutional neural network trained alone, has excellent accuracy on the GoL while having mediocre accuracy on natural videos, showing that PredNet’s architecture itself might be responsible both for the high performance on natural videos and for the loss of performance on the GoL. Just as humans cannot predict the dynamics of the GoL, our results suggest that there could be a trade-off in performance between sensory inputs governed by different sets of rules.
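The GoL dynamics that PredNet fails to learn are simple to state. As a point of reference, a minimal sketch of one update step of Conway's Game of Life (this is the standard rule set, not code from the paper):

```python
import numpy as np

def life_step(grid):
    """One update of Conway's Game of Life on a 2D binary array.

    A live cell survives with 2 or 3 live neighbours; a dead cell is born
    with exactly 3; all other cells die. Boundaries are toroidal (np.roll).
    """
    # Sum the eight neighbours by shifting the grid in every direction.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(grid.dtype)

# A "blinker" oscillates with period 2: three cells in a row flip
# between horizontal and vertical orientation every step.
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1
```

Each output pixel depends only on a 3×3 neighbourhood, which is why a small convolutional network can learn the rule exactly while a hierarchy tuned to natural-video statistics struggles.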
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, pp. 448–453 (July 29–August 2, 2019). doi:10.1162/isal_a_00200
Abstract
It has recently been demonstrated that a Hopfield neural network that learns its own attractor configurations, for instance by repeatedly resetting the network to an arbitrary state and applying Hebbian learning after convergence, is able to form an associative memory of its attractors and thereby facilitate future convergence on better attractors. This process of structural self-optimization has so far only been demonstrated on relatively small artificial neural networks with random or highly regular and constrained topologies, and it remains an open question to what extent it can be generalized to more biologically realistic topologies. In this work, we therefore test this process by running it on the connectome of the widely studied nematode worm C. elegans, the only living being whose neural system has been mapped in its entirety. Our results demonstrate, for the first time, that the self-optimization process can be generalized to larger, biologically plausible networks. We conclude by speculating that the reset-convergence mechanism could find a biological equivalent in the sleep-wake cycle of C. elegans.
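The reset-converge-learn loop described above can be sketched in a few lines. This is a toy illustration on a random symmetric network, not the paper's C. elegans connectome, and the sizes and learning rate are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50                       # toy network size (the paper uses the connectome)
W = rng.normal(size=(N, N))  # random symmetric base weights
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

def energy(W, s):
    """Standard Hopfield energy; lower means a better attractor."""
    return -0.5 * s @ W @ s

def converge(W, s, steps=2000):
    """Asynchronous Hopfield updates; energy never increases per update."""
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def self_optimize(W, resets=50, alpha=0.001):
    """Repeatedly reset to a random state, converge, then reinforce the
    reached attractor with a Hebbian update (the self-optimization loop)."""
    W = W.copy()
    for _ in range(resets):
        s = rng.choice([-1, 1], size=N)
        s = converge(W, s)
        W += alpha * np.outer(s, s)   # Hebbian learning on the attractor
        np.fill_diagonal(W, 0)
    return W
```

Attractor quality after self-optimization is typically scored with the original network's energy function, so that the learned weights are only a memory aid and not a redefinition of the problem.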
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, pp. 440–447 (July 29–August 2, 2019). doi:10.1162/isal_a_00199
Abstract
We present a new method for addressing the challenge of continual learning, wherein an agent must adapt to new tasks while maintaining high performance on previously learned tasks. To accomplish this, an agent must identify previously acquired information that generalizes to the new task while also adapting its internal model to learn information that is specific to the new task. Our approach is based on neurogenesis, which involves adding new neurons to a previously trained neural network in an intelligent way. To our knowledge, we are the first to leverage probabilistic programming within the framework of evolutionary computation to optimize the growth of neural networks for continual learning. Through a series of experiments, we show that our approach consistently finds better-performing solutions than genetic algorithms, and does so faster.
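The core mechanical step of neurogenesis, adding a neuron to a trained layer, can be done without disturbing what the network has already learned: give the new neuron random incoming weights but zero outgoing weights, so the function is unchanged until further training. A minimal sketch (this illustrates function-preserving growth only, not the paper's probabilistic-program-guided search):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2):
    """Two-layer network: y = W2 tanh(W1 x + b1)."""
    return W2 @ np.tanh(W1 @ x + b1)

def add_neuron(W1, b1, W2, rng):
    """Widen the hidden layer by one neuron. Incoming weights are small and
    random; outgoing weights start at zero, so outputs are unchanged."""
    new_in = rng.normal(scale=0.01, size=(1, W1.shape[1]))
    W1 = np.vstack([W1, new_in])                        # new row: incoming
    b1 = np.append(b1, 0.0)
    W2 = np.hstack([W2, np.zeros((W2.shape[0], 1))])    # new column: outgoing
    return W1, b1, W2

# Demo: growing the network leaves its output untouched.
W1 = rng.normal(size=(4, 3)); b1 = np.zeros(4); W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)
y_before = forward(x, W1, b1, W2)
W1g, b1g, W2g = add_neuron(W1, b1, W2, rng)
y_after = forward(x, W1g, b1g, W2g)
```

The interesting question, which the paper addresses with probabilistic programming inside an evolutionary loop, is where and when to grow; the step above only guarantees that growth itself costs nothing in prior-task performance.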
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, pp. 432–439 (July 29–August 2, 2019). doi:10.1162/isal_a_00198
Abstract
Natural environments are full of ambient noise; nevertheless, natural cognitive systems cope well with uncertainty and also have ways to suppress or ignore noise unrelated to the task at hand. For most intelligent tasks, experiences and observations have to be committed to memory, and these representations of reality inform future decisions. We know that deep-learned artificial neural networks (ANNs) often struggle with the formation of representations. This struggle may be due to the ANN’s fully interconnected, layered architecture, which forces information to be propagated over the entire system, unlike natural brains, which instead have sparsely distributed representations. Here we show how ambient noise causes neural substrates such as recurrent ANNs and long short-term memory networks to evolve more representations in order to function in these noisy environments, which also greatly improves their functionality. However, these systems also tend to further smear their representations over their internal states, making them more vulnerable to internal noise. We also show that Markov Brains (MBs) are mostly unaffected by ambient noise, and their representations remain sparsely distributed (i.e., not smeared). This suggests that ambient noise helps to increase the number of representations formed in neural networks, but also requires us to find additional solutions to prevent smearing of said representations.
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, pp. 424–431 (July 29–August 2, 2019). doi:10.1162/isal_a_00197
Abstract
The foundation of biological structures is self-replication. Neural networks are the prime structure used for the emergent construction of complex behavior in computers. We analyze how various network types lend themselves to self-replication. We argue that backpropagation is the natural way to navigate the space of network weights and show how it allows non-trivial self-replicators to arise naturally. We then extend the setting to construct an artificial chemistry environment of several neural networks.
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, pp. 416–423 (July 29–August 2, 2019). doi:10.1162/isal_a_00196
Abstract
Time-varying artificial neural networks are commonly used for dynamic problems such as game controllers and robotics, as they give the controller a memory of what occurred in previous states; this matters because actions in previous states can influence the final success of the agent. Because of this temporal dependence, methods such as back-propagation can be difficult to use to optimise network parameters, so genetic algorithms (GAs) are often used instead. While recurrent neural networks (RNNs) are commonly used with GAs, long short-term memory (LSTM) networks have received less attention. Since LSTM networks have a wide range of temporal dynamics, in this paper we evolve an LSTM network as a controller for a lunar-lander task with two evolutionary algorithms: a steady-state GA (SSGA) and an evolution strategy (ES). Due to the presence of a large local optimum in the fitness space, we added an incremental fitness scheme to both evolutionary algorithms. We also compare the behaviour and evolutionary progress of the LSTM with that of an RNN evolved via NEAT and ES with the same fitness function. LSTMs proved to be evolvable on such tasks, though the SSGA solution was outperformed by the RNN. However, despite using an incremental scheme, the ES developed solutions far better than both, showing that ES can be used both for incremental fitness and for LSTMs and RNNs on dynamic tasks.
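Evolving network parameters without gradients, as both algorithms above do, reduces to perturb-evaluate-select. A minimal (1+λ) evolution strategy sketch on a toy fitness function (the real task would roll out the LSTM controller in the lunar-lander environment; `fitness` here is a hypothetical stand-in, and the hyperparameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(theta):
    """Toy stand-in for the lunar-lander return (higher is better);
    maximised at theta = 3 in every coordinate."""
    return -float(np.sum((theta - 3.0) ** 2))

def evolve(theta, sigma=0.3, lam=20, generations=200, decay=0.99):
    """(1+lambda) ES: each generation keep the best of the parent and
    lam Gaussian-perturbed offspring; sigma decays so the search can
    settle once it nears an optimum."""
    best, best_f = theta, fitness(theta)
    for _ in range(generations):
        offspring = best + sigma * rng.normal(size=(lam, best.size))
        fs = [fitness(o) for o in offspring]
        i = int(np.argmax(fs))
        if fs[i] > best_f:
            best, best_f = offspring[i], fs[i]
        sigma *= decay
    return best, best_f
```

An incremental fitness scheme, as used in the paper, would swap in progressively harder variants of `fitness` as evolution proceeds, which helps the population climb past the large local optimum mentioned above.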