Douglas Kirkpatrick
Proceedings Papers
isal2021, ALIFE 2021: The 2021 Conference on Artificial Life, 114 (July 18–22, 2021). doi: 10.1162/isal_a_00458
Abstract
Brains are among the most complex evolved objects. In recent years we have seen an explosion in the development of artificial cognitive systems constructed in silico (i.e., digital brains). In fact, we are now capable of creating digital brains whose operation is so complex that they are effectively black boxes (Castelvecchi, 2016; Gunning, 2017). Previous work (Marstaller et al., 2013; Hintze et al., 2018; Kirkpatrick and Hintze, 2019) has identified and expanded upon various information-theoretic measures that can shed light on the internal processes of digital brains. Here we introduce a new information-theoretic measure called Fragmentation (F), which measures how fragmented information is in a digital brain. To provide an example of the application of F, we look at the evolutionary emergence of complexity. Questions regarding the evolution of complexity have been of interest for as long as evolution has been a theory (Gregory, 1935). Nature is responsible for the development of a massive array of complex organisms, each composed of various organs and regulatory systems that are themselves complex (McShea and Brandon, 2010). It has been observed that complexity can evolve even when complexity itself is being selected against (Beslon et al., 2021). We conclude by using F to show a case of evolved complexity that results in coincidental encryption.
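The abstract does not spell out the formula for F, but fragmentation-style analyses generally ask whether the information a brain holds about a concept can be recovered from a small subset of nodes or only from many nodes considered jointly. The sketch below is a rough illustration of that idea rather than the paper's definition: subset_information() and the toy data are assumptions made purely for illustration.

```python
# Rough illustration of a subset-level mutual-information scan, in the spirit
# of fragmentation-style analyses. The exact definition of F in the paper may
# differ; subset_information() and the toy data below are illustrative only.
from collections import Counter
from itertools import combinations
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits for two aligned sequences of discrete symbols."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def subset_information(concept, node_states, k):
    """I(concept; joint state) for every k-node subset of the brain."""
    num_nodes = len(node_states[0])
    return {subset: mutual_information(
                concept, [tuple(row[i] for i in subset) for row in node_states])
            for subset in combinations(range(num_nodes), k)}

# Toy data: the concept is the XOR of the first two hidden nodes, so no single
# node carries it, but a two-node subset recovers it completely.
concept = [0, 1, 0, 1, 1, 1, 0, 0]
node_states = [(0, 0, 1), (1, 0, 1), (1, 1, 0), (0, 1, 0),
               (1, 0, 0), (0, 1, 1), (0, 0, 0), (1, 1, 1)]

whole_brain = mutual_information(concept, node_states)
best_single = max(subset_information(concept, node_states, 1).values())
best_pair = max(subset_information(concept, node_states, 2).values())
# Information that only appears when nodes are considered jointly is, in this
# loose sense, fragmented across the brain.
print(whole_brain, best_single, best_pair)
```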
Proceedings Papers
isal2021, ALIFE 2021: The 2021 Conference on Artificial Life, 113 (July 18–22, 2021). doi: 10.1162/isal_a_00457
Abstract
Artificial cognitive systems (e.g., artificial neural networks) have taken an ever more prominent role in the modern world, providing enhancements to everyday life in our cars, in our phones, and on the internet. In order to produce systems more capable of achieving their designated tasks, previous work has sought to direct the evolution of networks using a process referred to as R-augmentation. This process selects for the maximization of an information-theoretic measure of the agent's stored understanding of the environment, its representation (R), in addition to selecting for task performance. This method was shown to induce increased task performance in a shorter amount of evolutionary time compared to a standard genetic algorithm. Extensions of this work have looked at how R-augmentation affects the distribution of representations across the neurons of the brain "tissue", or nodes of the network, referred to as smearedness (S). Here we seek to improve upon the prior methods by moving beyond the simple maximization used in the original augmentation formula, using the MAP-Elites algorithm to identify intermediate target values to optimize toward. We also examine, with mixed success, the feasibility of using MAP-Elites itself as an optimization method in place of the traditional selection methods used with R-augmentation. These methods allow us to shape how the network evolves and to produce better-performing artificial cognitive systems.
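MAP-Elites itself is a published quality-diversity algorithm (Mouret and Clune, 2015): an archive keyed by a discretized behavior descriptor, where each cell keeps the best-performing solution found so far and new candidates are produced by mutating randomly chosen elites. The sketch below shows that archive-and-mutate loop; the fitness function, descriptor, and mutation operator are illustrative stand-ins, not the paper's task or its R/S measures.

```python
# Minimal MAP-Elites sketch: an archive keyed by a discretized behavior
# descriptor, where each cell keeps the best-performing genome seen so far.
# evaluate() and descriptor() are toy stand-ins for task performance and a
# measure such as R or S; they are not the paper's actual setup.
import random

def evaluate(genome):
    return -sum((g - 0.5) ** 2 for g in genome)          # toy fitness

def descriptor(genome):
    return round(sum(genome) / len(genome), 1)           # toy 1-D descriptor, binned

def mutate(genome, sigma=0.1):
    return [min(1.0, max(0.0, g + random.gauss(0, sigma))) for g in genome]

def map_elites(genome_len=8, iterations=5000, seeds=50):
    archive = {}                                          # cell -> (fitness, genome)

    def try_insert(genome):
        fit, cell = evaluate(genome), descriptor(genome)
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, genome)

    for _ in range(seeds):                                # random initialization
        try_insert([random.random() for _ in range(genome_len)])
    for _ in range(iterations):                           # mutate random elites
        _, parent = random.choice(list(archive.values()))
        try_insert(mutate(parent))
    return archive

elites = map_elites()
print(len(elites), max(f for f, _ in elites.values()))
```

Because every occupied cell retains its own elite, the archive exposes solutions at intermediate descriptor values, which is what makes it usable for identifying intermediate targets rather than a single maximum.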
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 432–439 (July 29–August 2, 2019). doi: 10.1162/isal_a_00198
Abstract
Natural environments are full of ambient noise; nevertheless, natural cognitive systems not only cope well with uncertainty but also have ways to suppress or ignore noise unrelated to the task at hand. For most intelligent tasks, experiences and observations have to be committed to memory, and these representations of reality inform future decisions. We know that deep-learned artificial neural networks (ANNs) often struggle with the formation of representations. This struggle may be due to the ANN's fully interconnected, layered architecture, which forces information to be propagated over the entire system, unlike natural brains, which have sparsely distributed representations. Here we show how ambient noise causes neural substrates such as recurrent ANNs and long short-term memory neural networks to evolve more representations in order to function in these noisy environments, which also greatly improves their functionality. However, these systems also tend to further smear their representations over their internal states, making them more vulnerable to internal noise. We also show that Markov Brains (MBs) are mostly unaffected by ambient noise, and their representations remain sparsely distributed (i.e., not smeared). This suggests that ambient noise helps to increase the number of representations formed in neural networks, but also requires us to find additional solutions to prevent smearing of said representations.
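The representation measure R used in this line of work (Marstaller et al., 2013, cited above) is an information-theoretic quantity relating environment states, brain (memory) states, and sensor states, and is commonly written as the mutual information between environment and memory conditioned on the sensors. Below is a minimal sketch of such a conditional mutual information estimate on discrete time series; the variable names and toy data are placeholders, and the exact experimental setup in the paper may differ.

```python
# Minimal sketch of a conditional mutual information estimate I(E; M | S)
# on discrete time series, in the spirit of the representation measure R
# (environment E, memory/brain states M, sensors S). Toy data only.
from collections import Counter
from math import log2

def entropy(symbols):
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in Counter(symbols).values())

def conditional_mutual_information(e, m, s):
    """I(E; M | S) = H(E,S) + H(M,S) - H(E,M,S) - H(S), in bits."""
    return (entropy(list(zip(e, s))) + entropy(list(zip(m, s)))
            - entropy(list(zip(e, m, s))) - entropy(s))

# Toy time series: environment feature, joint hidden-node state, sensor reading.
env     = [0, 1, 1, 0, 1, 0, 1, 0]
memory  = [(0, 1), (1, 0), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1), (0, 0)]
sensors = [0, 0, 1, 1, 0, 0, 1, 1]

print(conditional_mutual_information(env, memory, sensors))
```

Conditioning on the sensors is what distinguishes stored knowledge about the environment from information that is merely being sensed at the current time step.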
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 388–395 (July 23–27, 2018). doi: 10.1162/isal_a_00076
Abstract
Artificial neural networks (ANNs), while exceptionally useful for classification, are vulnerable to misdirection. Small amounts of noise can significantly affect their ability to correctly complete a task. Instead of generalizing concepts, ANNs seem to focus on surface statistical regularities in a given task. Here we compare how recurrent artificial neural networks, long short-term memory units, and Markov Brains sense and remember their environments. We show that information in Markov Brains is localized and sparsely distributed, while the other neural network substrates “smear” information about the environment across all nodes, which makes them vulnerable to noise.
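The localization-versus-smearing contrast can be visualized with a simple per-node analysis: estimate how much information each individual node carries about an environment feature and look at how that information is distributed across nodes. The sketch below does this with a discrete mutual-information estimate on toy data; it is an illustrative diagnostic, not the smearedness measure defined in the follow-up work.

```python
# Simple per-node information scan: one way to see whether information about
# an environment feature is localized in a few nodes or smeared across all of
# them. Illustrative only; not the paper's smearedness measure.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy data: environment feature and per-time-step states of four hidden nodes.
env = [0, 1, 1, 0, 1, 0, 0, 1]
nodes = [[0, 1, 1, 0, 1, 0, 0, 1],   # node 0 mirrors the feature (localized)
         [0, 0, 1, 1, 0, 0, 1, 1],
         [1, 0, 1, 0, 1, 0, 1, 0],
         [0, 1, 0, 1, 1, 0, 1, 0]]

per_node = [mutual_information(env, n) for n in nodes]
total = sum(per_node)
# A distribution dominated by one node suggests localized storage; a flat
# distribution across nodes suggests the information is smeared.
print([round(i / total, 2) if total else 0.0 for i in per_node])
```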