Search results 1–4 of 4 for author: Risto Miikkulainen
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2017) 25 (3): 503–528.
Published: 01 September 2017
Abstract
No Free Lunch (NFL) theorems have been developed in many settings over the last two decades. Whereas NFL is known to be possible in any domain based on set-theoretic concepts, probabilistic versions of NFL are presently believed to be impossible in continuous domains. This article develops a new formalization of probabilistic NFL that is sufficiently expressive to prove the existence of NFL in large search domains, such as continuous spaces or function spaces. This formulation is arguably more complicated than its set-theoretic variants, mostly as a result of the numerous technical complications within probability theory itself. However, a probabilistic conceptualization of NFL is important because stochastic optimization methods inherently need to be evaluated probabilistically. Thus the present study fills an important gap in the study of performance of stochastic optimizers.
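As an aside, the classic set-theoretic NFL result that motivates this probabilistic extension can be verified exhaustively in a toy finite domain. The sketch below is illustrative only (the search space, budget, and visit orders are invented for the demonstration, not taken from the article): two fixed-order search algorithms achieve identical performance when averaged over all possible objective functions.

```python
from itertools import product

X = [0, 1, 2]                    # tiny finite search space
Y = [0, 1]                       # possible objective values
functions = list(product(Y, repeat=len(X)))  # all 2^3 objective functions

def performance(order, f, budget=2):
    # best objective value found after `budget` distinct evaluations
    return max(f[x] for x in order[:budget])

order_a = [0, 1, 2]              # algorithm A: left-to-right sweep
order_b = [2, 0, 1]              # algorithm B: a different fixed order

avg_a = sum(performance(order_a, f) for f in functions) / len(functions)
avg_b = sum(performance(order_b, f) for f in functions) / len(functions)
print(avg_a, avg_b)              # equal averages, as NFL predicts: 0.75 0.75
```

The article's contribution is precisely that this kind of exhaustive averaging does not transfer directly to continuous or function spaces, where a measure-theoretic formulation is required.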
Evolutionary Computation (2016) 24 (3): 459–490.
Published: 01 September 2016
Abstract
Many challenging sequential decision-making problems require agents to master multiple tasks. For instance, game agents may need to gather resources, attack opponents, and defend against attacks. Learning algorithms can thus benefit from having separate policies for these tasks, and from knowing when each one is appropriate. How well this approach works depends on how tightly coupled the tasks are. Three cases are identified: Isolated tasks have distinct semantics and do not interact, interleaved tasks have distinct semantics but do interact, and blended tasks have regions where semantics from multiple tasks overlap. Learning across multiple tasks is studied in this article with Modular Multiobjective NEAT, a neuroevolution framework applied to three variants of the challenging Ms. Pac-Man video game. In the standard blended version of the game, a surprising, highly effective machine-discovered task division surpasses human-specified divisions, achieving the best scores to date in this game. In isolated and interleaved versions of the game, human-specified task divisions are also successful, though the best scores are surprisingly still achieved by machine discovery. Modular neuroevolution is thus shown to be capable of finding useful, unexpected task divisions better than those apparent to a human designer.
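The task-division idea in this abstract can be sketched as simple module arbitration: each task-specific module reports a preference for controlling the agent in the current state, and the most confident module acts. The module names, state encoding, and thresholds below are illustrative assumptions, not the article's actual networks:

```python
# Hypothetical sketch of arbitration between task modules via
# "preference" outputs: each module scores the current state, and
# the most confident module chooses the action.

def threat_module(state):
    # prefers control when a ghost is near; emits a fleeing action
    preference = 1.0 if state["ghost_dist"] < 3 else 0.1
    return preference, "flee"

def forage_module(state):
    # prefers control when no threat is near; emits an eating action
    preference = 1.0 if state["ghost_dist"] >= 3 else 0.2
    return preference, "eat"

def act(state, modules):
    # arbitration: the module with the highest preference acts
    return max(m(state) for m in modules)[1]

modules = [threat_module, forage_module]
print(act({"ghost_dist": 1}, modules))   # flee
print(act({"ghost_dist": 9}, modules))   # eat
```

In the article, neither the module semantics nor the arbitration is hand-coded as above; both are evolved, which is how the surprising machine-discovered divisions arise.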
Evolutionary Computation (2002) 10 (2): 99–127.
Published: 01 June 2002
Abstract
An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.
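The first two components named in the abstract rest on NEAT's historical markings (innovation numbers): genes are aligned by shared history rather than position, which makes crossover of different topologies principled and gives a compatibility distance for speciation. The minimal sketch below is a simplification for illustration (genomes reduced to innovation-number-to-weight maps, invented coefficient values), not NEAT's actual implementation:

```python
import random
random.seed(1)

def crossover(parent1, parent2):
    # matching genes (same innovation number) are inherited randomly;
    # disjoint/excess genes come from the (assumed fitter) first parent
    child = {}
    for innov, weight in parent1.items():
        if innov in parent2:
            child[innov] = random.choice([weight, parent2[innov]])
        else:
            child[innov] = weight
    return child

def compatibility(parent1, parent2, c_disjoint=1.0, c_weight=0.4):
    # simplified speciation distance: disjoint-gene count plus
    # average weight difference of matching genes
    shared = parent1.keys() & parent2.keys()
    disjoint = len(parent1.keys() ^ parent2.keys())
    n = max(len(parent1), len(parent2))
    w_diff = sum(abs(parent1[i] - parent2[i]) for i in shared) / max(len(shared), 1)
    return c_disjoint * disjoint / n + c_weight * w_diff

g1 = {1: 0.5, 2: -0.3, 4: 0.9}       # keys are innovation numbers
g2 = {1: 0.4, 2: 0.1, 3: 0.7}
child = crossover(g1, g2)
print(sorted(child))                  # [1, 2, 4]
print(round(compatibility(g1, g2), 3))  # 0.767
```

Genomes whose compatibility distance exceeds a threshold are placed in separate species, which is what protects a new structural innovation while its weights are still being optimized.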
Evolutionary Computation (1997) 5 (4): 373–399.
Published: 01 December 1997
Abstract
This article demonstrates the advantages of a cooperative, coevolutionary search in difficult control problems. The symbiotic adaptive neuroevolution (SANE) system coevolves a population of neurons that cooperate to form a functioning neural network. In this process, neurons assume different but overlapping roles, resulting in a robust encoding of control behavior. SANE is shown to be more efficient and more adaptive and to maintain higher levels of diversity than the more common network-based population approaches. Further empirical studies illustrate the emergent neuron specializations and the different roles the neurons assume in the population.
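SANE's central idea, evolving a population of neurons rather than whole networks and crediting each neuron with the performance of the networks it joins, can be sketched minimally. The toy task, team size, and scoring below are illustrative assumptions (neurons reduced to single weights), not the article's actual control experiments:

```python
import random
random.seed(2)

# Sketch of SANE-style cooperative coevolution: the population holds
# neurons, networks are sampled teams of neurons, and each neuron is
# credited with the scores of every network it participated in.

POP, TEAM, TRIALS = 20, 3, 50

def evaluate(team):
    # toy task: a "network" scores higher when its neurons sum to ~1
    return -abs(sum(team) - 1.0)

population = [random.uniform(-1, 1) for _ in range(POP)]
scores = [[] for _ in range(POP)]

for _ in range(TRIALS):
    idx = random.sample(range(POP), TEAM)        # sample a team of neurons
    score = evaluate([population[i] for i in idx])
    for i in idx:
        scores[i].append(score)                  # credit every participant

# neuron fitness = mean score over the networks it participated in
fitness = [sum(s) / len(s) if s else float("-inf") for s in scores]
best = max(range(POP), key=lambda i: fitness[i])
print(round(population[best], 2), round(fitness[best], 2))
```

Because a neuron only scores well by complementing its teammates, selection pressure favors the different-but-overlapping specializations the abstract describes, which is also why diversity stays higher than in network-level populations.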