The theory of evolutionary computation (EC) has experienced rapid and productive growth in recent years. New proof techniques and novel theoretical frameworks have allowed advances in our understanding of the processes and structures inherent in evolutionary optimization. As a result, the frontiers of our knowledge have been expanded further than ever before. Some recent trends in this field, which are covered in this issue, include developments in the understanding of the behavior of evolutionary algorithms (EAs) in dynamic environments rather than just static settings, a theoretical appreciation of the advantages arising from the parallelization of evolutionary algorithms through a greater comprehension of the underlying dynamics, and an understanding of algorithm behavior on broad function classes, including NP-hard problems.

The primary goal of this special issue is to provide extended and polished versions of diverse examples of the best theoretical work presented at conferences in 2014, and to serve as a forum for researchers to advance the theoretical understanding of evolutionary computation methods. The papers included in this special issue span a wide range of topics and offer the reader a cross section of recent outstanding work in EC theory.

In dynamic optimization the objective function changes over time, and optimization algorithms face the additional challenge of tracking these changes to be successful. The article “Analysis of Randomised Search Heuristics for Dynamic Optimisation,” by Thomas Jansen and Christine Zarges, presents a novel analytical framework, inspired by the fixed-budget computations perspective, for the analysis of randomized search heuristics on dynamic problems. The authors introduce an interesting new class of bi-stable dynamic functions, in which the optimum oscillates between two complementary bit strings, and apply the framework to analyze and compare the performance of evolutionary algorithms and artificial immune systems on this class.
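The authors' exact definition of the bi-stable functions is not reproduced here, but the core idea can be sketched with a hypothetical toy version: fitness counts agreement with a current optimum that flips to its bitwise complement after each fixed period.

```python
# Illustrative sketch only (not the authors' exact definition): a dynamic
# fitness function whose optimum oscillates between a target bit string
# and its bitwise complement every `period` time steps.

def dynamic_match(x, target, t, period):
    """Fitness of bit string x at time t: number of positions that agree
    with the current optimum, which flips to the complement each period."""
    if (t // period) % 2 == 1:
        target = [1 - b for b in target]  # complementary-string phase
    return sum(1 for xi, ti in zip(x, target) if xi == ti)

target = [1, 1, 0, 1]
print(dynamic_match([1, 1, 0, 1], target, t=0, period=5))  # 4: matches the optimum
print(dynamic_match([1, 1, 0, 1], target, t=5, period=5))  # 0: the optimum has flipped
```

A search heuristic that merely converges to one phase's optimum scores worst in the opposite phase, which is what makes tracking, rather than one-shot optimization, necessary in this setting.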

Over three decades ago, László Lovász observed that in discrete optimization, submodularity is the counterpart to convexity. However, in contrast to the focus on convex functions in continuous evolutionary optimization, so far submodular functions have received comparatively little attention from EC theoreticians studying discrete functions. The article “Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms,” by Tobias Friedrich and Frank Neumann, addresses this gap by analyzing the performance of evolutionary algorithms on different classes of submodular functions. The maximization of submodular functions is NP-hard in general, and the authors present several approximation results for monotone submodular and nonmonotone symmetric submodular functions under different kinds of matroid constraints.
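Submodularity is the property of diminishing returns: for sets A ⊆ B and an element x outside B, adding x to A gains at least as much as adding it to B. The small sketch below illustrates this with a coverage function, a standard textbook example of a monotone submodular function (the specific sets are invented for illustration):

```python
# Diminishing returns illustrated with a coverage function
# f(S) = |union of the chosen sets|, a standard example of a
# monotone submodular function. The concrete sets are made up.

def coverage(sets, S):
    """f(S): number of ground elements covered by the sets indexed by S."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

sets = {1: {1, 2}, 2: {2, 3}, 3: {3, 4}}
A = frozenset({1})       # A is a subset of B
B = frozenset({1, 2})
x = 3                    # element to add, outside B
gain_A = coverage(sets, A | {x}) - coverage(sets, A)  # marginal gain on A
gain_B = coverage(sets, B | {x}) - coverage(sets, B)  # marginal gain on B
print(gain_A, gain_B)  # 2 1 — gain_A >= gain_B, as submodularity requires
```

Despite this clean structure, exact maximization is NP-hard, which is why the approximation guarantees in the article are the natural currency of results.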

The idea behind parallel evolutionary algorithms is to evolve multiple subpopulations in parallel and allow interprocess communication at given time intervals. During these migration phases, fractions of each subpopulation can be shared among the subpopulations. There is very little understanding of how the migration frequency affects algorithmic performance, so setting the migration interval parameter appropriately may be difficult. In the article “Design and Analysis of Schemes for Adapting Migration Intervals in Parallel Evolutionary Algorithms,” Andrea Mambrini and Dirk Sudholt propose two schemes to automatically adapt the migration interval of parallel EAs during execution and provide a rigorous analytical framework that yields upper bounds on the expected runtime and expected communication effort of the parallel EAs with different migration topologies for various function classes.
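The island-model scheme described above can be made concrete with a minimal sketch (illustrative only, not the authors' algorithm or their adaptive schemes): each island runs a (1+1) EA on the well-known OneMax benchmark, and every `tau` generations each island receives its ring neighbor's current individual, keeping it if it is better.

```python
# Minimal island-model sketch (illustrative; not the paper's adaptive
# schemes): islands each run a (1+1) EA on OneMax and, every `tau`
# generations, exchange individuals along a ring migration topology.
import random

def onemax(x):
    return sum(x)

def mutate(x, rate):
    return [1 - b if random.random() < rate else b for b in x]

def island_ea(n=20, islands=4, tau=10, generations=200, seed=1):
    random.seed(seed)
    pops = [[random.randint(0, 1) for _ in range(n)] for _ in range(islands)]
    for gen in range(1, generations + 1):
        for i in range(islands):              # one elitist (1+1) EA step per island
            child = mutate(pops[i], 1.0 / n)
            if onemax(child) >= onemax(pops[i]):
                pops[i] = child
        if gen % tau == 0:                    # migration phase
            migrants = [p[:] for p in pops]   # snapshot before any exchange
            for i in range(islands):          # ring: island i receives from i - 1
                incoming = migrants[(i - 1) % islands]
                if onemax(incoming) > onemax(pops[i]):
                    pops[i] = incoming
    return max(onemax(p) for p in pops)

print(island_ea())
```

The parameter `tau` is exactly the migration interval whose effect is poorly understood: migrating too often wastes communication, while migrating too rarely forfeits the benefit of sharing progress, which motivates adapting it during the run.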

In the field of genetic programming, a long-standing open problem is how to address the issue of bloat, the emergence during evolution of solution elements that do not contribute significantly or at all to program fitness or semantics but increase program complexity. The article “On the Performance of Different Genetic Programming Approaches for the SORTING Problem,” by Markus Wagner, Frank Neumann, and Tommaso Urli, tackles the issue of bloat control in the context of sorting. As a basis for their study, they consider program trees and use some measure of sortedness of an in-order traversal to evaluate their fitness. The authors investigate single- and multiobjective variants of genetic programming algorithms with and without bloat control mechanisms, give rigorous upper bounds on their running times, and complement the study with experiments.
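A simple sortedness measure of the kind used in runtime analyses of SORTING can be sketched as follows (a hedged illustration of one common measure, not necessarily the exact fitness the authors use): count the adjacent pairs of the in-order traversal that appear in ascending order.

```python
# One common sortedness measure from the runtime-analysis literature
# (illustrative; the authors may use a different measure): the number
# of adjacent pairs of the sequence that are in ascending order.

def sortedness(seq):
    """Number of adjacent pairs (seq[i], seq[i+1]) with seq[i] <= seq[i+1];
    equals len(seq) - 1 exactly when the sequence is sorted."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a <= b)

print(sortedness([1, 2, 3, 4]))  # 3: fully sorted
print(sortedness([2, 1, 4, 3]))  # 1: only the pair (1, 4) is ascending
```

Under such a measure, program trees whose traversals are equally sorted can differ greatly in size, which is precisely the situation in which bloat control mechanisms matter.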

The topic of constraint handling has recently gained traction in the continuous domain. The article “Markov Chain Analysis of Cumulative Step-Size Adaptation on a Linear Constrained Problem,” by Alexandre Chotard, Anne Auger, and Nikolaus Hansen, presents a rigorous analysis of a (1,λ)-Evolution Strategy using resampling on a linear function with a linear constraint. The authors prove the previously assumed property that a Markov chain describing the behavior of the algorithm is stable, both with constant step size and with cumulative step-size adaptation with cumulation parameter equal to 1. This property characterizes the divergence of the algorithm with constant step size and the geometric divergence or convergence with step-size adaptation, implying fast convergence of Monte Carlo simulations of the divergence rate.

In their seminal 2006 paper, Droste, Jansen, and Wegener introduced the concept of black box complexity in order to establish a complexity theory for general-purpose randomized search heuristics. Roughly speaking, the black box complexity of a problem is the minimum number of function evaluations that any black box algorithm needs to solve it, and it therefore yields lower bounds on the performance of all such algorithms. Recently, black box models have been refined and developed extensively, allowing the hardness of objective function classes to be understood more precisely. The article “Unbiased Black Box Complexities of Jump Functions,” by Benjamin Doerr, Carola Doerr, and Timo Kötzing, analyzes the unbiased black box complexity of a class of Jump functions in which the local optima lie at Hamming distance k from the global optimum. The authors provide polynomial upper bounds on the unbiased black box complexity of Jump for different sizes of the gap. In particular, they show that a polynomial-time unbiased black box algorithm exists even when almost all of the search space is a plateau of constant fitness.
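The Jump function class can be made concrete with its classical formulation (one standard version from the literature; the class analyzed in the article may differ in details): fitness guides the search toward strings with n − k ones, after which a gap of k bits must be jumped to reach the all-ones optimum.

```python
# The classical Jump_k function on bit strings of length n (one standard
# formulation; the paper's class may differ in details). Fitness increases
# up to n - k ones, then drops inside the "gap" until the all-ones optimum.

def jump(x, k):
    n = len(x)
    ones = sum(x)
    if ones <= n - k or ones == n:
        return k + ones    # gradient toward the gap; optimum has value n + k
    return n - ones        # inside the gap: fitness points away from the optimum

n, k = 6, 2
print(jump([1] * n, k))                      # 8 = n + k: global optimum
print(jump([1] * (n - k) + [0] * k, k))      # 6 = n: local optima, k bits away
print(jump([1] * (n - 1) + [0], k))          # 1: inside the gap
```

For small k the gap is the obstacle that standard mutation-based EAs need exponential expected time in k to cross; for very large k, by contrast, almost the whole search space becomes a region of uninformative fitness, the plateau regime addressed by the article's polynomial-time result.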

The guest editors would like to thank the authors for their contributions, the referees for their careful reviewing and constructive comments, and the editor-in-chief, Hans-Georg Beyer, for his support in preparing this special issue.