Benchmarking of optimization algorithms is necessary to quantitatively assess the performance of optimizers and to understand their strengths and weaknesses. The Black Box Optimization Benchmarking (BBOB) workshops that took place in 2009, 2010, and 2012 during the Genetic and Evolutionary Computation Conference (GECCO) were set up to benchmark both stochastic and deterministic continuous optimization algorithms. For this purpose, a thorough experimental setup, a set of test functions, and a visualization tool were designed and provided. They are based on the ideas that (i) test functions should be representative of typical known difficulties, scalable with dimension, and not too easy to solve, yet comprehensible; and (ii) performance measures should be quantitative. A tool for acquiring and postprocessing the data was also provided.

This special issue on Black Box Optimization Benchmarking contains papers that extend or build on results obtained during the BBOB GECCO 2009 and 2010 workshops. All articles went through the standard rigorous review process, from which seven papers in total were selected; five are published in this special issue, and two will appear, for reasons of space, in a regular issue.

We would like to thank all of the authors for contributing to the special issue, as well as the reviewers for their reviews. We are indebted to Hans-Georg Beyer, Editor-in-Chief of Evolutionary Computation, for his patience and support. The works presented in this special issue rely heavily on the Comparing Continuous Optimizers (COCO) tool, continuously developed since 2008 by the BBOB team; among its members, we would like to thank in particular Raymond Ros, Steffen Finck, Petr Pošík, Mike Preuss, Olaf Mersmann, and Verena Heidrich-Meisner for their work and enthusiasm.