In combinatorial optimization, the goal is to find an optimal solution, according to some objective function, from a discrete search space. These problems arise widely in industry and academia and, unfortunately, many of them are NP-hard: unless P = NP, no polynomial-time algorithm can guarantee a certified optimal solution. Therefore, over the last decades researchers have investigated the use of stochastic search algorithms to find near-optimal solutions to these problems. In particular, great research effort has been devoted to the development and application of metaheuristic algorithms for combinatorial optimization problems.

This special issue contains six high-quality articles addressing practical applications and theoretical developments of metaheuristic algorithms in the context of combinatorial optimization problems. The articles in this issue have been selected from among 23 submissions after a thorough peer review process. Their contents, outlined in the next paragraphs, reflect the diversity of the application domains and the methods applied to solve the problems.

Existing computational approaches to de novo protein tertiary structure prediction fail to fully scale up to proteins of 100 residues and above. The article “Generating, Maintaining, and Exploiting Diversity in a Memetic Algorithm for Protein Structure Prediction” by Mario Garza-Fabre, Shaun M. Kandathil, Julia Handl, Joshua Knowles, and Simon C. Lovell states that this is mainly due to a lack of exploration of properly diverse protein folds, and to an acute form of deception in the energy function. In the article, solutions to both of these problems are studied through a multistage memetic algorithm incorporating the well-known Rosetta method as a local search routine. The authors state that the obtained results should translate to a noticeable improvement in the performance of protein structure prediction algorithms in blind experiments and, potentially, to a further step towards the more challenging problem of predicting the three-dimensional shape of large proteins.
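As background, a memetic algorithm couples an evolutionary loop with a local search routine that refines candidate solutions. The following is a minimal, generic sketch of this idea only, not the article's multistage method; the improver here is an arbitrary user-supplied function standing in for a routine such as Rosetta, and all names are illustrative:

```python
import random

def memetic_minimize(evaluate, local_search, init, pop_size=20, gens=50, seed=0):
    """Bare-bones memetic algorithm: a steady-state evolutionary loop whose
    offspring are refined by a local-search routine before competing for
    survival. Purely illustrative; minimizes `evaluate`."""
    rng = random.Random(seed)
    pop = [local_search(init(rng)) for _ in range(pop_size)]
    for _ in range(gens):
        a, b = rng.sample(pop, 2)
        parent = a if evaluate(a) < evaluate(b) else b    # binary tournament
        child = [g + rng.gauss(0, 0.1) for g in parent]   # Gaussian mutation
        child = local_search(child)                       # memetic refinement
        worst = max(range(pop_size), key=lambda i: evaluate(pop[i]))
        if evaluate(child) < evaluate(pop[worst]):
            pop[worst] = child                            # replace the worst
    return min(pop, key=evaluate)
```

Maintaining diversity, as the article emphasizes, would require additional machinery (e.g., niching or explicit diversity objectives) on top of this basic loop.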

Two articles in the issue deal with automated heuristic design, or hyper-heuristics; these approaches aim to increase the level of generality and reduce the role of the human expert when applying heuristic optimization. With increasing computing power, the use of ensemble methods in machine learning is gaining relevance. The article “A Hyper-Heuristic Ensemble Method for Static Job-Shop Scheduling” by Emma Hart and Kevin Sim proposes a divide-and-conquer strategy in which a group of evolved scheduling heuristics is maintained and each heuristic solves a unique subset of the considered instance set. Heuristics are linear sequences of dispatching rules, where each rule is a tree structure evolved with Genetic Programming. After a training process, the ensemble outperforms both human-made dispatching rules and standard Genetic Programming on the studied benchmarks.
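To make the notion of a sequence of dispatching rules concrete, the toy sketch below schedules jobs on a single machine by cycling through a fixed sequence of hand-written rules; this is only an illustration of the underlying concept, not the article's evolved heuristics, and the job fields and rule names are assumptions:

```python
def schedule(jobs, rules):
    """Toy single-machine scheduler: at each step the current dispatching rule
    scores the waiting jobs, and the job with the lowest score runs next.
    Rules are applied in a cyclic sequence."""
    order, t, waiting, step = [], 0, list(jobs), 0
    while waiting:
        rule = rules[step % len(rules)]
        nxt = min(waiting, key=lambda j: rule(j, t))
        waiting.remove(nxt)
        order.append(nxt["id"])
        t += nxt["p"]          # advance the clock by the processing time
        step += 1
    return order

spt = lambda j, t: j["p"]      # shortest processing time first
edd = lambda j, t: j["due"]    # earliest due date first
```

In the article, each rule is instead a GP-evolved tree over job attributes, and the ensemble assigns each problem instance to the heuristic that handles it best.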

An important but largely unfulfilled promise of hyper-heuristic research is the development of cross-domain approaches, that is, methodologies that work well, with minimal design effort, across multiple problem domains. This is achieved by the article “Hybrid Self-Adaptive Evolution Strategies Guided by Neighborhood Structures for Combinatorial Optimization Problems,” where V. N. Coelho et al. address three complex combinatorial optimization problems: the Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, the Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. The proposed method is an Evolution Strategy (ES) that self-adapts its mutation operators, which are applied in combination with a Reduced Variable Neighborhood Search (RVNS) strategy. This hybrid approach also fits in the framework of self-adaptive Memetic Algorithms. The adaptive RVNS variant explores specific parts of each available neighborhood, guided by probabilities evolved through the ES process. The approach implicitly accounts for problem-specific characteristics and for the success of a given neighborhood during the search. The self-adaptive ES was able to adapt the mutation operators in such a way that exploration and exploitation remain balanced throughout the generations of the evolutionary process, producing competitive results on the studied domains.
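The core idea of letting the search adapt its own neighborhood-selection probabilities based on success can be sketched in a few lines. This is a simplified success-based weighting scheme, not the authors' exact ES/RVNS mechanism; the example neighborhoods and the update rule are assumptions made for illustration:

```python
import random

def self_adaptive_search(solution, neighborhoods, evaluate, iters=2000, seed=0):
    """Illustrative self-adaptive neighborhood selection: each neighborhood's
    selection probability grows when it produces improvements and decays
    otherwise. Minimizes `evaluate` by first-improvement hill climbing."""
    rng = random.Random(seed)
    weights = [1.0] * len(neighborhoods)
    best, best_val = solution, evaluate(solution)
    for _ in range(iters):
        i = rng.choices(range(len(neighborhoods)), weights=weights)[0]
        cand = neighborhoods[i](best, rng)
        val = evaluate(cand)
        if val < best_val:
            best, best_val = cand, val
            weights[i] += 1.0                      # reward the successful move
        else:
            weights[i] = max(0.1, weights[i] * 0.99)  # slow decay, keep > 0
    return best, best_val

# Example neighborhoods for an integer vector (illustrative):
def inc(s, rng):
    t = list(s); t[rng.randrange(len(t))] += 1; return t

def dec(s, rng):
    t = list(s); t[rng.randrange(len(t))] -= 1; return t
```

In the article, the analogous probabilities are themselves evolved by the ES rather than updated by a fixed rule as above.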

Nowadays, large amounts of sensitive data traverse networks at all times. To protect these data from potential intruders, the information must be encrypted using fast and secure procedures. The article “Evolutionary Algorithms for Boolean Functions in Diverse Domains of Cryptography” by Stjepan Picek, Claude Carlet, Sylvain Guilley, Julian Miller, and Domagoj Jakobovic applies several evolutionary algorithms to find Boolean functions with desired properties to be used in cryptographic algorithms (ciphers). Genetic Algorithms, Genetic Programming, and Cartesian Genetic Programming are used to solve single-objective formulations of the problem, while NSGA-II is used to solve the multiobjective formulations. Two different problems are addressed: one in the context of masking secrets and the other in stream ciphers.
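One property that such fitness functions typically measure is nonlinearity, i.e., the distance of a Boolean function from the set of affine functions. The sketch below computes it via the fast Walsh-Hadamard transform; this is a standard textbook computation offered as background, not necessarily the exact fitness used in the article:

```python
def nonlinearity(truth_table):
    """Nonlinearity of an n-variable Boolean function given as a truth table
    of length 2^n, computed as 2^(n-1) - max|W_f| / 2 using an in-place
    fast Walsh-Hadamard transform."""
    n_points = len(truth_table)
    w = [1 - 2 * b for b in truth_table]  # signed form: 0 -> +1, 1 -> -1
    h = 1
    while h < n_points:                   # butterfly passes of the WHT
        for i in range(0, n_points, 2 * h):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    return n_points // 2 - max(abs(v) for v in w) // 2
```

Affine functions have nonlinearity 0; for two variables, the AND function attains the maximum value 1.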

One of the crucial decisions when developing an evolutionary algorithm is the choice of a suitable fitness function. The article “A New Cost Function for Evolution of S-Boxes” by Stjepan Picek, Marko Cupic, and Leon Rotim deals with this question in the context of cryptography. More specifically, the authors consider the problem of evolving highly nonlinear S-Boxes, which play an important role in many modern cryptographic algorithms. The authors develop a new fitness function for this purpose and present a comprehensive comparison with current state-of-the-art fitness functions. An experimental study shows that the new fitness function not only leads to better results but also allows the algorithm to obtain them faster.

Most of the metaheuristic algorithms that are applied to hard combinatorial optimization problems use the objective function as their only source of information about the problem. These are the so-called black-box algorithms. The black-box complexity of a problem class is the number of objective function evaluations that a black-box algorithm requires to solve any problem in the class. Determining the black-box complexity of a class therefore tells researchers whether the current black-box algorithms for that class can still be improved. In “The Unrestricted Black-Box Complexity of Jump Functions,” Maxim Buzdalov, Benjamin Doerr, and Mikhail Kever derive upper and lower bounds on the black-box complexity of the class of Jump functions, which model optimization problems that are difficult for elitist algorithms. As a by-product, a new method for obtaining lower bounds on black-box complexity is presented for the first time.
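For readers unfamiliar with this benchmark class: a common textbook form of the Jump function (the article's exact definition may differ in details) is OneMax shifted by a parameter k, with a deceptive "gap" just below the optimum where fitness decreases, so elitist hill-climbers get stuck:

```python
def jump(x, k):
    """Textbook Jump_k fitness on a bit list x: k + OneMax(x) outside the
    gap, and a deceptive n - OneMax(x) on the k highest non-optimal levels."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones  # inside the gap: moving toward the optimum looks worse
```

Crossing the gap requires flipping k specific bits at once, which is what makes the class hard for elitist mutation-based algorithms and interesting for black-box complexity analysis.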

Finally, we would like to thank all the authors who submitted a paper to this special issue (accepted or not), the referees for their constructive and thorough reviews, and the editor-in-chief, Hans-Georg Beyer, for his great and prompt support.

Guest Editors