Search results for Kalyanmoy Deb: 1-18 of 18 journal articles in Evolutionary Computation.
Effect of Objective Normalization and Penalty Parameter on Penalty Boundary Intersection Decomposition-Based Evolutionary Many-Objective Optimization Algorithms
Evolutionary Computation (2021) 29 (1): 157–186.
Published: 01 March 2021
Abstract
An objective normalization strategy is essential in any evolutionary multiobjective or many-objective optimization (EMO or EMaO) algorithm, due to the distance calculations between objective vectors required to compute the diversity and convergence of population members. For decomposition-based EMO/EMaO algorithms involving the Penalty Boundary Intersection (PBI) metric, normalization is particularly important because two distance measures must be computed. In this article, we present a theoretical analysis of the effect of instabilities in the normalization process on the performance of PBI-based MOEA/D and a proposed PBI-based NSGA-III procedure. Although the effect is well recognized in the literature, few theoretical studies have been done so far to understand its true nature and the choice of a suitable penalty parameter value for an arbitrary problem. The developed theoretical results are corroborated with extensive experimental results on three- to 15-objective convex and non-convex instances of DTLZ and WFG problems. The article draws important theoretical conclusions about PBI-based decomposition algorithms from this study.
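For readers unfamiliar with the PBI metric, the scalarized value of an objective vector f relative to an ideal point z* and a reference direction w is computed from two distances, d1 (projection along w) and d2 (perpendicular deviation from w), combined as d1 + theta*d2. Below is a minimal Python sketch of that computation together with an illustrative min-max normalization of a population's objectives; the function and variable names, and the simple normalization shown, are assumptions for illustration, not the article's exact procedure.

import numpy as np

def pbi(f, z_star, w, theta=5.0):
    """Penalty Boundary Intersection value of objective vector f.

    f, z_star, w are 1-D arrays of equal length; theta is the penalty parameter."""
    w = np.asarray(w, dtype=float)
    diff = np.asarray(f, dtype=float) - np.asarray(z_star, dtype=float)
    w_hat = w / np.linalg.norm(w)                   # unit reference direction
    d1 = abs(float(np.dot(diff, w_hat)))            # distance along the reference direction
    d2 = float(np.linalg.norm(diff - d1 * w_hat))   # perpendicular distance
    return d1 + theta * d2

def normalize(F, eps=1e-12):
    """Illustrative min-max normalization of a population's objective matrix F
    (rows = solutions); instabilities in these estimated bounds are what the
    article analyzes."""
    f_min = F.min(axis=0)
    f_max = F.max(axis=0)
    return (F - f_min) / np.maximum(f_max - f_min, eps)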
Difficulty Adjustable and Scalable Constrained Multiobjective Test Problem Toolkit
Evolutionary Computation (2020) 28 (3): 339–378.
Published: 01 September 2020
Abstract
Multiobjective evolutionary algorithms (MOEAs) have progressed significantly in recent decades, but most of them are designed to solve unconstrained multiobjective optimization problems, whereas many real-world multiobjective problems contain a number of constraints. To promote research on constrained multiobjective optimization, we first propose a problem classification scheme with three primary types of difficulty, which reflect various types of challenges presented by real-world optimization problems, in order to characterize the constraint functions in constrained multiobjective optimization problems (CMOPs). These are feasibility-hardness, convergence-hardness, and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable CMOPs (DAS-CMOPs, or DAS-CMaOPs when the number of objectives is greater than three) with three types of parameterized constraint functions developed to capture the three proposed types of difficulty. Combining the three primary constraint functions with different parameters allows a large variety of CMOPs to be constructed, with difficulty defined by a triplet whose entries specify the level of each type of primary difficulty. Furthermore, the number of objectives in this toolkit can be scaled beyond three. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs and nine CMaOPs, called DAS-CMOP1-9 and DAS-CMaOP1-9, respectively. To evaluate the proposed test problems, two popular CMOEAs, MOEA/D-CDP (MOEA/D with constraint dominance principle) and NSGA-II-CDP (NSGA-II with constraint dominance principle), and two popular constrained many-objective evolutionary algorithms (CMaOEAs), C-MOEA/DD and C-NSGA-III, are used to compare performance on DAS-CMOP1-9 and DAS-CMaOP1-9 with a variety of difficulty triplets. The experimental results reveal that the mechanisms in MOEA/D-CDP may be more effective in solving convergence-hard DAS-CMOPs, while those in NSGA-II-CDP may be more effective in solving DAS-CMOPs with simultaneous diversity-, feasibility-, and convergence-hardness. The mechanisms in C-NSGA-III may be more effective in solving feasibility-hard CMaOPs, while those in C-MOEA/DD may be more effective in solving CMaOPs with convergence-hardness. Moreover, none of these algorithms solves the suggested problems efficiently, which motivates the continued development of new CMOEAs and CMaOEAs for the DAS-CMOPs and DAS-CMaOPs.
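The construction idea can be illustrated with a toy bi-objective problem in which three constraint types, each switched on by one entry of a difficulty triplet (eta, zeta, gamma), respectively shrink the feasible region (feasibility-hardness), raise an infeasible barrier in front of the front (convergence-hardness), and cut gaps along the front (diversity-hardness). The functions and parameter names below are hypothetical stand-ins chosen only to illustrate the composition principle; they are not the DAS-CMOP definitions from the paper.

import numpy as np

def toy_cmop(x, eta=0.5, zeta=0.5, gamma=0.5):
    """Toy constrained bi-objective problem (NOT the paper's DAS-CMOP functions).

    Difficulty triplet (eta, zeta, gamma) in [0, 1] scales three constraint types:
      eta   -> feasibility-hardness: shrinks the feasible band around the front
      zeta  -> convergence-hardness: raises an infeasible barrier before the front
      gamma -> diversity-hardness:  cuts periodic infeasible gaps along the front
    Constraints follow the convention c_i(x) >= 0 means feasible; x_i in [0, 1].
    """
    x = np.asarray(x, dtype=float)
    g = 1.0 + np.sum((x[1:] - 0.5) ** 2)                 # distance-to-front term
    f1 = x[0] * g
    f2 = (1.0 - x[0]) * g
    c_feas = (1.0 - eta) - (g - 1.0)                     # feasible band narrows as eta grows
    c_conv = abs(g - (1.0 + zeta)) - 0.1 * zeta          # infeasible barrier around g = 1 + zeta
    c_div = np.cos(10 * np.pi * x[0]) + (1.0 - gamma)    # gaps along the front as gamma grows
    return np.array([f1, f2]), np.array([c_feas, c_conv, c_div])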
Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations
Evolutionary Computation (2017) 25 (3): 439–471.
Published: 01 September 2017
Abstract
In recent decades, many niching methods have been proposed and empirically verified on some available test problems. They often rely on particular assumptions about the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, the normalized Mahalanobis distance, and Ursem's hill-valley function, to develop a new tool for multimodal optimization that does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance from the centers of fitter subpopulations and from previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resulting method, called covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared with several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes that a rough estimate of the desired/expected number of minima is available. Performance sensitivity to the accuracy of this estimate is also studied by introducing the concept of the robust mean peak ratio. Based on the numerical results using the available and the newly introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.
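The taboo-point mechanism can be sketched compactly: a candidate offspring is rejected (and resampled) if it falls inside any taboo point's rejection radius, with distances measured by a Mahalanobis metric derived from the subpopulation's sampling covariance. This is a simplified illustration under assumed names (is_taboo, radii, sample_offspring); the exact normalization, radius adaptation, and hill-valley test in RS-CMSA are more involved.

import numpy as np

def mahalanobis(x, center, cov):
    """Mahalanobis distance of point x from center under covariance cov."""
    diff = np.asarray(x, dtype=float) - np.asarray(center, dtype=float)
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

def is_taboo(x, taboo_points, cov, radii):
    """Reject x if it lies inside any taboo point's rejection radius.

    taboo_points: centers of fitter subpopulations / previously identified basins
    radii:        per-taboo rejection radii (adapted over time in the real method)
    """
    return any(mahalanobis(x, t, cov) < r for t, r in zip(taboo_points, radii))

# Usage sketch: resample offspring until it escapes all taboo regions.
# while is_taboo(candidate, taboo_points, subpop_cov, radii):
#     candidate = sample_offspring()   # hypothetical sampler from the ES distribution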
Test Problem Construction for Single-Objective Bilevel Optimization
Evolutionary Computation (2014) 22 (3): 439–477.
Published: 01 September 2014
Abstract
In this paper, we propose a procedure for designing controlled test problems for single-objective bilevel optimization. The construction procedure is flexible and allows its user to control the different complexities that are to be included in the test problems independently of each other. In addition to properties that control the difficulty in convergence, the procedure also allows the user to introduce difficulties caused by interaction of the two levels. As a companion to the test problem construction framework, the paper presents a standard test suite of 12 problems, which includes eight unconstrained and four constrained problems. Most of the problems are scalable in terms of variables and constraints. To provide baseline results, we have solved the proposed test problems using a nested bilevel evolutionary algorithm. The results can be used for comparison while evaluating the performance of any other bilevel optimization algorithm. The code related to the paper may be accessed from the website http://bilevel.org.
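The nested baseline mentioned at the end of the abstract follows a simple pattern: every upper-level candidate is evaluated only after a lower-level optimization has been solved for it. The sketch below illustrates that pattern using scipy's general-purpose minimizer in place of the paper's lower-level evolutionary search; the quadratic objectives F_upper and f_lower are hypothetical examples, not problems from the proposed suite.

import numpy as np
from scipy.optimize import minimize

def F_upper(x_u, x_l):
    """Hypothetical upper-level objective (to be minimized)."""
    return np.sum(x_u ** 2) + np.sum((x_l - x_u) ** 2)

def f_lower(x_l, x_u):
    """Hypothetical lower-level objective, parameterized by the upper-level vector."""
    return np.sum((x_l - x_u) ** 2)

def nested_evaluate(x_u, dim_l):
    """Evaluate an upper-level candidate by first solving its lower-level problem."""
    res = minimize(f_lower, x0=np.zeros(dim_l), args=(x_u,), method="L-BFGS-B")
    x_l_opt = res.x                      # (approximately) optimal lower-level response
    return F_upper(x_u, x_l_opt), x_l_opt

# Usage sketch:
# value, x_l = nested_evaluate(np.array([0.5, -0.2]), dim_l=2)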
Multimodal Optimization Using a Bi-Objective Evolutionary Algorithm
Evolutionary Computation (2012) 20 (1): 27–62.
Published: 01 March 2012
Abstract
In a multimodal optimization task, the main purpose is to find multiple optimal solutions (global and local), so that the user can have better knowledge about different optimal solutions in the search space and, when needed, switch from the current solution to another suitable optimum. To this end, evolutionary optimization algorithms (EAs) stand as viable methodologies, mainly due to their ability to find and capture multiple solutions within a population in a single simulation run. Starting with the preselection method suggested in 1970, new algorithms have been proposed steadily. Most of these methodologies employ a niching scheme within an existing single-objective evolutionary algorithm framework, so that similar solutions in a population are de-emphasized in order to focus on and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different strategy in which the single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem, so that all optimal solutions become members of the resulting weak Pareto-optimal set. With modified definitions of domination and different formulations of an artificially created additional objective function, we present successful results on problems with as many as 500 optima. Most past multimodal EA studies considered problems having only a few variables. In this paper, we solve test problems with up to 16 variables and as many as 48 optimal solutions, and for the first time suggest multimodal constrained test problems that are scalable in terms of the number of optima, constraints, and variables. The concept of using bi-objective optimization for solving single-objective multimodal optimization problems seems novel and interesting, and more importantly opens up further avenues for research and application.
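The core reformulation can be sketched as follows: keep the original function as the first objective and add a second objective that attains its minimum at every local and global optimum, so that all optima become weakly Pareto-optimal in the resulting bi-objective problem. One natural choice for the second objective, used here purely as an illustration, is the norm of a numerically estimated gradient, which is zero at every interior optimum; the paper also discusses neighborhood-based formulations and modified domination rules that are not shown in this sketch.

import numpy as np

def grad_norm(f, x, h=1e-6):
    """Central-difference estimate of the gradient norm of f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return float(np.linalg.norm(g))

def bi_objective(f, x):
    """Bi-objective view of a single-objective multimodal problem:
    minimize (f(x), ||grad f(x)||).  Every interior optimum of f has a zero
    second objective, so all optima lie on the weak Pareto-optimal set."""
    return f(x), grad_norm(f, x)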
An Efficient and Accurate Solution Methodology for Bilevel Multi-Objective Programming Problems Using a Hybrid Evolutionary-Local-Search Algorithm
Evolutionary Computation (2010) 18 (3): 403–449.
Published: 01 September 2010
Abstract
Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem-solving tasks, including optimal control, process optimization, game-playing strategy development, and transportation problems. However, they are commonly converted into a single-level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies consider multiple conflicting objectives at each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable hybrid evolutionary-cum-local-search algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to the 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche for evolutionary algorithms in solving such difficult problems of practical importance, compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and will hopefully motivate EMO and other researchers to pay more attention to this important and difficult problem-solving activity.
Introducing Robustness in Multi-Objective Optimization
Evolutionary Computation (2006) 14 (4): 463–494.
Published: 01 December 2006
Abstract
In optimization studies, including multi-objective optimization, the main focus is placed on finding the global optimum or global Pareto-optimal solutions, representing the best possible objective values. However, in practice, users may not always be interested in finding the so-called global best solutions, particularly when these solutions are quite sensitive to variable perturbations, which cannot be avoided in practice. In such cases, practitioners are interested in finding robust solutions that are less sensitive to small perturbations in variables. Although robust optimization has been dealt with in detail in single-objective evolutionary optimization studies, it has received less attention in the multi-objective context. In this paper, we present two different robust multi-objective optimization procedures, where the emphasis is on finding a robust frontier, instead of the global Pareto-optimal frontier, in a problem. The first procedure is a straightforward extension of a technique used for single-objective optimization, and the second procedure is a more practical approach enabling a user to set the extent of robustness desired in a problem. To demonstrate the differences between global and robust multi-objective optimization principles and the differences between the two robust optimization procedures suggested here, we develop a number of constrained and unconstrained test problems having two and three objectives and show simulation results using an evolutionary multi-objective optimization (EMO) algorithm. Finally, we also apply both robust optimization methodologies to an engineering design problem.
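The first procedure, the direct extension of the single-objective technique, replaces each objective by a mean effective objective averaged over a small neighborhood of the decision vector, so that only solutions whose objectives remain good under perturbation stay attractive. Below is a minimal sketch under assumed parameter names (delta for the perturbation half-width, H for the number of samples); the second procedure, which instead constrains how far perturbed objectives may deviate from the originals, is not shown.

import numpy as np

def mean_effective_objectives(objectives, x, delta=0.01, H=50, rng=None):
    """Robustness-of-type-I sketch: average each objective over H uniformly
    perturbed copies of x inside a box of half-width delta.

    objectives: callable returning a 1-D array of objective values for a point.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    samples = x + rng.uniform(-delta, delta, size=(H, x.size))
    return np.mean([objectives(p) for p in samples], axis=0)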
Evaluating the ε-Domination Based Multi-Objective Evolutionary Algorithm for a Quick Computation of Pareto-Optimal Solutions
Evolutionary Computation (2005) 13 (4): 501–525.
Published: 01 December 2005
Abstract
Since the suggestion of a computing procedure for finding multiple Pareto-optimal solutions in multi-objective optimization problems in the early 1990s, researchers have been on the lookout for a procedure that is computationally fast and simultaneously capable of finding a well-converged and well-distributed set of solutions. Most multi-objective evolutionary algorithms (MOEAs) developed in the past decade are either good at achieving a well-distributed set of solutions at the expense of a large computational effort, or computationally fast at the expense of achieving a not-so-good distribution of solutions. For example, although the Strength Pareto Evolutionary Algorithm or SPEA (Zitzler and Thiele, 1999) produces a much better distribution than the elitist non-dominated sorting GA or NSGA-II (Deb et al., 2002a), the computational time needed to run SPEA is much greater. In this paper, we evaluate a recently proposed steady-state MOEA (Deb et al., 2003) which was developed based on the ε-dominance concept introduced earlier (Laumanns et al., 2002) and which uses efficient parent and archive update strategies to achieve a well-distributed and well-converged set of solutions quickly. Based on an extensive comparative study with four other state-of-the-art MOEAs on a number of two-, three-, and four-objective test problems, it is observed that the steady-state MOEA is a good compromise in terms of convergence near the Pareto-optimal front, diversity of solutions, and computational time. Moreover, the ε-MOEA is a step closer toward making MOEAs pragmatic, particularly in allowing a decision-maker to control the achievable accuracy of the obtained Pareto-optimal solutions.
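The archive update in the ε-MOEA hinges on mapping each objective vector to a box-identification vector; keeping at most one member per box bounds the archive size and enforces a minimum spacing of ε between retained solutions. A minimal sketch of the box mapping and the box-level comparison for minimization follows; the function names are assumptions, the removal of members whose boxes are dominated by a newcomer is omitted, and the algorithm's full tie-breaking and parent-update rules are more detailed.

import numpy as np

def box_index(f, eps, f_min=0.0):
    """Box-identification vector of objective vector f for grid width eps."""
    return np.floor((np.asarray(f, dtype=float) - f_min) / eps).astype(int)

def dominates(a, b):
    """Standard Pareto dominance for minimization (a dominates b)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def accept_in_archive(f_new, archive, eps):
    """Decide acceptance of f_new against archive members on box indices:
    reject if some member's box dominates the new box; if boxes are equal,
    keep the solution closer to the box's ideal (lower) corner."""
    f_new = np.asarray(f_new, dtype=float)
    b_new = box_index(f_new, eps)
    for f_old in archive:
        b_old = box_index(f_old, eps)
        if dominates(b_old, b_new):
            return False
        if np.array_equal(b_old, b_new):
            corner = b_new * eps
            return np.linalg.norm(f_new - corner) < np.linalg.norm(np.asarray(f_old) - corner)
    return True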
A Computationally Efficient Evolutionary Algorithm for Real-Parameter Optimization
Evolutionary Computation (2002) 10 (4): 371–395.
Published: 01 December 2002
Abstract
Due to increasing interest in solving real-world optimization problems using evolutionary algorithms (EAs), researchers have recently developed a number of real-parameter genetic algorithms (GAs). In these studies, the main research effort is spent on developing an efficient recombination operator. Such recombination operators use probability distributions around the parent solutions to create an offspring. Some operators emphasize solutions at the center of mass of parents and some around the parents. In this paper, we propose a generic parent-centric recombination operator (PCX) and a steady-state, elite-preserving, scalable, and computationally fast population-alteration model (we call the G3 model). The performance of the G3 model with the PCX operator is investigated on three commonly used test problems and is compared with a number of evolutionary and classical optimization algorithms including other real-parameter GAs with the unimodal normal distribution crossover (UNDX) and the simplex crossover (SPX) operators, the correlated self-adaptive evolution strategy, the covariance matrix adaptation evolution strategy (CMA-ES), the differential evolution technique, and the quasi-Newton method. The proposed approach is found to consistently and reliably perform better than all other methods used in the study. A scale-up study with problem sizes up to 500 variables shows a polynomial computational complexity of the proposed approach. This extensive study clearly demonstrates the power of the proposed technique in tackling real-parameter optimization problems.
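The PCX operator creates offspring around a chosen parent rather than around the centroid: the offspring is perturbed mainly along that parent's direction from the centroid of the μ parents, with smaller perturbations in the perpendicular subspace scaled by the mean perpendicular distance of the other parents. Below is a simplified sketch; obtaining the orthonormal perpendicular basis by QR factorization is an implementation convenience assumed here, and sigma_zeta, sigma_eta stand for the operator's two variance parameters. In the G3 model, roughly, the best population member and a few random members serve as parents, several offspring are created with PCX, and the best of these offspring together with two randomly picked population members replace those two members.

import numpy as np

def pcx(parents, p=0, sigma_zeta=0.1, sigma_eta=0.1, rng=None):
    """Simplified parent-centric recombination (PCX) sketch.

    parents: (mu, n) array; p: index of the parent the offspring is centred on."""
    rng = np.random.default_rng() if rng is None else rng
    P = np.asarray(parents, dtype=float)
    g = P.mean(axis=0)                               # centroid of the mu parents
    d = P[p] - g                                     # direction of the chosen parent
    d_len = np.linalg.norm(d)
    if d_len < 1e-12:                                # degenerate case: parent at centroid
        return P[p] + sigma_zeta * rng.standard_normal(P.shape[1])
    d_hat = d / d_len
    others = np.delete(P, p, axis=0) - g
    perp = others - np.outer(others @ d_hat, d_hat)  # components perpendicular to d
    D_bar = float(np.mean(np.linalg.norm(perp, axis=1)))  # mean perpendicular distance
    Q, _ = np.linalg.qr(perp.T)                      # orthonormal basis of the perpendicular span
    child = P[p] + rng.normal(0.0, sigma_zeta) * d
    for i in range(Q.shape[1]):
        child = child + rng.normal(0.0, sigma_eta) * D_bar * Q[:, i]
    return child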
Combining Convergence and Diversity in Evolutionary Multiobjective Optimization
Evolutionary Computation (2002) 10 (3): 263–282.
Published: 01 September 2002
Abstract
Over the past few years, the research on evolutionary algorithms has demonstrated their niche in solving multiobjective optimization problems, where the goal is to find a number of Pareto-optimal solutions in a single simulation run. Many studies have depicted different ways evolutionary algorithms can progress towards the Pareto-optimal set with a widely spread distribution of solutions. However, none of the multiobjective evolutionary algorithms (MOEAs) has a proof of convergence to the true Pareto-optimal solutions with a wide diversity among the solutions. In this paper, we discuss why a number of earlier MOEAs do not have such properties. Based on the concept of ɛ-dominance, new archiving strategies are proposed that overcome this fundamental problem and provably lead to MOEAs that have both the desired convergence and distribution properties. A number of modifications to the baseline algorithm are also suggested. The concept of ɛ-dominance introduced in this paper is practical and should make the proposed algorithms useful to researchers and practitioners alike.
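The key definition can be stated compactly. For minimization, one common additive form says that a solution a ε-dominates b when f_i(a) - ε_i ≤ f_i(b) for every objective i; keeping only mutually non-ε-dominated vectors then bounds the archive while preserving a guaranteed spread. The sketch below shows that additive form and a simplified archiving step; a multiplicative form and the paper's box-based bookkeeping, which give the proven properties, are not reproduced here.

import numpy as np

def eps_dominates(fa, fb, eps):
    """Additive epsilon-dominance for minimization: fa eps-dominates fb."""
    fa, fb = np.asarray(fa, dtype=float), np.asarray(fb, dtype=float)
    return bool(np.all(fa - eps <= fb))

def update_archive(archive, f_new, eps):
    """Keep a mutually non-eps-dominated set (a simplified archiving step)."""
    if any(eps_dominates(f_old, f_new, eps) for f_old in archive):
        return archive                               # new point is already covered
    return [f for f in archive if not eps_dominates(f_new, f, eps)] + [f_new]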
Self-Adaptive Genetic Algorithms with Simulated Binary Crossover
Evolutionary Computation (2001) 9 (2): 197–221.
Published: 01 June 2001
Abstract
Self-adaptation is an essential feature of natural evolution. However, in the context of function optimization, self-adaptation features of evolutionary search algorithms have been explored mainly with evolution strategy (ES) and evolutionary programming (EP). In this paper, we demonstrate the self-adaptive feature of real-parameter genetic algorithms (GAs) using a simulated binary crossover (SBX) operator and without any mutation operator. The connection between the working of self-adaptive ESs and real-parameter GAs with the SBX operator is also discussed. Thereafter, the self-adaptive behavior of real-parameter GAs is demonstrated on a number of test problems commonly used in the ES literature. The remarkable similarity in the working principle of real-parameter GAs and self-adaptive ESs shown in this study suggests the need for emphasizing further studies on self-adaptive GAs.
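The SBX operator behind this self-adaptive behavior creates two children whose spread around the parents is controlled by a single distribution index eta_c: small values produce children far from the parents, large values produce near-parent children, mimicking the step-size adaptation of an ES. A minimal per-variable sketch follows; variable bounds handling and the per-variable crossover probability used in practice are omitted.

import numpy as np

def sbx_pair(p1, p2, eta_c=2.0, rng=None):
    """Simulated binary crossover for one real variable pair (p1, p2)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)   # child near p1 when beta is small
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)   # symmetric child near p2
    return c1, c2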
Comparison of Multiobjective Evolutionary Algorithms: Empirical Results
Evolutionary Computation (2000) 8 (2): 173–195.
Published: 01 June 2000
Abstract
In this paper, we provide a systematic comparison of various evolutionary approaches to multiobjective optimization using six carefully chosen test functions. Each test function involves a particular feature that is known to cause difficulty in the evolutionary optimization process, mainly in converging to the Pareto-optimal front (e.g., multimodality and deception). By investigating these different problem features separately, it is possible to predict the kind of problems to which a certain technique is or is not well suited. However, in contrast to what was suspected beforehand, the experimental results indicate a hierarchy of the algorithms under consideration. Furthermore, the emerging effects are evidence that the suggested test functions provide sufficient complexity to compare multiobjective optimizers. Finally, elitism is shown to be an important factor for improving evolutionary multiobjective search.
Evolutionary Computation (2000) 8 (2): iii–iv.
Published: 01 June 2000
Multi-objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems
Evolutionary Computation (1999) 7 (3): 205–230.
Published: 01 September 1999
Abstract
In this paper, we study the problem features that may cause a multi-objective genetic algorithm (GA) difficulty in converging to the true Pareto-optimal front. Identification of such features helps us develop difficult test problems for multi-objective optimization. Multi-objective test problems are constructed from single-objective optimization problems, thereby allowing known difficult features of single-objective problems (such as multi-modality, isolation, or deception) to be directly transferred to the corresponding multi-objective problem. In addition, test problems having features specific to multi-objective optimization are also constructed. More importantly, these difficult test problems will enable researchers to test their algorithms for specific aspects of multi-objective optimization.
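The construction principle composes a two-objective problem from three tunable functions f, g, and h: f1 = f(x_I) over one subset of variables and f2 = g(x_II) * h(f1, g) over the rest, so that convergence difficulty (through g), front shape (through h), and diversity along the front (through f) can be controlled independently. The sketch below uses one simple, illustrative choice of f, g, and h, not a specific test problem from the paper.

import numpy as np

def two_objective(x, k=1):
    """Generic f1 = f(x_I), f2 = g(x_II) * h(f1, g) construction.

    x_I = x[:k] controls spread along the front; x_II = x[k:] controls
    convergence; h shapes the front (here a convex one).  Assumes x_i in [0, 1];
    the particular f, g, h below are only an illustration of the framework.
    """
    x = np.asarray(x, dtype=float)
    f1 = float(np.sum(x[:k]))                 # f: position along the front
    g = 1.0 + 9.0 * float(np.mean(x[k:]))     # g: >= 1, equals 1 on the true front
    h = 1.0 - np.sqrt(f1 / g)                 # h: convex front shape
    return f1, g * h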
Time Scheduling of Transit Systems With Transfer Considerations Using Genetic Algorithms
Evolutionary Computation (1998) 6 (1): 1–24.
Published: 01 March 1998
Abstract
Scheduling of a bus transit system must be formulated as an optimization problem if the level of service to passengers is to be maximized within the available resources. In this paper, we present a formulation of a transit system scheduling problem with the objective of minimizing the overall waiting time of transferring and nontransferring passengers while satisfying a number of resource- and service-related constraints. It is observed that the number of variables and constraints for even a simple transit system (a single bus station with three routes) is too large to tackle using classical mixed-integer optimization techniques. The paper shows that genetic algorithms (GAs) are ideal for these problems, mainly because they (i) naturally handle binary variables, thereby taking care of transfer decision variables, which constitute the majority of the decision variables in the transit scheduling problem; and (ii) allow procedure-based declarations, thereby allowing complex algorithmic approaches (involving if-then-else conditions) to be handled easily. The paper also shows how easily the same GA procedure, with minimal modifications, can handle a number of other more pragmatic extensions of the simple transit scheduling problem: buses with limited capacity, buses that do not arrive exactly as per scheduled times, and a multiple-station transit system having common routes among bus stations. Simulation results show the success of GAs on all these problems and suggest the application of GAs to more complex scheduling problems.
Analysis of Selection Algorithms: A Markov Chain Approach
Evolutionary Computation (1996) 4 (2): 133–167.
Published: 01 June 1996
Abstract
A Markov chain framework is developed for analyzing a wide variety of selection techniques used in genetic algorithms (GAs) and evolution strategies (ESs). Specifically, we consider linear ranking selection, probabilistic binary tournament selection, deterministic s-ary (s = 3, 4, …) tournament selection, fitness-proportionate selection, selection in Whitley's GENITOR, selection in (μ, λ)-ES, selection in (μ + λ)-ES, (μ, λ)-linear ranking selection in GAs, (μ + λ)-linear ranking selection in GAs, and selection in Eshelman's CHC algorithm. The analysis enables us to compare and contrast the various selection algorithms with respect to several performance measures based on the probability of takeover. Our analysis is exact; we do not make any assumptions or approximations. Finite population sizes are considered. Our approach is perfectly general, and following the methods of this paper, it is possible to analyze any selection strategy in evolutionary algorithms.
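To make the framework concrete, here is a minimal sketch of the kind of Markov chain involved, for one of the analyzed schemes (deterministic binary tournament selection with replacement, acting alone on a population of size n). The state is the number of copies of the best individual; one generation of n independent tournaments yields binomial transition probabilities, from which takeover probabilities after t generations follow by matrix powers. The function names and the example call are illustrative, and the paper's treatment of the other selection schemes and performance measures is, of course, broader.

import numpy as np
from math import comb

def tournament_takeover_chain(n):
    """Transition matrix P[i, j]: from i copies of the best individual to j copies
    after one generation of n independent deterministic binary tournaments."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        p = 1.0 - ((n - i) / n) ** 2              # a single tournament returns the best
        for j in range(n + 1):
            P[i, j] = comb(n, j) * p**j * (1.0 - p) ** (n - j)
    return P

def takeover_probability(n, i0, t):
    """Probability that the best individual occupies the whole population
    after t generations, starting from i0 copies."""
    P = tournament_takeover_chain(n)
    return float(np.linalg.matrix_power(P, t)[i0, n])

# Usage sketch: takeover_probability(50, 1, 10)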
Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms
Evolutionary Computation (1994) 2 (3): 221–248.
Published: 01 September 1994
Abstract
In trying to solve multiobjective optimization problems, many traditional methods scalarize the objective vector into a single objective. In those cases, the obtained solution is highly sensitive to the weight vector used in the scalarization process and demands that the user have knowledge about the underlying problem. Moreover, in solving multiobjective problems, designers may be interested in a set of Pareto-optimal points, instead of a single point. Since genetic algorithms (GAs) work with a population of points, it seems natural to use GAs in multiobjective optimization problems to capture a number of solutions simultaneously. Although a vector evaluated GA (VEGA) has been implemented by Schaffer and tried on a number of multiobjective problems, the algorithm seems to have a bias toward some regions. In this paper, we investigate Goldberg's notion of nondominated sorting in GAs along with a niche and speciation method to find multiple Pareto-optimal points simultaneously. The proof-of-principle results obtained on three problems used by Schaffer and others suggest that the proposed method can be extended to higher-dimensional and more difficult multiobjective problems. A number of suggestions for extension and application of the algorithm are also discussed.
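The nondominated sorting at the heart of the method can be sketched compactly: repeatedly extract the set of solutions not dominated by any remaining solution, assign it the next front rank, and remove it; fitness is then assigned front by front, with sharing applied within each front. A minimal Python sketch for minimization follows (the sharing and speciation step is omitted, and this direct formulation trades efficiency for brevity).

import numpy as np

def dominates(a, b):
    """a Pareto-dominates b (minimization)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated_sort(objectives):
    """Return a list of fronts, each a list of indices into `objectives`."""
    remaining = list(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts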
Implicit Niching in a Learning Classifier System: Nature's Way
Evolutionary Computation (1994) 2 (1): 37–66.
Published: 01 March 1994
Abstract
We approach the difficult task of analyzing the complex behavior of even the simplest learning classifier system (LCS) by isolating one crucial subfunction in the LCS learning algorithm: covering through niching. The LCS must maintain a population of diverse rules that together solve a problem (e.g., classify examples). To maintain a diverse population while applying the GA's selection operator, the LCS must incorporate some kind of niching mechanism. The natural way to accomplish niching in an LCS is to force competing rules to share resources (i.e., rewards). This implicit LCS fitness sharing is similar to the explicit fitness sharing used in many niched GAs. Indeed, the LCS implicit sharing algorithm can be mapped onto explicit fitness sharing with a one-to-one correspondence between algorithm components. This mapping is important because several studies of explicit fitness sharing, and of niching in GAs generally, have produced key insights and analytical tools for understanding the interaction of the niching and selection forces. We can now bring those results to bear in understanding the fundamental type of cooperation (a.k.a. weak cooperation) that an LCS must promote.
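The explicit fitness sharing scheme onto which the LCS's implicit sharing is mapped can be written in a few lines: each individual's fitness is divided by its niche count, the sum of a sharing kernel over its distances to all population members. A minimal sketch follows, with the usual sharing radius sigma_share and exponent alpha as parameters (the function and argument names are illustrative).

import numpy as np

def shared_fitness(fitness, distances, sigma_share, alpha=1.0):
    """Explicit fitness sharing: f_i' = f_i / sum_j sh(d_ij), where
    sh(d) = 1 - (d / sigma_share)**alpha for d < sigma_share, else 0.

    fitness:   (N,) raw fitness values
    distances: (N, N) pairwise distance matrix (d_ii = 0, so each point shares with itself)
    """
    d = np.asarray(distances, dtype=float)
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)            # always >= 1 because sh(0) = 1
    return np.asarray(fitness, dtype=float) / niche_count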