Schalk Kok
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2017) 25 (1): 113–141.
Published: 01 March 2017
Abstract
Control parameter studies assist practitioners in selecting optimization algorithm parameter values that are appropriate for the problem at hand. Parameter values are well suited to a problem if they result in a search that is effective given that problem’s objective function(s), constraints, and termination criteria. Given these considerations, a many-objective tuning algorithm named MOTA is presented. MOTA is specialized for tuning a stochastic optimization algorithm according to multiple performance measures, each over a range of objective function evaluation budgets. MOTA’s specialization consists of four aspects: (1) a tuning problem formulation that consists of both a speed objective and a speed decision variable; (2) a control parameter tuple assessment procedure that utilizes information from a single assessment run’s history to gauge that tuple’s performance at multiple evaluation budgets; (3) a preemptively terminating resampling strategy for handling the noise present when tuning stochastic algorithms; and (4) the use of bi-objective decomposition to assist in many-objective optimization. MOTA combines these aspects with differential evolution operators to search for effective control parameter values. Numerical experiments consisting of tuning NSGA-II and MOEA/D demonstrate that MOTA is effective at many-objective tuning.
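The abstract states that MOTA searches the space of control parameter tuples with differential evolution operators. As an illustration only, here is a minimal sketch of a classic DE/rand/1/bin trial-vector construction over a population of parameter tuples; the function name, defaults, and structure are assumptions for exposition, not MOTA's actual implementation:

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
    """Build one DE/rand/1/bin trial vector for population member i.

    pop : (n, d) array, each row a control parameter tuple.
    F   : mutation scale factor; CR : crossover probability.
    This is a generic DE operator sketch, not MOTA-specific code.
    """
    if rng is None:
        rng = np.random.default_rng()
    n, d = pop.shape
    # Pick three distinct donors, all different from i.
    candidates = [j for j in range(n) if j != i]
    r1, r2, r3 = rng.choice(candidates, 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    # Binomial crossover: take each gene from the mutant with prob. CR,
    # forcing at least one mutant gene so the trial differs from pop[i].
    cross = rng.random(d) < CR
    cross[rng.integers(d)] = True
    return np.where(cross, mutant, pop[i])

# Example: one trial tuple from a population of four 3-parameter tuples.
pop = np.array([[0.9, 20.0, 0.1],
                [0.5, 50.0, 0.3],
                [0.7, 35.0, 0.2],
                [0.2, 10.0, 0.4]])
trial = de_rand_1_bin(pop, 0, rng=np.random.default_rng(0))
```

In a tuner, the trial tuple would replace `pop[i]` only if its measured performance (here, the abstract's multiple performance measures across evaluation budgets) dominates or improves on the parent's.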
Evolutionary Computation (2009) 17 (3): 437–453.
Published: 01 September 2009
Abstract
Two variants of the extended Rosenbrock function are analyzed in order to find the stationary points. The first variant is shown to possess a single stationary point, the global minimum. The second variant has numerous stationary points for high dimensionality. A previously proposed method is shown to be numerically intractable, requiring arbitrary-precision computation in many cases to enumerate candidate solutions. Instead, a standard Newton method with multi-start is applied to locate stationary points. The relative magnitude of the negative and positive eigenvalues of the Hessian is also computed, in order to characterize the saddle points. For dimensions up to 100, only two local minimizers are found, but many saddle points exist. Two saddle points with a single negative eigenvalue exist for high dimensionality, which may appear as “near” local minima. The remaining saddle points we found have a predictable form, and a method is proposed to estimate their number. Monte Carlo simulation indicates that uniform random search is unlikely to escape these saddle points. A standard particle swarm algorithm also struggles to improve upon a saddle point contained within the initial population.
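The multi-start Newton approach described above can be sketched in a few lines. The sketch below assumes the common chained form of the extended Rosenbrock function, f(x) = Σᵢ 100(xᵢ₊₁ − xᵢ²)² + (1 − xᵢ)²; the paper's two variants may be defined differently, and the starting box and tolerances here are arbitrary choices for illustration:

```python
import numpy as np

def rosenbrock_grad(x):
    """Analytic gradient of the chained extended Rosenbrock function
    f(x) = sum_i 100*(x[i+1] - x[i]**2)**2 + (1 - x[i])**2."""
    g = np.zeros_like(x)
    g[:-1] += -400 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2 * (1 - x[:-1])
    g[1:] += 200 * (x[1:] - x[:-1] ** 2)
    return g

def rosenbrock_hess(x):
    """Analytic (tridiagonal) Hessian of the same function."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i] += 1200 * x[i] ** 2 - 400 * x[i + 1] + 2
        H[i + 1, i + 1] += 200
        H[i, i + 1] = H[i + 1, i] = -400 * x[i]
    return H

def newton_stationary(x0, tol=1e-10, max_iter=100):
    """Plain Newton iteration on grad f = 0. Note this converges to
    *any* nearby stationary point (minimum or saddle), which is exactly
    what is needed when enumerating stationary points."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = rosenbrock_grad(x)
        if np.linalg.norm(g) < tol:
            break
        x -= np.linalg.solve(rosenbrock_hess(x), g)
    return x

# Multi-start: random starting points, then classify each converged
# point by the number of negative Hessian eigenvalues (0 => minimizer,
# k > 0 => saddle with k descent directions).
rng = np.random.default_rng(0)
for _ in range(20):
    x = newton_stationary(rng.uniform(-2.0, 2.0, size=8))
    if np.linalg.norm(rosenbrock_grad(x)) < 1e-8:
        n_neg = int(np.sum(np.linalg.eigvalsh(rosenbrock_hess(x)) < 0))
        # n_neg == 0 marks a local minimizer; otherwise a saddle point.
```

The eigenvalue count is the same device the abstract uses to characterize saddle points: a single negative eigenvalue, much smaller in magnitude than the positive ones, makes a saddle behave as a "near" local minimum for descent-based searches.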