Abstract

This paper presents an effective evolutionary hybrid for solving the permutation flowshop scheduling problem. Based on a memetic algorithm, the procedure uses a construction component that generates initial solutions through the use of a novel reblocking mechanism operating according to a biased random sampling technique. This component is aimed at forcing the operations having smaller processing times to appear on the critical path. The goal of the construction component is to fill an initial pool with high-quality solutions for a memetic algorithm that looks for even higher-quality solutions. In the memetic algorithm, whenever a crossover operator and possibly a mutation are performed, the offspring genome is fine-tuned by a combination of 2-exchange swap and insertion local searches. As with the construction method, these local searches use the notion of the critical path to exploit the structure of the problem. The results of computational experiments on the benchmark instances indicate that these components have strong synergy and that their integration has created a robust and effective procedure that outperforms several state-of-the-art procedures on a number of the benchmark instances. The effects of the components enhancing the evolutionary module of the procedure have also been examined by deactivating them.

1  Introduction

The large number of combinatorial optimization problems in various technical areas like telecommunication systems, operations scheduling, advanced manufacturing systems, and enterprise resource planning necessitates the development of effective search procedures. Among the most widely used methods in tackling search problems are local searches, which explore a search space whose states are complete rather than partial solutions.

In the traversal of the search space, local searches move from one state to another by making a local change to the value(s) of one or more variables. Since, in these methods, exploration is performed through a hill-climbing technique, the typical issue associated with them is that they get stuck in local optima. These methods are point-based in the sense that they work on a single solution.

Genetic algorithms, on the other hand, are population-based strategies. In effect, whereas local searches are based on assessing small changes to complete solutions in the hope of enhancing these solutions, genetic algorithms incorporate some characteristics of a solution into another solution. Employing the principle of survival of the fittest to guide the search, genetic algorithms, at any stage of the search, operate on a collection of solutions and combine two or more solutions to create a new solution.

Employing the notion of population in these algorithms offers several conceptual advantages, such as diversifying search, increasing exploitation capability, and integrating the promising features of two candidate solutions. However, a key shortcoming of genetic algorithms is the phenomenon of hitchhiking (Mitchell et al., 1994), which is attributed to a process that causes an evolutionarily harmful allele to be distributed in the gene pool by virtue of being linked to a promising gene. This process can significantly degrade the performance of genetic algorithms and cause convergence toward low-quality solutions.

Genetic algorithms can be combined with local searches to reduce this shortcoming and to boost the effectiveness of the corresponding procedure. A well-known notion used to show the integration of local searches with genetic algorithms is the notion of memetic algorithms (Moscato, 1989). In these algorithms, not only can the various characteristics of a solution be incorporated into another solution but any new solution can undergo local changes to attain higher quality.

This article presents an effective hybrid for the permutation flowshop scheduling problem (PFSP) and, by deactivating different components enhancing its evolutionary module, examines their effects. Founded on an effective memetic algorithm, the procedure has three components. The first component is a reblocking mechanism that constructs the initial pool; the second component is a combination of two local searches that collectively improve the offspring genome. Both of these components use the concept of the critical path to exploit the structure of the problem. The third component is a genetic algorithm module that is enhanced by the first two components and in this way guides the search toward high-quality solutions.

Since genetic algorithms are generally envisaged as schemata-processing machineries capable of identifying and merging functional building blocks, and local searches are perceived as fine-tuning mechanisms identifying small beneficial changes for improving a solution, the procedure we employed can be considered as an adjustable hybrid that has both of these beneficial capabilities. The procedure uses the reblocking mechanism in generating the initial pool, and not only identifies building blocks and exploits their structure in improving the quality of genomes but also fine-tunes the offspring genomes through a local search component.

The structure of the paper is as follows. Section 2 formulates the PFSP, and Section 3 presents the related work. Section 4 describes our procedure and presents a stepwise description of its corresponding algorithm. Section 5 presents the results of the computational experiments. Concluding remarks and several directions for future research are presented in Section 6.

2  Problem Formulation

The field of scheduling, which can be considered a major subfield of algorithmic design, is about the allocation of resources to tasks over time. In this field, the flowshop scheduling problem (FSP) is of twofold importance. On the one hand, it has a variety of applications in manufacturing, and on the other, both this general problem and its permutation variant, the PFSP, are NP-hard problems (Rinnooy Kan, 1976; Röck, 1984). Since the successful development of Johnson's procedure for the 2-machine problem, many procedures have been provided to tackle problems involving more than two machines. However, even some medium-size instances cannot yet be solved to optimality within a reasonable time.

In the FSP each machine can process only one job at a time, and no job can be processed on different machines simultaneously or stopped in its execution on a machine. The objective is to find a permutation of jobs for each machine so that the time when the last job is completed on the last machine is minimized. This completion time is commonly referred to as makespan.

In the FSP setting, the assumption that there are n jobs, J1, …, Jn, and m machines, M1, …, Mm, leads to defining a set of m operations, Oj1, …, Ojm, for Jj, which should be performed sequentially on M1, …, Mm. In other words, Ojk should be performed only on Mk. In this setting, a different permutation of jobs for each of the m machines can be the potential output of the problem, and each such permutation of jobs can be shown with π. By adding to the FSP the constraint that all machines process the jobs in the same order, it is converted to the PFSP. This new constraint simply implies that the sequence in which J1, …, Jn are processed should be identical for M1, …, Mm. The processing time of Jj on Mi is denoted by tji. In the rest of the article, we use the term machinei to refer to Mi and jobj to refer to Jj.

All jobs are processed on the machines in order, and their order on machine 1 is shown by the permutation π = (π(1), …, π(n)), in which π(k) shows the number of the kth job in the sequence. Denoting the processing and completion times of job π(k) on machine i with fki and cki, respectively, the makespan of the PFSP, cnm, can be calculated as follows:

$$c_{1,1} = f_{1,1} \tag{1}$$

$$c_{1,i} = c_{1,i-1} + f_{1,i}, \quad i = 2, \ldots, m \tag{2}$$

$$c_{k,1} = c_{k-1,1} + f_{k,1}, \quad k = 2, \ldots, n \tag{3}$$

$$c_{k,i} = \max(c_{k-1,i},\, c_{k,i-1}) + f_{k,i}, \quad k = 2, \ldots, n;\ i = 2, \ldots, m \tag{4}$$

Since fki is equal to tπ(k),i, and cki equals the sum of fki and the starting time of job π(k) on machine i, Equation (4) implies that job π(k) can be started on a new machine as soon as it has been completed on the previous machine and the new machine has completed the previous job in the sequence, namely job π(k−1). In effect, Equations (2) and (3) are simplified versions of Equation (4), in the machine and job direction, respectively. The reason is that for job π(1) there is no previous job in the sequence, and for machine 1 there is no previous machine, so the maximum in Equation (4) would involve an undefined value. That is why Equation (2) denotes that job π(1) can start on machine i as soon as it has been completed on machine i − 1, and Equation (3) denotes that machine 1 can start job π(k) as soon as it has completed job π(k−1).
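As a concrete check of these recurrences, the makespan computation can be sketched in a few lines of Python; the function name `makespan` and the 0-indexed layout are our own (the paper's indices start at 1):

```python
# Sketch of the completion-time recurrences (Equations 1-4), assuming
# t[j][i] holds the processing time of job j on machine i (0-indexed)
# and perm is the job permutation pi.
def makespan(t, perm):
    n, m = len(perm), len(t[0])
    c = [[0] * m for _ in range(n)]
    for k in range(n):
        for i in range(m):
            f = t[perm[k]][i]                 # f_{k,i} = t_{pi(k),i}
            if k == 0 and i == 0:
                c[k][i] = f                   # Eq. (1)
            elif k == 0:
                c[k][i] = c[k][i - 1] + f     # Eq. (2): first job in the sequence
            elif i == 0:
                c[k][i] = c[k - 1][i] + f     # Eq. (3): machine 1
            else:
                c[k][i] = max(c[k - 1][i], c[k][i - 1]) + f   # Eq. (4)
    return c[n - 1][m - 1]                    # c_{n,m}, the makespan
```

Each of the nm cells is filled in constant time, which is exactly the O(nm) evaluation cost noted below.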

A point to notice with this formulation is that completion times are not only used to facilitate the calculation of the makespan but can determine a path showing what has forced the makespan to be of that length, namely, the critical path (Nowicki and Smutnicki, 1996). The critical path can be simply traced in the matrix F, which shows the values of the processing times, fki. In this matrix, cell (1,1), in the northwest, shows f11, and cell (n,m), in the southeast, shows fnm. Through a combination of vertical and horizontal lines obtained by Equation (4), depending on whether ck−1,i is greater or smaller than ck,i−1, each element fki is connected to its north or west neighboring element, respectively.

Figure 1 shows a sample 9×9 problem with the 1-2-3-4-5-6-7-8-9 permutation of jobs. This permutation leads to a makespan of 1,184. The goal is to find a sequence of jobs that minimizes cnm, the makespan of the problem. Based on this formulation, given a sequence of jobs that leads to the creation of the matrix F, the corresponding makespan can be obtained in O(nm) time.

Figure 1:

A sample 9×9 problem with 1-2-3-4-5-6-7-8-9 permutation of jobs leading to a makespan of 1,184, which is the sum of the processing times on the critical path, indicated by boldface numerals.

In Figure 1, boldface indicates the critical path. In determining the cells that constitute the critical path, one can start with the cell (n,m) and, based on its value in matrix C and the values in this matrix for its upper and left-side cells, connect it to one of those two cells.

Note that one of these two values plus the processing time of the nth job in the sequence on machine m constitutes the value of cell (n,m), cnm. That is why, among the two possible cells, the cell that has this property is selected. After cell selection, this new cell becomes the current cell, and again among its upper and left-side cells, one is selected. This process continues until cell (1,1) joins the path. In this process, ties are broken randomly, and therefore, in case there are several critical paths, only one of them is selected.
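The back-trace just described can be sketched as follows, assuming a completion-time matrix `c` and a matrix `f` of sequence-ordered processing times (both names are ours), with ties resolved by always preferring the upper cell rather than randomly:

```python
# Hedged sketch of the critical-path back-trace: starting from cell (n, m),
# repeatedly step to the upper or left neighbor whose completion time plus
# the current processing time f_{k,i} reproduces c_{k,i}, until cell (1, 1)
# is reached.  Indices are 0-based here.
def critical_path(c, f):
    k, i = len(c) - 1, len(c[0]) - 1
    path = [(k, i)]
    while (k, i) != (0, 0):
        if k > 0 and c[k][i] == c[k - 1][i] + f[k][i]:
            k -= 1            # connect to the upper cell (previous job)
        else:
            i -= 1            # connect to the left cell (previous machine)
        path.append((k, i))
    return list(reversed(path))
```

By construction, summing f over the returned cells reproduces the makespan.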

Rearranging the jobs in the matrix F can only be beneficial if it can decrease the length of this critical path. In effect, the critical path can be used as guidance in rearranging the jobs. The PFSP can be viewed simply as a mini-max agents problem in which a maximizer agent calculates the values of cki to find the value of cnm. A minimizer agent should repeatedly deny this maximizer agent access to its construction material for creating a long critical path, and this is obtained by rearranging exploitable permutations. In terms of the minimizer and maximizer agents, the critical path plays a vital role in finding a rearrangement of jobs (by the minimizer agent) and in making the permutation less exploitable (by the maximizer agent).

3  Related Work

This section presents a survey of some influential procedures for the PFSP. These procedures are reviewed chronologically, starting with one of the earliest (Bonney and Gundry, 1976) and ending with the evolutionary method presented by Li and Yin (2013). Among these procedures, those that have informed the procedure developed in this article are identified by the adjective informing.

One of the earliest procedures for the PFSP was introduced by Bonney and Gundry (1976). Based on a slope-matching mechanism, the procedure first assumes that only one job exists. Then through running a regression module, it calculates the starting and ending times of jobs on each machine, one after another. For this purpose, it calculates the starting and ending slope for each job and selects a job with the highest starting slope as the first job in the sequence. Then it finds a job whose ending slope best matches with the starting slope of the first job and fixes this job in the sequence as the second job. This process continues until the positions of all jobs are fixed in the sequence.

Based on a quite different approach but using the same concept of processing times, Nawaz et al. (1983) presented a procedure in which the relative order of jobs with greater total processing time is fixed in the process sooner. For this purpose, first the two jobs with the highest processing times are selected, and the best partial schedule, only including these two jobs, is determined. Then the next job with the highest processing time is determined and is added to the partial schedule, which now includes only three jobs. In adding this job to the partial schedule, there are three possible ways, and all of them are examined. In effect, in adding the kth job with the largest processing time to the partial schedule, there are k different ways, and all these k ways are examined, and the best location for this job is selected. This continues until all jobs join the schedule.

In the work of Nowicki and Smutnicki (1996), an informing tabu search technique was implemented that applies certain block properties toward lessening the burden of computation. A concept called the critical path, based on the current permutation of jobs, distinguishes the blocks; each block is associated with a particular machine and should have at least two operations. Moves in each block are limited to those that can affect the beginning or the end of the block. To increase the number of promising moves, the procedure also allows a job to move from a block to one of the neighboring blocks.

Using a simplified form of a representative neighborhood reported by Nowicki and Smutnicki (1996), and considering the features of the landscape induced by the operators, Reeves and Yamada (1998) presented an informing genetic algorithm that considers two operators, shift and exchange, and among them selects the shift operator. Whereas the shift operator takes a job from one position of the permutation and inserts it into another position, the exchange operator swaps the positions of two jobs. In implementing the shift operator, the authors considered the concept of the critical block to make the operator more effective.

The Reeves and Yamada (1998) neighborhood in each critical block finds the best two moves, one to the left and the other to the right block. Then these two moves are considered the best moves of the corresponding block. The set of these moves that comprises the neighborhood seems to be one of the factors contributing to the efficiency of their procedure.

The procedure also uses a path-relinking component that traces a path from one local optimal solution to another. The distance between two solutions is calculated based on the precedence-based, not position-based, measure. In calculating the distance between two permutations, the position-based measure calculates the summation of the absolute differences between the positions of each two similar jobs.

On the other hand, the precedence-based measure calculates the distance between two permutations by counting the number of times job i appears before job j in both permutations, over all possible unordered pairs of jobs. Then this number is subtracted from n(n − 1)/2 to reflect that when all possible unordered pairs of jobs have the same precedence relations in both permutations, the two permutations are identical and their distance is 0. When a permutation is reversed, the distance between the original and the reversed permutations becomes n(n − 1)/2, and this is the largest possible distance between two permutations. Each newly generated solution enters the pool if it is better than the worst solution and its makespan is not identical to that of any member of the pool.
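A minimal sketch of this precedence-based measure (the name `precedence_distance` is ours):

```python
from itertools import combinations

# Sketch of the precedence-based distance: count the unordered job pairs
# that have the same precedence relation in both permutations, and subtract
# that count from n(n-1)/2, so identical permutations are at distance 0 and
# a permutation and its reverse are at distance n(n-1)/2.
def precedence_distance(p, q):
    pos_p = {job: idx for idx, job in enumerate(p)}
    pos_q = {job: idx for idx, job in enumerate(q)}
    same = sum(
        1
        for a, b in combinations(p, 2)
        if (pos_p[a] < pos_p[b]) == (pos_q[a] < pos_q[b])
    )
    n = len(p)
    return n * (n - 1) // 2 - same
```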

The other effective informing genetic algorithm presented for the problem is by Iyer and Saxena (2004). The crossover operator employed in the algorithm was designed based on the fact that for a crossover operator to work efficiently, it should let the offspring inherit the precedence order existing in both parents to a good extent. In other words, if job i has occurred before job j in both parents, it is better that job i occur before job j in the offspring as well. The employed crossover operator is called longest common subsequence (LCS) because it preserves only the longest common subsequence of jobs in the parents and disregards any other common precedence relation.

For instance, in the two permutations (1, 7, 2, 3, 6, 9, 4, 8, 5) and (3, 9, 4, 2, 7, 6, 8, 5, 1), the subsequence 3, 9, 4, 8, 5 is the longest common subsequence that can be found. The problem of finding such a subsequence is easily solved by the dynamic programming technique in O(n²) time. After keeping the positions of the longest common subsequence in each parent, the order of the other jobs is determined based on their order in the other parent. Hence, the two offspring generated from these two parents are (2, 7, 6, 3, 1, 9, 4, 8, 5) and (3, 9, 4, 1, 7, 2, 8, 5, 6).
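The LCS computation and the offspring construction just described can be sketched as follows (the helper names are ours); on the example above, the sketch reproduces both offspring:

```python
# Standard O(n^2) dynamic program returning one longest common subsequence.
def lcs(a, b):
    n, m = len(a), len(b)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            L[i + 1][j + 1] = (L[i][j] + 1 if a[i] == b[j]
                               else max(L[i][j + 1], L[i + 1][j]))
    seq, i, j = [], n, m            # trace one optimal subsequence back
    while i and j:
        if a[i - 1] == b[j - 1]:
            seq.append(a[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return seq[::-1]

# Keep the LCS jobs in their positions in p1; fill the remaining positions
# with the other jobs in the order they appear in p2.
def lcs_crossover(p1, p2):
    common = set(lcs(p1, p2))
    fillers = iter(j for j in p2 if j not in common)
    return [j if j in common else next(fillers) for j in p1]
```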

This crossover operator, LCS, does not preserve all precedence orders and at the same time is not as restrictive as the well-known 1X crossover operator (Davis, 1985). Despite the fact that the 1X crossover operator preserves all precedence orders, it has been shown that it is very restrictive and hence in some circumstances may slow down the convergence toward high-quality solutions.

The role of different crossover operators for the PFSP was investigated in Lian et al. (2006), who presented an informing particle swarm algorithm for the problem. It employs one-, two-, three-, and four-segment crossover operators, in the sense that it divides the parent permutations into 2, 4, 6, and 8 segments, respectively. This means that it keeps the jobs of the even segments from one parent and copies the unselected jobs from the other parent, one after another, into the unfilled positions of the offspring. Also, as its mutation operator, the procedure uses a shift operator that simply moves a job from its position and inserts it into another position of the permutation.

The hybrid particle swarm optimization procedure developed by Kuo et al. (2009) uses a random-key encoding scheme and converts a vector of real numbers based on such keys to a permutation through sorting these real numbers. The employed encoding has its own benefits and drawbacks. The major benefit is simply obtained through an n-dimensional Cartesian space on which the particles are placed and hence can move smoothly. The major drawback is derived from the fact that numerous random keys can be mapped into the same permutation. For instance, the two random encoding instances (0.3, 0.1, 0.8, 0.6) and (0.7, 0.4, 0.9, 0.8) lead to the same permutation (2, 1, 4, 3) because in both of them, the smallest element is the second, then the first, then the fourth, and finally the third, which is the largest element in both instances.

Similarly, the procedure presented by Wang et al. (2011) converts continuous vectors into job permutations. This procedure, which is a global-best harmony search and solves the blocking PFSP with the makespan criterion, uses an efficient initialization scheme that effectively manages the level of quality and diversity of solutions while the search progresses. Like many other similar techniques in which a local search enhances the solutions, in this procedure, a local search algorithm, which uses insertion neighborhood, is used for fine-tuning. The other characteristic of the method is an adjustment rule, which is designed to inherit good structures of the continuous vectors and spread these structures into the encoding of other solutions. Despite the fundamental differences of this adjustment rule from a crossover operator, both mechanisms are aimed at achieving the goal of inheriting beneficial structures in the encoding of generated solutions.

The evolutionary method presented by Li and Yin (2013) is a memetic algorithm that similarly uses random keys. These random keys, as continuous positions, are converted into permutations, and the population’s diversity is achieved through the tuning of crossover rate. The procedure employs both a pairwise-based local search and an opposition-based learning mechanism to enhance the quality of the overall solution. The local search is aimed at escaping local optimal solutions, and the opposition-based learning mechanism is used for the initialization as well as for generation jumping.

4  The Reblocking Adjustable Memetic Procedure

Our procedure, called the reblocking adjustable memetic procedure (RAMP), is an effective hybrid based on a memetic algorithm. The RAMP employs a construction component that generates initial solutions through the use of a novel reblocking mechanism operating according to a biased random sampling technique; this mechanism is fully described in this section. Using it, the RAMP generates initial solutions and then improves these solutions through the combination of a genetic algorithm and two local searches that improve each other's results. The reblocking mechanism is aimed at forcing the operations with smaller processing times to appear on the critical path.

Upon performing a crossover and possibly a mutation operator, combined 2-exchange swap and insertion local searches are performed on the offspring genome to improve its quality. In both the construction method and the local searches, the structure of the problem is exploited through the notion of the critical path. In describing the method, we start with the reblocking mechanism used for the generation of the initial pool.

Assuming that the maximizer agent connects the neighboring cells to one another, starting from cell (1,1) and ending with cell (n,m), to create a critical path in the matrix, the reblocking mechanism is aimed at preventing such an agent from connecting cells with high values of fki to one another. After all, as Figure 1 shows, it is the summation of fki over all connected cells that comprises the makespan, cnm. To prevent such an agent from connecting such cells to one another, the matrix is divided into starting, middle, and ending columns and rows, as Figures 2 and 3 depict.

Figure 2:

The reblocking mechanism (i) for constructing the initial solution.

Figure 3:

The reblocking mechanism (ii) for constructing the initial solution.

In both Figures 2 and 3, among the nine areas created, the two located in the northeast and southwest cannot be connected to one another in any critical path and hence are the best places to put large values. On the other hand, the cells located in the northwest and southeast, as well as those located in the middle, are the three highly accessible blocks for using their cells as intermediates for connecting cell (1,1) to cell (n,m). These northwest, southeast, and middle cells are suitable for placing the small values.

In Figure 2, small values are located in the three highly accessible blocks, and in Figure 3, large values are placed in the two hardly accessible blocks. Despite the fact that both mechanisms are based on the same rationale for minimizing the makespan, they can produce different results. Based on this rationale of the reblocking mechanism for placing the jobs in the matrix, first the total time of the starting, middle, and ending operations of each job is calculated. Then the jobs are sorted based on each of these three criteria. Now the jobs can be selected one by one for positions until all the jobs have been assigned. Figures 4 and 5 show how these two mechanisms behave on the sample problem presented in Figure 1. Moreover, at the cost of extra complexity, the two types of selection can be amalgamated so that the small and large cells are simultaneously located in their corresponding areas.

Figure 4:

The result of applying the reblocking mechanism (i) to the sample problem and reducing its makespan from 1,184 to 904.

Figure 5:

The result of applying the reblocking mechanism (ii) to the sample problem and reducing its makespan from 1,184 to 973.

We now discuss how the swapping is performed. First, jobs are sorted based on the total durations they have on a critical path. Suppose the jobs are sorted, and they are in the order 2, 6, 8, 3, 4, 1, 5, 7. This means that job 2 has the longest total duration on the critical path and job 7 has the shortest. Now the swap of job 2 with job 6 is examined, and if it is beneficial, it is performed. If not, the swaps of job 2 with jobs 8, 3, 4, 1, 5, and 7 are examined, one after another. If none of these swaps is beneficial, the swap of job 6 with job 8 is tested, and the tests continue in this order until finally the swap of jobs 5 and 7 is tested. It is conjectured that in this way the first beneficial swap found better improves the objective function.

A factor contributing to the effectiveness of the procedure is that not all swaps need exact evaluation, and nearly half of them can be rejected with the following preliminary evaluation, which can be performed with insignificant computational burden. When two jobs are supposed to be swapped, the locations they currently occupy on the critical path are recorded. Then the sum of the durations that each of these two jobs has on the locations that the other job occupies is calculated.

With the assumption that the critical path in the new arrangement is not changed, the result of the candidate swap is evaluated, and if it is not beneficial, the swap is rejected. Note that the assumption of having the same critical path for the after-swap arrangement for rejecting the candidate swap is realistic because the length of the critical path cannot be smaller than that of an arbitrary path. After all, the assumed critical path shows only one path among many other paths possible to draw.

The idea of rejecting a swap based on the assumption that the critical path in the new arrangement is not changed can be shown with the following example, in which two jobs whose operations on the current critical path are underscored are supposed to be swapped:

    jobu: …, _20_, _23_, _26_, …, 10, 32, …
    jobv: …, 34, 38, 15, …, _14_, _15_, …

After such a possible swap, the counterpart operations of these jobs will be placed on the assumed critical path. In other words, instead of (20, 23, 26), there will be (34, 38, 15), and instead of (14, 15), there will be (10, 32), on the assumed critical path. This causes the value of the assumed critical path to increase by 31, since (34 + 38 + 15 + 10 + 32) − (20 + 23 + 26 + 14 + 15) = 129 − 98 = 31. As mentioned, this is the minimum increase in the makespan, and the real increase can be even higher. That is why the preliminary evaluation rejects such a deteriorating swap without performing any lengthy exact evaluation.
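The preliminary test can be sketched as follows, assuming the per-cell durations have already been collected from the critical path (all names here are ours, not the paper's):

```python
# Hedged sketch of the preliminary swap test: u_on_path holds job u's
# processing times at the critical-path cells it occupies, v_at_u_cells
# holds job v's processing times at those same cells, and symmetrically
# for v_on_path and u_at_v_cells.  If the assumed path can only get
# longer, the swap is rejected without an exact makespan evaluation.
def preliminary_reject(u_on_path, v_at_u_cells, v_on_path, u_at_v_cells):
    delta = (sum(v_at_u_cells) + sum(u_at_v_cells)) - \
            (sum(u_on_path) + sum(v_on_path))
    return delta >= 0   # True: reject, since the makespan cannot decrease

# The worked example above: (20, 23, 26) and (14, 15) leave the assumed
# path while (34, 38, 15) and (10, 32) enter it, so delta = 31 and the
# swap is rejected.
```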

As well as rejecting deteriorating swaps without evaluating them exactly, the procedure uses forward and backward matrices as a facilitating memory to make swaps efficient (Taillard, 1991). Algorithm 1 shows pseudocode describing the stepwise operations of the RAMP. As a memetic algorithm in which an individual's behavior is evolved through a learning process, the RAMP performs this learning by applying two local searches, which improve each other's results, as shown at lines 13 and 14 of Algorithm 1. In the literature, this scheme is usually called variable neighborhood search (Hansen and Mladenović, 2005). For the RAMP to be effective, the individual components have been designed in such a way that a proper balance between exploration and exploitation can be achieved. Constructing the initial population through a problem-specific randomized heuristic, performed in lines 3 and 4 of the algorithm, notably contributes to the quality of the solutions of the first generation.

[Algorithm 1: stepwise pseudocode of the RAMP; its line numbers are referenced throughout this section.]

Line 5 starts a loop that terminates at line 19, with each iteration of the loop managing a particular generation of the corresponding genetic algorithm. In each generation, the parents, as the individuals of the previous generation, produce offspring genomes. As lines 20 and 21 indicate, the number of individuals in each generation is equal to an input parameter called populationSize. In general, two phases of selection are considered in evolutionary algorithms: parent selection and survival selection (De Jong, 2006). Whereas parent selection specifies the way in which parents are nominated for mating, survival selection indicates which individuals survive to the next generation.

In terms of survival selection, the method employed in the RAMP is (μ + λ) selection (De Jong, 2006), in which μ shows the size of the parent population, λ shows the number of offspring generated, and the sign + indicates that parent and offspring genomes compete to enter the next generation. This selection highly favors intensification. On the other hand, in terms of parent selection, the RAMP adopts a selection mechanism to balance the bias toward intensification taken in the survival selection. That is why the parent selection completely favors diversification.

For this purpose, through a uniform, or egalitarian, scheme, all individuals receive the same chance of producing offspring genomes, and no bias is used toward selecting high-quality parents. Favoring intensification in the survival selection and diversification in the parent selection are aimed at increasing the effectiveness of the procedure in exploring various quality regions while concentrating on regions with high quality.
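A minimal sketch of these two selection phases, assuming a lower makespan means higher quality and plus-style survival selection in which parents and offspring compete (the function names are ours):

```python
import random

# Egalitarian parent selection: every individual has the same chance of
# mating; no bias toward high-quality parents.
def select_parents(population):
    return random.sample(population, 2)

# Plus-style survival selection: parents and offspring compete, and the
# best mu individuals (by the given fitness, lower is better) survive.
def survival_selection(parents, offspring, mu, fitness):
    return sorted(parents + offspring, key=fitness)[:mu]
```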

Reproductive operators, in lines 10 and 11 of Algorithm 1, are also employed to produce diverse, high-quality solutions. A crossover operator is considered effective if there is a high correlation between parent and offspring solution quality. In line with the procedure presented by Lian et al. (2006), the RAMP uses a modified version of k-point crossover as a mechanism to produce two offspring genomes from two given parents. Moreover, a simple swap and insertion mutation is used to promote diversity in the population.

Both the swap and insertion local searches, in lines 13 and 14 of the algorithm, follow the modified k-point crossover operator and the simple mutation mechanism employed. Before describing our modified k-point crossover operator, we describe the k-point crossover operator itself. The k-point crossover, kX (Djerid et al., 1996), can be seen as a generalization of the linear order crossover (LOX) developed by Falkenauer and Bouffouix (1991). Basically, in kX, k random points are identified in the solution array uniformly. Based on these k points, the array is divided into k + 1 sections. The odd and even sections are then identified. For the first offspring, the odd sections of the first parent are copied as they are. Then the elements in the even sections are reordered to have the same order they had in the second parent.

The second offspring is generated in a similar way, with the two parents changing their roles. The crossover operator kX absolutely satisfies the precedence constraints property. In problems like the resource-constrained project scheduling problem (RCPSP) (Zamani, 2013b), the absolute satisfaction of precedence constraints is a requirement for the feasibility of solutions. For the PFSP, however, this is not a requirement.

Since one of the disadvantages of the kX operator is that, as the value of k increases, offspring genomes become very similar to one of the parents (Zamani, 2013a), a very simple modification has been made to this operator. The modification is based on carrying the unfixed genes from the second parent, one after another, to the offspring. In other words, rather than reordering the jobs in the even sections to have the same order they had in the second parent, all jobs belonging to the even sections are identified in the second parent, and these jobs then fill the even sections, one after another, in the order in which they appear in the second parent. Figure 6 shows how two parents produce two offspring genomes using this modified k-point crossover.
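As a minimal sketch (in Python, with function and variable names of our own choosing, not the authors' implementation), the modified k-point crossover described above can be written as follows; the second offspring is obtained by calling the same function with the parents' roles exchanged.

```python
import random

def modified_kx(parent1, parent2, k, rng=random):
    """Modified k-point crossover: odd sections are copied from parent1;
    even sections are filled with the remaining jobs in the order in
    which they appear in parent2."""
    n = len(parent1)
    # choose k distinct cut points, splitting the array into k + 1 sections
    cuts = sorted(rng.sample(range(1, n), k))
    bounds = [0] + cuts + [n]
    child = [None] * n
    fixed = set()
    for i in range(len(bounds) - 1):
        if i % 2 == 0:  # odd (1st, 3rd, ...) sections come from parent1
            for pos in range(bounds[i], bounds[i + 1]):
                child[pos] = parent1[pos]
                fixed.add(parent1[pos])
    # fill the remaining (even-section) positions, one after another, with
    # the unfixed jobs in the order in which they occur in parent2
    filler = (job for job in parent2 if job not in fixed)
    for pos in range(n):
        if child[pos] is None:
            child[pos] = next(filler)
    return child
```

Since every job not fixed by the odd sections appears exactly once in the filler stream, the offspring is always a valid permutation.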

Figure 6:

A sample kX crossover on two parents (permutations) of size 9.


As shown in lines 12–16 of Algorithm 1, both swap and insertion local searches are employed to fine-tune any offspring generated. To improve the effectiveness, the swap and insertion local searches are consecutively performed in a loop, starting at line 12, until neither the swap nor the insertion local search can improve the solution quality. Local searches in general are time-consuming and slow down genetic algorithms. To circumvent this issue, in both local searches the concept of critical path is used to ignore unfruitful neighbors and expedite the search process. For this purpose, if the jobs to be swapped (or inserted) have their critical operations on the same machine, the corresponding swap (or insertion) is ignored. For instance, in the sample problem presented in Figure 1, any modification in the range 5, 6, 7, 8 cannot improve the makespan, and consequently any swap or insertion in this range can be ignored. This is in line with the procedure presented by Nowicki and Smutnicki (1996).

Figure 7 shows how the steps described in Algorithm 1 cause the population to evolve. For this purpose, snapshots of three generations (starting, middle, and ending populations) are presented. In these snapshots both a solution value and the distance from the optimal solution are shown. Since each solution is a permutation, the distance between two solutions is simply the number of jobs that do not have the same position in the two solutions. For instance, the distance between 1-4-3-2-5-6 and 1-2-3-4-5-6 is 2, which is the number of jobs that have different positions in the two permutations. The snapshots show that despite the fact that initial solutions are not very close to the optimal solution, the optimal solution was found within nine generations. However, the average distance of solutions from the optimal solution in generation 9 is still high. This is partly due to the fact that our distance measure, which is the Hamming distance, is fully consistent with swap moves, whereas in the RAMP, a combination of swap and insertion moves is used. In other words, while the Hamming distance is large, the optimal solution can be just one insertion move away from another solution.
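The distance measure used in these snapshots is the positional (Hamming) distance between permutations, which can be sketched in one line:

```python
def hamming(perm_a, perm_b):
    """Number of positions at which two permutations place different jobs."""
    return sum(a != b for a, b in zip(perm_a, perm_b))
```

For the example in the text, `hamming([1, 4, 3, 2, 5, 6], [1, 2, 3, 4, 5, 6])` returns 2, since only positions 2 and 4 differ.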

Figure 7:

The individuals of the population in the instance ta071 at (a) generation 1, (b) generation 3, and (c) generation 9, at which the optimal value of 5,770 has been found.


Having described the overall pseudocode of the RAMP, we can now describe the pseudocode of its construction component, which initializes the population of the first generation through the randomized reblocking mechanism. The pseudocode for this is presented as Algorithm 2.

Algorithm 2: The reblocking construction component (pseudocode).

In lines 9, 10, and 11 of Algorithm 2, for each job, the sums of its initial, middle, and final parts are calculated. At first glance, it seems that as the problem scales in terms of the numbers of jobs and machines, the jobs could be split based on a number determined with respect to those quantities. The problem with any number except 3, however, is that it makes the procedure too complicated. In effect, even if 3 were increased to 4, there would be as many as 16 (4 × 4) partitions to handle. Decreasing 3 to 2 was also unfruitful because none of the four resulting partitions could point to the central locations. That is why, as a viable compromise, initial, middle, and final parts are considered.

Based on this partitioning, in Algorithm 2, initial[j], middle[j], and final[j] show the sums of the processing times of the operations of job j in the first, second, and third sections, respectively. In other words, the list of machines is equally divided into three sections, and for each section the sum of the corresponding operations is recorded, with the first section comprising machines 1 through m/3, the second section comprising machines m/3 + 1 through 2m/3, and the third section comprising machines 2m/3 + 1 through m (with appropriate rounding when m is not a multiple of 3).
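The per-job part sums can be sketched as follows (a Python illustration; the rounding at the section boundaries is an assumption of ours, not taken from the paper):

```python
def part_sums(p, job):
    """Sums of the processing times of `job` over the first, middle, and
    last thirds of the machine sequence; p[i][job] is the processing time
    of `job` on machine i. Boundary rounding (floor division) is assumed."""
    m = len(p)
    a, b = m // 3, 2 * m // 3            # section boundaries
    times = [p[i][job] for i in range(m)]
    return sum(times[:a]), sum(times[a:b]), sum(times[b:])
```

For example, with m = 6 machines and processing times 1, 2, 3, 4, 5, 6 for a job, the initial, middle, and final sums are 3, 7, and 11.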

Since the aim is to prevent the maximizer agent from accessing long processing times, two strategies can be adopted, with both strategies dividing the matrix of processing times into nine blocks. In the first strategy, shown in Figures 2 and 4, jobs can be rearranged in such a way that shorter processing times appear in diagonal blocks, whereas in the second strategy, shown in Figures 3 and 5, the jobs having longer processing times appear in the upper right and lower left blocks.

Although, with the same execution times, both strategies produce promising results, we observed that the first strategy outperforms the second one in terms of solution quality. In particular, in 12,000 random solutions generated for 12 Taillard's instances with different sizes, the average makespan showed an improvement of 2.24% for the first reblocking strategy. Because of this observation, the second strategy was discarded. That is why line 13 in Algorithm 2 sorts the initial, middle, and final arrays in ascending order, so that the jobs with shorter initial, middle, and final parts can be identified. Line 16 identifies the positions in the permutation to be filled, and lines 17–22 fill these positions one after another. For this purpose, line 21 sets appropriate jobs whose initial, middle, or final part meets the stated criterion. The initial, middle, or final part is selected depending on the block where the corresponding job is to be placed, and the criterion for selecting a job, as stated in line 20, is to find the best possible t jobs and select one of them randomly.

Because of the existence of the parameter t at line 20, different applications of this reblocking mechanism lead to different solutions. Larger values of t lead to more diversification at the cost of decreasing solution quality, and smaller values of t, say, less than 3, increase solution quality at the cost of decreasing the diversity of the solutions generated. Hence, for small pools, t should be set to small values, and for large pools, t should be set to larger values, say 3 or 4.
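This biased random sampling step can be sketched as follows (Python, with hypothetical names; the actual criterion in line 20 of Algorithm 2 may differ in detail):

```python
import random

def pick_biased(jobs, key, t, rng=random):
    """Biased random sampling: rank the candidate jobs by `key` (ascending,
    so shorter part sums come first) and pick uniformly among the best t."""
    ranked = sorted(jobs, key=key)
    return rng.choice(ranked[:max(1, t)])
```

With t = 1 the choice is greedy and deterministic; larger t trades solution quality for diversity, exactly the effect described above.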

5  Computational Experiments

The RAMP was implemented in C++ and compiled with the Visual C++ compiler on a PC with a 2.2 GHz CPU. In line with other procedures, to remove the effect of the initial random number generation seed on the performance of the procedure, it was restarted 10 times for each instance with different random seeds. In each restart, a time limit of nm/10 seconds was used, with n denoting the number of jobs and m the number of machines. Before applying the RAMP to the benchmark instances and examining the effects of its different components, its parameters were set based on preliminary computational experiments.

One of these parameters is the value of t used as a parameter of the reblocking construction module (see Algorithm 2). The value of t has a key role in producing initial solutions because setting t to a large value increases the diversity of generated solutions but reduces the quality of solutions at the same time. On the other hand, decreasing t close to 1 will result in high-quality initial solutions at the expense of lower diversity. Moreover, limited computational experiments show that t should be larger for instances having a larger number of jobs.

One reason for this can be the fact that as the number of jobs increases linearly, the size of the search space increases exponentially, and effective search of such a large space requires highly diverse initial solutions. Based on the computational experiments performed, the value of t was set as a function of n, the number of jobs.

Three other factors which significantly affect the performance of the procedure include the values of population size, mutation probability, and the number of points in the kX crossover operator. Based on our preliminary experiments, we set the population size to 100, mutation probability to 0.2, and k to 3. The mutation also alternated between swap mutation and insertion mutation with a probability of 0.5.

A set of 120 instances was used in our computational experiments. These benchmark instances were devised by Taillard (1993). The instances were classified into 12 different classes, and each class included 10 different instances with the same specification. We first present the performance of the RAMP and compare it with that of other procedures on these 12 classes and then analyze the effects of individual components of the RAMP.

5.1  RAMP Performance versus Performance of Other Procedures

RAMP was compared with several metaheuristic methods in terms of solution quality and computation time. These methods included (1) NEGA (Zobolas et al., 2009), (2) the particle swarm optimization algorithm, PSO (Tasgetiren et al., 2007), (3) the simulated annealing algorithm, SAOP (Osman and Potts, 1989), (4) two ant colony optimization algorithms, PACO and M-MMAS (Rajendran and Ziegler, 2004), (5) iterated local search, ILS (Stützle, 1998), and (6) the hybrid genetic algorithm, HGA_RMA (Ruiz et al., 2006). Note that except for PSO, for which the results on 500×20 instances were not reported, all other studies use the entire set of 120 Taillard's instances.

The results of the comparisons are presented in Table 1. As can be seen, for 6 out of 12 problem groups, the RAMP provides performance better than or equal to the best methods in the literature. Moreover, for three groups, comprising 30 instances (ta041–ta050, ta051–ta060, and ta071–ta080), the RAMP outperforms the best methods in the literature.

Table 1:
Comparison of average percent deviation from the best-known solutions (%DEV) with other algorithms.
Problem Group  Size  SAOP  ILS  M-MMAS  PACO  HGA_RMA  PSO  NEGA  RAMP
ta001–ta010 20×5 1.05 0.33 0.04 0.18 0.04 0.03 0.00 0.00 
ta011–ta020 20×10 2.60 0.52 0.07 0.24 0.02 0.02 0.01 0.03 
ta021–ta030 20×20 2.06 0.28 0.06 0.18 0.05 0.05 0.02 0.04 
ta031–ta040 50×5 0.34 0.18 0.02 0.05 0.00 0.00 0.00 0.00 
ta041–ta050 50×10 3.50 1.45 1.08 0.81 0.72 0.57 0.82 0.37 
ta051–ta060 50×20 4.66 2.05 1.93 1.41 0.99 1.36 1.08 0.61 
ta061–ta070 100×5 0.30 0.16 0.02 0.02 0.01 0.00 0.00 0.00 
ta071–ta080 100×10 1.34 0.64 0.39 0.29 0.16 0.18 0.14 0.06 
ta081–ta090 100×20 4.49 2.42 2.42 1.93 1.30 1.45 1.40 1.76 
ta091–ta100 200×10 0.94 0.50 0.30 0.23 0.14 0.18 0.16 0.15 
ta101–ta110 200×20 3.67 2.07 2.15 1.82 1.26 1.35 1.25 2.00 
ta111–ta120 500×20 2.20 1.20 1.02 0.85 0.69 — 0.71 1.20 

It is worth noting that SAOP, PACO, M-MMAS, ILS, and HGA_RMA were run on a PC with a 2.6 GHz clock speed, and the results were reported by Ruiz et al. (2006). However, NEGA was run on a PC with a 2.4 GHz CPU, and PSO on a 2.8 GHz CPU. To provide a fair comparison, CPU clock speeds are presented in Table 2 along with scaling factors. It is worth mentioning that since computational times depend on factors like the developer's programming skills, compiler efficiency, implementation details, and CPU architecture, any running-time comparison should be made with caution.
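The scaling is simply the ratio of clock speeds relative to the 2.2 GHz reference machine; as a small illustrative sketch (not part of the original study):

```python
def scaled_time(real_seconds, clock_ghz, base_ghz=2.2):
    """Scale a reported running time to the 2.2 GHz reference machine:
    a faster CPU has its times inflated by the factor clock/base."""
    return real_seconds * clock_ghz / base_ghz
```

For example, a 2.6 GHz machine yields a scaling factor of about 1.18 and a 2.8 GHz machine about 1.27, matching Table 2.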

Table 2:
Scaling factor for different clock speeds.
Algorithm  Clock Speed (GHz)  Scaling Factor
RAMP 2.2 1.00 
NEGA 2.4 1.09 
HGA_RMA 2.6 1.18 
ILS 2.6 1.18 
SAOP 2.6 1.18 
M-MMAS 2.6 1.18 
PACO 2.6 1.18 
PSO 2.8 1.27 

Table 3 shows the comparison of real and scaled maximum allowed running times for the four high-performing procedures, namely, the RAMP, NEGA, PSO, and HGA_RMA. It is worth noting that the maximum running time of the other procedures is equal to that of HGA_RMA, because they were all implemented and compared by Ruiz et al. (2006). Furthermore, since only NEGA and PSO reported the average time needed to reach the best solution, Table 4 compares only these two methods with the RAMP.

Table 3:
Comparison of maximum allowed running times.
                     HGA_RMA        PSO           NEGA          RAMP
Problem Group  Size  Real  Scaled   Real  Scaled  Real  Scaled  Real  Scaled
ta001–ta010 20×5 4.5 5.31 300 381 10 10.90 10 10 
ta011–ta020 20×10 9.0 10.62 300 381 20 21.80 20 20 
ta021–ta030 20×20 18.0 21.24 300 381 40 43.60 40 40 
ta031–ta040 50×5 11.3 13.33 300 381 25 27.25 25 25 
ta041–ta050 50×10 22.5 26.55 300 381 50 54.50 50 50 
ta051–ta060 50×20 45.0 53.10 300 381 100 109.0 100 100 
ta061–ta070 100×5 22.5 26.55 600 762 50 54.50 50 50 
ta071–ta080 100×10 45.0 53.10 600 762 100 109.0 100 100 
ta081–ta090 100×20 90.0 106.20 600 762 200 218.0 200 200 
ta091–ta100 200×10 90.0 106.20 600 762 200 218.0 200 200 
ta101–ta110 200×20 180.0 212.40 600 762 400 436.0 400 400 
ta111–ta120 500×20 450.0 531.00 — — 1,000 1,090 1,000 1,000 
Table 4:
Comparison of average time needed to reach the best solution.
                     PSO            NEGA          RAMP
Problem Group  Size  Real  Scaled   Real  Scaled  Real  Scaled
ta001–ta010 20×5 13.5 17.145 2.2 2.4 0.0 0.0 
ta011–ta020 20×10 26.3 33.401 12.2 13.3 0.1 0.1 
ta021–ta030 20×20 69.3 88.011 29.2 31.8 0.5 0.5 
ta031–ta040 50×5 2.8 3.556 8.2 8.9 0.1 0.1 
ta041–ta050 50×10 79.8 101.346 32.3 35.2 16.3 16.3 
ta051–ta060 50×20 168.1 213.487 55.0 60.0 82.0 82.0 
ta061–ta070 100×5 52.6 66.802 30.8 33.6 0.7 0.7 
ta071–ta080 100×10 211.0 267.970 58.7 64.0 51.7 51.7 
ta081–ta090 100×20 310.8 394.716 122.7 133.7 158.6 158.6 
ta091–ta100 200×10 191.3 242.951 134.5 146.6 117.9 117.9 
ta101–ta110 200×20 438.7 557.149 271.7 296.2 308.0 308.0 
ta111–ta120 500×20 — — 523.4 570.5 522.1 522.1 

Because among the mentioned algorithms only NEGA reported performance for all individual instances, an instance-by-instance performance comparison is presented in Table 5, comparing the performance of the RAMP with that of NEGA (Zobolas et al., 2009). NEGA is a genetic algorithm (GA) that uses the NEH (Nawaz et al., 1983) to produce the initial solutions; its name was derived from these facts. The subscript VNS indicates that variable neighborhood search was used in the procedure.

Table 5:
Comparison of the performance of RAMP and NEGA on individual instances. Boldface values show the best performance.
                                    RAMP             NEGA
Instance  n  m  LB  UB  NEH  Best  %DEV  T  Best  %DEV  T
ta001 20 1,278 1,278 1,286 1,278 0.00 0 1,278 0.00 
ta002 20 1,359 1,359 1,365 1,359 0.00 0 1,359 0.00 
ta003 20 1,081 1,081 1,159 1,081 0.00 0 1,081 0.00 
ta004 20 1,293 1,293 1,325 1,293 0.00 0 1,293 0.00 
ta005 20 1,235 1,235 1,305 1,235 0.00 0 1,235 0.00 
ta006 20 1,195 1,195 1,228 1,195 0.00 0 1,195 0.00 
ta007 20 1,239 1,239 1,278 1,239 0.00 0 1,239 0.00 
ta008 20 1,206 1,206 1,223 1,206 0.00 0 1,206 0.00 
ta009 20 1,230 1,230 1,291 1,230 0.00 0 1,230 0.00 
ta010 20 1,108 1,108 1,151 1,108 0.00 0 1,108 0.00 
ta011 20 10 1,582 1,582 1,680 1,582 0.00 0 1,582 0.00 10 
ta012 20 10 1,659 1,659 1,729 1,659 0.00 0 1,659 0.02 
ta013 20 10 1,496 1,496 1,557 1,496 0.00 0 1,496 0.00 12 
ta014 20 10 1,377 1,377 1,439 1,377 0.01 0 1,377 0.05 17 
ta015 20 10 1,419 1,419 1,502 1,419 0.00 0 1,419 0.00 11 
ta016 20 10 1,397 1,397 1,453 1,397 0.00 0 1,397 0.00 15 
ta017 20 10 1,484 1,484 1,562 1,484 0.00 0 1,484 0.00 11 
ta018 20 10 1,538 1,538 1,609 1,538 0.26 0 1,538 0.00 10 
ta019 20 10 1,593 1,593 1,647 1,593 0.00 0 1,593 0.00 13 
ta020 20 10 1,591 1,591 1,653 1,591 0.00 0 1,591 0.00 14 
ta021 20 20 2,297 2,297 2,410 2,297 0.03 0 2,297 0.00 26 
ta022 20 20 2,099 2,099 2,150 2,099 0.01 0 2,099 0.01 33 
ta023 20 20 2,326 2,326 2,411 2,326 0.11 2 2,326 0.02 32 
ta024 20 20 2,223 2,223 2,262 2,223 0.00 0 2,223 0.00 22 
ta025 20 20 2,291 2,291 2,397 2,291 0.13 1 2,291 0.04 21 
ta026 20 20 2,226 2,226 2,349 2,226 0.03 0 2,226 0.03 35 
ta027 20 20 2,273 2,273 2,362 2,273 0.01 0 2,273 0.07 36 
ta028 20 20 2,200 2,200 2,249 2,200 0.00 0 2,200 0.00 25 
ta029 20 20 2,237 2,237 2,320 2,237 0.00 0 2,237 0.00 23 
ta030 20 20 2,178 2,178 2,277 2,178 0.03 1 2,178 0.02 39 
ta031 50 2,724 2,724 2,733 2,724 0.00 0 2,724 0.00 
ta032 50 2,834 2,834 2,843 2,834 0.00 0 2,834 0.00 
ta033 50 2,621 2,621 2,640 2,621 0.00 0 2,621 0.00 
ta034 50 2,751 2,751 2,782 2,751 0.00 0 2,751 0.00 11 
ta035 50 2,863 2,863 2,868 2,863 0.00 0 2,863 0.00 
ta036 50 2,829 2,829 2,850 2,829 0.00 0 2,829 0.00 
ta037 50 2,725 2,725 2,758 2,725 0.00 0 2,725 0.00 
ta038 50 2,683 2,683 2,721 2,683 0.00 0 2,683 0.00 
ta039 50 2,552 2,552 2,576 2,552 0.00 0 2,552 0.00 12 
ta040 50 2,782 2,782 2,790 2,782 0.00 0 2,782 0.00 21 
ta041 50 10 2,991 2,991 3,135 3,025 1.14 3 3,021 1.03 38 
ta042 50 10 2,867 2,867 3,032 2,877 0.56 39 2,902 1.28 41 
ta043 50 10 2,839 2,839 2,986 2,852 0.76 21 2,871 1.14 36 
ta044 50 10 3,063 3,063 3,198 3,063 0.00 5 3,070 0.28 29 
ta045 50 10 2,976 2,976 3,160 2,979 0.24 41 2,998 0.81 25 
ta046 50 10 3,006 3,006 3,178 3,006 0.00 4 3,024 0.68 32 
ta047 50 10 3,093 3,093 3,277 3,098 0.30 18 3,122 0.98 19 
ta048 50 10 3,037 3,037 3,123 3,038 0.11 21 3,063 0.93 29 
ta049 50 10 2,897 2,897 3,002 2,902 0.18 6 2,914 0.65 39 
ta050 50 10 3,065 3,065 3,257 3,078 0.47 6 3,076 0.44 35 
ta051 50 20 3,771 3,850 4,082 3,873 0.76 82 3,874 0.77 22 
ta052 50 20 3,668 3,704 3,921 3,714 0.28 56 3,734 1.02 87 
ta053 50 20 3,591 3,640 3,927 3,649 0.62 98 3,688 1.39 56 
ta054 50 20 3,635 3,720 3,969 3,739 0.79 85 3,759 1.14 39 
ta055 50 20 3,553 3,610 3,835 3,625 0.58 77 3,644 1.03 48 
ta056 50 20 3,667 3,681 3,914 3,695 0.49 93 3,717 1.07 74 
ta057 50 20 3,672 3,704 3,952 3,715 0.47 96 3,728 0.79 42 
ta058 50 20 3,627 3,691 3,938 3,709 0.85 99 3,730 1.18 28 
ta059 50 20 3,645 3,743 3,952 3,765 0.67 69 3,779 1.10 90 
ta060 50 20 3,696 3,756 4,079 3,773 0.64 65 3,801 1.35 64 
ta061 100 5,493 5,493 5,519 5,493 0.00 0 5,493 0.00 34 
ta062 100 5,268 5,268 5,348 5,268 0.00 0 5,268 0.00 26 
ta063 100 5,175 5,175 5,219 5,175 0.00 0 5,175 0.00 36 
ta064 100 5,014 5,014 5,023 5,014 0.01 5 5,014 0.00 33 
ta065 100 5,250 5,250 5,266 5,250 0.00 0 5,250 0.00 12 
ta066 100 5,135 5,135 5,139 5,135 0.00 0 5,135 0.00 42 
ta067 100 5,246 5,246 5,259 5,246 0.00 0 5,246 0.00 50 
ta068 100 5,094 5,094 5,120 5,094 0.00 0 5,094 0.00 31 
ta069 100 5,448 5,448 5,489 5,448 0.00 0 5,448 0.00 25 
ta070 100 5,322 5,322 5,341 5,322 0.00 1 5,322 0.00 19 
ta071 100 10 5,770 5,770 5,846 5,770 0.00 10 5,770 0.04 49 
ta072 100 10 5,349 5,349 5,453 5,349 0.01 35 5,358 0.23 78 
ta073 100 10 5,676 5,676 5,824 5,676 0.04 82 5,676 0.09 65 
ta074 100 10 5,781 5,781 5,929 5,781 0.04 71 5,792 0.23 22 
ta075 100 10 5,467 5,467 5,679 5,467 0.01 72 5,467 0.06 81 
ta076 100 10 5,303 5,303 5,375 5,303 0.07 20 5,311 0.20 72 
ta077 100 10 5,595 5,595 5,704 5,596 0.02 20 5,605 0.22 54 
ta078 100 10 5,617 5,617 5,760 5,623 0.11 34 5,617 0.05 64 
ta079 100 10 5,871 5,871 6,032 5,875 0.23 78 5,877 0.19 29 
ta080 100 10 5,845 5,845 5,918 5,845 0.05 95 5,845 0.09 73 
ta081 100 20 6,106 6,202 6,541 6,336 2.35 61 6,303 1.69 85 
ta082 100 20 6,183 6,183 6,523 6,271 1.59 124 6,266 1.45 75 
ta083 100 20 6,252 6,271 6,639 6,363 1.61 150 6,351 1.32 145 
ta084 100 20 6,254 6,269 6,557 6,334 1.26 197 6,360 1.49 129 
ta085 100 20 6,262 6,314 6,695 6,394 1.61 191 6,408 1.57 163 
ta086 100 20 6,302 6,364 6,664 6,482 1.92 196 6,453 1.50 108 
ta087 100 20 6,184 6,268 6,632 6,350 1.82 195 6,332 1.10 94 
ta088 100 20 6,315 6,401 6,739 6,530 2.17 148 6,482 1.49 112 
ta089 100 20 6,204 6,275 6,677 6,381 1.89 148 6,343 1.15 169 
ta090 100 20 6,404 6,434 6,677 6,496 1.40 176 6,506 1.26 147 
ta091 200 10 10,862 10,862 10,942 10,872 0.09 20 10,885 0.24 89 
ta092 200 10 10,480 10,480 10,716 10,499 0.23 100 10,495 0.19 125 
ta093 200 10 10,922 10,922 11,025 10,934 0.26 140 10,941 0.21 169 
ta094 200 10 10,889 10,889 11,057 10,889 0.03 93 10,889 0.04 158 
ta095 200 10 10,524 10,524 10,645 10,527 0.09 159 10,524 0.03 192 
ta096 200 10 10,329 10,329 10,458 10,334 0.06 104 10,346 0.21 91 
ta097 200 10 10,854 10,854 10,989 10,866 0.20 140 10,866 0.17 124 
ta098 200 10 10,730 10,730 10,829 10,743 0.19 162 10,741 0.15 112 
ta099 200 10 10,438 10,438 10,574 10,438 0.04 81 10,451 0.19 138 
ta100 200 10 10,675 10,675 10,807 10,685 0.34 180 10,684 0.14 147 
ta101 200 20 11,152 11,195 11,594 11,379 1.76 290 11,339 1.52 222 
ta102 200 20 11,143 11,203 11,675 11,453 2.39 281 11,344 1.47 268 
ta103 200 20 11,281 11,281 11,852 11,510 2.25 295 11,445 1.45 385 
ta104 200 20 11,275 11,275 11,803 11,462 1.85 297 11,434 1.49 154 
ta105 200 20 11,259 11,259 11,685 11,397 1.48 335 11,369 1.06 300 
ta106 200 20 11,176 11,176 11,629 11,413 2.22 387 11,292 1.01 254 
ta107 200 20 11,337 11,360 11,833 11,549 1.80 162 11,481 1.11 269 
ta108 200 20 11,301 11,334 11,913 11,526 1.93 349 11,442 1.03 311 
ta109 200 20 11,145 11,192 11,673 11,432 2.32 388 11,313 1.22 326 
ta110 200 20 11,284 11,288 11,869 11,479 2.00 295 11,424 1.14 228 
ta111 500 20 26,040 26,059 26,670 26,387 1.42 218 26,228 0.73 311 
ta112 500 20 26,500 26,520 27,232 26,890 1.53 940 26,688 0.77 552 
ta113 500 20 26,371 26,371 26,848 26,692 1.29 579 26,522 0.71 448 
ta114 500 20 26,456 26,456 27,055 26,688 1.06 281 26,586 0.54 269 
ta115 500 20 26,334 26,334 26,727 26,590 1.05 538 26,541 0.82 396 
ta116 500 20 26,469 26,477 26,992 26,753 1.19 857 26,582 0.49 682 
ta117 500 20 26,389 26,389 26,797 26,595 0.86 142 26,660 1.12 559 
ta118 500 20 26,560 26,560 27,138 26,812 1.21 489 26,711 0.61 814 
ta119 500 20 26,005 26,005 26,631 26,346 1.40 602 26,148 0.61 592 
ta120 500 20 26,457 26,457 26,984 26,687 1.03 576 26,611 0.67 611 

Moreover, since the NEH (Nawaz et al., 1983), which was named with the initials of its authors, is a famous procedure, the performance of this procedure is also presented in Table 5. The instances in the table are ta001 to ta120. For each instance, (1) the number of jobs, n, (2) the number of machines, m, (3) the tightest available lower bound, LB, (4) the tightest available upper bound, UB, and (5) the makespan produced by the NEH are presented. In comparing the performance of the RAMP with that of NEGA for each of the two procedures, three indicative items are presented. The column Best represents the best solution obtained for the corresponding instance, and the column T indicates the time, in seconds, taken for this makespan to be obtained in its related run. NEGA was coded in C++ and run on a Pentium IV PC with 2.4 GHz speed.

The third item, %DEV, shows the average percentage deviation over the 10 runs from the best available solution in the literature; the deviation of an obtained solution is computed as its makespan minus UB, divided by UB. The information presented in Table 5 indicates that the RAMP performed satisfactorily in comparison and that, for some initial random seeds, it was able to produce high-quality solutions in a very short amount of time.
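As a sketch, the %DEV measure just described amounts to the following (the function name is ours):

```python
def percent_dev(makespans, ub):
    # Average percentage deviation from the best available solution UB:
    # 100 * (C - UB) / UB, averaged over the run makespans.
    return sum(100.0 * (c - ub) / ub for c in makespans) / len(makespans)
```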

In Table 5, irrespective of execution time, wherever the RAMP or the NEGA obtained a better result, the corresponding value is shown in boldface on the side of the better-performing procedure; where the results are equal, the values are shown in boldface for both procedures. In terms of solution quality, the RAMP obtained better results for 31 instances and generated solutions of the same quality as the NEGA for 56 instances. Moreover, with respect to execution time, in 81 out of 120 instances the average time to find the best solution was less than or equal to that of the NEGA, and in 74 instances the average percentage deviation from the best available solution was smaller than or equal to that of the NEGA.

5.2  Analyzing the Effects of the RAMP Components

To identify the effects of the RAMP components in strengthening the evolutionary module, computational experiments were performed on a representative set of 12 of Taillard’s benchmark instances. This representative set, which includes the first instance of each problem group, ensures that instances with all available combinations of job and machine counts were included in the experiments. For this purpose, in addition to the original RAMP, three of its variants were implemented.

In the first variant, the uniform variant, the reblocking construction was replaced with a uniformly random construction, in which every permutation has the same chance, 1/n! for an instance with n jobs, of being constructed. The second variant, the swap-only variant, discards the insertion neighborhood and employs only the swap neighborhood. The third variant, the insert-only variant, employs only the insertion neighborhood and discards the swap neighborhood.
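The uniform construction of the first variant corresponds to drawing a job sequence uniformly at random; a minimal sketch (the function name is ours):

```python
import random


def uniform_construct(n, rng=random):
    # Uniformly random job sequence: each of the n! permutations is
    # generated with probability 1/n!.
    perm = list(range(n))
    rng.shuffle(perm)
    return perm
```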

Each variant was run 10 times per instance under a fixed time limit, and the average time to reach the best solution, as well as the average percentage deviation from the best-known solution in the literature (the best upper bound), is reported in Table 6. The best performance values are shown in boldface. The results indicate that while the uniform and insert-only variants show the worst performance, the original RAMP and the swap-only variant obtain better results, indicating that the reblocking procedure and the swap neighborhood play a critical role in the effectiveness of the RAMP. In effect, the swap-only variant, despite an average deviation only 0.005% smaller than that of the original RAMP, requires more than 17% more time on average to reach its best solution. Considering the combined criterion of average deviation and average time to reach the best solution, the original RAMP outperforms its three variants.

Table 6:
Comparison of different variants of the RAMP.

                       Uniform           Swap-only         Insert-only       RAMP
Instance    n    m     %DEV     T        %DEV     T        %DEV     T        %DEV     T
ta001      20    5     0.000    0.012    0.000    0.004    0.000    0.004    0.000    0.004
ta011      20   10     0.000    0.455    0.025    5.203    0.000    0.184    0.000    0.477
ta021      20   20     0.013    0.744    0.131    0.283    0.039    0.489    0.048    0.541
ta031      50    5     0.000    0.033    0.000    0.004    0.000    0.013    0.000    0.004
ta041      50   10     1.127    9.816    1.130    5.177    1.137    5.618    1.137    4.802
ta051      50   20     0.800   67.066    1.029   28.255    0.683   57.089    0.626   72.556
ta061     100    5     0.000    0.060    0.000    0.036    0.000    0.068    0.000    0.026
ta071     100   10     0.003   64.218    0.002   25.646    0.016   57.038    0.000   34.246
ta081     100   20     2.338  173.369    1.866  187.601    2.252  164.069    2.161  167.751
ta091     200   10     0.092  134.315    0.092   11.948    0.092   89.356    0.092   89.718
ta101     200   20     1.937  311.479    1.630  326.155    1.899  342.882    1.709  333.164
ta111     500   20     1.836  595.963    1.246  906.062    1.779  589.033    1.442  572.207
Average                0.679  113.128    0.596  124.698    0.658  108.820    0.601  106.291

For further comparison of the reblocking and uniform construction methods, we analyzed the quality of 12,000 initial solutions, 1,000 per instance, for the same 12 instances (Table 7). The column AVG shows the average makespan of the 1,000 solutions for each instance, and the column STDEV indicates the estimate of the standard deviation, computed as sqrt( Σᵢ (SOLᵢ − AVG)² / (N − 1) ), where N = 1,000 and SOLᵢ is the makespan of solution i. Viewing the makespans produced by the two generation methods as normally distributed random variables, Figure 8 shows the probability density functions for both methods.
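The STDEV estimate described above is the usual sample standard deviation; as a sketch:

```python
import math


def sample_stdev(sols):
    # STDEV estimate: sqrt( sum_i (SOL_i - AVG)^2 / (N - 1) ).
    n = len(sols)
    avg = sum(sols) / n
    return math.sqrt(sum((s - avg) ** 2 for s in sols) / (n - 1))
```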

Table 7:
Solution makespans generated by uniform and reblocking construction methods.
                                        Uniform                  Reblocking
Instance    n    m      LB       UB        AVG       STDEV       AVG        STDEV
ta001      20    5    1,278    1,278    1,518.267    61.798    1,446.283    47.072
ta011      20   10    1,582    1,582    2,021.379    79.776    1,930.659    71.168
ta021      20   20    2,297    2,297    2,774.693    87.493    2,673.144    77.547
ta031      50    5    2,724    2,724    3,187.976   118.182    2,944.563    85.224
ta041      50   10    2,991    2,991    3,832.380   112.489    3,590.893    91.385
ta051      50   20    3,771    3,850    4,873.621   113.328    4,584.488    95.768
ta061     100    5    5,493    5,493    6,145.775   151.132    5,902.593   116.241
ta071     100   10    5,770    5,770    6,912.069   146.490    6,472.293   125.474
ta081     100   20    6,106    6,202    7,810.853   149.940    7,395.927   118.465
ta091     200   10   10,862   10,862   12,320.447   211.337   11,693.618   132.406
ta101     200   20   11,152   11,195   13,582.383   197.857   12,951.095   162.652
ta111     500   20   26,040   26,059   30,367.486   316.570   28,941.938   258.270
Average                                 7,945.611   145.533    7,543.958   115.139
Total time (s)                              0.105                  0.432
Figure 8: Estimated probability density functions for reblocking and uniform construction.
Interesting observations can be made based on the estimated probability density functions. For instance, assuming that the optimal solution value is less than 7,000, for the uniform construction method the chance of constructing such a solution is P(X < 7,000) with X ~ N(7,945.611, 145.533²), which is on the order of 10⁻¹¹, whereas for the reblocking procedure, with X ~ N(7,543.958, 115.139²), this probability is on the order of 10⁻⁶. Hypothetically, under the normality assumption, the reblocking method should therefore be orders of magnitude faster than the uniform construction method at generating optimal solutions. However, as Table 7 shows, the uniform construction method is on average only about four times faster than the reblocking method, and this speed advantage is not enough to compensate for its limited effectiveness.
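Under the stated normality assumption, such tail probabilities can be recomputed from the Table 7 averages. The sketch below uses those averages as the normal parameters, an assumption on our part, since the paper's elided figures may refer to a particular instance rather than the averages.

```python
import math


def normal_cdf(x, mu, sigma):
    # P(X < x) for X ~ N(mu, sigma^2), via the complementary error function.
    return 0.5 * math.erfc((mu - x) / (sigma * math.sqrt(2.0)))


# Chance of drawing a makespan below 7,000 under each construction method,
# using the Table 7 average AVG and STDEV as the (assumed) parameters.
p_uniform = normal_cdf(7000, 7945.611, 145.533)
p_reblocking = normal_cdf(7000, 7543.958, 115.139)
```

The reblocking probability comes out several orders of magnitude larger, which is consistent with the qualitative claim in the text.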

6  Concluding Remarks

Local searches can be viewed as a mapping from the solution space, D, as the domain, to the set of locally optimal solutions, R, as the range. In effect, each locally optimal solution, r, attracts a subset of D, with all the members of this subset being mapped to r.

A subtle point about R is that its members are of high quality, with a markedly small variance and a normal distribution (Lourenco et al., 2003), whereas the probability density of costs over D can have a long tail. The issue is not simply that finding a high-quality solution through random sampling of D is extremely time-consuming, making this mapping of paramount importance; the deeper issue is that since the distribution over R is bell-shaped, the central limit catastrophe severely limits the applicability of multistart local searches.

One way around the dilemma of the multistart scheme is to use local searches to fine-tune high-quality solutions produced by approaches that lack the power of fine-tuning. Genetic algorithms are among these approaches. The RAMP is a hybrid that effectively integrates local searches with a genetic algorithm in tackling the PFSP.

Moreover, for its genetic algorithm to perform more effectively, the RAMP uses an innovative construction method based on a reblocking mechanism to fill the initial pool. In effect, the rationale behind using a framework consisting of a genetic algorithm and a reblocking mechanism to improve the performance of the local searches is twofold. First, when used in a single-start fashion, local searches require a great deal of computation time to improve solutions generated by poor construction methods. Second, the higher the quality of the initial solution, the better the local optimum that can be expected.
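The overall hybrid can be sketched schematically as follows. This is a toy rendering with placeholder operators, uniform seeding, a one-point order-preserving crossover, and a swap-descent local search; the actual RAMP uses the reblocking construction, critical-path-based accelerations, and both swap and insertion neighborhoods.

```python
import random


def memetic_pfsp(p, pool_size=10, generations=40, rng=None):
    # Toy memetic loop: construct a pool, then repeatedly combine two
    # parents and fine-tune the offspring with a swap-descent local search.
    rng = rng or random.Random()
    n, m = len(p), len(p[0])

    def makespan(perm):
        c = [0] * m
        for j in perm:
            c[0] += p[j][0]
            for k in range(1, m):
                c[k] = max(c[k], c[k - 1]) + p[j][k]
        return c[-1]

    def swap_descent(perm):
        # First-improvement 2-exchange (swap) local search.
        best, improved = makespan(perm), True
        while improved:
            improved = False
            for a in range(n - 1):
                for b in range(a + 1, n):
                    perm[a], perm[b] = perm[b], perm[a]
                    cur = makespan(perm)
                    if cur < best:
                        best, improved = cur, True
                    else:
                        perm[a], perm[b] = perm[b], perm[a]  # undo
        return perm

    def crossover(p1, p2):
        # One-point, order-preserving recombination (placeholder operator).
        cut = rng.randrange(1, n)
        head = p1[:cut]
        return head + [j for j in p2 if j not in head]

    pool = []
    for _ in range(pool_size):
        s = list(range(n))
        rng.shuffle(s)
        pool.append(swap_descent(s))
    for _ in range(generations):
        pa, pb = rng.sample(pool, 2)
        child = swap_descent(crossover(pa, pb))
        worst = max(range(pool_size), key=lambda i: makespan(pool[i]))
        if makespan(child) < makespan(pool[worst]):
            pool[worst] = child  # steady-state replacement of the worst
    return min(pool, key=makespan)
```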

The computational experiments showed the effectiveness of the RAMP, and examining the effects of its different components indicated that the reblocking mechanism and the swap neighborhood were the key ingredients contributing to this effectiveness. Unlike the swap neighborhood, however, the insertion neighborhood was not highly promising, partly because of the lack of an effective evaluation procedure for this neighborhood. The three variants developed for examining the effects of the different components of the RAMP also revealed that the components of the procedure are synergetic, and none of the variants was able to outperform the others on all instances.

In effect, the synergetic integration of the construction method, the local search, and the employed genetic algorithm greatly affects the performance of the procedure, in the sense that by exploiting the structure of the problem, the probabilistic features embedded in the RAMP become biased toward generating high-quality solutions without concentrating on any specific part of the search space. The following research directions are suggested for enhancing the performance of the RAMP.

First, a fast evaluation technique, which can filter out unpromising solutions without significant calculation, can be devised to contribute to search efficiency and produce solutions of higher quality. Since it is not obvious what constitutes the encoding of fruitful solutions, it is difficult to perform such evaluation with trifling calculation. However, the more the structure of the corresponding problem is exploited, the faster such evaluation can be. Both of the key ingredients contributing to the effectiveness of the RAMP were based on this notion. Whereas the reblocking mechanism is aimed at preventing large processing times from appearing on the critical path, the swap neighborhood, with its fast evaluation technique, filters out unpromising solutions comparatively quickly.

The importance of a fast evaluation technique can be further emphasized by noting that when a change, whether by a crossover or a mutation operator, occurs to an encoding, a decoder is required to convert the new encoding into a solution, and the faster this decoder is, the more quickly unfruitful solutions can be identified and eliminated. After all, reductions in computation time and significant gains in solution quality are mainly obtained by selecting promising parts of the encoding space and evaluating each encoding extremely fast.

Second, for many optimization problems, including the PFSP, locally optimal solutions are not distributed uniformly throughout the solution space but lie comparatively close to one another. To exploit this property, an effective search procedure should intensify its effort on the areas in which these local optima are located. In effect, good solutions tend to share a great deal of common structure, and it is the fixing of this common structure that enhances search efficiency.

One way to fix this common structure is to allow, in each iteration of the local search, more than one move to be considered, with the possibility of canceling previously performed moves. The corresponding search could then face an exponential explosion of nodes, but a greedy criterion can keep such an explosion under control.

Third, fast evaluation of encodings has a considerable effect on the efficiency of both the local search and the genetic algorithm components. Converting a permutation of jobs into a makespan may become faster by using even more memory; currently, only two matrices are used as facilitating memory. Since memory is undoubtedly a facet of search efficiency, adding extra memory for keeping precalculated data may be a promising direction for further research.

Fourth, increasing the effectiveness of the employed genetic algorithm is another measure that can be counted on. This can be achieved by exploiting the structure of the problem through tailored mutation and crossover operators. Since the idea behind crossover operators rests on the fact that good solutions tend to share many common structures, and since fixing these common structures by exploiting the structure of the problem can enhance the search, employing a crossover operator based on the notion of the critical path may further increase the efficiency of the RAMP. This is of paramount importance and requires considerable innovative effort in designing the corresponding architecture.

Fifth, as the population converges and individuals become increasingly similar, the chance of selecting two significantly different parents falls, making offspring unlikely to differ significantly from either parent. In such converging circumstances, using a critical-path-based mutation operator instead of the current mutation operator can be a better source of diversity.

Regarding the five directions mentioned for improving the performance of the RAMP, it should be noted that in creating an effective search technique, it is not important how complicated each of these components is. In effect, the key point is that the more delicate the balance these components collectively strike between intensification and diversification, the better the resulting procedure performs.

References

Bonney, M. C., and Gundry, S. W. (1976). Solutions to the constrained flowshop sequencing problem. Operational Research Quarterly, 27(4):869–883.
Davis, L. (1985). Job shop scheduling with genetic algorithms. In Proceedings of the 1st International Conference on Genetic Algorithms, pp. 136–140.
De Jong, K. A. (2006). Evolutionary computation: A unified approach. Cambridge, MA: MIT Press.
Djerid, L., Portmann, M., and Villon, P. (1996). Performance analysis of permutation cross-over genetic operators. Journal of Decision Systems, 4(1/2):157–177.
Falkenauer, E., and Bouffouix, S. (1991). A genetic algorithm for job shop. In Proceedings of the IEEE International Conference on Robotics and Automation, pp. 824–829.
Hansen, P., and Mladenović, N. (2005). Variable neighborhood search. In E. Burke and G. Kendall (Eds.), Search methodologies, pp. 211–238. New York: Springer.
Iyer, S., and Saxena, B. (2004). Improved genetic algorithm for the permutation flowshop scheduling problem. Computers and Operations Research, 31(4):593–606.
Kuo, I. H., Horng, S.-J., Kao, T.-W., Lin, T.-L., Lee, C.-L., Terano, T., and Pan, Y. (2009). An efficient flow-shop scheduling algorithm based on a hybrid particle swarm optimization model. Expert Systems with Applications, 36(3, part 2):7027–7032.
Li, X., and Yin, M. (2013). An opposition-based differential evolution algorithm for permutation flow shop scheduling based on diversity measure. Advances in Engineering Software, 55:10–31.
Lian, Z., Gu, X., and Jiao, B. (2006). A similar particle swarm optimization algorithm for permutation flowshop scheduling to minimize makespan. Applied Mathematics and Computation, 175(1):773–785.
Lourenco, H., Martin, O., and Stützle, T. (2003). Iterated local search. In F. Glover and G. Kochenberger (Eds.), Handbook of metaheuristics, pp. 320–353. Berlin: Springer.
Mitchell, M., Holland, J. H., and Forrest, S. (1994). When will a genetic algorithm outperform hill climbing? Advances in Neural Information Processing Systems, 6, pp. 51–58.
Moscato, P. (1989). On evolution, search, optimization, genetic algorithms and martial arts: Towards memetic algorithms, pp. 158–79. Caltech Concurrent Computation Program.
Nawaz, M., Enscore, E., and Ham, I. (1983). A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega, 11(1):91–95.
Nowicki, E., and Smutnicki, C. (1996). A fast tabu search algorithm for the permutation flow-shop problem. European Journal of Operational Research, 91(1):160–175.
Osman, I. H., and Potts, C. N. (1989). Simulated annealing for permutation flow-shop scheduling. Omega, 17(6):551–557.
Rajendran, C., and Ziegler, H. (2004). Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. European Journal of Operational Research, 155(2):426–438.
Reeves, C., and Yamada, T. (1998). Genetic algorithms, path relinking, and the flowshop sequencing problem. Evolutionary Computation, 6(1):45–60.
Rinnooy Kan, A. (1976). Machine scheduling problems: Classification, complexity and computations. Berlin: Springer.
Röck, H. (1984). The three-machine no-wait flow shop is NP-complete. Journal of the ACM, 31(2):336–345.
Ruiz, R., Maroto, C., and Alcaraz, J. (2006). Two new robust genetic algorithms for the flowshop scheduling problem. Omega, 34(5):461–476.
Stützle, T. (1998). Applying iterated local search to the permutation flow shop problem. Report, FG Intellektik, TU Darmstadt, Darmstadt, Germany.
Taillard, E. (1991). Robust taboo search for the quadratic assignment problem. Parallel Computing, 17(4–5):443–455.
Taillard, E. (1993). Benchmarks for basic scheduling problems. European Journal of Operational Research, 64(2):278–285.
Tasgetiren, M. F., Liang, Y.-C., Sevkli, M., and Gencyilmaz, G. (2007). A particle swarm optimization algorithm for makespan and total flowtime minimization in the permutation flowshop sequencing problem. European Journal of Operational Research, 177(3):1930–1947.
Wang, L., Pan, Q.-K., and Tasgetiren, M. F. (2011). A hybrid harmony search algorithm for the blocking permutation flow shop scheduling problem. Computers and Industrial Engineering, 61(1):76–83.
Zamani, R. (2013a). A competitive magnet-based genetic algorithm for solving the resource-constrained project scheduling problem. European Journal of Operational Research, 229(2):552–559.
Zamani, R. (2013b). Integrating iterative crossover capability in orthogonal neighborhoods for scheduling resource-constrained projects. Evolutionary Computation, 21(2):341–360.
Zobolas, G., Tarantilis, C. D., and Ioannou, G. (2009). Minimizing makespan in permutation flow shop scheduling problems using a hybrid metaheuristic algorithm. Computers and Operations Research, 36(4):1249–1267.