Abstract

Due-date assignment plays an important role in scheduling systems and strongly influences the delivery performance of job shops. Because of the stochastic and dynamic nature of job shops, the development of general due-date assignment models (DDAMs) is complicated. In this study, two genetic programming (GP) methods are proposed to evolve DDAMs for job shop environments. The experimental results show that the evolved DDAMs can make more accurate estimates than other existing dynamic DDAMs with promising reusability. In addition, the evolved operation-based DDAMs show better performance than the evolved DDAMs employing aggregate information of jobs and machines.

1  Introduction

Job shop scheduling (JSS) has been one of the most popular topics in the scheduling literature due to its complexity and applicability in real-world situations. A large number of studies on JSS have focused on sequencing decisions that determine the order in which waiting jobs are processed on a set of machines (or resources) in a manufacturing system (shop). However, sequencing is only one of several steps in the scheduling process (Ahmed and Fisher, 1992). Another important activity in JSS is due-date assignment (DDA), sometimes referred to as estimation of job flowtimes (EJF). This activity arises when a manager needs to promise a delivery date to a customer (Bookbinder and Noor, 1985). The objective of DDA is to determine the due dates of arriving jobs by estimating the job flowtimes (the time from the arrival until the completion of the job); DDA therefore strongly influences the delivery performance, that is, the ability to meet promised delivery dates, of a job shop (Cheng and Gupta, 1989). In practice, both early and tardy jobs are undesirable (Cheng and Jiang, 1998), since early jobs increase inventory costs (e.g., storage, insurance) while tardy jobs result in penalties such as loss of customer goodwill and a damaged reputation (Hino et al., 2005). In addition, accurate flowtime estimates (Sabuncuoglu and Comlekci, 2002) are needed for better management of shop floor activities, evaluation of shop performance, and lead-time comparison.

Many due-date assignment models (DDAMs) have been proposed in the job shop literature. The traditional DDAMs focus on exploiting the shop and job information to make a good flowtime estimation. Most of the early DDAMs are based on linear combinations of different terms (variables) and the coefficients of the models are then determined based on simulation results. Regression (linear and nonlinear) has been used very often in order to help find the best coefficients for the models employed (Ragatz and Mabert, 1984; Fry et al., 1989; Vig and Dooley, 1993; Veral, 2001; Sabuncuoglu and Comlekci, 2002; Sha et al., 2007; Joseph and Sridharan, 2011). Since the early 1990s, artificial intelligence methods have also been applied to deal with due-date assignment problems, for example, neural networks (Philipoom et al., 1994; Sha and Hsu, 2004; Patil, 2008), decision trees (Ozturk et al., 2006), regression trees (Sha and Liu, 2005), and a regression-based method with case-based tuning (Sha et al., 2007).

Even though experimental results with these DDAMs are promising, some limitations are still present. First, since a job can include several operations representing the processing steps of that job at particular machines, the operation-based flowtime estimation (OFE) method (Sabuncuoglu and Comlekci, 2002), which utilizes the detailed job, shop, and route information for operations of jobs, can help improve the quality of the prediction. However, this OFE method depends strongly on the determination of a large number of coefficients, which is not an easy task. Thus, there is a need to create a dynamic OFE method similar to dynamic total work content (DTWK), dynamic processing plus waiting (DPPW; Cheng and Jiang, 1998), and ADRES (Baykasoglu et al., 2008) to overcome this problem by replacing the coefficients with more general aggregate terms (job characteristics and states of the system). Second, there are no studies on the reusability of the DDAMs in the JSS literature, so it is questionable whether the models can be applied when there are changes in the shop without major revisions. Finally, various relevant factors need to be considered in order to make a good estimation of flowtime, which makes the design of a new DDAM a time-consuming and complicated task.

Genetic programming (GP; Koza, 1992) is an evolutionary computation method that has been applied to evolve/train programs able to solve difficult computational problems. GP is also a good candidate for overcoming the three limitations discussed above because (1) DDAMs can be easily represented by GP, (2) DDAMs can be automatically evolved/trained on different shop environments to provide some generality for the evolved DDAMs, and (3) the DDAMs evolved by GP can be interpreted.

This paper aims to develop a new approach that employs GP to evolve dynamic DDAMs for job shop environments. We expect the evolved DDAMs to outperform the existing models in terms of mean absolute percentage error and to be reusable for new (unseen) job shop simulation scenarios. Two types of DDAMs considered in this study are aggregate due-date assignment models (ADDAMs) and operation-based due-date assignment models (ODDAMs). The difference between these two models is that ADDAMs employ the aggregate information from jobs, machines, and the shop to predict the due date, while ODDAMs indirectly predict the due date by estimating the flowtime of each operation. The objectives for this study are:

  1. Developing GP methods to automatically evolve reusable ADDAMs and ODDAMs for the job shop environment.

  2. Comparing the evolved DDAMs obtained from the two GP methods with existing DDAMs.

  3. Analyzing the proposed GP methods and the evolved DDAMs to understand how these models can estimate flowtime accurately.

The rest of this paper is organized as follows. In the next section, background on JSS and due-date assignment methods is given, and automatic heuristic design methods are reviewed. The methodology is developed in Section 3, and the experimental setting is presented in Section 4. The experimental results and the comparison of DDAMs are provided in Section 5. Analysis of the proposed algorithms and evolved DDAMs is presented in Section 6. Further investigation into sophisticated dispatching rules is given in Section 7. Finally, Section 8 draws some conclusions from this research and outlines directions for future studies.

2  Literature Review

This section introduces the terminology of JSS used in this study and traditional methods for due-date assignment. Since the focus of this study is on automatic design of DDAMs, a brief literature review of hyper-heuristics for heuristic generation is included.

2.1  Job Shop Scheduling

In the JSS problem, a number of jobs are to be processed, each including one or more operations to be performed in a specified sequence on specified machines and requiring certain amounts of time (Ramasesh, 1990). In practical situations, jobs can arrive at random over time, and the processing times of these jobs are not known prior to their arrival. Many related decisions need to be made for jobs and machines in the shop, such as due-date assignment, job order release, and job scheduling. In this study, we focus on due-date assignment and job scheduling decisions; job release is simplified by releasing jobs to the shop immediately upon arrival. An example of a job shop is shown in Figure 1. In this figure, a due date is assigned to a newly arriving job by some DDAM. The job is then released to the shop and processed at the predetermined machines. If a job arrives at a machine that is busy, it has to wait in the corresponding queue. Meanwhile, when a machine completes a job (or operation), the next job in the queue is selected based on some sequencing/scheduling (dispatching) rule (refer to Panwalkar and Iskander, 1977, and Pinedo, 2008, for comprehensive reviews of dispatching rules).

Figure 1:

Job shop scheduling (shop with three machines).


2.2  Due-Date Assignment

Due-date assignment decisions are made whenever jobs (customer orders) are received from customers. Good due-date assignments are needed in order to maintain high delivery performance (delivery speed and delivery reliability). Generally, due dates can be set: (1) exogenously, or (2) endogenously (Cheng and Gupta, 1989; Ramasesh, 1990). In the former case, due dates are decided by independent agencies (sellers, buyers). In this study, we only focus on the second case, in which the due dates are internally set based on the characteristics of the jobs and shop (Ramasesh, 1990), to improve the delivery performance of job shops. Basically, the due date of a new job is calculated as:
d_j = r_j + \hat{f}_j    (1)

where d_j is the due date, r_j is the release time of the job (in our study, the release time is the arrival time of the job, since the job is released to the shop immediately), and \hat{f}_j is the estimated (predicted) flowtime of job j. The task of a DDAM is to calculate a value for \hat{f}_j. In the ideal case, we want the calculated due date d_j to be equal to the completion time C_j of the job. The performance (with respect to missing the due date) is normally measured by the error (lateness) between the completion time and the due date, e_j = C_j − d_j = f_j − \hat{f}_j, where f_j is the actual flowtime.

Some criteria used to evaluate the performance of DDAMs (Cheng and Jiang, 1998; Baykasoglu et al., 2008) in the JSS literature are shown in Table 1. In this table, C is the set of jobs collected from the simulation runs to calculate the performance measures, e_j is the lateness of job j, ē is the mean lateness, and T is the set of tardy jobs (those with C_j − d_j > 0). MAPE and MAE measure the accuracy of the flowtime estimation; smaller MAPE or MAE values indicate that the DDAM makes better predictions. MPE measures the bias of the DDAM: a negative (positive) MPE means that the DDAM tends to overestimate (underestimate) the due date. STDL measures the delivery reliability of the DDAM; a smaller STDL indicates that the estimated due dates are more reliable. Another delivery performance measure is %T, the percentage of jobs that fail to meet their due dates. Finally, MF measures the delivery speed of the scheduling system.

Table 1:
Performance measures of DDAMs.
Mean absolute percentage error (MAPE) | (100/|C|) Σ_{j∈C} |e_j| / f_j
Mean percentage error (MPE) | (100/|C|) Σ_{j∈C} e_j / f_j
Mean absolute error (MAE) | (1/|C|) Σ_{j∈C} |e_j|
Standard deviation of lateness (STDL) | sqrt((1/|C|) Σ_{j∈C} (e_j − ē)²)
Percent tardiness (%T) | 100 × |T| / |C|
Mean flowtime (MF) | (1/|C|) Σ_{j∈C} f_j
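To make these measures concrete, the following Python sketch computes all six of them from the recorded actual and estimated flowtimes of the completed jobs, assuming the standard definitions reconstructed above (with lateness e_j = C_j − d_j = f_j − \hat{f}_j):

```python
import math

def ddam_performance(flowtimes, estimates):
    """Compute DDAM performance measures from actual flowtimes f_j
    and estimated flowtimes fhat_j of the completed jobs in C."""
    n = len(flowtimes)
    errors = [f - fhat for f, fhat in zip(flowtimes, estimates)]  # e_j
    mean_e = sum(errors) / n
    return {
        "MAPE": 100.0 * sum(abs(e) / f for e, f in zip(errors, flowtimes)) / n,
        "MPE":  100.0 * sum(e / f for e, f in zip(errors, flowtimes)) / n,
        "MAE":  sum(abs(e) for e in errors) / n,
        "STDL": math.sqrt(sum((e - mean_e) ** 2 for e in errors) / n),
        "%T":   100.0 * sum(e > 0 for e in errors) / n,  # tardy: C_j > d_j
        "MF":   sum(flowtimes) / n,
    }
```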

Many DDAMs have been proposed in the JSS literature. The DDAMs in early studies are mainly based on creating a simple model that employs aggregate information from the new job and the shop. Examples of these methods are total work content (TWK), where d_j = r_j + k p_j; number of operations (NOP), where d_j = r_j + k m_j; and processing plus waiting (PPW), where d_j = r_j + p_j + k m_j. In these methods, p_j and m_j are the total processing time and the number of operations of job j, and k is a coefficient that needs to be determined. Other, more sophisticated models have also been proposed that incorporate more information about jobs and the shop to make better flowtime predictions. These include job in queue (JIQ), work in queue (WIQ), total work and number of operations (TWK + NOP), response mapping rule (RMR), and operation-based flowtime estimation (OFE). Comparisons of these DDAMs (Ragatz and Mabert, 1984; Fry et al., 1989; Philipoom et al., 1994; Chang, 1996; Cheng and Jiang, 1998; Sabuncuoglu and Comlekci, 2002) show that the DDAMs which employ more useful information can lead to better performance. However, the main drawback of these methods is that they depend strongly on the determination of the corresponding coefficients for the factors used in the prediction models. The most popular method to determine the coefficients is linear regression.
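For illustration, the three classic static rules translate directly from their definitions (a minimal sketch; in practice the coefficient k would be fitted by regression or preliminary simulation runs):

```python
def twk(r_j, p_j, k):
    """Total work content: d_j = r_j + k * p_j."""
    return r_j + k * p_j

def nop(r_j, m_j, k):
    """Number of operations: d_j = r_j + k * m_j."""
    return r_j + k * m_j

def ppw(r_j, p_j, m_j, k):
    """Processing plus waiting: d_j = r_j + p_j + k * m_j."""
    return r_j + p_j + k * m_j
```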

Because of the complexity and stochastic nature of dynamic job shops, nonlinear models are often needed (Philipoom et al., 1994), which makes the coefficient-fitting problem computationally more expensive for regression methods. For this reason, many artificial intelligence methods have been applied to this problem. Philipoom et al. (1994) proposed a neural network (NN) method for due-date assignment and showed that it can outperform conventional methods and nonlinear regression models. In the same direction, Sha and Hsu (2004) developed an NN method for due-date assignment in a wafer fabrication system that showed very good results. Patil (2008) enhanced the NN method by using ensemble learning and bagging/boosting concepts; a genetic algorithm (GA) was also employed to search for NN architectures that yield a parsimonious flowtime prediction model. The computational results showed that the enhanced NN method outperformed simpler NN methods. Although different shop environments were considered, that paper only focused on training and testing the NNs on the same shop environments, and the reusability of the obtained NNs on unseen shop environments was not examined. Baykasoglu and Gocken (2009) applied gene expression programming (GEP) to evolve a symbolic regression model for DDA in a specific multistage job shop. The results showed that the evolved DDAM was better than previously proposed DDAMs. However, only aggregate information from the shop was employed to estimate the job flowtimes, and detailed information about operations was not considered. Also, as with Patil (2008), there is no analysis of the reusability of the evolved DDAMs. Other data-mining methods, such as decision trees (Ozturk et al., 2006), regression trees (Sha and Liu, 2005), and a regression-based method with case-based tuning (Sha et al., 2007), have also been proposed, showing very promising results.

Although the DDAMs described above have shown good results in simulation studies, determining good model coefficients is not an easy task, especially with the dynamic changes in the shop floor. To overcome this problem, some dynamic DDAMs have been proposed, in which the coefficients are adjusted based on the information of the new job and states of the systems. Cheng and Jiang (1998) proposed dynamic total work content (DTWK) and dynamic processing plus waiting (DPPW) by applying Little's law (Little, 1961) from queueing theory:

  • DTWK:
    d_j = r_j + max(1, N / (λ̄ p̄ m̄)) · Σ_{i=1}^{m_j} p_ji    (2)

  • DPPW:
    d_j = r_j + Σ_{i=1}^{m_j} p_ji + (Q / (λ̄ m̄)) · m_j    (3)

    where N is the number of jobs in the shop at the moment the new job arrives, λ̄ is the average arrival rate of jobs, p̄ and m̄ are respectively the average processing time and the average number of operations, p_ji is the processing time of the ith operation of job j, and Q is the total number of jobs in the queues of the machines.
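A sketch of the two dynamic rules as reconstructed above; the max(1, ·) floor and the queue-based waiting allowance follow from the Little's-law argument, so treat the exact forms as our reading of Cheng and Jiang (1998):

```python
def dtwk(r_j, op_times, n_shop, lam, p_bar, m_bar):
    """Dynamic TWK: scale the total work content by the flowtime
    multiplier N / (lam * p_bar * m_bar) implied by Little's law,
    never letting it fall below 1."""
    k = max(1.0, n_shop / (lam * p_bar * m_bar))
    return r_j + k * sum(op_times)

def dppw(r_j, op_times, q_total, lam, m_bar):
    """Dynamic PPW: total processing time plus a per-operation waiting
    allowance Q / (lam * m_bar) estimated from current queue lengths."""
    wait_per_op = q_total / (lam * m_bar)
    return r_j + sum(op_times) + wait_per_op * len(op_times)
```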

In another study, Baykasoglu et al. (2008) developed ADRES, a new dynamic DDAM, which uses a simple smoothing method to estimate the wait time of the next job. In this model, the due date can be calculated as follows (assuming zero transportation times):
d_j = r_j + Σ_{i=1}^{m_j} ŵ_ji + P̄_j    (4)

where ŵ_ji is the estimated wait time of job j at its ith operation and P̄_j is the sum of the mean processing times at the stations on the route of job j. Each ŵ_ji is maintained by adaptive exponential smoothing: with e_ji = w_ji − ŵ_ji the error of the wait time estimation (w_ji being the actual wait time), the smoothed error and the absolute smoothed error are updated with a smoothing constant, and their ratio gives the adaptive rate by which the smoothed value is corrected.
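The adaptive smoothing inside ADRES can be sketched as a Trigg-Leach-style update, which matches the smoothed error, absolute smoothed error, and smoothing constant named above; the exact recursions in Baykasoglu et al. (2008) may differ in detail:

```python
class AdaptiveWaitEstimator:
    """Adaptive exponential smoothing of waiting times (illustrative)."""

    def __init__(self, beta=0.2, initial=0.0):
        self.beta = beta       # smoothing constant
        self.w_hat = initial   # current smoothed wait estimate
        self.e_smooth = 0.0    # smoothed error
        self.e_abs = 1e-9      # smoothed absolute error (avoids div by 0)

    def update(self, w_actual):
        """Observe an actual wait w_ji and refresh the estimate."""
        e = w_actual - self.w_hat                      # estimation error
        self.e_smooth = self.beta * e + (1 - self.beta) * self.e_smooth
        self.e_abs = self.beta * abs(e) + (1 - self.beta) * self.e_abs
        alpha = abs(self.e_smooth / self.e_abs)        # adaptive rate in [0, 1]
        self.w_hat += alpha * e
        return self.w_hat
```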

Previous research has shown that DTWK, DPPW, and ADRES are very effective DDAMs as compared to static regression-based DDAMs. Another advantage of these DDAMs is that no preliminary runs to obtain the parameter estimations are necessary. Therefore, they have been used as good candidates for comparison purposes (Sha and Liu, 2005; Baykasoglu et al., 2008; Baykasoglu and Gocken, 2009, 2011; Vinod and Sridharan, 2011).

One problem with the dynamic DDAMs is that they still rely mainly on aggregate information about jobs and the shop and ignore detailed operation information, even though it has been shown that this information can help improve the quality of the prediction (Sabuncuoglu and Comlekci, 2002). However, developing such operation-based DDAMs manually would be very difficult, since these models involve many different factors (variables). Thus, there is a need for an automatic method to facilitate the design of such models. Also, there is no previous study on the reusability of the proposed models in the JSS literature, so it is questionable whether the proposed/evolved models can be applied, without major revisions, when there are changes in the shop.

2.3  Automatic Design of Heuristics

Hyper-heuristics (HH; for heuristic generation) are a methodology that aims to automate the design and tuning of heuristic methods to solve hard computational search problems (Burke et al., 2010b). In order to generate a new heuristic, the HH framework must be able to combine various small components (normally common statistics or operators used in preexisting heuristics), and these heuristics are trained on a training set and evolved to become more effective. Because genetic programming (GP) is able to represent and evolve complex programs or rules, GP has become popular in this field, and the subfield is known as genetic programming-based hyper-heuristics (GP-HH; Burke et al., 2009). Many GP-HH methods have been proposed in the literature. Fukunaga (2002, 2008) used GP to evolve variable selection heuristics for each application of a local search algorithm for the satisfiability (SAT) problem; the experimental results showed that the evolved heuristics are very competitive with other standard heuristics. Burke et al. (2007a, 2007b, 2011) proposed a GP-HH framework to evolve construction heuristics for online bin packing; the results suggested that human-designed heuristics can be rediscovered by GP. Keller and Poli (2007a, 2007b) proposed an effective grammar-based linear genetic programming method to solve the traveling salesman problem, and also investigated search performance and resource utilization. Bader-El-Den et al. (2009) introduced a sophisticated grammar-based GP for evolving timetabling heuristics; their GP-HH is based on a grammar derived from a collection of graph coloring heuristics that had previously been shown to be effective in constructing timetables. Even though the proposed GP-HH produced competitive results when compared with some existing search methods in the literature, it was not shown whether the evolved heuristics can be reused on new problem instances. More recently, Burke et al. (2010a) proposed a GP-HH method to learn construction heuristics for two-dimensional strip packing problems; the evolved heuristics were very promising when compared with the best-fit heuristic and some meta-heuristic methods, and the authors provided some interesting insights into the generality of the evolved heuristics. A thorough review of GP-HH applications is given in Burke et al. (2009).

Recently, GP-HH has been applied in several studies to evolve dispatching rules for JSS problems. Jakobovic and Budin (2006) applied GP to evolve dispatching rules for both single machine and job shop environments. The results for the single machine environment showed that the evolved rules were better than existing rules. For the job shop environment, a meta-algorithm was developed to show how the evolved rules are used to construct a schedule. This study also proposed an interesting way to provide adaptive behavior for the evolved rules: a GP-3 system that evolves three components, a discriminant function and two dispatching rules. The discriminant function, serving as a binary classifier, aims to identify whether the machine to be scheduled is a bottleneck. Based on the classification decision obtained from the discriminant function, one of the two dispatching rules is selected to sequence jobs in the queue of that machine. The results showed that this GP-3 system performed better than traditional GP with a single tree representing a dispatching rule. An extension of this work in Jakobovic et al. (2007) dealt with parallel machines with different speeds and also showed very good results. Geiger et al. (2006) presented a learning system that combines GP with a simulation model of an industrial facility. Both static and dynamic environments were investigated in this study, and the results showed that the evolved rules are very promising. The paper also proposed a method to learn dispatching rules for multiple machine problems in which GP evolves multiple trees simultaneously with modified crossover and mutation operators. Comparison with the optimal rule in a simple two-machine environment showed that the evolved rules are quite competitive.

Tay and Ho (2008) proposed a GP system to evolve dispatching rules for a multi-objective job shop environment. The multi-objective problem was converted into a single-objective problem by linearly combining all objectives. The evolved GP program can be considered a priority function, used to calculate the priority of operations in the queue of a machine based on a set of static and dynamic variables. The set of instances was randomly generated, and it was shown that the evolved dispatching rules outperform other simple dispatching rules. Hildebrandt et al. (2010) reexamined this system under different dynamic job shop scenarios and showed that the rules evolved by Tay and Ho (2008) are only slightly better than the earliest release date (ERD) rule and quite far from the performance of the shortest processing time (SPT) rule. They explained that the poor performance of these rules is caused by the linear combination of different objectives and by the fact that randomly generated instances cannot effectively represent the situations that occur in a long-term simulation. For that reason, Hildebrandt et al. (2010) evolved dispatching rules by training them on four simulation scenarios (10 machines with two utilization levels and two job types), aiming only at minimizing mean flowtime. The experimental results indicated that the evolved rules were quite complicated but effective when compared with other existing rules. Moreover, these evolved rules were also robust when tested in another environment (50 machines and different processing time distributions).

Most existing work on GP for scheduling problems aims only to evolve dispatching rules and simplifies DDA by applying simple DDAMs (e.g., TWK). However, because DDA strongly influences the delivery performance, it needs to be considered seriously in order to ensure an effective scheduling system. The objective of this study is to focus on the use of GP for the automatic design of DDAMs. Baykasoglu and Gocken (2009) can be considered the most relevant work in this research direction. However, their study only focused on a special shop, and the reusability of the evolved models (under different shop conditions) was not considered. In addition, they did not take advantage of the detailed job, shop, and route information for the operations of jobs to enhance the accuracy of flowtime estimates. Operation-based flowtime estimation is the key issue investigated in our work. We also examine the reusability of the evolved DDAMs to see whether they are able to perform well on unseen scenarios. In the next section, the proposed GP methods are described to show how GP can be used to evolve ADDAMs and ODDAMs for job shop environments.

3  GP for Evolving DDAMs

In this section, we describe two GP methods, GP-ADDAM and GP-ODDAM (Nguyen et al., 2012), to evolve ADDAMs and ODDAMs, respectively. First, the representation and evaluation scheme are discussed. Then, a fitness function is provided to measure the performance of the evolved DDAMs. Finally, the proposed GP procedure to evolve DDAMs is described.

3.1  Representation

The purpose of the proposed GP-ADDAM and GP-ODDAM is to evolve dynamic ADDAMs and ODDAMs that estimate job flowtimes (and hence due dates, via Equation (1)) by employing information from jobs and the shop, similar to DTWK and DPPW. We use tree-based GP (Koza, 1992) to create mathematical combinations of these pieces of information in each GP individual. The function set therefore consists of the standard mathematical operators +, −, ×, and protected division %, along with a conditional function If to allow GP to evolve sophisticated DDAMs. The protected division function % returns a value of 1 when division by 0 is attempted. The function If takes three arguments: if the value of the first argument is greater than or equal to zero, If returns the value of the second argument; otherwise, it returns the value of the third argument. Since ADDAMs and ODDAMs need different types of information, GP-ADDAM and GP-ODDAM use different terminal sets, as shown in Table 2. In this table, the first five terminals are the same for the two proposed GP methods. The next eight terminals are variables that characterize the state of operations/machines for GP-ODDAM and their aggregate counterparts for GP-ADDAM. The last terminal of each method provides extra information to estimate the flowtime: the sampled average number of operations summarizes previously arrived jobs for GP-ADDAM, while PEF (the partial estimated flowtime) helps GP-ODDAM anticipate how the system will change during the time the new job spends in it (more details are given in Section 3.2). The sampled processing-time terminals are calculated based on the 20 previous jobs processed at the relevant machine, whereas the sampled arrival rate and the sampled average number of operations are calculated based on the arrivals of the last 100 jobs and 20 jobs, respectively.

Table 2:
Terminal sets for GP-ADDAM and GP-ODDAM (j is the new job, o is the considered operation in GP-ODDAM, and m is the machine that will process o).
GP-ADDAM | GP-ODDAM
Number of jobs in the shop (shared)
Sampled arrival rate (shared)
Number of operations of job j (shared)
Number of machines (shared)
Random number from 0 to 1 (shared)
Total average processing time of jobs in the queues of the machines that j will visit | Average processing time of jobs in the queue of the machine that processes o
Total processing time of j | Processing time of o
Average time, over all machines that j will visit, to finish the leftover job | Time for m to finish the leftover job
Average, over all queues of the machines that j will visit, of the percentage of queued jobs requiring less processing time | Percentage of jobs in the queue of m that require less processing time than o
Average, over all machines that j will visit, of the sampled percentage of processed jobs requiring less processing time | Percentage of sampled jobs processed at m that require less processing time than o
Total, over all machines that j will visit, of the processing time of jobs in the queue | Total processing time of jobs in the queue of m
Total, over all machines that j will visit, of the sampled average processing time | Sampled average processing time of jobs processed at m
Total, over all machines that j will visit, of the workload still to be processed | Total processing time of jobs that need to be processed at m
Sampled average number of operations of jobs | Partial estimated flowtime (PEF)
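The non-terminal functions are easy to implement; below is a minimal sketch of the protected division and conditional primitives described in Section 3.1 (the terminal names in the closing comment are illustrative, not the paper's notation):

```python
def protected_div(a, b):
    """Protected division: return 1 when division by 0 is attempted."""
    return a / b if b != 0 else 1.0

def if_func(a, b, c):
    """Conditional primitive: return b if a >= 0, otherwise c."""
    return b if a >= 0 else c

# An evolved tree such as If(N - 10, PEF + OT, OT * SAPT) is then just a
# nested composition of these primitives over terminal values.
```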

3.2  Evaluation

An example of how an individual in GP-ADDAM is evaluated is shown in Figure 2(a). In this method, a GP individual represents a mathematical function and the output of this function is the estimated flowtime of the new job. The information used in this function is extracted from the new job and machines in the shop.

Figure 2:

DDAM evaluation scheme.


The GP individual in GP-ODDAM aims to estimate the flowtime of each operation of the new job. Therefore, instead of using the function obtained from the GP individual to estimate the job flowtime \hat{f}_j directly, the output of this function is used to estimate the operation flowtime \hat{f}_ji of each operation i of the new job, starting from the first operation. When \hat{f}_ji is obtained, a condition is checked to see whether the operation being considered is the last operation. If it is not the last operation of the new job, \hat{f}_ji is used to update the partial estimated flowtime (PEF), which is also used as a terminal in the GP individual. Then, the GP individual is applied to estimate the flowtime of the next operation. Once the flowtime of the last operation has been estimated, it is added to the current PEF to obtain the estimated job flowtime \hat{f}_j. The evaluation scheme for GP-ODDAM is shown in Figure 2(b) (noting that only the tree in the figure is evolved by GP). The use of PEF (initially zero for the first operation) in the terminal set of GP-ODDAM also gives DDAMs a chance to predict changes of the system, assuming that the partial estimated flowtime is predicted well.
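The per-operation scheme amounts to a simple accumulation loop; a sketch, under the assumption that the compiled GP tree maps a dictionary of terminal values to one operation flowtime estimate (the Operation type and the "OT" key are illustrative names, not the paper's notation):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Operation:
    processing_time: float
    machine: int

def estimate_flowtime_oddam(gp_tree: Callable[[Dict[str, float]], float],
                            operations: List[Operation],
                            shop_state: Dict[str, float]) -> float:
    """Estimate a job's flowtime by summing per-operation estimates."""
    pef = 0.0  # partial estimated flowtime, zero for the first operation
    for op in operations:
        terminals = {"OT": op.processing_time, "PEF": pef, **shop_state}
        pef += gp_tree(terminals)  # accumulate the operation estimate
    return pef  # PEF after the last operation is the job flowtime estimate

# usage: d_j = r_j + estimate_flowtime_oddam(tree, job_ops, {"N": 12.0})
```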

3.3  Genetic Operators

Traditional genetic operators are employed by the proposed GP methods. For crossover, the GP system uses subtree crossover (Koza, 1992), which creates new individuals for the next generation by randomly recombining subtrees from two selected parents. Mutation is performed by subtree mutation (Koza, 1992), which randomly selects a node of a chosen individual and replaces the subtree rooted at that node with a new randomly generated subtree. For reproduction, an individual is selected from the population by the selection mechanism (e.g., tournament selection) and copied into the population of the next generation. In each generation of GP, the application of each genetic operator is governed by an assigned probability, referred to as the crossover rate, mutation rate, and reproduction rate, respectively.
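The authors implemented their system on the ECJ20 Java library (see Section 4.2); purely as an illustration, the same three operators can be configured with Python's DEAP library roughly as follows (the primitive set here is a placeholder, not the Table 2 terminals):

```python
import operator
from deap import base, creator, gp, tools

# placeholder primitive set with four unnamed terminals (ARG0..ARG3)
pset = gp.PrimitiveSet("DDAM", 4)
pset.addPrimitive(operator.add, 2)
pset.addPrimitive(operator.sub, 2)
pset.addPrimitive(operator.mul, 2)

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))  # minimize MAPE
creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("select", tools.selTournament, tournsize=5)  # tournament of 5
toolbox.register("mate", gp.cxOnePoint)                       # subtree crossover
toolbox.register("expr_mut", gp.genFull, min_=0, max_=2)      # random subtree
toolbox.register("mutate", gp.mutUniform, expr=toolbox.expr_mut, pset=pset)
```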

3.4  Fitness Function

As discussed in Section 2.2, the performance of a DDAM can be measured in many different ways, indicating delivery accuracy and delivery reliability. In this study, we use MAPE to measure the quality of evolved DDAMs because it is a good indicator of both delivery accuracy and delivery reliability. A discrete-event simulation model of a job shop was implemented in order to evaluate the evolved DDAMs. In this model, the interarrival times, the processing times, and the route information of jobs follow specified probability distributions. Upon the arrival of a job j, the DDAM is applied to estimate the flowtime of that job. The error e_j of this estimation is recorded when job j leaves the system, and the errors of all recorded jobs are used to calculate MAPE as shown in Table 1. Since we want to evolve DDAMs that can be reused under unseen conditions, the quality of an evolved DDAM is measured by its performance on a set of simulation scenarios S, which represent different shop characteristics. For a simulation scenario s ∈ S, the quality of a DDAM p_i is indicated by MAPE_s(p_i). The fitness value of p_i is calculated as follows:

fitness(p_i) = (1/|S|) · Σ_{s∈S} MAPE_s(p_i)    (5)

With this design, a smaller fitness(p_i) indicates that the evolved DDAM p_i produces more accurate flowtime estimates across different scenarios.
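In code, Equation (5) is simply the mean MAPE over the training scenarios; a minimal sketch, where `simulate_mape` is a hypothetical helper that runs one simulation replication of scenario s with DDAM p_i and returns the resulting MAPE:

```python
def fitness(ddam, scenarios):
    """Fitness of a candidate DDAM: mean MAPE across scenarios (Eq. 5)."""
    # simulate_mape: user-supplied, runs one replication and returns MAPE
    return sum(simulate_mape(ddam, s) for s in scenarios) / len(scenarios)
```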

3.5  Evolution of DDAMs

Algorithm 1 shows how GP is used to evolve DDAMs in both GP-ADDAM and GP-ODDAM. A variety of simulation scenarios are employed in this algorithm to give the evolved (trained) DDAMs better generality, but it should be noted that a large number of scenarios also increases the computation time of the GP systems. The evolutionary process terminates when the maximum generation is reached, and the algorithm returns the best found DDAM p*. It should be noted that GP-ADDAM and GP-ODDAM use the same algorithm, but the terminals used by these two methods are different, since they have a different focus (as mentioned in Sections 3.1 and 3.2).
Algorithm 1: The GP procedure for evolving DDAMs.
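The procedure is a standard generational GP loop; the following is an illustrative sketch, with `random_tree`, `evaluate`, and `breed` as hypothetical stand-ins for the ramped-half-and-half initialization, the fitness of Equation (5), and the genetic operators of Section 3.3:

```python
def evolve_ddam(scenarios, pop_size=1000, max_generations=50):
    """Sketch of Algorithm 1: evolve a DDAM over training scenarios."""
    population = [random_tree() for _ in range(pop_size)]  # ramped half-and-half
    best = None
    for _ in range(max_generations):
        for ddam in population:
            ddam.fitness = evaluate(ddam, scenarios)       # mean MAPE, Eq. (5)
            if best is None or ddam.fitness < best.fitness:
                best = ddam
        # tournament selection, then subtree crossover / mutation / reproduction
        population = breed(population)
    return best  # the best found DDAM p*
```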

4  Experimental Setting

This section discusses the simulation environments in which the DDAMs are trained or evolved. Then, the details of the training and testing scenarios are provided. Finally, the settings of the GP systems are given.

4.1  Job Shop Simulation Environment

Simulation is the most popular method to evaluate the performance of a DDAM in the JSS literature. Since our goal is to design general DDAMs, a general job shop would be more suitable than a specific shop. The following factors characterize a job shop:

  • Number of machines

  • Utilization

  • Arrival process

  • Distribution of processing time

  • Distribution of number of operations (route length)

The number of machines is the main factor that determines the scale of the shop; this may also influence the complexity of the JSS decisions. Utilization, on the other hand, indicates the congestion level of the machines (and the shop). The performance of JSS decisions under different utilization levels is of interest in most research in the JSS literature. The arrival process, the distribution of processing times, and the number of operations are factors that directly influence the difficulty of JSS decisions.

In our experiments, we employ a symmetrical (balanced) job shop model in which each operation of a job has equal probability of being processed at any machine in the shop (a job visits each machine at most once). Therefore, machines in the shop are expected to have the same level of congestion in long simulation runs. This model has been used very often in the JSS literature (Chang, 1996; Cheng and Jiang, 1998; Sabuncuoglu and Comlekci, 2002; Land, 2004; Hildebrandt et al., 2010). Based on the discussion above, the scenarios for training and testing of DDAMs are designed as shown in Table 3.

Table 3:
Training and testing scenarios.
Factor | Training | Testing
Number of machines | 4, 6 | 4, 5, 6, 10, 20
Utilization | 70%, 80%, 90% | 60%, 70%, 80%, 90%, 95%
Distribution of processing time | Exponential | Exponential, Erlang-2, uniform
Distribution of number of operations | Missing | Missing, full

In these experiments, without loss of generality, the processing times are randomly generated from the specified distribution with mean equal to 1, and jobs arrive according to a Poisson process with the arrival rate adjusted to achieve the target utilization level. For the distribution of the number of operations, the missing setting indicates that the number of operations follows a discrete uniform distribution from 1 to the number of machines, while the full setting indicates that each job has a number of operations equal to the number of machines in the shop. In each replication of a simulation scenario, we start with an empty shop; the interval from the beginning of the simulation until the arrival of the thousandth job is considered the warm-up time, and the information collected from the next 5,000 completed jobs (the set C in Section 2.2) is used to evaluate the performance of DDAMs.
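For the balanced shop, the arrival rate implied by a target utilization follows from u = λ·m̄·p̄/M, since each job brings m̄·p̄ units of work spread evenly over the M machines. A small sketch of this bookkeeping (our derivation of the standard relation, not code from the paper):

```python
def arrival_rate(utilization, n_machines, mean_proc=1.0, full=False):
    """Poisson rate lambda = u * M / (m_bar * p_bar) for the balanced shop."""
    # missing setting: operations ~ U{1, ..., M}, so m_bar = (M + 1) / 2
    m_bar = n_machines if full else (n_machines + 1) / 2.0
    return utilization * n_machines / (m_bar * mean_proc)

# e.g., 6 machines, 90% utilization, missing setting:
# arrival_rate(0.9, 6) ~= 1.543 jobs per unit time
```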

In the training stage, since the simulation is very time-consuming, we only perform one replication for each scenario. From Table 3, there are 2 × 3 = 6 training scenarios used to evaluate the performance of the evolved DDAMs. For testing, the best DDAM obtained from a run of GP is applied to the 5 × 5 × 3 × 2 = 150 testing scenarios, and 30 simulation replications are performed for each scenario; therefore, 150 × 30 = 4,500 simulation replications are needed to test each evolved DDAM. The use of a large number of scenarios and replications in the testing stage helps us confirm the quality and reusability of the evolved DDAMs. At the shop floor level, first in first out (FIFO) is used as the dispatching rule to sequence jobs in the queues of machines; under FIFO, the job that joined the queue earliest is processed first. We adopt FIFO in this study because it is one of the most popular dispatching rules in the scheduling literature.

4.2  GP Parameters

The GP system for evolving DDAMs is developed based on the ECJ20 library (Luke, 2009). The parameter settings of the GP system used in the rest of this study are shown in Table 4. The initial GP population is created using the ramped half-and-half method (Koza, 1992). Tournament selection is used to select individuals for the genetic operators; tournament sizes from four to seven are typical, and we use a size of five in order to maintain a balance between diversity and convergence of the proposed GP methods (Koza, 1992; Banzhaf et al., 1998). The parameters in Table 4 are similar to those in other applications of GP (Koza, 1992; Banzhaf et al., 1998). Since the terminal set includes many different terminals, the mutation rate is set to 15% to provide sufficient genetic material throughout the evolutionary process of the proposed GP methods.

Table 4:
Parameters of the proposed GP systems.
Population size 1000 
Crossover rate 80% 
Mutation rate 15% 
Reproduction rate 5% 
Generations 50 
Maximum depth 10 
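Continuing the illustrative DEAP sketch from Section 3.3, the Table 4 settings map onto a standard generational run as below; eaSimple leaves individuals unmodified with the remaining probability, which approximately plays the role of the 5% reproduction rate, and the evaluation function is a stub to be replaced by the simulation-based fitness of Section 3.4:

```python
import operator
from deap import algorithms, gp, tools

toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=2, max_=6)  # ramped half-and-half
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", lambda ind: (0.0,))  # stub: plug in Eq. (5) here

# reject offspring deeper than the maximum depth of 10
depth_limit = gp.staticLimit(key=operator.attrgetter("height"), max_value=10)
toolbox.decorate("mate", depth_limit)
toolbox.decorate("mutate", depth_limit)

pop = toolbox.population(n=1000)
pop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.80, mutpb=0.15,
                               ngen=50, verbose=False)
```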

5  Results

A comparison of the best evolved DDAMs with some existing DDAMs is provided to show the effectiveness of the evolved DDAMs. Then, we compare the performance of the two proposed GP methods.

5.1  Comparison of DDAMs

For each GP method, 30 independent runs are performed, and the best ADDAMs and ODDAMs obtained from each run are recorded and compared with the existing dynamic DDAMs (DTWK, DPPW, and ADRES). Tables 5 and 6 show the comparison between the evolved DDAMs and the other DDAMs on the 150 testing scenarios. In these tables, each entry is the number of evolved DDAMs (out of 30) that are significantly better (by t-test with a significance level of 0.05) than DTWK, DPPW, and ADRES, respectively. It is easy to see that the evolved DDAMs dominate the other DDAMs in most scenarios. These experimental results indicate the effectiveness of the proposed GP methods for evolving DDAMs. It is also interesting to see that the evolved DDAMs have very good reusability, since they provide superior performance even on unseen scenarios (e.g., with the full setting).

Table 5:
Comparing the evolved ADDAM with existing DDAMs.
Table 6:
Comparing the evolved ODDAM with existing DDAMs.