## Abstract

The two-machine permutation flow shop scheduling problem with buffer is studied for the special case that all processing times on one of the two machines are equal to a constant $c$. This case is interesting because it occurs in various applications, for example, when one machine is a packing machine or when materials have to be transported. Different types of buffers and buffer usage are considered. It is shown that all considered buffer flow shop problems remain NP-hard for the makespan criterion even with the restriction to equal processing times on one machine. However, the special case where the constant $c$ is larger or smaller than all processing times on the other machine is shown to be polynomially solvable by presenting an algorithm (2BF-OPT) that calculates optimal schedules in $O(n \log n)$ steps. Two heuristics for solving the NP-hard flow shop problems are proposed: (i) a modification of the commonly used NEH heuristic (mNEH) and (ii) an Iterated Local Search heuristic (2BF-ILS) that uses the mNEH heuristic for computing its initial solution. It is shown experimentally that the proposed 2BF-ILS heuristic obtains better results than two state-of-the-art algorithms for buffered flow shop problems from the literature and an Ant Colony Optimization algorithm. In addition, it is shown experimentally that 2BF-ILS obtains the same solution quality as the standard NEH heuristic, but with a smaller number of function evaluations.

## 1  Introduction

A flow shop problem is a scheduling problem where each job (or object) goes through a sequence of $m$ processing steps such that the $i$th processing step is done on machine $Mi$ for $i∈{1,…,m}$. The objective is to schedule the processing of jobs such that certain constraints are satisfied and certain criteria relating to efficiency, job throughput or incurred costs are optimized. A typical constraint is that the schedule has to be a permutation schedule where the sequence of jobs is the same on all machines. The most often studied optimization criterion is to minimize the makespan, that is, the total length of the schedule. In this article, we study the case of permutation scheduling for two machines under the makespan criterion.

The flow shop problems studied here have buffer constraints where each job takes up a given amount of buffer space and the jobs are to be scheduled such that the available buffer capacity is not exceeded. Buffer constraints occur in many practical applications where not only the time, but also the size of the processed objects has to be taken into account as, for example, in production facilities with limited storage space. In particular, we consider two types of buffer constraints that have been studied for flow shops in the literature and which differ by the time span in which the buffer is occupied by a job. Flow shops of these types occur, for example, in waste processing or the production of metal parts for the aerospace industry (Martinez et al., 2006), supply chains for mineral resources (Ernst et al., 2019), or in the production of chemicals (Liu and Kozan, 2009).

In this article, we study the special case of buffered two-machine flow shops where for one of the two machines the processing times for all jobs are equal to a constant $c$. This case is relevant because there exist many applications where in one stage of the flow shop the processing steps tend to be relatively similar for all objects. An example is a two-machine flow shop where different objects are produced on the first machine and packed on the second machine. In many applications of this type the production times might vary but packing is often a standardized process which approximately takes the same time for each object. Another example is the case of a robot (which corresponds to the first machine) that transports material for the production of different objects to the second machine where the manufacturing takes place. In many applications of this type the transportation times might be similar, for example, when all materials are stored in the same storage hall or loaded onto the same vehicle, whereas the production times might differ strongly. In other applications of this type the opposite might hold: the transportation times are different, whereas all production times are similar.

Application examples can also be found outside the area of production processes, for example, in the processing of multimedia files as described by Lin et al. (2013) and Kononova and Kochetov (2013), where the downloading of files with different sizes corresponds to different processing times on the first machine, followed by a second operation that takes a fixed amount of time, such as processing the files' metadata or creating file previews of fixed length while the available computer memory has to be taken into account.

The studied problem is also relevant from a theoretical perspective as it explores the computational complexity of buffer flow shops, especially the boundary between NP-hard and efficiently solvable problems, similar to the analyses of flow shops with buffers by Kononov et al. (2019) and Ernst et al. (2019). For the studied buffered two-machine flow shop problems it is shown that they remain NP-hard for the makespan criterion even under the restriction to equal processing times on one machine. In addition, it is shown that two special cases can be solved optimally in polynomial time, namely, the cases where the value of $c$ describing the constant processing time on one of the machines is smaller (or larger) than all processing times on the other machine. These special cases are relevant for various applications, for example, when the two machines are of different types, such as a production machine and a packaging machine that have different processing times. In this example, it is possible that the constant packaging times are relatively small (or large) in comparison to the production times.

To solve the general case we propose a modification of the well-known NEH heuristic as well as an Iterated Local Search heuristic (2BF-ILS) that uses the modified NEH heuristic for computing its initial solution. 2BF-ILS is studied experimentally and compared with two state-of-the-art heuristics for buffered flow shops from the literature as well as an Ant Colony Optimization algorithm. 2BF-ILS is also experimentally compared to the standard NEH heuristic.

Note that this article is an extension of Le et al. (2019). New is a detailed NP-completeness proof for the considered problems and the identification of the aforementioned efficiently solvable subcases by presenting an $O(nlogn)$ algorithm and proving its correctness. The extension also incorporates new experiments using an additional set of instances with characteristics commonly occurring in practical applications and features a more detailed comparison between the optimization behavior of the proposed algorithm 2BF-ILS and the other algorithms. Also, additional literature on more recent developments regarding heuristic, approximate methods and theoretical results for two-machine buffer flow shops and its variants, as well as extended explanations have been integrated.

The remainder of this article is structured as follows. A short overview of recent works on buffer-constrained flow shop problems is given in Section 2. A formal description of the considered flow shop problems with buffers is presented in Section 3. The NP-completeness results are shown in Section 4. The two special cases that are polynomial-time solvable are discussed in Section 5. The modified NEH heuristic is described in Section 6 and the Iterated Local Search heuristic is introduced in Section 7. Experimental results are presented in Section 8. Conclusions are given in Section 9.

## 2  Related Work

The buffered flow shop problems that are most often studied in the literature have a buffer between each two adjacent machines $M_i$ and $M_{i+1}$, and each job occupies the buffer after it finishes on $M_i$ and until its processing starts on $M_{i+1}$. This type of buffer is called intermediate buffer. It is known that the flow shop problem with intermediate buffers is NP-complete (Papadimitriou and Kanellakis, 1980) for two or more machines. Several authors have investigated heuristics for this problem, one of the earliest works being Leisten (1990), where a systematic overview for formalizing intermediate buffer flow shop problems is presented and several heuristics originating from infinite-buffer flow shops are tested, with the NEH heuristic obtaining the best results.

Investigated heuristics also include a Tabu Search (Li and Tang, 2005), Variable Neighborhood Search (Moslehi and Khorasanian, 2014), Genetic Algorithms (Wang et al., 2006), methods based on Differential Evolution (Pan, Wang, Gao, and Li, 2011), and Particle Swarm Optimization (Liu et al., 2008). Other examples of biologically inspired algorithms that have been applied to buffered flow shops are a Discrete Artificial Bee Colony (Zhang and Gu, 2015) and Immune System algorithms (Hsieh et al., 2009). In most of these studies the performance of different heuristics for flow shops with intermediate buffers has been compared. Figure 1 summarizes the results of these comparisons.
Figure 1:

Overview of works containing comparisons between algorithms for buffered flow shops with intermediate buffers. The notation $A←B$ indicates that algorithm $A$ is outperformed by algorithm $B$ in the given reference. However, the arrows are to be interpreted with some care: It does not mean that $A$ is worse than $B$ for two-machine flow shops with buffer constraints. It only means that $B$ obtained better results (on average) than $A$ for the considered test instances in the respective work.


Another type of buffer constraint that is considered in the literature is where the buffer between machines $M_i$ and $M_{i+1}$ is used by each job from its starting time on $M_i$ until its finishing time on $M_{i+1}$. This type of buffer is in the following referred to as spanning buffer. The flow shop problem with spanning buffer is also NP-complete (Lin et al., 2009). Examples for methods from the literature for this flow shop type are a Variable Neighborhood Search by Kononova and Kochetov (2013), where Integer Linear Programming was also used to solve small instances, a Branch-and-Bound algorithm used to calculate lower bounds and optimally solve small instances with up to 18 jobs (Lin et al., 2009), as well as a heuristic based on Lagrangian relaxation and bin packing (Kononov et al., 2019). This buffer type is also analyzed by Gu et al. (2018) for the total weighted completion time criterion, where the problem is shown to be NP-hard even if one of the two permutations is fixed, and Min et al. (2019) proved the NP-hardness for the case where the processing times on both machines are the same for each job. Another variant where the spanning buffer changes its capacity over time is also NP-hard even when all jobs have unit processing times, but smaller instances with up to 40 jobs can be optimally solved using Integer Linear Programming (Berlińska et al., 2019).

If the buffer size is infinite, the resulting flow shop is equivalent to a flow shop problem without buffer constraints, which in the case with two machines can be optimally solved in polynomial time for the makespan criterion using Johnson's algorithm (Johnson, 1954), while it is NP-hard for more than two machines (Garey et al., 1976). If the buffer has no capacity, so that a finished job on $M_i$ immediately blocks $M_i$ if $M_{i+1}$ is busy, one obtains the blocking flow shop, which can also be solved in polynomial time for up to two machines (Gilmore and Gomory, 1964) but is otherwise NP-hard for the makespan criterion (Hall and Sriskandarajah, 1996).
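For the unbuffered two-machine case, Johnson's rule can be sketched in a few lines (a minimal Python illustration; the job data are hypothetical):

```python
def johnson_order(jobs):
    """Johnson's rule for F2||Cmax (no buffer constraints): jobs with
    a_i <= b_i first in increasing a_i, then the rest in decreasing b_i."""
    first = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    second = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: j[1], reverse=True)
    return first + second

def makespan(order):
    """Makespan of a permutation schedule on two machines (infinite buffer)."""
    t1 = t2 = 0
    for a, b in order:
        t1 += a               # job finishes on M1
        t2 = max(t2, t1) + b  # starts on M2 once both M2 and the job are ready
    return t2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6)]
print(makespan(johnson_order(jobs)))
```

For these hypothetical jobs the rule yields the order $(1,2),(3,6),(6,6),(5,2)$ with makespan 18.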

The buffer-constrained flow shop has also been investigated with further additional constraints in the literature. A combination of different target criteria is investigated by Qian et al. (2009) and Liang et al. (2019) using Differential Evolution algorithms, whereas Liu and Kozan (2009) consider the combination of different buffer constraints in the same instance. Other extensions of the flow shop with buffer constraints include the processing of jobs in batches, for which an exact Branch-and-Bound algorithm for up to 40 jobs (Agnetis et al., 1998), an approximation algorithm (Pranzo, 2004), and a Differential Evolution heuristic (Fu et al., 2012) have been proposed. Another extension is the use of machines that can process multiple jobs at the same time. This type is known as the parallel, flexible, or hybrid flow shop problem and for the buffer-constrained variant an Artificial Bee Colony algorithm is presented in Li and Pan (2015). Regarding the flexible flow shop with two machines and spanning buffer, it has been shown that this problem is NP-hard (Fung et al., 2016). It is even NP-hard to approximate within a factor of $4/3$, even when all processing times are equal, but becomes polynomially solvable if the spanning buffers of each flow shop have the same capacity (Ernst et al., 2019). A more general case consisting of multiple parallel flow shops is considered by Zhang and Xing (2019) where a Differential Evolution heuristic is proposed.

## 3  Formal Description of the Problem

Each two-machine flow shop problem studied in this article has a given set $J=\{J_1,\dots,J_n\}$ of $n$ jobs and two machines $M_1$ and $M_2$. Each job $J_i$ has two processing times $a_i>0$, $b_i>0$ where $a_i$ is the processing time on $M_1$ and $b_i$ is the processing time on $M_2$. In flow shops of this type each job has to be finished on machine $M_1$ before it can be started on machine $M_2$. The processing of a job on a machine is nonpreemptive and a machine can process at most one job at a time. A schedule for a flow shop with two machines is denoted by $σ$ and specifies the starting time $S_{i,j}(σ)$ and the completion time $C_{i,j}(σ)$ of $J_i$ on $M_j$ for each job $J_i$ and each machine $M_j$ ($i∈\{1,\dots,n\}$, $j∈\{1,2\}$). Thus, a schedule $σ$ specifies permutations $π_1(σ),π_2(σ)$ describing the order of jobs on the machines $M_1$ and $M_2$, respectively. A schedule $σ$ is a permutation schedule if $π_1(σ)=π_2(σ)$.

Each buffered two-machine flow shop problem has a buffer with a given capacity $Ω>0$ and each job $J_i∈J$ has a size $s_i$ which specifies how many buffer units $J_i$ occupies when stored in the buffer. Two types of buffers are considered in this article which differ by the time span in which a job occupies the buffer for a given schedule $σ$: (i) an intermediate buffer is occupied by job $J_i$ from $C_{i,1}(σ)$ until $S_{i,2}(σ)$ and (ii) a spanning buffer is occupied by $J_i$ from $S_{i,1}(σ)$ until $C_{i,2}(σ)$. A schedule $σ$ is valid for flow shops with buffer constraints when at every time step the total amount of buffer that is occupied by the jobs does not exceed $Ω$. If we impose the condition that all jobs are processed as early as possible, it is sufficient to specify the permutations $π_1(σ)$ and $π_2(σ)$ in order to define a schedule $σ$. Figure 2 shows an example of a flow shop with an intermediate buffer (left) and a spanning buffer (right) together with a valid permutation schedule.
Figure 2:

Examples of a flow shop with an intermediate buffer (left, $Ω=3$) and a flow shop with a spanning buffer (right, $Ω=7$) and $s_i=a_i$ for $i∈\{1,\dots,n\}$. In each case a valid permutation schedule $σ$ is shown with $π(σ)=(J_1,J_2,J_3,J_4,J_5)$. Blocks with diagonal lines indicate phases where $M_1$ is idle and the next job has to wait until enough buffer capacity becomes available.


In this article, we study the special case of buffered two-machine flow shops where the processing times on one of the machines are the same for all jobs, that is, $a_i=c$ or $b_i=c$ for $i∈\{1,\dots,n\}$ with a constant $c>0$. Regarding the buffer usage of the jobs, two common models are studied in the literature: (i) the buffer is interpreted as a “counter” for the number of stored jobs and thus $s_i=1$ for $i∈\{1,\dots,n\}$, similar to Zhang and Gu (2015) and Moslehi and Khorasanian (2014), and (ii) the amount of storage that is used by a job is equal to its processing time on the first machine, i.e., $s_i=a_i$ for $i∈\{1,\dots,n\}$. In the second model, which is found in Fung and Zinder (2016), Kononova and Kochetov (2013), Lin et al. (2009), and Min et al. (2019), the job size $s_i$ is interpreted as being proportional to the amount of work during its initial processing step.
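The validity condition and the earliest-start convention above can be sketched for the spanning buffer as follows (a simplified Python simulation, not the paper's code; the job triples $(a_i, b_i, s_i)$ are hypothetical and the intermediate-buffer case is handled analogously):

```python
def spanning_makespan(perm, omega):
    """Earliest-start permutation schedule for a two-machine flow shop with
    a spanning buffer of capacity omega; jobs are (a_i, b_i, s_i) triples.
    Returns the makespan, or None if a single job exceeds the buffer."""
    m1_free = m2_free = 0
    in_buffer = []  # (C_{i,2}, s_i) of jobs started on M1 but not finished on M2
    for a, b, s in perm:
        if s > omega:
            return None
        t = m1_free
        # delay the start on M1 until enough spanning-buffer capacity is free
        while sum(sz for c2, sz in in_buffer if c2 > t) + s > omega:
            t = min(c2 for c2, sz in in_buffer if c2 > t)
        c1 = t + a                  # completion on M1
        c2 = max(m2_free, c1) + b   # completion on M2 (permutation order)
        in_buffer.append((c2, s))
        m1_free, m2_free = c1, c2
    return m2_free

jobs = [(3, 2, 3), (3, 2, 3), (3, 2, 3)]
print(spanning_makespan(jobs, 7))  # → 11
print(spanning_makespan(jobs, 5))  # → 15 (a tighter buffer delays M1)
```

The sketch illustrates how a smaller $Ω$ forces idle time on $M_1$, exactly the situation shown by the hatched blocks in Figure 2.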

The criterion to be optimized is the makespan $Cmax$ (i.e., the time where the last job is completed on the last machine). This is also the criterion used by most of the studies mentioned in Section 2 and based on these works, we also restrict ourselves to permutation schedules (denoted by prmu) where the sequence of jobs on both machines is the same. Hence, a schedule $σ$ is determined by a single permutation $π(σ)$ of all jobs.

Depending on whether the buffer usage $s_i$ equals $a_i$, $b_i$, or 1 and whether the processing times on $M_1$ or $M_2$ are equal, it is possible to distinguish between six different cases. However, it can be shown that only two cases have to be considered since all other cases can be reduced to them. These two cases are (i) $b_i=c$, $s_i=a_i$ and (ii) $b_i=c$, $s_i=1$. The reduction can be done by rescaling all values by $s_i$ (if $s_i$ is constant for all jobs) and/or by defining a new schedule $σ'$ with the same makespan that “mirrors” the processing times (i.e., $a_i'=b_i$, $b_i'=a_i$, and $s_i'=s_i$) as well as the time axis (see Figure 3 for an example).
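The mirroring step can be sketched in a few lines (an illustrative Python sketch with hypothetical $(a_i, b_i, s_i)$ triples, not the paper's notation):

```python
def mirror_instance(jobs):
    """Swap the processing times of every job (a'_i = b_i, b'_i = a_i)
    while keeping its buffer usage (s'_i = s_i)."""
    return [(b, a, s) for a, b, s in jobs]

def mirror_schedule(perm):
    """Mirroring the time axis reverses the job order; the mirrored schedule
    for the mirrored instance has the same makespan as the original one."""
    return list(reversed(perm))

# an instance with constant a_i = c = 5 becomes one with constant b_i = c
print(mirror_instance([(5, 1, 1), (5, 2, 2)]))  # → [(1, 5, 1), (2, 5, 2)]
```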
Figure 3:

Example for “mirroring” schedules in order to obtain the case $b_i=c$, $s_i=a_i$.


It should be noted that permutation schedules impose a restriction for the cases where $s_i=a_i$; that is, there exist instances for which the set of optimal schedules does not contain a permutation schedule. An example is the flow shop in Figure 3 (right) where the shown schedule with makespan 28 is a best permutation schedule, but there exists a schedule $σ$ with makespan 27 for $π_1(σ)=(J_2,J_3,J_4,J_1)$ and $π_2(σ)=(J_2,J_4,J_3,J_1)$ that is not a permutation schedule. For the case $s_i=1$, there always exists a permutation schedule that is optimal (Geser, 2017). Adopting the commonly used notation introduced by Graham et al. (1979), we denote the problems considered here by
$F2 \mid prmu, b_i=c, bufType, bufUsage \mid C_{max},$
(1)
with $bufType∈\{spanningBuffer,intermediateBuffer\}$ and $bufUsage∈\{s_i=1,s_i=a_i\}$.

## 4  NP-Completeness Results

It is shown in this section that all four types of flow shops considered in Eq. (1) are NP-complete. Due to space limitations, the proof is given only for the case with intermediate buffer and $s_i=a_i$. The NP-completeness for the other considered flow shops can be shown with similar arguments.

Theorem 4.1:

The decision problem if there exists a valid permutation schedule $σ^*$ with $C_{max}(σ^*)≤L$ for a given integer $L$ is NP-complete for the following types of flow shops:

• $F2 \mid prmu, b_i=c, intermediateBuffer, s_i=a_i \mid C_{max}$,

• $F2 \mid prmu, b_i=c, spanningBuffer, s_i=a_i \mid C_{max}$,

• $F2 \mid prmu, b_i=c, intermediateBuffer, s_i=1 \mid C_{max}$,

• $F2 \mid prmu, b_i=c, spanningBuffer, s_i=1 \mid C_{max}$.

Proof (only for ($i$)):

The problem is in NP since it can be checked in polynomial time if a schedule $σ^*$ is valid and satisfies $C_{max}(σ^*)≤L$. To show the NP-hardness consider an instance of 3Partition: Given are positive integers $x_1,x_2,\dots,x_{3m}$ and an integer $B>0$ such that $B/4 < x_j < B/2$ for all $j$ and $\sum_{j=1}^{3m} x_j = mB$. The question is if there exists a partition of $\{x_1,x_2,\dots,x_{3m}\}$ into $m$ subsets $S_1,S_2,\dots,S_m$ such that each set $S_k$ satisfies $|S_k|=3$ and $\sum_{x∈S_k} x = B$. We denote the given instance from 3Partition as $I_{3P}$. The corresponding flow shop instance $I_{F2}$ of $F2 \mid prmu, b_i=c, intermediateBuffer, s_i=a_i \mid C_{max}$ is constructed with a set $J=\{g_0\}∪G∪H$ of $n=4m+1$ jobs where

• $g_0$ is a job with $a_{g_0}=B/4$,

• $G$ is a set of $m$ jobs $g_1,g_2,\dots,g_m$ with $a_{g_k}=3B$ for $k∈\{1,2,\dots,m\}$, and

• $H$ is a set of $3m$ jobs $h_1,h_2,\dots,h_{3m}$ with $a_{h_k}=x_k$ for $k∈\{1,2,\dots,3m\}$.

The additional parameters are $Ω=3B/4$, $c=B$, $L=4mB+B+B/4$, and $s_i=a_i$ for all jobs. This instance can be constructed in polynomial time. Now it is shown that $I_{3P}$ has a solution if and only if $I_{F2}$ has a permutation schedule $σ^*$ with $C_{max}(σ^*)≤L$.
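The construction of $I_{F2}$ from $I_{3P}$ can be written out directly (a Python sketch following the proof; the 3Partition numbers below are hypothetical):

```python
def build_flowshop_instance(xs, B):
    """Build the flow shop instance I_F2 from a 3Partition instance
    (x_1, ..., x_3m with sum m*B): job g0 with a = B/4, m jobs g_k with
    a = 3B, and 3m jobs h_k with a = x_k; b_i = c = B, s_i = a_i."""
    m = len(xs) // 3
    assert len(xs) == 3 * m and sum(xs) == m * B
    a_times = [B / 4] + [3 * B] * m + list(xs)
    c = B                       # constant processing time on M2
    omega = 3 * B / 4           # buffer capacity
    L = 4 * m * B + B + B / 4   # makespan bound of the decision problem
    return a_times, c, omega, L

# a 3Partition instance with m = 2, B = 9 (every x_j satisfies B/4 < x_j < B/2)
a_times, c, omega, L = build_flowshop_instance([3, 3, 3, 3, 3, 3], 9)
```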

“$⇐$”: Assume that $I_{F2}$ has a permutation schedule $σ^*$ with $C_{max}(σ^*)≤L$. It is shown in the following that $σ^*$ satisfies several properties that lead to a structure shown in Figure 4. These properties allow the construction of a solution for $I_{3P}$. The first property results from the sum of all processing times on both machines, using $\sum_{j=1}^{3m} x_j = mB$ and comparing the values to $L$:
$\sum_{j∈J} a_j = \frac{B}{4} + m\cdot 3B + mB = 4mB + \frac{B}{4} = L - B, \qquad \sum_{j∈J} b_j = B + mB + 3mB = 4mB + B = L - \frac{B}{4}.$
This shows that $M_1$ cannot have a total idle time of more than $B$ time units and $M_2$ cannot be idle for more than $B/4$ time units without exceeding the maximum makespan $L$. Since $M_1$ is always idle for at least $B$ time units (while $M_2$ is processing its last job), it follows that $M_1$ cannot have any idle times before it finishes its last job.
Figure 4:

Visualization for Theorem 4.1.


As for $M_2$, the initial idle time interval (while $M_1$ processes its first job) cannot be shorter than $B/4$ time units and its length is equal to $B/4$ if and only if $g_0$ is the first job on both machines (a job $h_k∈H$ is not possible as the first job due to $B/4 < x_k$ being required for the corresponding 3Partition instance). Thus, the job $g_0$ has to be the first job on both machines and no idle time is allowed on $M_2$ between $g_0$ and the last job on $M_2$.

The second property is that it is not possible for the buffer to store any job $g∈G$ since $s_g=3B>3B/4=Ω$. Thus, the machine $M_2$ must immediately start any job $g∈G$ after $g$ is finished on $M_1$: $C_{g,1}(σ^*)=S_{g,2}(σ^*)$. It is also necessary that $C_{g_0,1}(σ^*)=S_{g_0,2}(σ^*)$ or else the initial idle time would be exceeded. Since all jobs in $G$ are identical, we can assume without loss of generality that the $G$-jobs are processed in the order $g_1,\dots,g_m$.

We now consider the sets $H_k⊆H$ of jobs processed between $g_k$ and $g_{k+1}$ (for $k∈\{0,1,2,\dots,m-1\}$) and the set $H_m$ of jobs processed after $g_m$ on both machines. Note that the condition $B/4 < x_k < B/2$ for $k∈\{1,\dots,3m\}$ implies that it is not possible for the buffer to store more than two jobs from $H$ at any time. It is now shown that $|H_k|=3$ and $\sum_{j∈H_k} a_j = B$ for $k∈\{0,1,2,\dots,m-1\}$. Consider first $k=0$. Since no additional idle times are allowed on both machines, the equations $S_{g_1,2}(σ^*) = C_{g_0,1}(σ^*) + b_{g_0} + \sum_{h∈H_0} b_h = a_{g_0} + (1+|H_0|)B$ and $C_{g_1,1}(σ^*) = a_{g_0} + \sum_{h∈H_0} a_h + a_{g_1}$ as well as $C_{g_1,1}(σ^*) = S_{g_1,2}(σ^*)$ lead to $\sum_{h∈H_0} a_h = (|H_0|-2)B$ which cannot be negative or zero, implying $|H_0|≥3$.

Assume $|H_0|>3$ and denote as $h_1,h_2,h_3,h_4∈H$ the first four jobs in $H_0$ processed on $M_1$. Machine $M_2$ also has to process these jobs immediately after finishing $g_0$ or else it would be idle. Using the condition $B/4 < a_h < B/2$ for all $h∈H$, one obtains
$C_{h_4,1}(σ^*) = C_{g_0,1}(σ^*) + \sum_{i=1}^{4} a_{h_i} = S_{g_0,2}(σ^*) + \sum_{i=1}^{4} a_{h_i} < S_{g_0,2}(σ^*) + 2B = S_{h_2,2}(σ^*).$
This means that four jobs in $H_0$ are finished on $M_1$ before $M_2$ frees the buffer space occupied by $h_2$ such that at least three jobs from $H_0$ have to be stored in the buffer at time $t=C_{h_4,1}(σ^*)$. This exceeds the buffer capacity implying that $σ^*$ is not a feasible solution. It thus follows that $|H_0|=3$.
In order to show $\sum_{j∈H_0} a_j = B$, we assume all other cases: If $\sum_{j∈H_0} a_j < B$, then the three jobs in $H_0$ would be finished on $M_1$ before $M_2$ finishes $g_0$, implying that they would have to be stored in the buffer. This is not possible since $\sum_{j∈H_0} s_j > (3/4)B$ and this would exceed the buffer capacity $Ω$. For the case $\sum_{j∈H_0} a_j > B$, let $h_3$ be the job in the set $H_0$ that $M_2$ processes last. It follows that
$C_{h_3,2}(σ^*) = S_{g_0,2}(σ^*) + \underbrace{B}_{b_{g_0}} + \underbrace{3B}_{\sum_{j∈H_0} b_j} = C_{g_0,1}(σ^*) + B + 3B < C_{g_0,1}(σ^*) + \sum_{j∈H_0} a_j + 3B = C_{g_1,1}(σ^*) = S_{g_1,2}(σ^*),$
which means that $M_2$ is idle (waiting for $g_1$) after finishing $h_3$. This is a contradiction since there are no idle times between the jobs on $M_2$.

Hence, $|H_0|=3$ and $\sum_{j∈H_0} a_j = B$ holds. This implies that $M_1$ finishes the set $H_0$ at the same time as $M_2$ is finishing $g_0$ and that the interval from $C_{g_0,2}(σ^*)=S_{g_1,1}(σ^*)$ to $S_{g_1,2}(σ^*)=C_{g_1,1}(σ^*)$ has a length of $3B$ time units. Thus, $M_2$ processes the jobs in $H_0$ during this interval or else it would be idle.

It can also be concluded that the buffer contains no jobs by the time $M_1$ finishes the job $g_1$. This allows us to apply the same arguments used in the case $k=0$ for $k∈\{1,2,\dots,m-1\}$ to show that $|H_k|=3$ as well as $\sum_{j∈H_k} a_j = B$. By doing so, one obtains $m$ sets $H_0,H_1,H_2,\dots,H_{m-1}$ that each contain three jobs from $H$. Due to $|H|=3m$, it follows that $|H_m|=0$. Since $\sum_{j∈H_k} a_j = B$ for $k∈\{0,1,2,\dots,m-1\}$ and since the jobs in $H$ correspond to the numbers of the 3Partition problem $I_{3P}$, it is possible to construct a solution for $I_{3P}$.

“$⇒$”: Given the subsets $S_1,S_2,\dots,S_m$ satisfying $|S_k|=3$ and $\sum_{x∈S_k} x = B$ for $k∈\{1,2,\dots,m\}$, it is possible to construct a schedule $σ^*$ as shown in Figure 4 where the $H$-jobs corresponding to the subsets are scheduled between the $G$-jobs. This leads to a schedule $σ^*$ with $C_{max}(σ^*)=L$ and thus a solution for $I_{F2}$.

## 5  Polynomial-Time Solvable Subcases

As shown in Section 4, the buffered flow shops with $b_i=c$ are NP-complete for all buffer models and buffer usage values considered here. However, there exist special cases that are solvable in polynomial time, such as $F2 \mid spanningBuffer, s_i=a_i \mid C_{max}$ with the additional condition that both $\max a_i$ and $\max b_i$ do not exceed $Ω/5$ (Kononov et al., 2019) and $F2 \mid spanningBuffer, s_i=a_i \mid C_{max}$ with the additional constraint $\max a_i ≤ \min b_i$ (Min et al., 2019).

In this work, the subcases where the constant $c$ is either larger or smaller than all processing times on $M_1$ are shown to be polynomially solvable. Note that in comparison to Min et al. (2019), we additionally consider both buffer types as well as the case $\min a_i ≥ \max b_i$ when all $b_i$ are equal to $c$. In order to show this property for $F2 \mid prmu, b_i=c, spanningBuffer, s_i=a_i, c≥a_i \mid C_{max}$ the following lemmata are needed.

Lemma 5.1:

For each schedule $σ$ for an instance of $F2 \mid prmu, b_i=c, spanningBuffer, s_i=a_i, c≥a_i \mid C_{max}$, there exists a schedule $σ'$ with $π(σ)=π(σ')$ and $C_{max}(σ')≤C_{max}(σ)$ satisfying $C_{k,1}(σ')=S_{k,2}(σ')$ for all $k∈\{1,2,\dots,n\}$.

Proof:

For a given schedule $σ$, assume that there exists a job $J_k$ for which $C_{k,1}(σ) < S_{k,2}(σ)$. W.l.o.g. let $J_k$ be the rightmost such job. Then, since $c≥a_i$ for all $i∈\{1,2,\dots,n\}$, it follows that each job which comes after $J_k$ in $σ$ starts later than $S_{k,2}(σ)$. Hence, $J_k$ can be rescheduled on $M_1$ to a new schedule $σ'$ such that $C_{k,1}(σ')=S_{k,2}(σ)=S_{k,2}(σ')$ holds without increasing the makespan. By iteration the lemma follows.

For the schedule $σ'$ of Lemma 5.1 it holds that for each job $J_k$ ($k∈\{1,2,\dots,n\}$) the predecessor job finishes at time $t≤S_{k,2}(σ')$ and the successor job of $J_k$ starts at time $t≥C_{k,1}(σ')=S_{k,2}(σ')$. Hence, the schedule $σ'$ has the property that at most two jobs are in the buffer at any given time. A permutation schedule is minimal if it has the smallest makespan out of all permutation schedules with the same order of jobs. The following lemma holds for minimal permutation schedules.

Lemma 5.2:

For any permutation $π=(J_{π_1},J_{π_2},\dots,J_{π_n})$ of the jobs $J$ in an instance of $F2 \mid prmu, b_i=c, spanningBuffer, s_i=a_i, c≥a_i \mid C_{max}$ there exists a minimal permutation schedule $σ$ where for each job $J_{π_k}$ and its successor $J_{π_{k+1}}$ ($k∈\{1,2,\dots,n-1\}$) the permutation schedule $σ$ satisfies either

• $C_{π_k,2}(σ)=S_{π_{k+1},2}(σ)$ and $s_{π_k}+s_{π_{k+1}}≤Ω$ or

• $C_{π_k,2}(σ)=S_{π_{k+1},1}(σ)$ and $s_{π_k}+s_{π_{k+1}}>Ω$.

Proof:

The lemma is a direct consequence of Lemma 5.1 together with the facts that $σ$ can be assumed to be a minimal permutation schedule and that $a_{π_{k+1}}≤c$.

Let $J$ be the set of jobs of an instance of $F2 \mid prmu, b_i=c, spanningBuffer, s_i=a_i, c≥a_i \mid C_{max}$. If $C_{k,2}(σ)=S_{ℓ,2}(σ)$ holds for two jobs $J_k,J_ℓ∈J$ in a permutation schedule $σ$, we say that the processing time $a_ℓ$ of $J_ℓ$ on $M_1$ is hidden. Two jobs $J_k,J_ℓ$ are compatible if $s_k+s_ℓ≤Ω$. For a permutation schedule $σ$ and its corresponding job permutation $π(σ)$ let $I(σ)⊆\{1,2,\dots,n\}$ be the set of indices $i$ of jobs which are not compatible with their predecessor, i.e., $s_{π_{i-1}(σ)}+s_{π_i(σ)}>Ω$. The following corollary is a direct consequence of Lemma 5.2.

Corollary 5.1:

Let $σ$ be a minimal permutation schedule for an instance of $F2 \mid prmu, b_i=c, spanningBuffer, s_i=a_i, c≥a_i \mid C_{max}$ and $π(σ)=(J_{π_1(σ)},J_{π_2(σ)},\dots,J_{π_n(σ)})$ its corresponding permutation of jobs in $J$. Then, it holds that $C_{max}(σ)=a_{π_1(σ)}+\sum_{i∈I(σ)} a_i+n\cdot c$.
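The corollary reduces the makespan of a minimal permutation schedule to a simple sum (a Python sketch for the case $b_i=c≥a_i$, $s_i=a_i$; the job sizes below are hypothetical):

```python
def corollary_makespan(perm_a, omega, c):
    """Makespan by the corollary: the a-value of the first job, plus a_i of
    every job incompatible with its predecessor (s_i = a_i), plus n*c."""
    incompatible = sum(perm_a[i] for i in range(1, len(perm_a))
                       if perm_a[i - 1] + perm_a[i] > omega)
    return perm_a[0] + incompatible + len(perm_a) * c

print(corollary_makespan([3, 2, 1], omega=10, c=5))  # → 18 (all pairs compatible)
print(corollary_makespan([3, 2, 1], omega=4, c=5))   # → 20 (the pair (3, 2) is not)
```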

In the following we present Algorithm 1 which computes an optimal schedule for an instance of $F2 \mid prmu, b_i=c, spanningBuffer, s_i=a_i, c≥a_i \mid C_{max}$. It first requires that the jobs are sorted in order of decreasing $a_i$, which can be done in $O(n \log n)$ steps. In the case where the jobs are already sorted, the resulting algorithm runs in linear time.

In order to solve $F2 \mid prmu, b_i=c, spanningBuffer, s_i=a_i, c≥a_i \mid C_{max}$, Algorithm 1 considers two special cases first: (i) all jobs in $J$ are pairwise not compatible (lines 1 and 2) and (ii) all jobs in $J$ are pairwise compatible (lines 3 and 4). In case (i) it is clear that none of the processing times on $M_1$ can be hidden. Hence, any minimal schedule $σ$ has the makespan $C_{max}(σ)=\sum_{i=1}^{n} a_i+n\cdot c$ and is thus optimal. In case (ii), it follows that $I(σ)=⌀$. Therefore, Corollary 5.1 implies that any minimal schedule $σ$ which has $J_n$ as its first job has the makespan $C_{max}(σ)=a_n+n\cdot c$ and is optimal since $a_n≤a_i$ for $i∈\{1,2,\dots,n-1\}$.

If none of the cases (i) and (ii) hold, there exists a minimal $k∈\{2,3,\dots,n-1\}$ such that $J_{k+1}$ and $J_k$ are compatible. Since $a_{k+1}≥a_i$ for $i>k+1$, all jobs in $\{J_{k+1},J_{k+2},\dots,J_n\}$ are compatible with $J_k$ (line 6). The processing time $a_ℓ$ of a job $J_ℓ∈\{J_1,\dots,J_k\}$ can only be hidden if a job from $R:=\{J_{k+1},\dots,J_n\}$ (line 7) is its predecessor. Clearly, in an optimal schedule the total processing time of the jobs $\{J_1,\dots,J_{k-1}\}$ on $M_1$ that can be hidden has to be maximal. To determine this maximum total processing time the following greedy approach is taken in lines 8–14. The jobs $J_1,\dots,J_{k-1}$ are considered in this order and we introduce a function $f$ that assigns jobs from $R$ to these jobs as predecessors as follows.

For the next job $J_i$ the smallest job in $R$ that has so far not been assigned as predecessor to one of the jobs $J_1,\dots,J_{i-1}$ and that is compatible with $J_i$ is always assigned as the predecessor of $J_i$ (if it exists). If all jobs in $R$ have been assigned (line 15), the assignment $f$ defines the only possible assignment of jobs in $R$ to jobs in $\{J_1,\dots,J_k\}$ such that the processing time on $M_1$ can be hidden for the maximum number of jobs in $\{J_1,\dots,J_{k-1}\}$ and the total hidden processing time on $M_1$ of these jobs is maximized. That the greedy algorithm gives the optimal solution follows from the fact that the set of all subsets of $\{J_1,\dots,J_{k-1}\}$ which can be hidden by assigning jobs from $R$ as predecessors (where each job in $R$ is assigned to at most one job in $\{J_1,\dots,J_{k-1}\}$) is an independence system over $\{J_1,\dots,J_{k-1}\}$ and forms a matroid together with the used weighting function $w(J_ℓ)=a_ℓ$ (see standard textbooks on greedy algorithms for details). It follows that any minimal schedule $σ$ for the order $(J_k,f(J_{k-1}),J_{k-1},\dots,f(J_1),J_1)$ has the makespan $C_{max}(σ)=a_k+\sum_{i∈I(σ)} a_i+n\cdot c$ and is optimal (line 16). An analogous result holds for the jobs in $\{J_1,\dots,J_k\}$ if there exists an $i∈\{1,2,\dots,k-1\}$ with $f(J_i)=∅$ (line 18) and $h$ is the maximum such $i$ (line 19). In this case there exists an optimal schedule $σ$ where $(J_h,f(J_{h-1}),J_{h-1},\dots,f(J_1),J_1)$ is a suffix of the permutation $π(σ)$.
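The greedy assignment of lines 8–14 can be sketched as follows (an illustrative Python sketch with hypothetical sizes, not the paper's Algorithm 1 listing; since the candidate pool is kept sorted increasingly, its smallest element is compatible with $J_i$ whenever any element is):

```python
def greedy_assign(big_a, r_a, omega):
    """big_a: a_1 >= ... >= a_{k-1}, the jobs whose processing time should be
    hidden; r_a: the processing times of the jobs in R (s_i = a_i). Each J_i
    gets the smallest still-unassigned compatible job of R, or None."""
    pool = sorted(r_a)           # candidate predecessors, smallest first
    assignment = []
    for a in big_a:
        if pool and a + pool[0] <= omega:
            assignment.append(pool.pop(0))  # smallest compatible job in R
        else:
            assignment.append(None)         # no compatible job is left
    return assignment

print(greedy_assign([5, 4, 3], [1, 2, 6], omega=7))  # → [1, 2, None]
```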

In line 23 of Algorithm 1 it holds for the remaining jobs $J'=\{J_k,\dots,J_{h+1}\}$ that the run time on $M_1$ can be hidden with jobs from $R':=R\setminus\{f(J_i)\mid i\le h\}$. However, in order to find an assignment for which the resulting makespan $a_\ell+(|R'|+k-h)\cdot c$, where $J_\ell$ is the first job, is minimal, the assignment $f$ might have to be redefined for the subset $J'$. For this we need the following lemma.

Lemma 5.3:

If there exists an assignment of some jobs in $R'$ to the jobs $\{J_k,J_{k-1},\dots,J_{h+1}\}$ where each job in $\{J_k,J_{k-1},\dots,J_{h+1}\}$ gets assigned exactly one job $f(J_i)\in R'$ such that $f(J_i)$ is compatible with $J_i$, then there also exists such an assignment $f$ where $f(J_k)$ is the smallest possible job. This assignment $f$ satisfies the following property: a minimal permutation schedule for the order $f(J_k),R,J_k,f(J_{k-1}),J_{k-1},\dots,f(J_{h+1}),J_{h+1}$, where $R$ is any order of the jobs $R'\setminus\{f(J_i)\mid i\in\{h+1,h+2,\dots,k\}\}$, is an optimal permutation schedule for the jobs in $\{J_k,\dots,J_{h+1}\}\cup R'$.

Proof:
Assume there exists no permutation schedule $\sigma$ with an assignment $f$ such that the schedule $f(J_k),R,J_k,f(J_{k-1}),J_{k-1},\dots,f(J_{h+1}),J_{h+1}$ is optimal; that is, all optimal schedules use an assignment $f$ and an order of the jobs in $J'$ that only coincides with the $t$ rightmost elements $J_{h+t},J_{h+t-1},\dots,J_{h+1}$, after which a different order $J_{i_1},J_{i_2},\dots,J_{i_{k-h-t}}$ is used for the remaining jobs $J_k,J_{k-1},\dots,J_{h+t+1}$ in $J'$:
$f(J_{i_1}),R,J_{i_1},f(J_{i_2}),J_{i_2},\dots,f(J_{i_{k-h-t}}),J_{i_{k-h-t}},f(J_{h+t}),J_{h+t},\dots,f(J_{h+2}),J_{h+2},f(J_{h+1}),J_{h+1}.$
(2)
Let $t$ be maximal such that the resulting permutation (2) is optimal and has the highest number of jobs coinciding with the sequence $J_k,J_{k-1},\dots,J_{h+2},J_{h+1}$. The makespan of the corresponding schedule is equal to $a_\ell+(|R'|+k-h)\cdot c$ where $\ell$ is the index corresponding to the first job $f(J_{i_1})$ in the schedule.

We now show that we can construct a new permutation schedule with the same makespan where the $t+1$ rightmost elements coincide with this sequence, thereby contradicting the maximality of $t$: Since $h+t<k$, the order of the jobs in $J'$ is only identical for $J_{h+t},J_{h+t-1},\dots,J_{h+1}$, whereas the job $J_{h+t+1}$ is at a different position $i_j$ in the permutation (2): $J_{h+t+1}=J_{i_j}$. In addition, the first job $J_{i_{k-h-t}}$ at which the two permutations differ corresponds to a job $J_s\in J'$ with $s\ge h+t+2$, that is, $i_{k-h-t}\ge h+t+2$. Since the schedule is optimal, it must hold that $f(J_{i_j})$ and $f(J_{i_{j+1}})$ are compatible with $J_{i_j}$. Since $J_{i_j}$ is the largest job in $J_{i_1},J_{i_2},\dots,J_{i_{k-h-t}}$, it holds that $f(J_{i_j})$ and $f(J_{i_{j+1}})$ are compatible with each job in $J_{i_1},J_{i_2},\dots,J_{i_{k-h-t}}$. Therefore, the minimal schedule for the order $f(J_{i_1}),R,J_{i_1},\dots,f(J_{i_j}),J_{i_{j+1}},\dots,f(J_{i_{k-h-t}}),J_{i_{k-h-t}},f(J_{i_{j+1}}),J_{h+t+1},f(J_{h+t}),J_{h+t},\dots,f(J_{h+2}),J_{h+2},f(J_{h+1}),J_{h+1}$ has the same makespan $a_\ell+(|R'|+k-h)\cdot c$ as $\sigma$ and coincides with $t+1$ jobs from the sequence $J_k,J_{k-1},\dots,J_{h+2},J_{h+1}$, contradicting the maximality of $t$.

Thus, there exists a permutation schedule with an assignment $f$ such that the sequence $f(J_k),R,J_k,f(J_{k-1}),J_{k-1},\dots,f(J_{h+1}),J_{h+1}$ is optimal.

Lemma 5.3 shows that an optimal permutation schedule can be found with a compatible predecessor assignment for the jobs in $\{J_k,\dots,J_{h+1}\}$ such that $f(J_k)$ is minimal. This is done in lines 23–26 of Algorithm 1, where the largest possible job from $R'$ is always assigned to the next job in $\{J_{k-1},\dots,J_{h+1}\}$ and the smallest remaining job is afterwards assigned to $J_k$. In addition, it can be seen that Algorithm 1 always constructs schedules $\sigma$ with the maximum number of neighboring pairs of compatible jobs. Thus, it is also optimal for the case where $c\le a_i$ for all $i\in\{1,2,\dots,n\}$, in which case the length of the schedule is $\sum_{i=1}^{n}a_i+|I(\sigma)|\cdot c$.

Theorem 5.1:

Given an instance of the type $F2|prmu, b_i=c, bufType, bufUsage|C_{max}$ with $bufType\in\{\text{spanningBuffer}, \text{intermediateBuffer}\}$ and $bufUsage\in\{s_i=1, s_i=a_i\}$, an optimal schedule can be constructed in time $O(n\log n)$ if $c\ge\max_i a_i$ or $c\le\min_i a_i$.

Proof:

We consider all possible values for $bufType$ and $bufUsage$. It was already shown that Algorithm 1 computes an optimal schedule in $O(n\log n)$ steps for the case of a spanning buffer and $s_j=a_j$.

For the case of an intermediate buffer and $c\ge\max_i a_i$, it is possible to construct a schedule $\sigma$ as shown in Figure 5 which starts with a job $J_k\in J$ satisfying $a_k\le a_j$ for all $j\in\{1,2,\dots,n\}$ and where each job is processed on $M_2$ immediately after being finished on $M_1$, leaving the intermediate buffer unoccupied. This schedule $\sigma$ has a makespan of $C_{max}(\sigma)=a_k+nc$, which is a trivial lower bound for the makespan, so that $\sigma$ is optimal.
Figure 5:

Optimal schedule for the case $F2|prmu, b_i=c$, intermediateBuffer, $s_i=a_i$, $c\ge a_i|C_{max}$.

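The optimality argument for this case can be checked numerically with the standard two-machine makespan recurrence. This is a sketch, not the paper's implementation: buffer constraints are omitted because, as argued above, the schedule leaves the intermediate buffer unoccupied.

```python
def makespan(a_times, c):
    """Makespan of a permutation schedule on two machines where every
    M2 processing time equals c; buffer constraints are omitted since
    the schedule considered here leaves the buffer unoccupied."""
    c1 = c2 = 0
    for a in a_times:
        c1 += a                  # completion time on M1
        c2 = max(c1, c2) + c     # start on M2 as early as possible
    return c2

a = [4, 7, 2, 6]
c = 9                            # c >= max(a)
order = sorted(a)                # schedule a smallest job J_k first
assert makespan(order, c) == min(a) + len(a) * c   # C_max = a_k + n*c
```

With $c\ge\max_i a_i$, the machine $M_2$ never idles after the first job, so the makespan collapses to $a_k+nc$.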

Next, assume that the flow shop instance has an intermediate buffer and satisfies $c\le\min_i a_i$. In this case every schedule $\sigma$ where all jobs are processed as early as possible has a structure similar to the schedule shown in Figure 6, where the machine $M_2$ is always idle by the time $M_1$ finishes a job $J\in J$, allowing it to immediately start $J$ and leaving the intermediate buffer unoccupied. These schedules have a makespan of $C_{max}(\sigma)=\sum_j a_j+c$, which is also a trivial lower bound for the makespan, so that all schedules of this type are optimal for this case.
Figure 6:

Optimal schedule for the case $F2|prmu, b_i=c$, intermediateBuffer, $s_i=a_i$, $c\le a_i|C_{max}$.

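The claim that every earliest-start order is optimal in this case can likewise be verified with the two-machine recurrence (again a sketch with buffer constraints omitted, since the buffer stays empty here):

```python
def makespan(a_times, c):
    """Two-machine makespan with constant M2 times c; buffer constraints
    are omitted since the schedules in question leave the buffer empty."""
    c1 = c2 = 0
    for a in a_times:
        c1 += a                  # completion time on M1
        c2 = max(c1, c2) + c     # M2 is already idle when M1 finishes
    return c2

a = [4, 7, 2, 6]
c = 2                            # c <= min(a): M2 never delays the flow
for order in ([4, 7, 2, 6], [2, 4, 6, 7], [7, 6, 4, 2]):
    assert makespan(order, c) == sum(a) + c   # every order is optimal
```

Since $M_2$ always finishes its job before $M_1$ delivers the next one, only the final job's $M_2$ time extends the schedule beyond $\sum_j a_j$.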

It remains to consider the case of a spanning buffer and $s_i=1$. Consider first that $\Omega=1$. Then no two jobs are processed at the same time on both machines, so that every schedule $\sigma$ (where every job starts as early as possible) is optimal with a makespan of $C_{max}(\sigma)=\sum_i a_i+nc$. If $\Omega\ge 2$, then it is possible to construct the same schedules as in Figures 5 and 6 (depending on whether $c\ge\max_i a_i$ or $c\le\min_i a_i$) since a maximum of two jobs occupy the buffer at any time.

## 6  A Modification of the NEH Heuristic

The NEH algorithm (Nawaz et al., 1983) is one of the most successful and best-known heuristics for makespan minimization in permutation flow shop problems. The algorithm starts with an empty permutation and builds it up by iteratively inserting jobs (in decreasing order of the sums of their processing times) into the positions that lead to the smallest increase in makespan. It can be shown that this heuristic also calculates an optimal solution for all special cases considered in Theorem 5.1, since it also performs greedy insertions maximizing the number of hidden jobs as well as the sum of their processing times.
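The insertion scheme of NEH can be sketched as follows. Makespans are computed with the standard two-machine recurrence; for brevity, buffer constraints are not modeled in this sketch, and the function names are illustrative:

```python
def makespan(perm, a, c):
    """Two-machine makespan for the job indices in perm, where every M2
    processing time equals c (buffer constraints not modeled here)."""
    c1 = c2 = 0
    for j in perm:
        c1 += a[j]
        c2 = max(c1, c2) + c
    return c2

def neh(a, c):
    """NEH sketch: insert the jobs, longest total processing time
    first, at the position that increases the makespan the least."""
    order = sorted(range(len(a)), key=lambda j: a[j] + c, reverse=True)
    perm = []
    for j in order:
        best = min((perm[:i] + [j] + perm[i:] for i in range(len(perm) + 1)),
                   key=lambda p: makespan(p, a, c))
        perm = best
    return perm

perm = neh([3, 9, 5], 4)
```

On this toy instance the heuristic returns a permutation with makespan 21, which matches the machine-1 lower bound $\sum_i a_i + c = 17 + 4$ and is therefore optimal.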

Several variants of the NEH heuristic have been studied in the literature. Most of them deal with ties (Fernandez-Viagas and Framinan, 2015) or use different orders of job insertions, for example, based on the moments of processing times (Liu et al., 2017) or using genetic programming (Vázquez-Rodríguez and Ochoa, 2011). A disadvantage of the NEH heuristic is its relatively large run time since it checks $\Theta(n^2)$ insertion points. In particular, when the solution of the NEH heuristic is further used by improvement heuristics, for example, as the starting solution for an Iterated Local Search heuristic, it is desirable to have a faster heuristic for the computation of a starting solution.

Here, we introduce a modification of the NEH heuristic (mNEH) that is faster and particularly suitable for the considered two-machine flow shops with buffers where all processing times on the second machine $M_2$ are equal. In this case, the jobs differ only by their processing time on $M_1$. Therefore, it is likely that many jobs are similar or even identical. This property is used in the mNEH heuristic to reduce the number of positions that are checked for the insertions of the jobs and thus the total number of evaluated schedules. The main idea is to split the $n$ jobs into $G(n)$ groups of similar jobs and to maintain for each group a list of $L(n)$ “good” candidate positions for the insertion. This is based on the assumption that insertions of similar jobs at the same positions lead to similar changes in the resulting makespan.

A pseudocode of the modified heuristic is given in Algorithm 2. The number of evaluations to be performed can be adjusted by an appropriate choice of $L(n)$ and $G(n)$. In this article, we memorize $L(n)=2\sqrt{n}$ positions, that is, potentially $\sqrt{n}$ before and $\sqrt{n}$ after the newly inserted job, for each of the $G(n)=\sqrt{n}$ groups of equal size (an exception is the last group, as described later), and it follows that the resulting algorithm performs $O(n\sqrt{n})$ evaluations. It should be noted that in this case the incomplete permutation $\pi$ contains fewer than $L(n)$ jobs during the scheduling of the first two groups $S_1$ and $S_2$. For the jobs in these two groups, the mNEH heuristic tests all possible insertion positions in the same way as the standard NEH heuristic. For the last group $S_{G(n)}$, all possible positions are also checked, since this group contains the shortest jobs, allowing for a finer optimization of the partial permutation $\pi$. Regarding the time complexity (including the time to evaluate a permutation as well as the time to build and update the list), it can be shown that the mNEH heuristic with the used parameter values is faster than the standard NEH heuristic by a factor of $O(\sqrt{n})$.
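A simplified sketch of the grouping idea is given below. It follows Algorithm 2 only loosely: the candidate list is taken as a window of roughly $2\sqrt{n}$ positions around the best insertion point of each group's first job, tie handling is not modeled, and buffer constraints are again omitted from the makespan evaluation. All names are illustrative.

```python
import math

def makespan(perm, a, c):
    """Two-machine makespan with constant M2 times c (no buffers here)."""
    c1 = c2 = 0
    for j in perm:
        c1 += a[j]
        c2 = max(c1, c2) + c
    return c2

def mneh(a, c):
    """mNEH sketch: sqrt(n) groups of similar jobs; inside a group only
    ~2*sqrt(n) candidate positions are evaluated.  The first two groups
    and the last group fall back to the full NEH scan, as in the text."""
    n = len(a)
    g = max(1, round(math.sqrt(n)))              # ~sqrt(n) jobs per group
    order = sorted(range(n), key=lambda j: a[j] + c, reverse=True)
    groups = [order[i:i + g] for i in range(0, n, g)]
    perm = []
    for gi, group in enumerate(groups):
        candidates = None
        for j in group:
            full_scan = candidates is None or gi < 2 or gi >= len(groups) - 1
            if full_scan:
                positions = list(range(len(perm) + 1))
            else:
                positions = [p for p in candidates if p <= len(perm)]
            best = min(positions,
                       key=lambda i: makespan(perm[:i] + [j] + perm[i:], a, c))
            perm.insert(best, j)
            if candidates is None:               # remember ~2*sqrt(n) spots
                w = max(1, round(math.sqrt(n)))
                candidates = list(range(max(0, best - w), best + w + 1))
    return perm

a = [(i * 7) % 13 + 1 for i in range(16)]
result = mneh(a, 5)
```

Counting evaluations, each of the $n$ insertions scans $O(\sqrt{n})$ positions (except in the boundary groups), giving the $O(n\sqrt{n})$ total stated above.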

## 7  Iterated Local Search

In this section, we propose an Iterated Local Search heuristic for the two-machine flow shop problems with buffers. The heuristic is denoted by 2BF-ILS. The local search operations that are considered for 2BF-ILS are the following:

• inserting a job on position $i$ into another position $j$ (insert)

• inserting a pair of adjacent jobs at the positions $(i,i+1)$ into the positions $(j,j+1)$ (pairInsert)

• swapping two jobs at the positions $i$ and $j$ (swap)

The naming of these operations is based on Zhang and Gu (2015) and Moslehi and Khorasanian (2014). The actual selection of these operations and their order used in the experiments is later determined by the algorithm configurator irace in Section 8.3. As such, the following description is based on the generalized case where a sequence $op_1,op_2,\dots,op_\ell$ is given with $op_i\in\{\text{insert},\text{pairInsert},\text{swap}\}$ for $i\in\{1,2,\dots,\ell\}$. Each of the considered operations $op\in\{\text{insert},\text{pairInsert},\text{swap}\}$ takes two parameters $i$ and $j$, so we denote the result of $op$ applied on $\pi$ with these parameters as $op(\pi,i,j)$. The resulting neighborhood of permutations around $\pi$ is denoted $N_{op}(\pi)=\{\pi'\mid\exists i,j: op(\pi,i,j)=\pi'\}$, that is, the set of all permutations $\pi'$ that can be obtained by a single application of $op$ on $\pi$. The size of the neighborhoods for the considered operations is quadratic in the problem size $n$ since the number of possible values for the parameters $i$ and $j$ increases linearly with $n$. One way to reduce the size is to fix the parameter $i$:
$N_i^{op}(\pi)=\{\pi'\mid\exists j: op(\pi,i,j)=\pi'\}.$
This lowers the number of checked permutations, so potentially good solutions may be missed. However, the linear size of the resulting neighborhood allows more local search steps to be performed in the same amount of time.
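The three operations and the restricted neighborhood $N_i^{op}(\pi)$ can be sketched as follows (function names are illustrative; pairInsert assumes both $i$ and $j$ leave room for the pair):

```python
def insert(pi, i, j):
    """insert: move the job at position i to position j."""
    pi = list(pi)
    pi.insert(j, pi.pop(i))
    return pi

def pair_insert(pi, i, j):
    """pairInsert: move the adjacent pair at (i, i+1) to (j, j+1)."""
    pi = list(pi)
    pair = [pi.pop(i), pi.pop(i)]    # second pop grabs the old i+1
    pi[j:j] = pair
    return pi

def swap(pi, i, j):
    """swap: exchange the jobs at positions i and j."""
    pi = list(pi)
    pi[i], pi[j] = pi[j], pi[i]
    return pi

def restricted_neighborhood(op, pi, i, j_values):
    """N_i^op(pi): neighbors reachable with the first parameter fixed
    to i, a neighborhood of linear instead of quadratic size."""
    return [op(pi, i, j) for j in j_values]
```

For example, `restricted_neighborhood(swap, pi, i, range(len(pi)))` enumerates only the $n$ swaps involving position $i$ instead of all $\Theta(n^2)$ swaps.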

The main steps of our proposed method are shown in Algorithm 3: It starts with the solution obtained from the mNEH heuristic proposed in Section 6 (with $G(n)=\sqrt{n}$ and $L(n)=2\sqrt{n}$). Then 2BF-ILS iterates through a sequence of operations $(op_1,op_2,\dots,op_\ell)$ and repeatedly performs local search steps with changing neighborhoods. The choice of which parameter $i$ to fix in $N_i^{op}(\pi)$ is based on a random permutation $\pi_{rand}$ calculated beforehand. The search steps with the current operation $op$ are repeated until the permutation $\pi_{cur}$ obtained so far cannot be improved further. Afterwards, the local search procedure is restarted with the next given operation.

If the best-known solution obtained so far did not improve after all operations, the next iteration of the algorithm uses a perturbed variant of the best-known solution. The perturbation we use is based on a geometric distribution: We repeatedly draw (uniformly) random numbers $r$ from $[0,1]$ and randomly apply one of the given operations with random parameters on the permutation until $r$ is greater than $ps$. This distribution favors a small number of perturbations. The perturbation strength $ps$ increases additively (here with $\varepsilon=0.05$ and $ps$ capped at 0.99) if successive iterations do not yield any improvement of the best-known solution; otherwise it is reset to the initial value $ps_{init}$.
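The geometric perturbation scheme can be sketched as follows; the `swap` operation serves here as an example of the available operations, and the function names are illustrative:

```python
import random

def swap(pi, i, j):
    """Example operation: exchange the jobs at positions i and j."""
    pi = list(pi)
    pi[i], pi[j] = pi[j], pi[i]
    return pi

def perturb(pi, ops, ps, rng):
    """Apply at least one random operation with random parameters and
    continue while the uniform draw r from [0, 1] does not exceed ps,
    so the number of applied operations follows a geometric
    distribution that favors few perturbation steps."""
    pi = list(pi)
    while True:
        op = rng.choice(ops)
        i, j = rng.randrange(len(pi)), rng.randrange(len(pi))
        pi = op(pi, i, j)
        if rng.random() > ps:
            return pi

rng = random.Random(42)
perturbed = perturb(list(range(10)), [swap], ps=0.24, rng=rng)
```

With $ps=0.24$ (the tuned $ps_{init}$ from Table 1), the expected number of applied operations is $1/(1-ps)\approx 1.3$, so most perturbations are small.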

## 8  Computational Evaluation

In this section, we describe the experiments and their results.

### 8.1  Algorithms for Comparison

Based on the literature overview given in Section 2 and Figure 1, we selected the Hybrid Variable Neighborhood Search (HVNS) and the Discrete Artificial Bee Colony (DABC) algorithm for a comparison with our proposed method since these algorithms are described in fairly recent works (Zhang and Gu, 2015; Moslehi and Khorasanian, 2014) and, to the best of our knowledge, have not been outperformed by any other algorithm. We asked the authors of both algorithms for their source code but did not receive a reply; thus, we reimplemented both algorithms.

In addition to HVNS and DABC, we also used an Ant Colony Optimization algorithm (ACO) for comparison. The reason is that ACO generally achieves good results for many scheduling problems. It is thus a promising algorithm which so far has not been applied to flow shops with buffers. The ACO in this work is a standard ACO without heuristic components where each pheromone value $\tau_{ij}$ encodes the desirability that job $J_j$ is the successor of job $J_i$. At each iteration, the pheromone values are updated with the solution obtained by applying 2BF-ILS to the permutation generated by the best ant in the current iteration (i.e., the iteration-best solution becomes $\pi_{cur}$ in line 4 of Algorithm 3). Each application of 2BF-ILS was limited to 1000 search steps.

### 8.2  Generation of Problem Instances

The commonly used benchmark instances for flow shop problems (from Taillard, 1993, Reeves, 1995, and the recent VRF benchmark by Vallada et al., 2015) work with at least five machines and contain no buffer constraints, so they cannot be directly applied to the flow shops with buffers considered here. Authors who studied flow shop problems with buffers and two machines (e.g., Kononova and Kochetov, 2013 and Lin et al., 2009) generated instances with random processing times drawn uniformly from the set $\{1,2,\dots,100\}$. Based on the studies performed by these authors, we generated instances for our experimental study as follows.

We chose $n\in\{50,100,150\}$ as the number of jobs for small, medium, and large instances and created three “incomplete instances” for each size that contain only the $M_1$ processing times $a_j$, drawn randomly from a uniform distribution over the set $\{1,2,\dots,100\}$. Each of these incomplete instances was then used to build three subordinate instances by choosing the constant processing time $c$ on $M_2$ as the 25%, 50%, or 75% percentile $q_{0.25}, q_{0.50}, q_{0.75}$ of the values $a_i$. Each value of $c$ was then used to create (complete) instances for the intermediate buffer (spanning buffer) by setting $s_i=1$ and $\Omega=1$ ($\Omega=3$) or $s_i=a_i$ and $\Omega=q_{0.25}$ ($\Omega=\max a_i+q_{0.25}$). Note that the buffer capacity $\Omega$ cannot be smaller than $\max s_i$ for the spanning buffer model or else there exists no valid schedule. The resulting set of instances contained 108 flow shop problems.

In addition, we generated a set of instances based on studies conducted by Amar and Gupta (1986) and Watson et al. (2002), where it was argued that problem data generated from uniform distributions do not contain characteristics of problem instances commonly occurring in practical applications, namely gradients or correlations with respect to job processing times. In instances with the former property, the processing time $a_i$ on $M_1$ is shorter (or longer) than the processing time $b_i$ on the second machine for all jobs $J_i$. Regarding flow shops satisfying $b_i=c$ for all jobs $J_i$, job gradients lead to the cases analyzed in Section 5, which are efficiently solvable. Thus, we considered the latter characteristic by drawing the processing times $a_i$ for $M_1$ from normal distributions $N(\mu,\sigma^2)$ with mean value $\mu=c$. Higher values for $c$ then tend to lead to longer processing times on $M_1$ and vice versa. In particular, we calculated $n$ integer processing times $a_i$ ($n\in\{50,100,150\}$) by drawing $n$ random numbers $r_1,r_2,\dots,r_n$ from normal distributions $N(\mu,\sigma^2)$ with $\mu=c$, $\sigma=10$, and $c\in\{25,50,75\}$ and setting $a_i=\max\{\lceil r_i\rceil,1\}$ for all $i\in\{1,2,\dots,n\}$. The values for the buffer usage $s_i$ are calculated in the same way as for the first set of instances, leading to 108 additional flow shop problems.
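The sampling rule for the normally distributed processing times can be sketched as follows; the function name and interface are illustrative, only the rule $a_i=\max\{\lceil r_i\rceil,1\}$ with $r_i\sim N(c,\sigma^2)$ is taken from the text:

```python
import math
import random

def generate_m1_times(n, c, sigma=10, seed=None):
    """Draw n processing times for M1 from N(c, sigma^2), round them
    up, and clip them to at least 1, as for the second instance set."""
    rng = random.Random(seed)
    return [max(math.ceil(rng.gauss(c, sigma)), 1) for _ in range(n)]

a = generate_m1_times(n=50, c=25, sigma=10, seed=1)
```

Because the mean equals $c$, the resulting $a_i$ are correlated with the constant $M_2$ processing time, which is the structural feature the second instance set is meant to capture.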

### 8.3  Parameter Values

For the algorithms 2BF-ILS and ACO, the parameter values were determined with the algorithm configurator irace (López-Ibáñez et al., 2016). The configuration was performed on a separate set of instances with 100 jobs (generated using the method described in Subsection 8.2 for uniformly distributed processing times) with the standard irace parameter values and a budget of 250 runs for each algorithm. The limits used by irace for the parameter values were chosen as $ps_{init}\in(0.01,0.99)$, $\rho\in(0.01,0.05)$, $N_{ants}\in\{1,2,\dots,20\}$. As possible sequences of operations, we considered all sequences of length 1, 2, or 3 that can be formed from the operations insert, pairInsert, and swap (without repetition, leading to 15 possible sequences in total). The algorithms HVNS and DABC were applied with the parameters given by the respective authors as well as with tuned parameters calculated by irace on the same instances. In the following, the tuned versions of these algorithms are referred to as HVNS-T and DABC-T.

The resulting parameter values are given in Table 1. Due to page limitations, we refer the reader to https://github.com/L-HT/TwoMachineFlowShopBuffers where the resulting parameter values for HVNS-T and DABC-T as well as the source code for each algorithm and the following evaluations (written in R and C++) are available. After the configuration, each algorithm was executed on all 216 instances and the resulting values were averaged over 10 replications. All test runs were performed on a computer with eight 3.4-GHz-cores (each run being executed on one core) and 32 GB RAM with time limits of 5, 10, and 15 minutes for the small, medium, and large problems, respectively.

Table 1:

Parameters for ACO and 2BF-ILS calculated by irace.

| Algorithm | Parameter | Result | Description |
|---|---|---|---|
| ACO | $ps_{init}$ | 0.914 | initial perturbation strength |
| | $op$ | $(swap, insert, pairInsert)$ | sequence of operations |
| | $N_{ants}$ | 19 | number of ants |
| | $\rho$ | 0.018 | pheromone evaporation |
| 2BF-ILS | $ps_{init}$ | 0.240 | initial perturbation strength |
| | $op$ | $(pairInsert, insert, swap)$ | sequence of operations |

### 8.4  Comparison of 2BF-ILS with Other Metaheuristics

The performance of each algorithm was evaluated based on (i) the number of performed function evaluations $FE$ to calculate the makespan and (ii) the elapsed absolute runtime. The latter was used to calculate the normalized runtime $NT$ as a time measure, where the reference runtime for a given instance was chosen as the mean runtime of 10 runs of the standard NEH heuristic.

For each instance and each time measure, we generated (similar to the study performed by Weise et al., 2014) the progress curve (PC, the quality of the best-known solution over time), the estimated running time (ERT, indicating the time necessary to reach a given solution quality), and the empirical cumulative distribution function (ECDF, describing the percentage of runs reaching a given solution quality over time) with respect to the relative percentage difference $RPD=(F(\pi)-\hat{F})/\hat{F}$ between the solution quality $F(\pi)$ of the permutation $\pi$ and the best solution quality $\hat{F}$ found in all runs on this instance. The target value of the ECDF was chosen as a 1% deviation from $\hat{F}$, such that these diagrams show how consistently an algorithm reaches the performance $RPD\le 0.01$.
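The RPD and the ECDF success criterion can be expressed directly (a minimal sketch; the function name is illustrative):

```python
def rpd(f_pi, f_best):
    """Relative percentage difference between a solution quality F(pi)
    and the best quality F^ found in all runs on the instance."""
    return (f_pi - f_best) / f_best

# A run counts as successful for the ECDF once it reaches RPD <= 0.01.
assert rpd(101.0, 100.0) <= 0.01     # within the 1% deviation target
assert rpd(103.0, 100.0) > 0.01      # not yet at the target
```

Note that $RPD=0$ means the run matched the best solution found by any algorithm on that instance, so the measure is relative per instance, not an absolute optimality gap.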

Since an individual evaluation of over 1300 diagrams is not feasible, we calculated the area under curve (AUC) as an aggregate quality measure for the PC, ERT, and ECDF diagrams (with respect to both time measures) since algorithms with low AUC values (for PC and ERT diagrams) and high AUC values (for ECDF diagrams) tend to find better solutions faster (Weise et al., 2014). As such, these values quantify the performance of an algorithm for a given instance over time. The relative AUC values, averaged over all instances used in this study, are shown in Table 2 for both sets of instances.

Table 2:

Relative area under curve (AUC) values for each algorithm, evaluation diagram, and time measure (indicated by the subscript $NT$ or $FE$) with respect to both sets of flow shop instances.

| | | HVNS | HVNS-T | DABC | DABC-T | ACO | 2BF-ILS |
|---|---|---|---|---|---|---|---|
| uniform | $PC_{FE}$ | 2.22 | 2.13 | 1.81 | 1.66 | 2.01 | **1.03** |
| | $PC_{NT}$ | 2.41 | 2.40 | 1.54 | 1.71 | 2.06 | **1.04** |
| | $ERT_{FE}$ | 2.62 | 2.59 | 3.07 | 3.02 | 2.58 | **1.11** |
| | $ERT_{NT}$ | 2.49 | 2.48 | 1.91 | 2.36 | 2.60 | **1.25** |
| | $ECDF_{FE}$ | 1.28 | 1.27 | 1.23 | 1.23 | 1.44 | **1.00** |
| | $ECDF_{NT}$ | 1.35 | 1.35 | 1.21 | 1.22 | 1.50 | **1.00** |
| normal | $PC_{FE}$ | 1.90 | 1.93 | 1.95 | 1.98 | 1.89 | **1.04** |
| | $PC_{NT}$ | 2.00 | 2.03 | 1.61 | 1.89 | 1.94 | **1.03** |
| | $ERT_{FE}$ | 2.47 | 2.58 | 3.16 | 3.29 | 2.16 | **1.11** |
| | $ERT_{NT}$ | 2.40 | 2.54 | 1.89 | 2.36 | 2.13 | **1.26** |
| | $ECDF_{FE}$ | 1.05 | 1.07 | 1.08 | 1.12 | 1.24 | **1.00** |
| | $ECDF_{NT}$ | 1.08 | 1.09 | 1.05 | 1.11 | 1.25 | **1.00** |

For each instance, the area under the curve was calculated for each diagram and the best obtained value was chosen as the “reference” to calculate the relative values for the other algorithms. The obtained values were averaged over all 108 instances. A low number indicates that the curves for the respective algorithm show a high similarity (on average) to the best performing algorithm on each instance. The lowest values are shown in bold for each criterion.

It can be seen that 2BF-ILS outperformed the other algorithms in all evaluation measures on both sets of flow shop problems. ACO showed competitive performance compared to HVNS and DABC with respect to the values for the PC diagrams, which indicates that it improves quickly at the beginning when the first solutions are created. The irace-tuned parameters for HVNS obtained better results than the untuned parameters for uniformly random processing times on $M_1$ with respect to nearly all evaluation measures, whereas the results for DABC-T in comparison to DABC are mixed. It can also be seen for HVNS, HVNS-T, and ACO that the values in Table 2 for the instances with normally distributed processing times are smaller than for the uniformly random instances for almost all evaluation criteria, which indicates that these methods obtain “good” solutions more easily for instances with processing times $a_i$ similar to $c$ than for uniformly randomized instances.

In addition, we observed for most instances with the makespan criterion and $s_i=1$ (for all problem sizes, both buffer types, and both sets of instances) that the algorithms quickly reached a state of convergence with the same solution quality (an example is shown in Figure 7, left). A comparison with the lower bound given by the algorithm of Johnson (1954) for flow shops without buffer constraints showed that in most cases an optimal solution was reached. We performed further experiments using additional instances of this type with similar results, which indicates that this special case could be “easier” to solve than other problems (even though in theory it is still NP-hard).
Figure 7:

Progress curve for an instance using the intermediate buffer, $s_i=1$, $c=q_{0.50}$, uniformly random processing times, and 50 jobs (left) and an instance with spanning buffer, $s_i=a_i$, $c=q_{0.50}$, normally distributed processing times, and 100 jobs (right).


To compare the performance of the algorithms at specific points in time, the sign test for paired samples was applied. This non-parametric test requires neither that the given data be normally distributed nor that the difference distributions between the methods be symmetric. Using this test, we compared the performance of the algorithms during the run (at 100 000 evaluations) and the performance reached at the end of the time limit. The results are shown in Table 3 for both sets of instances. Note that the tables are symmetric since the 15 possible pairwise comparisons for each of the two points in time were performed with two-sided tests. Similar to Table 2, it can be seen that 2BF-ILS obtained a significantly better performance than the other algorithms at both points, indicating a high and consistent performance over time. The only difference is that, for normally distributed processing times, the tests showed fewer statistically significant differences (i.e., values $p<0.05/15$) between the competitor algorithms than for instances with uniformly random processing times.

Table 3:

Results of the pairwise comparisons between the algorithms using the two-sided sign test for both sets of flow shop problems (each containing $n=108$ instances).

| uniform | HVNS | HVNS-T | DABC | DABC-T | ACO | 2BF-ILS |
|---|---|---|---|---|---|---|
| HVNS | | - / - | $▴$/$▴$ | $▴$/$▴$ | - / - | $▴$/$▴$ |
| HVNS-T | - / - | | - /$▴$ | $▴$/$▴$ | - / - | $▴$/$▴$ |
| DABC | $◂$/$◂$ | - /$◂$ | | - / - | - /$◂$ | $▴$/$▴$ |
| DABC-T | $◂$/$◂$ | $◂$/$◂$ | - / - | | $◂$/$◂$ | $▴$/$▴$ |
| ACO | - / - | - / - | - /$▴$ | $▴$/$▴$ | | $▴$/$▴$ |
| 2BF-ILS | $◂$/$◂$ | $◂$/$◂$ | $◂$/$◂$ | $◂$/$◂$ | $◂$/$◂$ | |

| normal | HVNS | HVNS-T | DABC | DABC-T | ACO | 2BF-ILS |
|---|---|---|---|---|---|---|
| HVNS | | - / - | - / - | - / - | - / - | $▴$/$▴$ |
| HVNS-T | - / - | | - / - | - / - | - / - | $▴$/$▴$ |
| DABC | - / - | - / - | | - /$◂$ | - / - | $▴$/$▴$ |
| DABC-T | - / - | - / - | - /$▴$ | | - / - | $▴$/$▴$ |
| ACO | - / - | - / - | - / - | - / - | | $▴$/$▴$ |
| 2BF-ILS | $◂$/$◂$ | $◂$/$◂$ | $◂$/$◂$ | $◂$/$◂$ | $◂$/$◂$ | |

The first value in each cell shows the test result with respect to the performance at 100 000 evaluations and the second value refers to the performance reached at the end of the time limit. A triangle indicates that the measured difference is statistically significant ($p<0.05/15$ due to Bonferroni correction) and that the algorithm at which the triangle points is significantly better according to the test statistic.

Plotting the RPD values of the final solutions on each instance in relation to 2BF-ILS as points $(RPD, RPD_{\text{2BF-ILS}})$, due to the paired nature of the measured data (see Figure 8), also shows the high performance of 2BF-ILS. It can be seen that the gap in solution quality to the other algorithms increases for larger problem sizes. In particular, the average RPD ratio between 2BF-ILS and the other algorithms is significantly smaller for instances with 50 jobs than for instances with 150 jobs ($p=0.02$ for uniformly distributed processing times and $p=0.003$ for normally distributed processing times according to the Wilcoxon rank-sum test). In addition, it can be seen that there are instances where ACO obtained worse results than the other algorithms, most often on the larger instances with normally distributed processing times. One reason for this is that the studied types of flow shops contain many similar or equal jobs, which causes problems for the ants. In addition, Figure 8 (right) shows that the majority of points are gathered around lower RPD values for 2BF-ILS, HVNS(-T), and DABC(-T), indicating that these algorithms tend to obtain solutions of higher and more similar quality for instances with correlations between the processing times $a_i$ and $c$ than for problems with uniformly random processing times. This observation, as well as the differences between the two sets of instances noted previously for Table 2, supports the results from Amar and Gupta (1986) and Watson et al. (2002) stating that instances containing structural features are easier to solve than arbitrarily random problems.
Figure 8:

Scatter plot of RPD values of the final solution calculated by 2BF-ILS in relation to the respective RPD values of the other algorithms for the flow shop instances with uniformly distributed processing times (left) and normally distributed processing times (right) on $M1$ for each problem size. The grey line marks the diagonal line $y=x$ such that points above (below) the line indicate that the algorithm obtained a better (worse) final solution than 2BF-ILS. HVNS* and DABC* refer to the minimum of HVNS/HVNS-T and DABC/DABC-T, respectively.


### 8.5  Comparison of 2BF-ILS with NEH

In this section, we compare the standard NEH with 2BF-ILS, which uses mNEH with the parameter values $G(n)=\sqrt{n}$ and $L(n)=2\sqrt{n}$ for constructing its starting solution. In particular, we measured the time needed by 2BF-ILS to reach the same solution quality $C_{NEH}$ as the standard NEH. This experiment was performed using the same instances described in Subsection 8.2. For an example instance, Figure 9 (left) shows the solution quality of the standard NEH heuristic as well as the performance over time for 2BF-ILS (averaged over 10 replications). It can be seen that mNEH needed approximately half as many evaluations as NEH to finish (though initially obtaining a slightly worse solution quality). The reduced number of evaluations then allowed 2BF-ILS to reach the quality $C_{NEH}$ with fewer evaluations than NEH.
Figure 9:

Left: Comparison between the standard NEH and 2BF-ILS for an instance with 150 jobs, spanning buffer, $s_i=1$, $c=q_{0.50}$, and uniformly distributed processing times on $M_1$. Dashed lines indicate when the standard NEH finishes (vertical lines) and the attained solution quality $C_{NEH}$ (horizontal lines). A cross marks the quality of the initial mNEH solution and a circle marks the number of evaluations at which a better quality than NEH was reached for the first time. Right: Box plot for 2BF-ILS showing the number of evaluations (averaged over 10 replications on each instance) until $C_{NEH}$ was reached for the considered problem sizes and both sets of instances. A red line indicates the number of evaluations at which the standard NEH heuristic finished.

Visualizing the number of evaluations for both algorithms (see Figure 9 right) shows that there is a small number of outliers (at most 4 out of 36 instances per problem size and processing time distribution) where mNEH takes a noticeably large number of evaluations before reaching $C_{NEH}$. However, the median values for 2BF-ILS were still smaller than the number of evaluations performed by the standard NEH heuristic for all problem sizes and both sets of instances. In addition, the sign test for flow shops with uniformly distributed processing times ($n=108$) showed that 2BF-ILS needed significantly fewer evaluations to reach $C_{NEH}$ than NEH ($p<0.001$). The differences were also significant for the instances with normally distributed processing times on $M1$ ($p=0.013$, $n=108$).
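The significance values above come from a two-sided sign test on paired observations: under the null hypothesis, each non-tied instance pair is equally likely to favor either algorithm, so the p-value reduces to a binomial tail probability. A minimal stdlib-only sketch (the win/loss counts in the example are illustrative, not the paper's data):

```python
from math import comb

def sign_test_p(wins, losses):
    """Two-sided sign test on paired data (ties excluded beforehand).

    Under H0, each of the n = wins + losses non-tied pairs favors
    either side with probability 1/2, so the test statistic follows
    Bin(n, 1/2); the p-value doubles the smaller tail.
    """
    n = wins + losses
    k = min(wins, losses)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Example: one algorithm is faster on 8 of 10 non-tied instances.
p = sign_test_p(8, 2)
```

For the instance counts used in the experiments ($n=108$), this exact binomial computation remains cheap, so no normal approximation is needed.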

## 9  Conclusion

Algorithms and complexity results for two-machine permutation flow shops with buffers were presented in this article for the specific case where all processing times on one of the machines are equal to a constant $c$. For such problems, two types of buffers (intermediate buffers and spanning buffers) and two types of buffer usage (all jobs occupy the same amount of buffer, or the amount of buffer usage for each job equals its processing time on the machine with non-constant times) were considered. It was shown that all four resulting cases of buffered two-machine permutation flow shops are NP-hard under the makespan criterion. A modified version of the NEH heuristic (mNEH) was proposed that utilizes characteristics of the considered flow shop problems. Also, an Iterated Local Search heuristic (2BF-ILS) was proposed that uses mNEH for computing its starting solution. It was experimentally shown that 2BF-ILS obtains better schedules over time, outperforming a standard Ant Colony Optimization algorithm as well as two state-of-the-art algorithms for flow shop problems with buffers from the literature. The results show that the compared algorithms perform better if the flow shop instances contain correlations between processing times, a structural feature that commonly occurs in practical applications. In addition, the experimental results show that 2BF-ILS obtains the same solution quality as the standard NEH heuristic but uses a smaller number of function evaluations.

For all four types of buffered two-machine permutation flow shops, two polynomial-time solvable subcases were identified, namely, the cases where the constant $c$ is either larger or smaller than all processing times on the other machine. The most interesting of these subcases is the spanning buffer when $c$ is larger than all processing times on the other machine and for each job the amount of buffer usage equals its run time on the non-constant time machine. For this case, we presented the algorithm 2BF-OPT, which calculates optimal schedules in $O(n \log n)$ steps.

## Acknowledgments

This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 392050753.

## Notes

1. The case $Ω=0$ leads to the blocking flow shop, which is optimally solvable in polynomial time for the makespan criterion with two machines (Gilmore and Gomory, 1964), and is thus not considered here.

## References

Agnetis, A., Rossi, F., and Gristina, G. (1998). An exact algorithm for the batch sequencing problem in a two-machine flow shop with limited buffer. Naval Research Logistics, 45(2):141–164.

Amar, A. D., and Gupta, J. N. D. (1986). Simulated versus real life data in testing the efficiency of scheduling algorithms. IIE Transactions, 18(1):16–25.

Berlińska, J., Kononov, A., and Zinder, Y. (2019). Two-machine flow shop with a dynamic storage space and UET operations. In H. A. Le Thi, H. M. Le, and T. Pham Dinh (Eds.), Optimization of complex systems: Theory, models, algorithms and applications, pp. 1139–1148. Cham: Springer International Publishing.

Brucker, P., Heitmann, S., and Hurink, J. (2003). Flow-shop problems with intermediate buffers. Operations Research Spektrum, 25(4):549–574.

Ernst, A., Fung, J., Singh, G., and Zinder, Y. (2019). Flexible flow shop with dedicated buffers. Discrete Applied Mathematics, 261:148–163.

Fernandez-Viagas, V., and Framinan, J. M. (2015). NEH-based heuristics for the permutation flowshop scheduling problem to minimise total tardiness. Computers & Operations Research, 60:27–36.

Fu, Q., Sivakumar, A. I., and Li, K. (2012). Optimisation of flow-shop scheduling with batch processor and limited buffer. International Journal of Production Research, 50(8):2267–2285.

Fung, J., and Zinder, Y. (2016). Permutation schedules for a two-machine flow shop with storage. Operations Research Letters, 44(2):153–157.

Fung, J., Zinder, Y., and Singh, G. (2016). Flexible flow shop with storage: Complexity and optimisation methods. IFAC-PapersOnLine, 49:237–242.
Garey, M. R., Johnson, D. S., and Sethi, R. (1976). The complexity of flowshop and jobshop scheduling. Mathematics of Operations Research, 1(2):117–129.

Geser, P. (2017). Zur Komplexität des 2-Maschinen-Flow-Shop-Problems mit Zwischenspeicher. Diploma thesis, Leipzig University.

Gilmore, P. C., and Gomory, R. E. (1964). Sequencing a one state-variable machine: A solvable case of the traveling salesman problem. Operations Research, 12(5):655–679.

Graham, R., Lawler, E., Lenstra, J., and Kan, A. R. (1979). Optimization and approximation in deterministic sequencing and scheduling: A survey. Annals of Discrete Mathematics, 5:287–326.

Gu, H., Kononov, A., Memar, J., and Zinder, Y. (2018). Efficient Lagrangian heuristics for the two-stage flow shop with job dependent buffer requirements. Journal of Discrete Algorithms, 52-53:143–155.

Hall, N. G., and Sriskandarajah, C. (1996). A survey of machine scheduling problems with blocking and no-wait in process. Operations Research, 44:510–525.
Hsieh, Y.-C., You, P.-S., and Liou, C.-D. (2009). A note of using effective immune based approach for the flow shop scheduling with buffers. Applied Mathematics and Computation, 215(5):1984–1989.

Johnson, S. M. (1954). Optimal two- and three-stage production schedules with setup times included. Naval Research Logistics Quarterly, 1(1):61–68.

Kononov, A., Memar, J., and Zinder, Y. (2019). Flow shop with job-dependent buffer requirements—A polynomial-time algorithm and efficient heuristics. In M. Khachay, Y. Kochetov, and P. Pardalos (Eds.), Mathematical optimization theory and operations research, pp. 342–357. Cham: Springer International Publishing.

Kononova, P. A., and Kochetov, Y. A. (2013). The variable neighborhood search for the two machine flow shop problem with a passive prefetch. Journal of Applied and Industrial Mathematics, 7(1):54–67.

Le, H. T., Geser, P., and Middendorf, M. (2019). An iterated local search algorithm for the two-machine flow shop problem with buffers and constant processing times on one machine. In A. Liefooghe and L. Paquete (Eds.), Evolutionary computation in combinatorial optimization, pp. 50–65. Cham: Springer International Publishing.

Leisten, R. (1990). Flowshop sequencing problems with limited buffer storage. International Journal of Production Research, 28(11):2085–2100.

Li, J., and Pan, Q.-K. (2015). Solving the large-scale hybrid flow shop scheduling problem with limited buffers by a hybrid artificial bee colony algorithm. Information Sciences, 316:487–502.

Li, S., and Tang, L. (2005). A tabu search algorithm based on new block properties and speed-up method for permutation flow-shop with finite intermediate storage. Journal of Intelligent Manufacturing, 16(4):463–477.

Liang, J., Wang, P., Guo, L., Qu, B., Yue, C., Yu, K., and Wang, Y. (2019). Multi-objective flow shop scheduling with limited buffers using hybrid self-adaptive differential evolution. Memetic Computing, 11(4):407–422.

Lin, F.-C., Hong, J.-S., and Lin, B. M. (2013). Sequence optimization for media objects with due date constraints in multimedia presentations from digital libraries. Information Systems, 38(1).

Lin, F.-C., Hong, J.-S., and Lin, B. M. T. (2009). A two-machine flowshop problem with processing time-dependent buffer constraints—An application in multimedia presentations. Computers & Operations Research, 36(4):1158–1175.

Liu, B., Wang, L., and Jin, Y.-H. (2008). An effective hybrid PSO-based algorithm for flow shop scheduling with limited buffers. Computers & Operations Research, 35(9):2791–2806.

Liu, S. Q., and Kozan, E. (2009). Scheduling a flow-shop with combined buffer conditions. International Journal of Production Economics, 117(2):371–380.

Liu, W., Jin, Y., and Price, M. (2017). A new improved NEH heuristic for permutation flowshop scheduling problems. International Journal of Production Economics, 193:21–30.

López-Ibáñez, M., Dubois-Lacoste, J., Pérez Cáceres, L., Stützle, T., and Birattari, M. (2016). The irace package: Iterated racing for automatic algorithm configuration. Operations Research Perspectives, 3:43–58.

Martinez, S., Dauzère-Pérès, S., Guéret, C., Mati, Y., and Sauer, N. (2006). Complexity of flowshop scheduling problems with a new blocking constraint. European Journal of Operational Research, 169:855–864.

Min, Y., Choi, B. C., and Park, M. J. (2019). Two-machine flow shops with an optimal permutation schedule under a storage constraint. Journal of Scheduling.

Moslehi, G., and Khorasanian, D. (2014). A hybrid variable neighborhood search algorithm for solving the limited-buffer permutation flow shop scheduling problem with the makespan criterion. Computers & Operations Research, 52:260–268.

Nawaz, M., Enscore, E. E., and Ham, I. (1983). A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega, 11:91–95.

Pan, Q.-K., Wang, L., and Gao, L. (2011). A chaotic harmony search algorithm for the flow shop scheduling problem with limited buffers. Applied Soft Computing, 11:5270–5280.

Pan, Q.-K., Wang, L., Gao, L., and Li, W. D. (2011). An effective hybrid discrete differential evolution algorithm for the flow shop scheduling with intermediate buffers. Information Sciences, 181(3):668–685.
Papadimitriou, C. H., and Kanellakis, P. C. (1980). Flowshop scheduling with limited temporary storage. Journal of the Association for Computing Machinery, 27(3):533–549.
Pranzo, M. (2004). Batch scheduling in a two-machine flow shop with limited buffer and sequence independent setup times and removal times. European Journal of Operational Research, 153(3):581–592.

Qian, B., Wang, L., Huang, D.-X., Wang, W.-L., and Wang, X. (2009). An effective hybrid DE-based algorithm for multi-objective flow shop scheduling with limited buffers. Computers & Operations Research, 36(1):209–233.

Reeves, C. (1995). A genetic algorithm for flowshop sequencing. Computers & Operations Research, 22(1):5–13.

Sang, H.-Y., and Pan, Q.-K. (2013). An effective invasive weed optimization algorithm for the flow shop scheduling with intermediate buffers. In 25th Chinese Control and Decision Conference (CCDC), pp. 861–864.

Taillard, E. (1993). Benchmarks for basic scheduling problems. European Journal of Operational Research, 64:278–285.
Vallada, E., Ruiz, R., and Framinan, J. M. (2015). New hard benchmark for flowshop scheduling problems minimising makespan. European Journal of Operational Research, 240:666–677.
Vázquez-Rodríguez, J. A., and Ochoa, G. (2011). On the automatic discovery of variants of the NEH procedure for flow shop scheduling using genetic programming. Journal of the Operational Research Society, 62(2):381–396.

Wang, L., Zhang, L., and Zheng, D.-Z. (2006). An effective hybrid genetic algorithm for flow shop scheduling with limited buffers. Computers & Operations Research, 33:2960–2971.

Watson, J.-P., Barbulescu, L., Whitley, L. D., and Howe, A. E. (2002). Contrasting structured and random permutation flow-shop scheduling problems: Search-space topology and algorithm performance. INFORMS Journal on Computing, 14(2):98–123.

Weise, T., Chiong, R., Lassig, J., Tang, K., Tsutsui, S., Chen, W., Michalewicz, Z., and Yao, X. (2014). Benchmarking optimization algorithms: An open source framework for the traveling salesman problem. IEEE Computational Intelligence Magazine, 9(3):40–52.

Zhang, G., and Xing, K. (2019). Differential evolution metaheuristics for distributed limited-buffer flowshop scheduling with makespan criterion. Computers and Operations Research, 108:33–43.

Zhang, S.-J., and Gu, X.-S. (2015). An effective discrete artificial bee colony algorithm for flow shop scheduling problem with intermediate buffers. Journal of Central South University, 22(9):3471–3484.