Evaluating Heuristics for the Fixed-Predecessor Subproblem of Pm | prec, pj = 1 | Cmax

Outline
- Introduction
- Approach
- CP and LNS heuristics
- HLF heuristics
- Numerical results

Pm | prec, pj = 1 | Cmax
- Problem: find the makespan-minimizing schedule for a set of unit-length jobs with arbitrary precedence constraints
- Efficient algorithms exist for m = 2
- Unknown complexity for fixed m >= 3

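To make the scheduling model concrete, here is a minimal sketch of greedy list scheduling for unit-length jobs under precedence constraints. The function name, the `succ` dict-of-successors representation, and the `priority` parameter are illustrative assumptions, not from the slides:

```python
def list_schedule(succ, num_jobs, m, priority):
    """Greedy list scheduling for unit jobs with precedence constraints.

    succ: dict mapping each job to a list of its direct successors.
    priority: function job -> sortable key (higher priority runs first).
    Returns the makespan (number of unit time slots used).
    """
    # Count unmet predecessors for each job.
    indeg = [0] * num_jobs
    for ss in succ.values():
        for s in ss:
            indeg[s] += 1
    ready = [j for j in range(num_jobs) if indeg[j] == 0]
    done = 0
    t = 0
    while done < num_jobs:
        t += 1
        # Run up to m highest-priority ready jobs in this time slot.
        ready.sort(key=priority, reverse=True)
        batch, ready = ready[:m], ready[m:]
        done += len(batch)
        # Release successors whose predecessors have all finished.
        for j in batch:
            for s in succ.get(j, []):
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    return t
```

With a constant priority this is plain list scheduling; plugging in a level or successor-count priority gives the CP and LNS heuristics discussed on the following slides.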
Question
- Can we discover anything by restricting to subproblems with more structured precedence constraints?
- Are any approaches we know optimal for these subproblems?

Motivation
- If a subproblem is found to be easy:
  - More information about the boundary between easy and hard problems
  - Such instances can easily be scheduled in the real world
- If a subproblem is found to be hard:
  - The general case is also hard, resolving an open problem
- It is easier to reason about a problem with more structure

Subproblems
- We saw two such subproblems in class: the in-tree and the out-tree

(Figures: in-tree and out-tree examples)

Heuristics
- Critical path (CP) and largest number of successors (LNS) are optimal for both the in-tree and the out-tree

(Figures: in-tree and out-tree examples)

Heuristics
- Critical path: prioritize nodes at the head of the longest path of jobs that still need to run
- Largest number of successors: prioritize nodes that are a (direct or indirect) predecessor of the most nodes

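The two priorities can be sketched directly from these definitions. Both helper names and the `succ` dict-of-successors representation are assumptions for illustration:

```python
def cp_level(succ, job):
    """Critical-path priority: length of the longest path of jobs
    starting at `job` (counting `job` itself)."""
    return 1 + max((cp_level(succ, s) for s in succ.get(job, [])), default=0)

def lns_count(succ, job):
    """LNS priority: number of distinct direct or indirect successors."""
    seen = set()
    stack = list(succ.get(job, []))
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(succ.get(s, []))
    return len(seen)
```

For in-trees and out-trees the two priorities induce compatible orderings; on the K-predecessor instances below they can diverge.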
Generalization
- Can we find other precedence structures for which these heuristics are optimal?

Generalization
- In-tree: each node has one successor
- Out-tree: each node has one predecessor
- Both are planar

(Figures: in-tree and out-tree examples)

Generalization
- Generalize the out-tree: allow an arbitrary number K of predecessors per node
- Pictured: K = 3

(Figure: K = 3 example)

Method
- Generate many instances with the K-predecessor structure
- Solve each instance with each algorithm, several times
- If an algorithm performs worse than another algorithm on an instance, it cannot be optimal
- If an algorithm's schedules differ in makespan across trials, the algorithm cannot be optimal

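The second test can be sketched as follows: schedule the same instance several times with random tie-breaking and collect the distinct makespans; more than one distinct value rules out optimality. This is an illustrative harness (function name and dict-of-predecessors representation assumed), using a plain greedy scheduler in place of any particular heuristic:

```python
import random

def random_trial_makespans(pred, m, trials, seed=0):
    """Schedule one instance `trials` times with random tie-breaking and
    return the set of distinct makespans. pred maps job -> list of
    direct predecessors; m is the number of machines."""
    spans = set()
    for t in range(trials):
        rng = random.Random(seed + t)
        remaining = {j: set(ps) for j, ps in pred.items()}
        makespan = 0
        while remaining:
            # Jobs whose predecessors have all finished are ready.
            ready = [j for j, ps in remaining.items() if not ps]
            rng.shuffle(ready)        # random tie-breaking among ready jobs
            batch = ready[:m]         # run up to m jobs in this unit slot
            makespan += 1
            for j in batch:
                del remaining[j]
            for ps in remaining.values():
                ps.difference_update(batch)
        spans.add(makespan)
    return spans
```

A heuristic's tie-breaking would replace the shuffle; a result set of size greater than one is the "inconsistent Cmax" signal used in the experiments below.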
Instance Generation
- Add K root nodes to the graph
- Iteratively add nodes, randomly choosing K predecessors for each
- Any valid instance has a chance to be generated by this algorithm

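A minimal sketch of this generation scheme; the function name and the dict-of-predecessors representation are assumptions:

```python
import random

def generate_instance(num_jobs, k, seed=None):
    """Random K-predecessor DAG: k root nodes, then each new node picks
    k distinct predecessors among the nodes added so far."""
    rng = random.Random(seed)
    pred = {j: [] for j in range(k)}        # k root nodes, no predecessors
    for j in range(k, num_jobs):
        pred[j] = rng.sample(range(j), k)   # k distinct earlier nodes
    return pred
```

Since every new node may choose any subset of earlier nodes, any valid K-predecessor instance is produced with positive probability.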
Results: CP and LNS
- Neither CP nor LNS is optimal for the K-predecessor problem
- On some instances, both produce inconsistent Cmax

Results: CP
Both algorithms fail on this graph. Here are the CP schedules:

CP schedule 1 (Cmax = 4):
Time       1   2   3   4
Machine 1  1   6   8   10
Machine 2  0   5   4   9
Machine 3  2   3   7   11

CP schedule 2 (Cmax = 5):
Time       1   2   3   4   5
Machine 1  1   6   8   9   7
Machine 2  2   5   3   10  --
Machine 3  0   4   --  11  --

Results: LNS
Both algorithms fail on this graph. Here are the LNS schedules:

LNS schedule 1 (Cmax = 4):
Time       1   2   3   4
Machine 1  2   3   4   11
Machine 2  1   6   8   10
Machine 3  0   5   7   9

LNS schedule 2 (Cmax = 5):
Time       1   2   3   4   5
Machine 1  0   3   6   8   11
Machine 2  1   4   10  9   --
Machine 3  2   5   7   --  --

Results: Planar Graphs
Even if we restrict to planar 2-predecessor graphs, there are instances for which both CP and LNS have inconsistent Cmax.

Schedule 1 (Cmax = 5):
Time       1   2   3   4   5
Machine 1  2   3   8   6   7
Machine 2  0   --  5   9   11
Machine 3  1   --  4   10  12

Schedule 2 (Cmax = 6):
Time       1   2   3   4   5   6
Machine 1  2   3   5   4   7   10
Machine 2  0   --  8   9   11  --
Machine 3  1   --  6   --  12  --

Each schedule satisfies both the CP and LNS heuristics.

Other Heuristics?
- P2 | prec, pj = 1 | Cmax has been solved efficiently
- All highest level first (HLF) schedules are optimal
- Proof and an almost-linear algorithm given by [Gabow 1982]
- Level of a node: length of the critical path starting at that node

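Levels can be computed in a single backward pass. This sketch assumes a `succ` dict of direct successors and that predecessors always have smaller indices than their successors (as in the generated instances); the function name is illustrative:

```python
def levels(succ, num_jobs):
    """Level of each node = length of the critical path starting there,
    counting the node itself. Computed in reverse index order, assuming
    every edge goes from a smaller index to a larger one."""
    level = [1] * num_jobs
    for j in reversed(range(num_jobs)):
        for s in succ.get(j, []):
            level[j] = max(level[j], 1 + level[s])
    return level
```

For a general DAG the same relaxation would be done in reverse topological order.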
HLF Heuristic
- A restriction of CP schedules
- Process nodes from highest to lowest level
- When finishing a level, if a machine remains available, "jump" a runnable job from the highest possible level
- If there are multiple candidates, choose the one that allows future jumps to be to as high a level as possible

Jumps
- With 2 machines, node 16 is processed and a machine is free
- Node 15 or node 10 can be "jumped" to run on the free machine
- Choose 15, since it has the higher level

(Image: [Gabow 1982])

Generalizing to M >= 3
- Not obvious how to generalize Gabow's algorithm, since it assumes only one job is jumped each time
- For M = 3, we can jump either 1 or 2 jobs each time
- We can still generate HLF schedules, though less efficiently, and observe their performance
- Assumption: given a choice, we want to jump as many nodes as possible, minimizing idle time

HLF Self-Consistency
- Check whether HLF schedules for the same instance agree on Cmax
- Generate 20,000,000 instances, each with 20 nodes and K = 3
- Compare the makespans of all HLF schedules

HLF Self-Consistency
- Result: no disagreements occurred
- Caveats:
  - Does not mean no disagreement is possible
  - Even if HLF is consistent, that does not mean it is optimal

HLF Non-Optimal
- There are (very rare) instances where some CP schedules beat HLF

HLF Non-Optimal

CP schedule, superior (Cmax = 7):
Time  1   2   3   4   5   6   7
M1    1   3   4   8   13  14  19
M2    0   5   7   10  11  16  17
M3    2   6   --  12  9   15  18

HLF schedule, inferior (Cmax = 8):
Time  1   2   3   4   5   6   7   8
M1    0   4   3   7   10  11  17  15
M2    1   5   8   9   13  14  18  --
M3    2   6   12  --  --  16  19  --

Note (for the accompanying graph figure): highest level at the bottom; level is in [brackets]; green nodes are executed on their level, blue nodes are jumped.

It is actually optimal to execute fewer nodes early on, so that the critical job #7 can finish earlier.

HLF Non-Optimal
- In the 2-processor case, you always jump 1 node
- With 3 processors, you can choose between jumping 1 or 2
- Sometimes it is better to jump 1, to allow a critical job to run earlier

Numerical Results
- Average makespan for each algorithm, given M, N, K
- M: number of machines
- N: number of jobs
- K: branching factor

Numerical Analysis
- HLF is better than CP and LNS (by a tiny fraction)
- The HLF implementation is too inefficient to run on large graphs
- CP and LNS are near identical, with some divergence when K is changed

M  N    K  CP      LNS     HLF    RANDOM
3  20   3  7.844   7.846   7.843  8.154
3  25   3  9.320   9.321   9.319  9.814
3  100  3  33.968  33.968  N/A    34.544
6  40   3  19.814  19.814  N/A    20.988
6  100  3  21.309  21.309  N/A    22.585
6  100  4  16.344  16.354  N/A    17.068

Numerical Analysis
- Random performed admirably, but scaled less well as the number of machines increased (though the difference seemed to shrink with branching factor)

End