
Presentation Transcript

Slide 1

Scenario Trees and Metaheuristics for Stochastic Inventory Routing Problems

DOMinant Workshop, Molde, Norway, September 21-22, 2009

Lars Magnus Hvattum, Norwegian University of Science and Technology, Trondheim, Norway
Arne Løkketangen, Molde University College, Molde, Norway
Gilbert Laporte, HEC, Montréal, Canada

Slide 2

Outline of the presentation

Slide 3

Inventory Routing Problems extend the VRP, and look at a larger part of a supply chain

[Figure: supplier-to-customer supply chain, with the scopes of the VRP and the IRP marked]

Slide 4

The dynamic aspect of inventory routing can be handled in various ways:

1) Single period models
2) Multi-period models
3) Infinite horizon models

Kleywegt, Nori, and Savelsbergh (2002, 2004)
Adelman (2004)

Slide 5

The problem is modelled as a Markov Decision Process, where each epoch corresponds to a day

Slide 6

The problem is modelled as a Markov Decision Process, where each epoch corresponds to a day

Slide 7

The infinite horizon discounted reward Markov Decision Problem can only be solved for tiny instances:

A large number of states: all possible inventory levels
A large number of actions: all possible delivery patterns
A large number of transitions: all possible demand realizations
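To see why the exact MDP blows up, a back-of-the-envelope count helps. The instance sizes below are illustrative assumptions, not data from the talk:

```python
# Rough size of the exact MDP for a small stochastic inventory routing
# instance. All numbers here are illustrative assumptions.
n_customers = 5
inventory_levels = 9        # distinguishable inventory levels per customer

# States: one joint inventory vector across all customers.
n_states = inventory_levels ** n_customers

# Actions: a delivery quantity per customer (ignoring routing choices,
# which only make the action space larger).
max_delivery = 4
n_actions = (max_delivery + 1) ** n_customers

print(n_states, n_actions)  # 59049 3125
```

Even at this toy scale the state space is in the tens of thousands, and every state-action pair still needs an expectation over all demand realizations.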

Slide 8

Previous solution methods for these SIRPs are based on approximating the value functions

Standard algorithms for MDPs can solve some instances:
- 4 customers, 1 vehicle, and 9 inventory levels
- 5 customers, 5 vehicles, 5 inventory levels, but only direct delivery

Kleywegt et al. propose approximations of the value function for instances with direct delivery or at most 3 deliveries per route
Adelman proposes approximations of the value function for instances with an unlimited number of vehicles

Slide 9

To simplify, we only look at finite scenario trees to generate a stochastic policy

The infinite horizon is approximated by a finite tree
The large number of transitions is approximated by sampled realizations
The action space remains unchanged
State transitions are based on current inventory level, delivered quantities and sampled demand realizations
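The transition rule can be sketched as below. Treating unmet demand as lost (a stock-out) rather than backlogged is an assumption of this sketch, though it matches the stock-out reduction discussed on the GRASP slides:

```python
def next_inventory(u, y, d):
    """One state transition in the scenario tree: per customer, new
    inventory = current inventory + delivered quantity - sampled demand,
    floored at zero (unmet demand counts as a stock-out, an assumption
    of this sketch)."""
    return [max(0, ui + yi - di) for ui, yi, di in zip(u, y, d)]

# Example with four customers:
print(next_inventory([0, 2, 0, 0], [0, 0, 2, 2], [1, 2, 2, 1]))  # [0, 0, 0, 1]
```

The example reproduces the node values shown in the GRASP figures later in the talk.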

Slide 10

The problem of finding optimal actions conditional on the sampled tree is formalized as an integer program

Slide 11

We have examined three methods to solve the scenario tree problem (STP)

CPLEX
- Solve IP
- Solve MIP, where integrality constraints are kept only for the root node

GRASP
- Construct solutions to the STP in a randomized adaptive fashion, but with added learning mechanisms

Progressive Hedging Algorithm
- Decompose the problem over scenarios, solve each scenario using a modified GRASP
- Gradually enforce an implementable solution by penalties, including a quadratic term

Slide 12

In GRASP, we start with a solution without any deliveries, then add more deliveries if profitable

[Figure: scenario tree with root inventory u = 0,2,0,0 and deliveries y = 0,0,0,0; child nodes with sampled demands d = 1,2,2,1 and d = 1,1,2,2, deliveries y = 0,0,0,0, and resulting inventories u = 0,0,0,0 and u = 0,1,0,0]

No deliveries scheduled in any node

Slide 13

In GRASP, we start with a solution without any deliveries, then add more deliveries if profitable

[Figure: same scenario tree, now with root deliveries y = 0,0,2,0; child inventories remain u = 0,0,0,0 and u = 0,1,0,0]

First iteration: add 2 units of delivery to customer 3 using vehicle 1 in node 1
No increase in inventory, but reduction in stock-outs

Slide 14

In GRASP, we start with a solution without any deliveries, then add more deliveries if profitable

[Figure: same scenario tree, now with root deliveries y = 0,0,2,2; child inventories become u = 0,0,0,1 and u = 0,1,0,0]

Second iteration: add 2 units of delivery to customer 4 using vehicle 1 in node 1
Increase in inventory in node 3 for customer 4

Slide 15

In GRASP, we start with a solution without any deliveries, then add more deliveries if profitable

[Figure: same scenario tree; root deliveries y = 0,0,2,2, one child node now has deliveries y = 0,0,2,0, the other y = 0,0,0,0]

Continue making insertions: y additional units to customer i using vehicle k in node v
Stop when no profitable, feasible insertions can be made
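The construction loop illustrated on the last few slides can be sketched as follows. The `evaluate` and `feasible` callables stand in for the STP objective and the capacity/inventory checks, which are not spelled out in the slides:

```python
import random

def grasp_construct(insertions, evaluate, feasible, rcl_size=3):
    """Greedy randomized construction for the STP (a sketch): start with
    no deliveries, then repeatedly pick an insertion (y units to customer
    i, using vehicle k, in node v) at random from a restricted candidate
    list (RCL) of the most profitable feasible insertions, and stop when
    no profitable, feasible insertion remains."""
    solution = []
    while True:
        scored = [(m, evaluate(solution, m)) for m in insertions
                  if feasible(solution, m)]
        profitable = [sm for sm in scored if sm[1] > 0]
        if not profitable:
            return solution
        profitable.sort(key=lambda sm: sm[1], reverse=True)
        move, _ = random.choice(profitable[:rcl_size])  # rank-based RCL
        solution.append(move)
```

A rank-based RCL is used here for brevity; the next slide reports that a value-based RCL was the more robust choice.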

Slide 16

Several variations of the GRASP are examined, to find a robust version that can handle different types of instances

Restricted Candidate List (RCL) based on either value or rank [former is more robust]
Size of RCL is controlled by a parameter that is adjusted dynamically [more robust than a fixed value]
Build solution node by node (recursively) or all nodes simultaneously [former is much faster]
Use learning based on analysing completed solutions to find potential improvements [increases robustness of building solution node by node]

Slide 17

The progressive hedging algorithm is based on decomposing the problem over scenarios

Slide 18

Single scenarios are solved separately, until implementability is enforced through penalties

Each scenario is solved using GRASP
Objective function is modified to penalize deviations from an averaged solution
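The modified objective follows the standard progressive hedging form; the notation below (multipliers w, quadratic weight rho) is our assumption, and it is written for minimization (signs flip for profit maximization):

```python
def augmented_objective(f, x_s, x_bar, w_s, rho):
    """Per-scenario progressive hedging objective (a sketch): the original
    scenario cost f(x_s), plus a linear multiplier term steering x_s
    toward the averaged solution x_bar, plus a quadratic penalty on the
    deviation from it."""
    dev = [a - b for a, b in zip(x_s, x_bar)]
    linear = sum(w * e for w, e in zip(w_s, dev))
    quadratic = 0.5 * rho * sum(e * e for e in dev)
    return f(x_s) + linear + quadratic
```

As the penalty weight rho grows, scenario solutions are pushed toward a common (implementable) solution, which is the mechanism the following slides tune.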

Slide 19

Several variations of the progressive hedging algorithm are examined, to find a robust version that can handle different types of instances

A progressive hedging heuristic is employed to generate feasible solutions to the original scenario tree problem [ensures that good feasible solutions are found even if the method has insufficient time to converge]
Penalty parameter is updated dynamically [better balance of progress towards an implementable solution]
Use multiple penalty parameters and weights [better than using a single parameter]
Use intermediate heuristic solutions for guiding the search [does not work: quicker convergence, but to worse solutions]
Lock variables for which consensus seems to have been reached [does not work: induces cycling behavior?]

Slide 20

The search of the resulting progressive hedging algorithm has a fairly similar behavior across instances

Left: the averaged solution gives a heuristic upper bound, and the progressive hedging heuristic gives actual lower bounds
Right: we can measure the distances in the solution space and the parameter space between iterations, as well as the dynamic penalty parameter

Slide 21

There are different ways of controlling the computational effort used by the methods

Increasing the effort gives improved results
Right: profit as a function of the number of GRASP iterations per epoch

Slide 22

The size of the scenario trees is crucial both for the computational time and the simulation results

Increasing the size of the scenario trees increases the computational effort as well as the profits observed

Slide 23

Several scenario tree problems were studied separately to study the GRASP, the PHA, and the other methods

Name   | LP(-STE) | DS     | Exact  | IP4      | MIP      | TDG      | ANG      | PHA:TDG  | PHA:ANG
-------|----------|--------|--------|----------|----------|----------|----------|----------|---------
STP 01 | 720.2    | -      | 651.0  | 687.1    | 691.0    | 690.5    | 690.5    | 692.7    | 693.0
STP 04 | 798.2    | -      | 724.0  | 753.7    | 756.7    | 757.1    | 757.2    | 754.2    | 755.3
STP 05 | 157626.0 | -      | -      | -        | 73652.9  | 151958.0 | 151378.0 | 146832.0 | 139283.0
STP 07 | 157352.0 | -      | -      | 126497.0 | 148095.0 | 151960.0 | 151458.0 | 151458.0 | 143851.0
STP 11 | -266.8   | -617.7 | -      | -        | -        | -473.3   | -473.4   | -478.7   | -461.9
STP 15 | -420.7   | -869.4 | -      | -        | -        | -793.3   | -772.9   | -763.1   | -748.9
STP 16 | -        | -984.7 | -      | -        | -        | -938.1   | -909.4   | -933.3   | -925.0
STP 21 | -270.2   | -623.9 | -      | -        | -        | -479.7   | -478.7   | -487.5   | -470.4
STP 26 | -113.3   | -909.2 | -      | -        | -        | -563.4   | -514.0   | -518.8   | -536.7
STP 31 | 67452.7  | -      | -      | -        | 62012.1  | 62678.4  | 62620.6  | 63283.6  | 63336.2
STP 41 | 4570.6   | -      | 4137.8 | -        | 4195.1   | 4111.6   | 4181.3   | 4116.0   | 4161.1
STP 42 | -        | -      | -      | -        | -        | -160.5   | 89.5     | -900.1   | -1278.2
STP 43 | 1563.0   | -      | -      | -        | -        | -1155.3  | -1149.9  | -1119.1  | -1683.9
STP 51 | 33309.4  | -      | -      | 32537.7  | 32537.7  | 32365.2  | 32365.2  | 32365.2  | 32365.2
STP 52 | 22268.1  | -      | -      | 21740.9  | 21740.9  | 21433.7  | 21433.7  | 21411.7  | 21411.7
STP 55 | 1375.0   | -      | 1071.9 | 1232.6   | 1213.5   | 1229.9   | 1229.9   | 1231.7   | 1232.6

Slide 24

Simulations are run over many epochs (600), and quick heuristics for the scenario tree problems must be selected

Our methods: no initial time required, but should allocate some time for the daily problem (time consuming when evaluating, but ok in practice)
Other methods: high effort initially (days), but fairly quick for the daily problem (ok when evaluating, but practice requires a stable situation)

Slide 25

GRASP is quicker than PHA, but PHA is better on some instances (produces solutions with a different structure)

Slide 26

Some of the observations made during the work have potential for leading to future research

1) Scenario trees are used to represent the stochastic and dynamic aspects: how to incorporate these in the most efficient way?
2) Scenario tree problems must be solved to generate decisions: how is this best done?

Slide 27

Scenario trees are not often used in (meta-)heuristics, and several questions remain as to how they should be generated

We generate them using random sampling to cover a specified tree structure
The size of the tree is determined by specifying the branching factors for each level of the tree

Slide 28

Can we save computational time by using a clever tree structure?

We use lower branching factors lower in the tree (more important to represent stochasticity that is close in time?)
Should we vary the depth/width of the tree based on the instance solved?
We use the same length for every scenario
We had to limit the size of the tree to be able to evaluate the methods with simulations
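The sampling scheme described on the last two slides can be sketched as below. The node representation is our own, and `sample_demand` stands in for whatever demand distribution the instance defines:

```python
def build_tree(branching, sample_demand):
    """Scenario tree via random sampling (a sketch): branching[d] is the
    number of children per node at depth d, so lower factors can be used
    deeper in the tree. Each non-root node carries one sampled demand
    realization; every scenario (root-to-leaf path) has the same length."""
    def grow(depth):
        children = ([grow(depth + 1) for _ in range(branching[depth])]
                    if depth < len(branching) else [])
        return {"demand": sample_demand(), "children": children}
    return {"demand": None, "children": [grow(1) for _ in range(branching[0])]}
```

With branching factors [3, 2, 1], for example, this yields 3 + 6 + 6 = 15 sampled nodes below the root, and every scenario has length three.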

Slide 29

Can we improve results for a given tree size by sampling differently?

We also tried a moment matching method, but a requirement of the implementation was that, for a single parent node, the number of children must be at least the number of customers (distributions)
Research question: can we determine a suitable objective function to be used by a metaheuristic that constructs scenario trees?
We use random sampling: by using larger trees we get closer to the true distribution

Slide 30

We have tested GRASP and PHA for solving the scenario tree problems

Local search based methods frequently perform better and faster than GRASP
Problem: difficult to find local moves in a scenario tree, as the interconnectedness of decisions creates feasibility issues
(how to find a suitable move evaluation? how to guide the search back to feasible space if allowing capacity/inventory violations?)

Slide 31

One idea is to hybridize local search and construction heuristics

Solution: do local search moves only in one part of the solution representation (the root node)
To evaluate each move, the remaining part of the solution (the rest of the tree) must be completed using a construction heuristic

Slide 32

Concluding remarks

Stochastic and dynamic problems may become increasingly important (with better technology and access to data)
The stochastic inventory routing problem is an interesting playground for testing how one can deal with stochastic and dynamic problems
Using scenario trees to represent stochasticity is relatively untested in combination with metaheuristics
Several directions for future research have been found but not yet pursued