
Slide 1

CS B553: Algorithms for Optimization and Learning

Linear programming, quadratic programming, sequential quadratic programming

Slide 2

Key ideas

Linear programming

Simplex method

Mixed-integer linear programming

Quadratic programming

Applications

Slide 3

Radiosurgery


CyberKnife (Accuray)

Slide 4

[Figure: tumor, normal tissue, and radiologically sensitive tissue]

Slide 5

[Figure: tumor]

Slide 6

[Figure: tumor]

Slide 7

Optimization Formulation

Dose cells (x_i, y_j, z_k) in a voxel grid
Cell class: normal, tumor, or sensitive
Beam "images": B_1, …, B_n describing the dose absorbed at each cell at maximum power

Optimization variables: beam powers x_1, …, x_n

Constraints:
Normal cells: D_ijk ≤ D_normal
Sensitive cells: D_ijk ≤ D_sensitive
Tumor cells: D_min ≤ D_ijk ≤ D_max
0 ≤ x_b ≤ 1

Dose calculation: D_ijk = Σ_b x_b B_b(i, j, k) (doses add linearly across beams)

Objective: minimize total dose Σ_ijk D_ijk
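
Because the dose is linear in the beam powers, the whole formulation is a linear program that an off-the-shelf solver can handle. A minimal sketch using scipy.optimize.linprog; the beam matrix B, the cell labels, and the dose limits are hypothetical stand-ins for the quantities above:

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical data: B[c, b] = dose beam b deposits in cell c at full power.
    rng = np.random.default_rng(0)
    n_cells, n_beams = 50, 8
    B = rng.uniform(0.0, 1.0, (n_cells, n_beams))
    is_tumor = np.zeros(n_cells, dtype=bool)
    is_tumor[:10] = True                    # first 10 cells are tumor, rest normal
    D_normal, D_min, D_max = 0.5, 2.0, 3.0  # made-up dose limits

    c = B.sum(axis=0)            # total dose = sum over cells of (B @ x)_c
    A_ub = np.vstack([
        B[~is_tumor],            # normal cells:  D_c <= D_normal
        B[is_tumor],             # tumor cells:   D_c <= D_max
        -B[is_tumor],            # tumor cells:   D_c >= D_min
    ])
    b_ub = np.concatenate([
        np.full((~is_tumor).sum(), D_normal),
        np.full(is_tumor.sum(), D_max),
        np.full(is_tumor.sum(), -D_min),
    ])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1))  # 0 <= x_b <= 1
    print(res.status, res.x)     # status 2 would mean the limits are unachievable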

Slide 8

Linear Program

General form:

min_x  f^T x + g
s.t.   Ax ≤ b
       Cx = d

[Figure: the inequality constraints Ax ≤ b define a convex polytope; an equality constraint Cx = d takes a slice through the polytope]

Slide 9

Three cases

Infeasible: no point satisfies the constraints
Feasible and bounded: an optimum x* exists (for an LP, at a vertex of the polytope)
Feasible but unbounded: the objective f^T x decreases without bound

[Figure: the three cases, with objective direction f and optimum x*]
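
These three outcomes are exactly what a solver reports back. A small sketch, assuming scipy, where linprog's status code is 0 for an optimum found, 2 for infeasible, and 3 for unbounded:

    from scipy.optimize import linprog

    # Feasible and bounded: min x0 + x1  s.t.  x0 + x1 >= 1, x >= 0
    r = linprog([1, 1], A_ub=[[-1, -1]], b_ub=[-1], bounds=(0, None))
    print(r.status, r.fun)   # 0, 1.0

    # Infeasible: x0 <= -1 contradicts x0 >= 0
    r = linprog([1], A_ub=[[1]], b_ub=[-1], bounds=(0, None))
    print(r.status)          # 2

    # Feasible but unbounded: min -x0 over x0 >= 0
    r = linprog([-1], bounds=(0, None))
    print(r.status)          # 3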

Slide 10

Simplex Algorithm (Dantzig)

Start from a vertex of the feasible polytope
"Walk" along polytope edges, decreasing the objective on each step
Stop when the current edge is unbounded or no improvement can be made

Implementation details:
How to pick an edge (exiting and entering variables)
Solving for vertices in large systems
Degeneracy: no progress made because the objective vector is perpendicular to the edges
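
A minimal tableau version of this walk, for the special case max c^T x s.t. Ax ≤ b, x ≥ 0 with b ≥ 0 (so the slack variables give a starting vertex). The pivot rule is the naive "most negative reduced cost", with no anti-cycling safeguard for degenerate steps:

    import numpy as np

    def simplex(c, A, b):
        """Maximize c^T x subject to Ax <= b, x >= 0, assuming b >= 0."""
        m, n = A.shape
        # Tableau with slack variables; the initial basis is the slacks.
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n] = A
        T[:m, n:n + m] = np.eye(m)
        T[:m, -1] = b
        T[-1, :n] = -c                       # objective row
        basis = list(range(n, n + m))
        while True:
            j = int(np.argmin(T[-1, :-1]))   # entering column
            if T[-1, j] >= -1e-9:
                break                        # no improving edge: optimal
            col, rhs = T[:m, j], T[:m, -1]
            ratios = np.full(m, np.inf)
            pos = col > 1e-9
            ratios[pos] = rhs[pos] / col[pos]
            if np.all(np.isinf(ratios)):
                raise ValueError("unbounded edge")
            i = int(np.argmin(ratios))       # leaving row (min-ratio test)
            T[i] /= T[i, j]                  # pivot: walk to the adjacent vertex
            for r in range(m + 1):
                if r != i:
                    T[r] -= T[r, j] * T[i]
            basis[i] = j
        x = np.zeros(n + m)
        x[basis] = T[:m, -1]
        return x[:n]

    # max 3x0 + 2x1  s.t.  x0 + x1 <= 4,  x0 + 3x1 <= 6
    print(simplex(np.array([3.0, 2.0]),
                  np.array([[1.0, 1.0], [1.0, 3.0]]),
                  np.array([4.0, 6.0])))      # -> [4. 0.]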

Slide 11

Computational Complexity

Worst case: exponential
Average case: polynomial (smoothed analysis of perturbed problems)
In practice, usually tractable
Commercial software (e.g., CPLEX) can handle millions of variables and constraints!

Slide 12

Soft Constraints

[Figure: penalty as a function of dose for normal, sensitive, and tumor cells]

Slide 13

Soft Constraints

Auxiliary variable z_ijk: the penalty at each cell
z_ijk ≥ c (D_ijk − D_normal)
z_ijk ≥ 0

[Figure: penalty z_ijk as a piecewise-linear function of dose D_ijk, zero below D_normal and increasing linearly above it]

Slide 14

Soft Constraints

Auxiliary variable z_ijk: the penalty at each cell
z_ijk ≥ c (D_ijk − D_normal)
z_ijk ≥ 0

Introduce a term in the objective to minimize Σ_ijk z_ijk: because z_ijk is pushed down by the objective, at the optimum it equals max(0, c (D_ijk − D_normal))

[Figure: penalty z_ijk as a piecewise-linear function of dose D_ijk]
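
A toy version of this construction, assuming scipy: one beam power x, a hard dose floor on a tumor cell, and soft limits on two normal cells via auxiliary variables z:

    import numpy as np
    from scipy.optimize import linprog

    Bt, Bn = 2.0, np.array([1.2, 0.6])   # hypothetical per-cell dose per unit power
    D_min, D_normal, c_pen = 1.5, 0.5, 10.0

    # Variables [x, z0, z1]; minimize total dose plus the penalties.
    cost = np.array([Bt + Bn.sum(), 1.0, 1.0])
    A_ub = np.array([
        [-Bt,           0.0,  0.0],   # tumor:  Bt*x >= D_min (hard)
        [c_pen * Bn[0], -1.0, 0.0],   # z0 >= c_pen*(Bn[0]*x - D_normal)
        [c_pen * Bn[1], 0.0, -1.0],   # z1 >= c_pen*(Bn[1]*x - D_normal)
    ])
    b_ub = np.array([-D_min, c_pen * D_normal, c_pen * D_normal])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 1), (0, None), (0, None)])
    print(res.x)   # x = 0.75; z0 = 4.0 (cell 0 is 0.4 over the limit), z1 = 0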

Slide 15

Minimizing an Absolute Value

Absolute value:
min_x |x_1|
s.t.  Ax ≤ b
      Cx = d

Reformulated with an auxiliary objective variable v:
min_{v,x} v
s.t.  Ax ≤ b
      Cx = d
      x_1 ≤ v
      −x_1 ≤ v

The two added constraints enforce v ≥ |x_1|, and minimizing v drives it down to exactly |x_1|.

Slide 16

Minimizing an L1 or L∞ norm

L1 norm:
min_x ||Fx − g||_1
s.t.  Ax ≤ b
      Cx = d

L∞ norm:
min_x ||Fx − g||_∞
s.t.  Ax ≤ b
      Cx = d

[Figure: the feasible polytope projected through F, with target g and optimum Fx*, for each norm]

Slide 17

Minimizing an L1 or L∞ norm

L1 norm:
min_x ||Fx − g||_1
s.t.  Ax ≤ b
      Cx = d

Reformulated with an auxiliary error vector e:
min_{e,x} 1^T e
s.t.  Fx + Ie ≥ g
      Fx − Ie ≤ g
      Ax ≤ b
      Cx = d

Each e_i bounds the residual |F_i x − g_i| from above, so minimizing 1^T e minimizes the L1 norm.

[Figure: the feasible polytope projected through F, with target g, optimum Fx*, and error bounds e]
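
The same reformulation in code: an unconstrained L1 fit (dropping Ax ≤ b and Cx = d for brevity), assuming scipy, with F and g made up for the example:

    import numpy as np
    from scipy.optimize import linprog

    def l1_fit(F, g):
        """min_x ||Fx - g||_1 via  min 1^T e  s.t.  -e <= Fx - g <= e."""
        m, n = F.shape
        cost = np.concatenate([np.zeros(n), np.ones(m)])  # variables [x, e]
        A_ub = np.block([[F, -np.eye(m)],                 # Fx - e <= g
                         [-F, -np.eye(m)]])               # -(Fx - g) <= e
        b_ub = np.concatenate([g, -g])
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
        return res.x[:n]

    # Line fit with one gross outlier: the L1 solution shrugs it off.
    F = np.vander(np.linspace(0.0, 1.0, 20), 2)
    g = F @ np.array([2.0, -1.0])
    g[3] += 5.0
    print(l1_fit(F, g))   # close to [2, -1]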

Slide 18

Minimizing an L2 norm

L2 norm:
min_x ||Fx − g||_2
s.t.  Ax ≤ b
      Cx = d

Not a linear program! (minimizing it is equivalent to minimizing ||Fx − g||_2², which is quadratic in x)

[Figure: the feasible polytope projected through F; the optimum Fx* is the closest point to g and need not be a vertex]

Slide 19

Quadratic Programming

General form:

min_x  ½ x^T H x + g^T x + h
s.t.   Ax ≤ b
       Cx = d

Objective: quadratic form
Constraints: linear
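
A small QP instance in code, assuming the cvxpy modeling package is available (linprog cannot handle the quadratic objective):

    import numpy as np
    import cvxpy as cp

    # min 0.5 x^T H x + g^T x  s.t.  Ax <= b, on a made-up 2-D instance.
    H = np.array([[2.0, 0.0], [0.0, 2.0]])    # positive definite
    g = np.array([-2.0, -5.0])
    A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    b = np.array([2.0, 0.0, 0.0])

    x = cp.Variable(2)
    prob = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, H) + g @ x),
                      [A @ x <= b])
    prob.solve()
    print(x.value)   # (0.25, 1.75): on the facet x0 + x1 = 2, not at a vertex,
                     # because the unconstrained minimum -H^-1 g = (1, 2.5) is infeasible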

Slide 20

Quadratic programs

[Figure: feasible polytope with H positive definite; the unconstrained minimum lies at −H⁻¹g]

Slide 21

Quadratic programs

Optimum can lie off of a vertex!

[Figure: H positive definite; the unconstrained minimum −H⁻¹g is infeasible, and the constrained optimum lies on a facet rather than at a vertex]

Slide 22

Quadratic programs

[Figure: feasible polytope with H negative definite; the minimum of the concave objective is attained at a vertex]

Slide 23

Quadratic programs

[Figure: feasible polytope with H positive semidefinite; the objective is flat along directions in the null space of H]

Slide 24

Simplex Algorithm for QPs

Start from a vertex of the feasible polytope
"Walk" along polytope facets, decreasing the objective on each step
Stop when the facet is unbounded or no improvement can be made

Facet: defined by m ≤ n active constraints
m = n: vertex
m = n − 1: line (edge)
m = 1: hyperplane
m = 0: entire space

Slide 25

Active Set Method

Active inequalities S = (i_1, …, i_m)
Constraints a_{i_1}^T x = b_{i_1}, …, a_{i_m}^T x = b_{i_m}, written as A_S x − b_S = 0

Objective ½ x^T H x + g^T x + f, with Lagrange multipliers λ = (λ_1, …, λ_m):
H x + g + A_S^T λ = 0
A_S x − b_S = 0

Solve the linear (KKT) system:

[ H    A_S^T ] [ x ]   [ −g  ]
[ A_S    0   ] [ λ ] = [ b_S ]

If x violates a constraint not in S, add it
If λ_k < 0, then drop i_k from S
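
The saddle-point solve at the heart of each iteration, as a numpy sketch for a hypothetical working set of one constraint (the same 2-D instance as the QP sketch above):

    import numpy as np

    def active_set_step(H, g, A_S, b_S):
        """Minimize 0.5 x^T H x + g^T x subject to A_S x = b_S via the KKT system."""
        m = A_S.shape[0]
        K = np.block([[H, A_S.T],
                      [A_S, np.zeros((m, m))]])
        rhs = np.concatenate([-g, b_S])
        sol = np.linalg.solve(K, rhs)
        return sol[:-m], sol[-m:]            # x and the multipliers lambda

    H = np.array([[2.0, 0.0], [0.0, 2.0]])
    g = np.array([-2.0, -5.0])
    A_S = np.array([[1.0, 1.0]])             # working set: x0 + x1 = 2 active
    b_S = np.array([2.0])
    x, lam = active_set_step(H, g, A_S, b_S)
    print(x, lam)   # x = (0.25, 1.75), lambda = 1.5 >= 0: keep the constraint active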

Slide 26

Properties of active set methods for QPs

Inherits properties of the simplex algorithm
Worst case: exponential number of facets
Positive definite H: polynomial time in the typical case
Indefinite or negative definite H: can take exponential time! (nonconvex QP is NP-hard)

Slide 27

Applying QPs to Nonlinear Programs

Recall: we could convert an equality-constrained optimization into an unconstrained one and use Newton's method

Each Newton step:
Fits a quadratic form to the objective
Fits hyperplanes to each equality constraint
Solves for a search direction (Δx, Δλ) using the linear equality-constrained optimization

How about inequalities?

Slide 28

Sequential Quadratic Programming

Idea: fit a half-space constraint to each inequality:
g(x) ≤ 0 becomes g(x_t) + ∇g(x_t)^T (x − x_t) ≤ 0

[Figure: the nonlinear constraint g(x) ≤ 0 and its linearization at the current iterate x_t]

Slide 29

Sequential Quadratic Programming

Given the nonlinear minimization
min_x f(x)
s.t.  g_i(x) ≤ 0 for i = 1, …, m
      h_j(x) = 0 for j = 1, …, p

at each step x_t, solve the QP
min_{Δx}  ½ Δx^T ∇²_x L(x_t, λ_t, μ_t) Δx + ∇_x L(x_t, λ_t, μ_t)^T Δx
s.t.  g_i(x_t) + ∇g_i(x_t)^T Δx ≤ 0 for i = 1, …, m
      h_j(x_t) + ∇h_j(x_t)^T Δx = 0 for j = 1, …, p

to derive the search direction Δx. The multiplier directions Δλ and Δμ are taken from the QP multipliers.

Slide 30

Illustration

[Figure: iterate x_t with the linearized constraint g(x_t) + ∇g(x_t)^T (x − x_t) ≤ 0 and step Δx]

Slide 31

Illustration

[Figure: next iterate x_{t+1} with the constraint re-linearized as g(x_{t+1}) + ∇g(x_{t+1})^T (x − x_{t+1}) ≤ 0]

Slide 32

Illustration

[Figure: iterate x_{t+2} with the constraint re-linearized as g(x_{t+2}) + ∇g(x_{t+2})^T (x − x_{t+2}) ≤ 0]

Slide 33

SQP Properties

With no constraints, equivalent to Newton's method
With only equality constraints, equivalent to Newton root-finding on the Lagrangian optimality conditions

Subtle implementation details:
Does the endpoint need to be strictly feasible, or just feasible up to a tolerance?
How to perform a line search in the presence of inequalities?

Implementation available in Matlab. FORTRAN packages too =(