
Lecture 13: L1, L∞ - PowerPoint Presentation

Uploaded by @SocialButterfly on 2022-08-02




Presentation Transcript

Slide1

Lecture 13

L1, L∞ Norm Problems and Linear Programming

Slide2

Syllabus

Lecture 01 Describing Inverse Problems

Lecture 02 Probability and Measurement Error, Part 1

Lecture 03 Probability and Measurement Error, Part 2

Lecture 04 The L2 Norm and Simple Least Squares

Lecture 05 A Priori Information and Weighted Least Squares

Lecture 06 Resolution and Generalized Inverses

Lecture 07 Backus-Gilbert Inverse and the Trade Off of Resolution and Variance

Lecture 08 The Principle of Maximum Likelihood

Lecture 09 Inexact Theories

Lecture 10 Nonuniqueness and Localized Averages

Lecture 11 Vector Spaces and Singular Value Decomposition

Lecture 12 Equality and Inequality Constraints

Lecture 13 L1, L∞ Norm Problems and Linear Programming

Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches

Lecture 15 Nonlinear Problems: Newton’s Method

Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals

Lecture 17 Factor Analysis

Lecture 18 Varimax Factors, Empirical Orthogonal Functions

Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon’s Problem

Lecture 20 Linear Operators and Their Adjoints

Lecture 21 Fréchet Derivatives

Lecture 22 Exemplary Inverse Problems, incl. Filter Design

Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location

Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

Slide3

Purpose of the Lecture

Review Material on Outliers and Long-Tailed Distributions

Derive the L1 estimate of the mean and variance of an exponential distribution

Solve the Linear Inverse Problem under the L1 norm by Transformation to a Linear Programming Problem

Do the same for the L∞ problem

Slide4

Part 1

Review Material on Outliers and Long-Tailed Distributions

Slide5

Review of the Ln family of norms

Slide6

higher norms give increasing weight to the largest element of e
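This behavior is easy to check numerically; a small Python sketch (an illustration only, not from the lecture; the error vector is made up):

```python
import numpy as np

def ln_norm(e, n):
    # L-n norm: (sum_i |e_i|^n)^(1/n)
    return np.sum(np.abs(e) ** n) ** (1.0 / n)

e = np.array([1.0, 2.0, 10.0])      # error vector with one large element
norms = {n: ln_norm(e, n) for n in (1, 2, 10)}
linf = np.max(np.abs(e))            # limiting case, n -> infinity

# as n increases, the norm is dominated by the largest |e_i|
print(norms, linf)
```

For this vector the L1 norm is 13, the L2 norm is about 10.25, and by n = 10 the norm is already within a tiny fraction of the largest element, 10.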

Slide7

limiting case

Slide8

but which norm to use?

it makes a difference!

Slide9

[Figure: fits to data containing an outlier under the L1, L2, and L∞ norms]

Slide10

Answer is related to the distribution of the error. Are outliers common or rare?

A) long tails: outliers common, outliers unimportant; use a low norm (gives low weight to outliers)

B) short tails: outliers uncommon, outliers important; use a high norm (gives high weight to outliers)

Slide11

as we showed previously …

use the L2 norm when the data has Gaussian-distributed error

as we will show in a moment …

use the L1 norm when the data has exponentially-distributed error

Slide12

[Figure: comparison of the Gaussian and exponential p.d.f.’s]

Slide13

to make realizations of an exponentially-distributed random variable in MatLab:

mu = sd/sqrt(2);
rsign = (2*(random('unid',2,Nr,1)-1)-1);
dr = dbar + rsign .* ...
    random('exponential',mu,Nr,1);
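The same recipe can be sketched in Python/NumPy for comparison (an illustration, not from the lecture; the seed, mean, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_realizations(dbar, sd, Nr):
    # two-sided exponential: a random sign times a one-sided exponential,
    # scaled so the result has mean dbar and standard deviation sd
    mu = sd / np.sqrt(2.0)                  # scale of each one-sided tail
    rsign = 2 * rng.integers(1, 3, Nr) - 3  # random +1 or -1
    return dbar + rsign * rng.exponential(mu, Nr)

dr = exponential_realizations(5.0, 1.0, 100000)
print(dr.mean(), dr.std())                  # approximately 5.0 and 1.0
```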

Slide14

Part 2

Derive the L1 estimate of the mean and variance of an exponential distribution

Slide15

use of the Principle of Maximum Likelihood

maximize L = log p(dobs)

the log-probability that the observed data was in fact observed

with respect to unknown parameters in the p.d.f., e.g. its mean m1 and variance σ2

Slide16

Previous Example: the Gaussian p.d.f.

Slide17

solving the two equations

Slide18

solving the two equations

the usual formula for the sample mean

almost the usual formula for the sample standard deviation

Slide19

New Example: the Exponential p.d.f.

Slide20

solving the two equations

m1^est = median(d) and σ^est = (√2/N) Σi |di − m1^est|

Slide21

solving the two equations

m1^est = median(d) and σ^est = (√2/N) Σi |di − m1^est|

more robust than the sample mean, since an outlier moves it only by “one data point”
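A quick numerical illustration of this robustness (a Python sketch with made-up data; the σ formula is the maximum-likelihood result for the two-sided exponential p.d.f.):

```python
import numpy as np

d = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # last datum is an outlier

m1_est = np.median(d)     # L1 (maximum-likelihood) estimate: 3.0
mean_est = np.mean(d)     # L2 estimate, dragged to 22.0 by the outlier

# ML estimate of sigma for the two-sided exponential: (sqrt(2)/N) sum |d_i - m1|
sigma_est = (np.sqrt(2.0) / len(d)) * np.sum(np.abs(d - m1_est))
print(m1_est, mean_est, sigma_est)
```

Replacing 100.0 by any larger value leaves the median at 3.0; the mean grows without bound.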

Slide22

[Figure: error E(m) versus m for three cases (A), (B), (C), with the estimate m^est marked in each]

Slide23

observations

When the number of data is even, the solution is non-unique but bounded

The solution exactly satisfies one of the data

these properties carry over to the general linear problem:

In certain cases, the solution can be non-unique but bounded

The solution exactly satisfies M of the data equations

Slide24

Part 3

Solve the Linear Inverse Problem under the L1 norm by Transformation to a Linear Programming Problem

Slide25

the Linear Programming problem (review)

Slide26

Case A

The Minimum L1 Length Solution

Slide27

minimize the L1 solution length, subject to the constraint Gm = d

Slide28

minimize the weighted L1 solution length (weighted by σm^-1)

subject to the constraint Gm = d (the usual data equations)

Slide29

transformation to an equivalent linear programming problem

Slide30

Slide31

all variables are required to be positive

Slide32

usual data equations, with m = m′ − m″

Slide33

“slack variables”: a standard trick in linear programming to allow m to have any sign while m′ and m″ are non-negative
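The sign-splitting trick itself is easy to demonstrate (a Python sketch; the vector m is made up):

```python
import numpy as np

m = np.array([-2.0, 0.5, 0.0, 3.0])

# split m into two non-negative vectors with m = mp - mpp
mp  = np.maximum(m, 0.0)    # keeps the positive components
mpp = np.maximum(-m, 0.0)   # keeps the magnitudes of the negative components

print(mp, mpp, mp - mpp)    # mp - mpp recovers m
```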

Slide34

same as

Slide35

then α ≥ (m − <m>), since x ≥ 0

if +: can always be satisfied by choosing an appropriate x

Slide36

if −: can always be satisfied by choosing an appropriate x

then α ≥ −(m − <m>), since x ≥ 0

Slide37

taken together:

then α ≥ |m − <m>|

Slide38

minimizing z is the same as minimizing the weighted solution length

Slide39

Case B

The Least L1 Error Solution

(analogous to least squares)

Slide40

transformation to an equivalent linear programming problem

Slide41

Slide42

same as

α − x = Gm − d

α − x′ = −(Gm − d)

so the previous argument applies

Slide43

MatLab

% variables
% m = mp - mpp
% x = [mp', mpp', alpha', x', xp']'
% mp, mpp len M and alpha, x, xp, len N
L = 2*M+3*N;
x = zeros(L,1);
f = zeros(L,1);
f(2*M+1:2*M+N) = 1./sd;

Slide44

% equality constraints
Aeq = zeros(2*N,L);
beq = zeros(2*N,1);
% first equation G(mp-mpp)+x-alpha=d
Aeq(1:N,1:M) = G;
Aeq(1:N,M+1:2*M) = -G;
Aeq(1:N,2*M+1:2*M+N) = -eye(N,N);
Aeq(1:N,2*M+N+1:2*M+2*N) = eye(N,N);
beq(1:N) = dobs;
% second equation G(mp-mpp)-xp+alpha=d
Aeq(N+1:2*N,1:M) = G;
Aeq(N+1:2*N,M+1:2*M) = -G;
Aeq(N+1:2*N,2*M+1:2*M+N) = eye(N,N);
Aeq(N+1:2*N,2*M+2*N+1:2*M+3*N) = -eye(N,N);
beq(N+1:2*N) = dobs;

Slide45

% inequality constraints A x <= b
% part 1: everything positive
A = zeros(L+2*M,L);
b = zeros(L+2*M,1);
A(1:L,:) = -eye(L,L);
b(1:L) = zeros(L,1);
% part 2: mp and mpp have an upper bound
A(L+1:L+2*M,:) = eye(2*M,L);
mls = (G'*G)\(G'*dobs); % L2 solution
mupperbound = 10*max(abs(mls));
b(L+1:L+2*M) = mupperbound;

Slide46

% solve linear programming problem
[x, fmin] = linprog(f,A,b,Aeq,beq);
fmin = -fmin;
mest = x(1:M) - x(M+1:2*M);
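The same construction can be sketched in Python with scipy.optimize.linprog (an illustration, not the lecture's code; the straight-line data, unit σd, and the use of linprog's default non-negativity bounds in place of the explicit positivity constraints are my assumptions):

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical straight-line problem d = m1 + m2*t; the last datum is an outlier
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
dobs = np.array([1.0, 2.0, 3.0, 4.0, 20.0])
G = np.column_stack([np.ones_like(t), t])
N, M = G.shape
sd = np.ones(N)                       # unit data standard deviations

# variable vector: [mp (M), mpp (M), alpha (N), x (N), xp (N)], all >= 0
L = 2*M + 3*N
f = np.zeros(L)
f[2*M:2*M+N] = 1.0/sd                 # minimize sum_i alpha_i / sd_i

Aeq = np.zeros((2*N, L))
# first equation: G(mp-mpp) + x - alpha = d
Aeq[:N, :M], Aeq[:N, M:2*M] = G, -G
Aeq[:N, 2*M:2*M+N] = -np.eye(N)
Aeq[:N, 2*M+N:2*M+2*N] = np.eye(N)
# second equation: G(mp-mpp) - xp + alpha = d
Aeq[N:, :M], Aeq[N:, M:2*M] = G, -G
Aeq[N:, 2*M:2*M+N] = np.eye(N)
Aeq[N:, 2*M+2*N:] = -np.eye(N)
beq = np.concatenate([dobs, dobs])

# linprog's default bounds already force every variable to be >= 0
res = linprog(f, A_eq=Aeq, b_eq=beq)
mest = res.x[:M] - res.x[M:2*M]
print(mest)   # the L1 fit stays at [1, 1] despite the outlier
```

The first four points lie exactly on the line 1 + t, so the L1 solution satisfies M = 2 (in fact four) of the data equations and leaves the whole misfit, 15, on the outlier.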

Slide47

[Figure: data di versus zi, with the outlier marked]

Slide48

the mixed-determined problem of minimizing L+E can also be solved via transformation, but we omit it here

Slide49

Part 4

Solve the Linear Inverse Problem under the L∞ norm by Transformation to a Linear Programming Problem

Slide50

we’re going to skip all the details and just show the transformation for the overdetermined case

Slide51

minimize E = maxi ( |ei| / σdi )

where e = dobs − Gm

Slide52

note: α is a scalar
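The scalar-α idea can equivalently be written with inequality constraints; a Python sketch with scipy.optimize.linprog (the data and this particular inequality formulation are my own illustration, not the lecture's transformation):

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical data for a straight-line L-infinity (Chebyshev) fit
t = np.array([0.0, 1.0, 2.0, 3.0])
dobs = np.array([0.0, 1.0, 2.0, 4.0])
G = np.column_stack([np.ones_like(t), t])
N, M = G.shape
sd = np.ones(N)

# variables: [m (M components, unbounded), alpha (one scalar, >= 0)]
f = np.zeros(M + 1)
f[-1] = 1.0                                    # minimize alpha

# |(Gm - d)_i| / sd_i <= alpha, written as two inequality rows per datum
A = np.vstack([np.hstack([ G, -sd[:, None]]),
               np.hstack([-G, -sd[:, None]])])
b = np.concatenate([dobs, -dobs])

bounds = [(None, None)]*M + [(0, None)]
res = linprog(f, A_ub=A, b_ub=b, bounds=bounds)
mest, alpha = res.x[:M], res.x[-1]
print(mest, alpha)   # alpha is the minimized largest weighted residual
```

For these four points the optimal line equioscillates: three residuals reach the same magnitude α = 1/3 with alternating signs.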

Slide53

Slide54

[Figure: data di versus zi, with the outlier marked]