Presentation Transcript

Slide 1

Lecture 1

Describing Inverse Problems

Slide 2

Syllabus

Lecture 01 Describing Inverse Problems

Lecture 02 Probability and Measurement Error, Part 1

Lecture 03 Probability and Measurement Error, Part 2

Lecture 04 The L2 Norm and Simple Least Squares

Lecture 05 A Priori Information and Weighted Least Squares

Lecture 06 Resolution and Generalized Inverses

Lecture 07 Backus-Gilbert Inverse and the Trade Off of Resolution and Variance

Lecture 08 The Principle of Maximum Likelihood

Lecture 09 Inexact Theories

Lecture 10 Nonuniqueness and Localized Averages

Lecture 11 Vector Spaces and Singular Value Decomposition

Lecture 12 Equality and Inequality Constraints

Lecture 13 L1, L∞ Norm Problems and Linear Programming

Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches

Lecture 15 Nonlinear Problems: Newton’s Method

Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals

Lecture 17 Factor Analysis

Lecture 18 Varimax Factors, Empirical Orthogonal Functions

Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon’s Problem

Lecture 20 Linear Operators and Their Adjoints

Lecture 21 Fréchet Derivatives

Lecture 22 Exemplary Inverse Problems, incl. Filter Design

Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location

Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

Slide 3

Purpose of the Lecture

distinguish forward and inverse problems

categorize inverse problems

examine a few examples

enumerate different kinds of solutions to inverse problems

Slide 4

Part 1

Lingo for discussing the relationship between observations and the things that we want to learn from them

Slide 5

three important definitions

Slide 6

things that are measured in an experiment or observed in nature …
data, d = [d1, d2, …, dN]T

things you want to know about the world …
model parameters, m = [m1, m2, …, mM]T

relationship between data and model parameters
quantitative model (or theory)

Slide 7

data, d = [d1, d2, …, dN]T: e.g. gravitational accelerations, travel time of seismic waves

model parameters, m = [m1, m2, …, mM]T: e.g. density, seismic velocity

quantitative model (or theory): e.g. Newton’s law of gravity, the seismic wave equation

Slide 8

Forward Theory: estimates m^est → Quantitative Model → predictions d^pre

Inverse Theory: observations d^obs → Quantitative Model → estimates m^est

Slide 9

m^true → Quantitative Model → d^pre
d^obs → Quantitative Model → m^est

d^obs differs from d^pre due to observational error

Slide 10

m^true → Quantitative Model → d^pre
d^obs → Quantitative Model → m^est

d^obs differs from d^pre due to observational error
m^est differs from m^true due to error propagation

Slide 11

Understanding the effects of observational error is central to Inverse Theory

Slide 12

Part 2

types of quantitative models (or theories)

Slide 13

A. Implicit Theory

the L relationships between the data and the model are known

Slide 14

Example

mass = density ⨉ length ⨉ width ⨉ height

M = ρ ⨉ L ⨉ W ⨉ H

(figure: a block of length L, width W, height H and density ρ)

Slide 15

weight = density ⨉ volume

measure: mass, d1; size, d2, d3, d4
want to know: density, m1

d1 = m1 ⨉ d2 ⨉ d3 ⨉ d4   or   d1 - m1 ⨉ d2 ⨉ d3 ⨉ d4 = 0

d = [d1, d2, d3, d4]T and N=4
m = [m1]T and M=1
f1(d, m) = 0 and L=1
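Since this implicit relationship is so simple, it can be rearranged and evaluated directly; a minimal MatLab sketch (the measured values are made-up numbers for illustration):

d = [6.0; 2.0; 1.5; 1.0];       % hypothetical measurements: mass d1 and dimensions d2, d3, d4
m1 = d(1) / (d(2)*d(3)*d(4));   % density implied by the relation d1 - m1*d2*d3*d4 = 0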

Slide 16

note

No guarantee that f(d, m) = 0 contains enough information for a unique estimate of m.
Determining whether or not there is enough is part of the inverse problem.

Slide 17

B. Explicit Theory

the equation can be arranged so that d is a function of m

L = N, one equation per datum

d = g(m) or d - g(m) = 0

Slide 18

Example

(figure: a rectangle of length L and height H)

Circumference = 2 ⨉ length + 2 ⨉ height
Area = length ⨉ height

Slide 19

Circumference = 2 ⨉ length + 2 ⨉ height:  C = 2L + 2H
Area = length ⨉ height:  A = LH

measure: C = d1, A = d2
want to know: L = m1, H = m2

d = [d1, d2]T and N=2
m = [m1, m2]T and M=2

d1 = 2m1 + 2m2
d2 = m1 ⨉ m2

d = g(m)
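A sketch of this explicit, nonlinear d = g(m) in MatLab (the model values are made up for illustration):

g = @(m) [2*m(1) + 2*m(2); m(1)*m(2)];   % d1 = 2m1 + 2m2 (circumference), d2 = m1*m2 (area)
m = [3; 2];                              % hypothetical length m1 and height m2
dpre = g(m);                             % predicted circumference and area, [10; 6]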

Slide 20

C. Linear Explicit Theory

the function g(m) is a matrix G times m

G has N rows and M columns

d = Gm

Slide 21

C. Linear Explicit Theory

the function g(m) is a matrix G times m

G has N rows and M columns

d = Gm, where G is called the “data kernel”

Slide 22

Example (a mixture of gold and quartz)

total mass = density of gold ⨉ volume of gold + density of quartz ⨉ volume of quartz
M = ρg ⨉ Vg + ρq ⨉ Vq

total volume = volume of gold + volume of quartz
V = Vg + Vq

Slide 23

M = ρg ⨉ Vg + ρq ⨉ Vq
V = Vg + Vq

measure: V = d1, M = d2
want to know: Vg = m1, Vq = m2
assume ρg, ρq known

d = [d1, d2]T and N=2
m = [m1, m2]T and M=2

d = G m with G = [ 1  1 ; ρg  ρq ], where ρg and ρq are known
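A minimal MatLab sketch of this linear theory; since G here is square (N = M = 2), the model can be recovered directly with the backslash operator (the densities are typical values and the data are made up for illustration):

rho_g = 19.3; rho_q = 2.65;    % assumed known densities of gold and quartz
G = [1, 1; rho_g, rho_q];      % d1 = V = m1 + m2 and d2 = M = rho_g*m1 + rho_q*m2
d = [1.0; 5.0];                % hypothetical measured total volume and total mass
m = G \ d;                     % estimated volumes [Vg; Vq]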

Slide 24

D. Linear Implicit Theory

the L relationships between the data and the model are linear

the matrix of coefficients has L rows and N+M columns

Slide 25

in all these examples m is discrete: discrete inverse theory

one could have a continuous m(x) instead: continuous inverse theory

Slide 26

in this course we will usually approximate a continuous m(x) as a discrete vector m

m = [m(Δx), m(2Δx), m(3Δx), …, m(MΔx)]T

but we will spend some time later in the course dealing with the continuous problem directly
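A minimal sketch of this discretization in MatLab (the spacing, the number of samples, and the particular continuous function are illustrative assumptions):

M  = 100;         % number of samples (assumed)
Dx = 0.1;         % sample spacing Δx (assumed)
x  = Dx*(1:M)';   % x = Δx, 2Δx, ..., MΔx
m  = exp(-x);     % a hypothetical continuous m(x), evaluated at the sample points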

Slide 27

Part 3

Some Examples

Slide 28

A. Fitting a straight line to data

(figure: temperature anomaly Ti (deg C) versus time t (calendar years))

T = a + bt

Slide 29

each data point is predicted by a straight line

Slide 30

matrix formulation

d = G m with M = 2
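A sketch of the corresponding data kernel in MatLab, paralleling the parabola code shown on a later slide (t is the column vector of observation times and N its length; the coefficients are illustrative values, not part of the lecture):

G = [ones(N,1), t];     % first column multiplies the intercept a, second multiplies the slope b
dpre = G * [1.0; 0.5];  % predicted temperatures T = a + b*t for hypothetical a = 1.0, b = 0.5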

Slide 31

B. Fitting a parabola

T = a + bt + ct2

Slide 32

each data point is predicted by a quadratic curve

Slide 33

matrix formulation

d = G m with M = 3

Slide 34

note the similarity between the straight-line and parabola formulations

Slide 35

in MatLab

G=[ones(N,1), t, t.^2];
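Given this data kernel, the forward problem d = Gm is a single matrix–vector multiply; a minimal sketch (the coefficients are made-up values for illustration):

m = [1.0; 0.5; -0.1];   % hypothetical coefficients [a; b; c]
dpre = G * m;           % predicted data: a + b*t + c*t.^2 at each observation time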

Slide 36

C. Acoustic Tomography

(figure: a 4⨉4 grid of pixels, numbered 1–16, each of size h ⨉ h, with sources S and receivers R at opposite ends of the rows and columns)

travel time = length ⨉ slowness

Slide 37

collect data along rows and columns

Slide 38

matrix formulation

d = G m with N = 8 and M = 16

Slide 39

In MatLab

G = zeros(N,M);
for i = [1:4]
for j = [1:4]
    % measurements over rows
    k = (i-1)*4 + j;
    G(i,k) = 1;
    % measurements over columns
    k = (j-1)*4 + i;
    G(i+4,k) = 1;
end
end
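A sketch of the forward calculation "travel time = length ⨉ slowness" using the G built above: if each ray crosses four pixels and the path length in every crossed pixel equals the pixel size h, then (h and the slowness values are illustrative assumptions):

h = 1.0;          % pixel size (assumed value)
m = ones(M,1);    % hypothetical slowness in each of the 16 pixels
d = h * (G * m);  % predicted travel time along each of the 8 rays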

Slide 40

D. X-ray Imaging

(figure: (A) an X-ray source S and receivers R1–R5, with beams passing through the body; (B) an image showing an enlarged lymph node)

Slide 41

theory

I = intensity of x-rays (data)
s = distance
c = absorption coefficient (model parameters)
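(The slide's equations are not reproduced in the transcript; the standard absorption relation being referred to, offered here as a hedged reconstruction, is dI/ds = -c I, so that along a beam I_end = I_start ⨉ exp( -∫ c ds ). The Taylor series and discrete pixel approximations on the following slides reduce this to the linear form d = Gm.)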

Slide 42

Taylor Series approximation

Slide 43

Taylor Series approximation

discrete pixel approximation

Slide 44

Taylor Series approximation

discrete pixel approximation

the element Gij is the length of beam i in pixel j

d = G m

Slide 45

matrix formulation

d = G m with M ≈ 10^6 and N ≈ 10^6

Slide 46

note that G is huge (10^6 ⨉ 10^6)

but it is sparse (mostly zero), since a beam passes through only a tiny fraction of the total number of pixels

Slide 47

in MatLab

G = spalloc( N, M, MAXNONZEROELEMENTS);
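Nonzero elements can then be assigned as each beam is traced through the pixels, and the sparse G multiplies a model vector in the usual way; a small self-contained sketch (sizes, indices, and values are placeholders, not from the lecture):

N = 5; M = 20;                 % small illustrative sizes (the real problem has ~1e6 of each)
G = spalloc(N, M, 3*N);        % room for a few nonzero elements per beam
G(1,4) = 2.5; G(1,5) = 1.1;    % hypothetical path lengths of beam 1 in pixels 4 and 5
m = ones(M,1);                 % hypothetical absorption coefficients
dpre = G * m;                  % sparse matrix-vector product; only nonzeros are stored and used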

Slide 48

E. Spectral Curve Fitting

Slide 49

single spectral peak

(figure: p(z) versus z for a single peak with area A, position f, and width c)

Slide 50

q spectral peaks, each a Lorentzian

d = g(m)
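A sketch of this nonlinear forward model d = g(m) in MatLab. The transcript does not reproduce the slide's formula, so the Lorentzian parameterization below (area A, position f, width c) is one common choice and should be treated as an assumption:

z = (0:0.1:10)';                        % hypothetical sample positions
m = [1.0; 3.0; 0.5; 0.5; 7.0; 1.0];     % two hypothetical peaks, stored as [A1; f1; c1; A2; f2; c2]
p = zeros(size(z));
for k = 1:numel(m)/3
    A = m(3*k-2); f = m(3*k-1); c = m(3*k);
    p = p + A*(c/(2*pi)) ./ ((z-f).^2 + (c/2)^2);   % one common Lorentzian form
end
d = p;                                  % predicted spectrum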

Slide 51

F. Factor Analysis

(figure: two sources, s1 and s2, each characterized by element abundances e1–e5, contribute to ocean sediment samples)

Slide 52

d = g(m)

Slide 53

Part 4

What kind of solution are we looking for?

Slide 54

A: Estimate of model parameters

meaning numerical values

m1 = 10.5
m2 = 7.2

Slide 55

But we really need confidence limits, too

m1 = 10.5 ± 0.2
m2 = 7.2 ± 0.1

or

m1 = 10.5 ± 22.3
m2 = 7.2 ± 9.1

completely different implications!

Slide 56

B: probability density functions

if p(m1) is simple, this is not so different from confidence intervals

Slide 57

(three example probability density functions p(m)):

“m is about 5, plus or minus 1.5”

“m is either about 3, plus or minus 1, or about 8, plus or minus 1, but that's less likely”

“we don't really know anything useful about m”

Slide 58

C: localized averages

A = 0.2 m9 + 0.6 m10 + 0.2 m11

might be better determined than any of m9, m10, or m11 individually
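In MatLab such a localized average is just an inner product of a weight vector with the model vector; a minimal sketch (M and the model values are made up for illustration):

M = 20; m = rand(M,1);           % hypothetical model vector
a = zeros(M,1);
a([9 10 11]) = [0.2 0.6 0.2];    % averaging weights centered on m10
A = a' * m;                      % localized average A = 0.2*m9 + 0.6*m10 + 0.2*m11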

Slide 59

Is this useful? Do we care about

A = 0.2 m9 + 0.6 m10 + 0.2 m11 ?

Maybe …

Slide 60

Suppose m is a discrete approximation of m(x)

(figure: m(x) versus x, with the samples m9, m10, m11 marked)

Slide 61

(figure: m(x) versus x, with the samples m9, m10, m11 near x10)

A = 0.2 m9 + 0.6 m10 + 0.2 m11

is a weighted average of m(x) in the vicinity of x10

Slide 62

(figure: m(x) versus x, showing the weights of the weighted average near x10)

the average is “localized” in the vicinity of x10

Slide 63

Localized averages mean:

we can’t determine m(x) at x = 10, but we can determine the average value of m(x) near x = 10

Such a localized average might very well be useful