
Slide 1

Environmental Data Analysis with MatLab, 2nd Edition

Lecture 7: Prior Information

Slide 2

SYLLABUS

Lecture 01 Using MatLab
Lecture 02 Looking At Data
Lecture 03 Probability and Measurement Error
Lecture 04 Multivariate Distributions
Lecture 05 Linear Models
Lecture 06 The Principle of Least Squares
Lecture 07 Prior Information
Lecture 08 Solving Generalized Least Squares Problems
Lecture 09 Fourier Series
Lecture 10 Complex Fourier Series
Lecture 11 Lessons Learned from the Fourier Transform
Lecture 12 Power Spectra
Lecture 13 Filter Theory
Lecture 14 Applications of Filters
Lecture 15 Factor Analysis
Lecture 16 Orthogonal Functions
Lecture 17 Covariance and Autocorrelation
Lecture 18 Cross-correlation
Lecture 19 Smoothing, Correlation and Spectra
Lecture 20 Coherence; Tapering and Spectral Analysis
Lecture 21 Interpolation
Lecture 22 Linear Approximations and Non Linear Least Squares
Lecture 23 Adaptable Approximations with Neural Networks
Lecture 24 Hypothesis Testing
Lecture 25 Hypothesis Testing continued; F-Tests
Lecture 26 Confidence Limits of Spectra, Bootstraps

Slide 3

Goals of the lecture

understand the advantages and limitations of supplementing observations with prior information

Slide 4

when least-squares fails

Slide 5

fitting of a straight line: cases where there is more than one solution

[Figure: plots of d versus x. With only one data point (x1, d1), the error E is exactly 0 for any line passing through that point. With all observations at the same x*, E is at its minimum for all lines passing through the point (x*, d*).]

Slide 6

when the determinant of [G^T G] is zero, that is, D = 0, [G^T G]^-1 is singular
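As a quick MatLab illustration of this condition (my own sketch, not from the slides): for the straight-line model di = m1 + m2*xi, the data kernel G has a column of ones and a column of the xi, so when every observation shares the same x the determinant of [G^T G] is exactly zero.

% straight-line fit di = m(1) + m(2)*xi; G has columns [1, xi]
x = 5*ones(4,1);         % four observations, all at the same x* = 5
G = [ones(4,1), x];
GTG = G'*G;
det(GTG)                 % returns 0: [G^T G] is singular, so no unique line exists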

Slide 7

[Figure: the N = 1 case, a single data point (x1, d1); E is exactly 0 for any line passing through that one point.]

Slide 8

[Figure: the case xi = x* for all observations; E is at its minimum for any line passing through the point (x*, d*).]

Slide 9

least-squares fails when the data do not uniquely determine the solution

Slide 10

if [G^T G]^-1 is singular, the least squares solution doesn't exist

if [G^T G]^-1 is almost singular, least squares is useless, because its variance is very large
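A rough illustration of the almost-singular case (again my own sketch, with an assumed measurement error): when the xi are nearly identical, [G^T G] is nearly singular, and the variance of the least-squares estimate, read from the diagonal of sigmad^2 * [G^T G]^-1, becomes enormous.

% nearly identical xi: [G^T G] is almost singular
x = 5 + 1e-6*randn(4,1);    % xi clustered very tightly around 5
G = [ones(4,1), x];
sigmad = 1;                 % assumed standard deviation of the measurement error
Cm = sigmad^2 * inv(G'*G);  % covariance of the least-squares estimate
diag(Cm)                    % variances of intercept and slope: huge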

Slide 11

guiding principle for avoiding failure

add information to the problem that guarantees that matrices like [G^T G] are never singular

such information is called prior information

Slide 12

examples of prior information

the density of soil will be around 1500 kg/m3, give or take 500 or so
chemical components sum to 100%
pollutant transport is subject to the diffusion equation
water in rivers always flows downhill

Slide 13

prior information

things we know about the solution, based on our knowledge and experience, but not directly based on data

Slide 14

simplest prior information

m is near some value, m-bar:  m ≈ m-bar, with covariance Cmp

Slide 15

use a Normal p.d.f. to represent prior information

Slide 16

[Figure: the prior p.d.f. pp(m) plotted over (m1, m2), both axes running from 0 to 40.]

prior information example: m1 = 10 ± 5 and m2 = 20 ± 5, with m1 and m2 uncorrelated
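A small MatLab sketch of this example (my reconstruction; the grid and variable names are mine): with m-bar = [10; 20] and uncorrelated standard deviations of 5, the prior covariance is Cmp = diag([25, 25]), and the prior p.d.f. can be evaluated over the (m1, m2) plane.

% prior information example: m1 = 10 ± 5, m2 = 20 ± 5, uncorrelated
mbar = [10; 20];
Cmp = diag([5^2, 5^2]);            % prior covariance matrix
m1 = (0:0.5:40)';  m2 = (0:0.5:40)';
ppm = zeros(length(m1), length(m2));
for i = 1:length(m1)
  for j = 1:length(m2)
    dm = [m1(i); m2(j)] - mbar;
    ppm(i,j) = exp(-0.5 * dm' * inv(Cmp) * dm);   % unnormalized Normal p.d.f.
  end
end
imagesc(m2, m1, ppm); xlabel('m2'); ylabel('m1'); % peak sits at m1 = 10, m2 = 20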

Slide 17

the Normal p.d.f. defines an “error in prior information”

individual errors are weighted by their certainty

Slide 18

linear prior information: Hm ≈ h-bar, with covariance Ch

Slide 19

example relevant to chemical constituents: H and h-bar encode the prior information that the components sum to 100%
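A minimal sketch of how H and h-bar might look in MatLab (the exact form here is my assumption, based on the “components sum to 100%” example given earlier): one row of ones in H, together with h-bar = 100, states that the M concentrations sum to 100%.

% prior information that M chemical concentrations sum to 100%
M = 4;              % number of chemical components (an example value)
H = ones(1, M);     % one row of linear prior information: H*m = 100
hbar = 100;
sigmah = 0.1;       % assumed uncertainty of the constraint
Ch = sigmah^2;      % covariance of this prior information (1 x 1 here)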

Slide 20

use a Normal p.d.f. to represent prior information

Slide 21

the Normal p.d.f. defines an “error in prior information”

individual errors are weighted by their certainty

Slide 22

so we can view this formula as a p.d.f. for the model parameters, m: since h is linearly related to m, pp(m) ∝ pp(h)

(Technically, the p.d.f.'s are only proportional when the Jacobian determinant is constant, which it is in this case.)

Slide 23

now suppose that we observe some data: d = dobs, with covariance Cd

Slide 24

use a Normal p.d.f. to represent the observations, d, with covariance Cd

Slide 25

now assume that the mean of the data is predicted by the model: d = Gm

Slide 26

represent the observations with a Normal p.d.f., p(d), with the mean of the data predicted by the model and the observations weighted by their certainty

Slide 27

the Normal p.d.f. defines an “error in data”, which is the weighted least-squares error

Slide 28

think of p(d) as a conditional p.d.f., p(d|m): the probability that a particular set of data values will be observed, given a particular choice of model parameters

Slide 29

[Figure: the conditional p.d.f. p(d|m) plotted over (m1, m2), both axes running from 0 to 40.]

example: one datum, two model parameters; the model is d1 = m1 - m2, and the one observation is d1obs = 0 ± 3
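A MatLab sketch of this example (my reconstruction, with my own variable names): with G = [1, -1], dobs = 0 and Cd = 3^2, the conditional p.d.f. p(d|m) can be evaluated over the same (m1, m2) grid as before; it forms a ridge along the line m1 = m2.

% one datum, two model parameters: d1 = m1 - m2, observed as 0 ± 3
G = [1, -1];
dobs = 0;
Cd = 3^2;
m1 = (0:0.5:40)';  m2 = (0:0.5:40)';
pdm = zeros(length(m1), length(m2));
for i = 1:length(m1)
  for j = 1:length(m2)
    e = dobs - G*[m1(i); m2(j)];               % prediction error
    pdm(i,j) = exp(-0.5 * e' * inv(Cd) * e);   % unnormalized p(d|m)
  end
end
imagesc(m2, m1, pdm); xlabel('m2'); ylabel('m1');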

Slide 30

now use Bayes theorem to update the prior information with the observations

Slide 31

ignore for a moment

Slide 32

Bayes Theorem in words

Slide 33

so the updated p.d.f. for the model parameters is the product of a data part and a prior information part

Slide 34

this p.d.f. defines a “total error”: the weighted least squares error in the data plus the weighted error in the prior information
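Since p(m|d) is proportional to exp(-ET/2), where ET is this total error, the sketch below writes it out in MatLab for the running example (my own variable names; the simple prior m ≈ m-bar is expressed here as linear prior information with H equal to the identity matrix and h-bar = m-bar):

% total error for the running example: prior m1 = 10 ± 5, m2 = 20 ± 5 (H = I),
% one datum d1 = m1 - m2 observed as 0 ± 3
G = [1, -1];   dobs = 0;        Cd = 3^2;
H = eye(2);    hbar = [10; 20]; Ch = 5^2*eye(2);
m = [12; 18];                            % a trial choice of model parameters
ed = dobs - G*m;                         % error in the data
eh = hbar - H*m;                         % error in the prior information
ET = ed'*inv(Cd)*ed + eh'*inv(Ch)*eh     % total error; p(m|d) ∝ exp(-ET/2)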

Slide 35

Generalized Principle of Least Squares

the best m^est is the one that minimizes the total error with respect to m, which is the same as the one that maximizes p(m|d) with respect to m

Slide 36

continuing the example …

[Figure: three panels over the (m1, m2) plane, each with axes running from 0 to 40. A) the prior p.d.f. pp(m); B) the conditional p.d.f. p(d|m); C) the updated p.d.f. p(m|d), whose peak marks the best estimate of the model parameters.]
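Continuing the example in MatLab (my reconstruction; the exact numbers in the original figure cannot be read from the transcript): evaluate the updated p.d.f. p(m|d) on the (m1, m2) grid and take its peak as the best estimate.

% updated p.d.f. p(m|d) for the running example, on an (m1, m2) grid
G = [1, -1];   dobs = 0;   Cd = 3^2;
mbar = [10; 20];           Cmp = 5^2*eye(2);
m1 = (0:0.5:40)';  m2 = (0:0.5:40)';
pmd = zeros(length(m1), length(m2));
for i = 1:length(m1)
  for j = 1:length(m2)
    m = [m1(i); m2(j)];
    ET = (dobs-G*m)'*inv(Cd)*(dobs-G*m) + (m-mbar)'*inv(Cmp)*(m-mbar);
    pmd(i,j) = exp(-0.5*ET);             % unnormalized p(m|d)
  end
end
[~, k] = max(pmd(:));                    % locate the peak
[i, j] = ind2sub(size(pmd), k);
mest = [m1(i); m2(j)]                    % best estimate of the model parameters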

Slide 37

generalized least squares: find the m that minimizes the total error

Slide 38

generalized least squares solution

the pattern is the same as ordinary least squares, but with more complicated matrices
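One common way to realize this pattern in MatLab (a sketch based on the running example; the weighting-and-stacking form used here is a standard one and may not match the slide's exact notation): weight the data equations by Cd^(-1/2) and the prior-information equations by Ch^(-1/2), stack them into a single matrix F and vector f, and solve the resulting ordinary least squares problem. Up to the grid resolution, the estimate agrees with the peak located in the previous sketch.

% generalized least squares for the running example: weight and stack the
% data equations (G, dobs, Cd) and the prior information (H, hbar, Ch),
% then solve with the ordinary least squares pattern
G = [1, -1];   dobs = 0;        Cd = 3^2;
H = eye(2);    hbar = [10; 20]; Ch = 5^2*eye(2);
F = [sqrtm(inv(Cd))*G; sqrtm(inv(Ch))*H];     % weighted, stacked data kernel
f = [sqrtm(inv(Cd))*dobs; sqrtm(inv(Ch))*hbar];
mest = (F'*F) \ (F'*f)                        % same pattern as [G^T G]^-1 [G^T d]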