GPS time series construction

Uploaded by bikershobbit, 2020-09-22


Presentation Transcript

Slide1

GPS time series construction

- Least Squares fitting
- Helmert transformations
- Free network solutions
- Covariance projection
- Practical demonstration

Slide2

Least Squares fitting

LS is a mathematical procedure for finding the best-fitting curve to a given set of points by minimizing the sum of the squares of the offsets ("the residuals") of the points from the curve.

In particular we focus on the linear least squares fitting technique, where the observables are linear functions of the unknown parameters (polynomial of degree one or less).

Here we will treat the following items:

- The functional model
- Design matrix
- Covariance matrix
- Variance factor

Slide3

The Functional Model

Given the n-vector of observables y and the m-vector of unknown parameters x, the Functional Model links the observables to the parameters:

y = Ax + e

where e describes the discrepancy between y and Ax. It models the random nature of the variability in the measurements (noise), therefore it should be 'zero' on average:

E(e) = 0

This system of equations is termed the Functional Model; it is defined once the Design Matrix A of order n x m is specified. Note: the functional model is linear in x!

Slide4

Example 1: the arithmetic mean M

Assume three observables (y1, y2, y3), with functional model y = M, i.e.:

yi = M + ei,  i = 1, 2, 3

or in matrix notation:

[y1]   [1]       [e1]
[y2] = [1] M  +  [e2]
[y3]   [1]       [e3]

Here the design matrix is A = [1 1 1]^T.
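A minimal NumPy sketch of the arithmetic mean posed as a least squares problem with a design matrix of ones (the observable values are invented for illustration):

```python
import numpy as np

# Least squares estimate of the arithmetic mean M from three observables.
# The design matrix A is a column of ones, as on the slide.
y = np.array([4.0, 5.0, 6.0])          # observables (y1, y2, y3), invented
A = np.ones((3, 1))                    # design matrix

# x_hat = (A^T A)^-1 A^T y -- this reduces to the arithmetic mean
M_hat = np.linalg.solve(A.T @ A, A.T @ y)
print(M_hat)  # [5.]
```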

Slide5

Example 2: the linear regression (straight line)

Assume three observables (y1, y2, y3) taken at times t1, t2, t3, with functional model y = rt + c, i.e.:

yi = r ti + c + ei,  i = 1, 2, 3

or in matrix notation:

[y1]   [t1 1]          [e1]
[y2] = [t2 1] [r c]^T + [e2]
[y3]   [t3 1]          [e3]

Here the design matrix is A = [[t1, 1], [t2, 1], [t3, 1]], with unknown parameters r (rate) and c (constant).

Slide6

Stochastic model & covariance matrix

The variability in the outcomes of measurements is modelled through the probability density function of y. In the case of the normal distribution, it is completely captured by its dispersion:

D(y) = Cy

Cy being the n x n variance-covariance matrix of the observables y. The matrix elements are:

Cy(i,j) = σij = E[(yi − E(yi))(yj − E(yj))]

where E denotes the expectation.

Slide7

We assume that the covariance matrix of the observables is already known; e.g. in 3 dimensions it happens to be:

     [σ1²  σ12  σ13]
Cy = [σ12  σ2²  σ23]
     [σ13  σ23  σ3²]

Cy is symmetric, i.e. σij = σji. The diagonal elements are the variances of each element yi: Cy(i,i) = σi². Thus only six elements define the whole matrix.
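A small sketch of building such a 3 x 3 covariance matrix from its six independent elements (all numbers invented for illustration):

```python
import numpy as np

# Build a 3x3 variance-covariance matrix from three variances (diagonal)
# and three covariances (off-diagonal), using sigma_ij = rho_ij*sigma_i*sigma_j.
s1, s2, s3 = 1.0, 2.0, 0.5            # standard deviations sigma_i (invented)
r12, r13, r23 = 0.3, -0.1, 0.2        # correlation coefficients rho_ij (invented)

Cy = np.array([
    [s1 * s1,      r12 * s1 * s2, r13 * s1 * s3],
    [r12 * s1 * s2, s2 * s2,      r23 * s2 * s3],
    [r13 * s1 * s3, r23 * s2 * s3, s3 * s3],
])

# Symmetric by construction; only six numbers define the whole matrix.
assert np.allclose(Cy, Cy.T)
```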

Slide8

Least squares solution

If the measurements have been collected, and the functional model and the stochastic model have been specified, then the unknown parameters can be estimated in a least-squares sense:

x̂ = (A^T Cy^-1 A)^-1 A^T Cy^-1 y

The estimate depends on the design matrix A, the covariance matrix Cy and the vector of observables y. The covariance matrix of the adjusted parameters is:

Cx = (A^T Cy^-1 A)^-1

Note: the errors of the unknown parameters depend on the design matrix A and the covariance matrix of the observables Cy, but do NOT depend on the observables y itself.
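The weighted least squares estimator and the parameter covariance above can be sketched directly in NumPy (data invented; a straight-line design matrix is assumed for the example):

```python
import numpy as np

# Weighted least squares: x_hat = (A^T Cy^-1 A)^-1 A^T Cy^-1 y
# and parameter covariance Cx = (A^T Cy^-1 A)^-1.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])           # invented observables
A = np.column_stack([t, np.ones_like(t)])    # design matrix for y = r t + c
Cy = np.diag([0.1, 0.1, 0.4, 0.1])           # third point is less precise

W = np.linalg.inv(Cy)                        # weight matrix Cy^-1
N = A.T @ W @ A                              # normal matrix
x_hat = np.linalg.solve(N, A.T @ W @ y)      # estimated (r, c)
Cx = np.linalg.inv(N)                        # covariance of the estimates

# Note: Cx depends on A and Cy only, not on the observables y.
```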

Slide9

Example 1: the arithmetic mean

Functional model: y = M
Design matrix: A = [1 1 1]^T
Stochastic model: Cy = σ² I (equally weighted observables)

The least squares estimator of the parameter M is:

M̂ = (y1 + y2 + y3) / 3

Slide10

Example 2: the linear regression

Functional model: y = rt + c
Design matrix: A = [[t1, 1], [t2, 1], [t3, 1]]
Stochastic model: Cy = σ² I (equally weighted observables)

The least squares estimator of the parameters r and c is:

[r̂, ĉ]^T = (A^T A)^-1 A^T y
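With equal weights the weight matrix cancels out of the estimator, so the regression reduces to the ordinary normal equations (data invented, lying exactly on y = 2t):

```python
import numpy as np

# Equally weighted linear regression y = r t + c via the normal equations.
# With Cy = sigma^2 I, the weight matrix cancels from the estimator.
t = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])               # invented data on the line y = 2t
A = np.column_stack([t, np.ones_like(t)])

r_hat, c_hat = np.linalg.solve(A.T @ A, A.T @ y)
print(r_hat, c_hat)   # 2.0 0.0
```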

Slide11

Goodness of fit

To evaluate the goodness of fit, it is useful to construct the chi-squared statistic:

χ² = (y − Ax̂)^T Cy^-1 (y − Ax̂)

n: number of observables
m: number of estimated parameters

Having a chi-squared distribution, its expected value equals the number of degrees of freedom, n − m. The reduced chi-squared statistic is defined as:

χ²_red = χ² / (n − m)
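The chi-squared and reduced chi-squared statistics can be sketched for a straight-line fit (data and the assumed observation variance are invented):

```python
import numpy as np

# chi2 = v^T Cy^-1 v with residuals v = y - A x_hat;
# reduced chi2 = chi2 / (n - m), expected ~1 for a good fit.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.1, 1.9, 3.2, 3.9])     # invented, roughly y = t
A = np.column_stack([t, np.ones_like(t)])
Cy = 0.01 * np.eye(len(y))                  # assumed observation variance

W = np.linalg.inv(Cy)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
v = y - A @ x_hat                           # residuals
chi2 = v @ W @ v
n, m = A.shape
chi2_red = chi2 / (n - m)
```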

Slide12

Goodness of fit

- χ²_red > 1 indicates that the fit has not fully captured the data (or that the error variance has been underestimated).
- χ²_red ≈ 1 indicates a good match between observations and fitted model, i.e. Gaussian residuals.
- χ²_red < 1 indicates that the model is 'over-fitting' the data: either the model is improperly fitting noise, or the error variance has been overestimated.

Slide13

Variance Factor σ0²

The absolute values of the standard errors of the observations (σi) may not be known, but their relative weights may be known, so that:

Cy = σ0² Q

with Q the known matrix of relative weights (cofactor matrix). The scale factor σ0² is termed the variance factor (VF). The VF does not affect the estimation of the results for the parameters or the residuals, but it does scale the covariance matrices of the results.

Slide14

Variance Factor σ0² (cont'd)

The VF is often used to scale the covariance matrix of estimated parameters and adjusted observations. This most often occurs when σ0² is poorly known, as is the case with new measurement techniques. It can be shown that the estimate of the VF equals:

σ̂0² = v^T Q^-1 v / (n − m)

where v = y − Ax̂ is the vector of residuals.
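A sketch of estimating the variance factor and using it to rescale the parameter covariance (invented data; equal relative weights assumed):

```python
import numpy as np

# Variance factor estimate: sigma0^2 = v^T Q^-1 v / (n - m),
# where Q holds the relative weights and v the residuals.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.2, 1.9, 3.1, 4.1])     # invented observables
A = np.column_stack([t, np.ones_like(t)])
Q = np.eye(len(y))                          # relative weights (equal here)

Wi = np.linalg.inv(Q)
N = A.T @ Wi @ A
x_hat = np.linalg.solve(N, A.T @ Wi @ y)
v = y - A @ x_hat
n, m = A.shape
sigma0_2 = (v @ Wi @ v) / (n - m)           # estimated variance factor

# The VF rescales the parameter covariance without changing the estimates.
Cx_scaled = sigma0_2 * np.linalg.inv(N)
```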

Slide15

Example: linear regression

Functional model: y = rt + c. For simulated observables, the LS estimates, the covariance of the estimates, the variance factor, and the scaled errors of the estimates are computed. [Figure: red shows the scaled uncertainty region for the straight line; green shows the unscaled uncertainty region.]

Slide16

Data editing

Data editing is important when some data points are far away from the sample mean (data outliers). The NEVE toolbox uses an automatic data editing criterion in which observations y are rejected when their residual exceeds k times its standard deviation, where k is the editing factor (default 3.5).
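A sketch of such a k-sigma editing rule. This is a guess at the criterion described above, not the actual NEVE implementation: here the residual scatter is estimated robustly (1.4826 x MAD) so that the outlier does not inflate its own rejection threshold:

```python
import numpy as np

# Reject observations whose residual (relative to the median residual)
# exceeds k times a robust estimate of the residual scatter.
def edit_outliers(t, y, k=3.5):
    A = np.column_stack([t, np.ones_like(t)])
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    v = y - A @ x_hat                       # residuals from a first fit
    med = np.median(v)
    mad = np.median(np.abs(v - med))        # robust scatter estimate
    return np.abs(v - med) <= k * 1.4826 * mad   # True = keep

t = np.arange(10.0)
y = 0.5 * t + 1.0
y[4] += 50.0                                # inject a gross outlier
keep = edit_outliers(t, y)                  # keep[4] should be False
```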

Slide17

Helmert transformations

The Helmert transformation is used in geodesy to transform a set of points (coordinates X) into another set of points (coordinates X') by a reference frame rotation, scaling and translation. Its building blocks are:

Translation vector: T
Rotation matrix: R
Scaling: the scale factor 1 + s

Slide18

Rotations in Euclidean space

A finite rotation ϑ about the x-axis is described by the matrix:

        [1     0      0   ]
Rx(ϑ) = [0   cos ϑ  sin ϑ ]
        [0  −sin ϑ  cos ϑ ]

The reference frame rotates counterclockwise by an angle ϑ, or equivalently the vector has rotated clockwise by the same angle. Note: in general, rotation matrices do not commute!

[Figure: vector X in the original frame and X' in the rotated frame, separated by the angle ϑ.]
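The non-commutativity of finite rotations can be checked numerically (frame-rotation sign convention as on the slide; the z-axis matrix is the analogous assumption):

```python
import numpy as np

# Finite frame rotations about different axes do not commute.
def Rx(a):  # frame rotation about the x-axis
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,  np.cos(a), np.sin(a)],
                     [0.0, -np.sin(a), np.cos(a)]])

def Rz(a):  # frame rotation about the z-axis (assumed analogous convention)
    return np.array([[ np.cos(a), np.sin(a), 0.0],
                     [-np.sin(a), np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

a = 0.5  # radians -- a large, finite angle
diff = np.max(np.abs(Rx(a) @ Rz(a) - Rz(a) @ Rx(a)))
print(diff)   # clearly non-zero for finite angles
```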

Slide19

Rotations in Euclidean space (cont'd)

In the case of infinitesimal angles, cos ϑ ≈ 1 and sin ϑ ≈ ϑ. Thus the rotation matrix becomes:

        [1    0   0]
Rx(ϑ) ≈ [0    1   ϑ]
        [0   −ϑ   1]

In this approximation the rotation matrices commute. This leads to an important simplification: the order in which infinitesimal rotations are applied does not influence the final result.
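The same numerical check for small angles shows the commutator shrinking quadratically with the angle, which is why infinitesimal rotations effectively commute:

```python
import numpy as np

# For small angles R ~ I + (antisymmetric part), and the commutator
# R1 R2 - R2 R1 is second order in the angles.
def Rx(a):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,  np.cos(a), np.sin(a)],
                     [0.0, -np.sin(a), np.cos(a)]])

def Rz(a):
    return np.array([[ np.cos(a), np.sin(a), 0.0],
                     [-np.sin(a), np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

comms = {}
for a in (1e-2, 1e-4):
    comms[a] = np.max(np.abs(Rx(a) @ Rz(a) - Rz(a) @ Rx(a)))
    print(a, comms[a])   # shrinks roughly as a**2
```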

Slide20

Helmert transformations cont'd

Transformation from one reference frame to another is traditionally accomplished with a 7-parameter Helmert transformation. The 7 parameters represent 3 translations, 3 rotations and 1 scale:

X' = T + (1 + s) R X

with T the translation vector, s the scale parameter, and R the rotation matrix built from the three (small) rotation angles.
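A sketch of applying a small-angle 7-parameter Helmert transformation. The parameter values and the sign convention of the rotation terms are illustrative assumptions:

```python
import numpy as np

# Small-angle 7-parameter Helmert transformation X' = T + (1+s) R X,
# with R ~ I plus an antisymmetric matrix of the three small rotations.
def helmert(X, T, s, rx, ry, rz):
    R = np.array([[1.0,  rz, -ry],
                  [-rz, 1.0,  rx],
                  [ ry, -rx, 1.0]])        # small-angle rotation matrix
    return T + (1.0 + s) * (X @ R.T)       # rows of X are station positions

X = np.array([[6378137.0, 0.0, 0.0],
              [0.0, 6378137.0, 0.0]])      # invented station coordinates (m)
T = np.array([0.01, -0.02, 0.005])         # invented translations (m)
Xp = helmert(X, T, s=1e-9, rx=1e-9, ry=2e-9, rz=-1e-9)
```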

Slide21

Estimating the Helmert parameters

Problem: we have two sets of station coordinates (X and X') and want to find the seven Helmert parameters that transform X into X' (translation + rotation + scale).

Functional model: the Helmert transformation equation may be rewritten as a linear system in the seven parameters, where the observables are the coordinate differences y = X' − X. The least squares estimates of the Helmert parameters then follow from the standard LS solution.
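The linearized estimation can be sketched as a round-trip test: synthesize coordinate differences from known parameters, then recover them by least squares. The column layout and rotation sign convention are illustrative assumptions:

```python
import numpy as np

# LS estimation of the 7 Helmert parameters from coordinate differences
# y = X' - X. For small rotations and scale the model is linear in
# theta = (Tx, Ty, Tz, s, rx, ry, rz).
def design_block(Xi):
    x, y, z = Xi
    # columns: Tx, Ty, Tz,  s,   rx,   ry,   rz
    return np.array([[1.0, 0.0, 0.0, x,  0.0, -z,   y],
                     [0.0, 1.0, 0.0, y,  z,   0.0, -x],
                     [0.0, 0.0, 1.0, z, -y,   x,   0.0]])

rng = np.random.default_rng(0)
X = rng.uniform(-6.4e6, 6.4e6, size=(10, 3))   # invented station positions (m)
theta_true = np.array([0.05, -0.03, 0.02,      # translations (m), invented
                       2e-7,                   # scale, invented
                       1e-6, -2e-6, 5e-7])     # small rotations (rad), invented

A = np.vstack([design_block(Xi) for Xi in X])
y = A @ theta_true                             # synthetic coordinate differences
theta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)   # recovered parameters
```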

Slide22

Free Network Solutions: notions

GPS data are also sensitive to the baseline length and geocentric radius, which are rotationally invariant quantities. Baselines (black) and radii (red) can be used to construct a rigid closed polyhedron with a center of mass well defined by the GPS observations. The differences between two geocentric radii, opening angles and distances from the geocenter can be examined without fixing the reference frame.

The satellite force model (Newton's law) defines the instantaneous inertial frame. Therefore GPS observations provide direct access to the origin at the Earth's center of mass and to the global scale factor (radio propagation model).

Slide23

Free Network Solutions: loose constraints

The absolute orientation of the polyhedron is poorly determined by the data itself. Without constraints the geodetic solution is ill-defined and cannot be inverted to obtain the station coordinates.

Suppose we apply very loose a priori constraints to the station coordinates by placing a 10-km a priori standard deviation on each station position. Mathematically speaking, the estimated station coordinates will have large errors, but all errors will be correlated so that rotationally invariant quantities (baseline length and geocentric radius) are still well determined. The intrinsic structure of the polyhedron is completely independent of its orientation... and the obtained solution is independent from external constraints!

Slide24

Covariance projection

It is useful to consider the coordinate errors of a free network solution as having a component due to internal geometry and a component due to external frame definition.

Problem: how can we compute a covariance matrix that represents the internal errors?

1. Find the functional relationship between the internal noise (e) and the coordinate noise.
2. Propagate the covariance matrix to find the relation between the "systematic" errors (frame noise) and the "internal" errors (coordinate noise).

Slide25

Covariance projection (cont'd)

Consider the functional model that rotates the coordinates into another frame. We know that the unknowns may be rearranged in a column vector θ, so that the coordinates x satisfy x = Hθ + e. The least squares estimator of θ is:

θ̂ = (H^T H)^-1 H^T x

whereas the least squares estimator of e (frame noise) is:

ê = x − Hθ̂,  or:  ê = P x  with  P = I − H (H^T H)^-1 H^T

Slide26

Propagation of covariance matrices: for ê = P x it follows:

C_ê = P C_x P^T

where C_x is the covariance matrix of the coordinates. This is called projecting the covariance matrix onto the space of errors (Blewitt et al. 1992). The projected covariance matrix is a formal computation of the geometric precision without us having to define a frame.
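A toy sketch of the projection. The "frame" design H here is an illustrative assumption (a common translation of a 1-D network of 5 stations), not the full 7-parameter Helmert design:

```python
import numpy as np

# Project a covariance matrix onto the space orthogonal to the frame
# parameters: C_proj = P C P^T with P = I - H (H^T H)^-1 H^T.
n = 5
H = np.ones((n, 1))                       # toy frame design: common shift
P = np.eye(n) - H @ np.linalg.inv(H.T @ H) @ H.T

rng = np.random.default_rng(1)
M = rng.normal(size=(n, n))
C = M @ M.T + n * np.eye(n)               # a loose, positive definite covariance

C_proj = P @ C @ P.T                      # projected ("geometric") covariance

# P is a projector: applying it twice changes nothing,
# and the frame direction H carries no projected variance.
assert np.allclose(P @ P, P)
```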

Slide27

We can write the original ("loosely constrained") covariance matrix for the coordinates as the sum of two contributions: a "geometric noise" term due to internal geometrical errors, and an external "frame noise" term which depends on the level of frame attachment. This shows explicitly that the coordinate covariance can be decomposed into internal and external parts.

Slide28

Time series noise 1/3

Several studies have demonstrated that daily GPS positions are temporally correlated and are not simply independent measurements. One source of noise in coordinate time series derives from random motions occurring at the connection with the ground. It is thought that geodetic monuments follow an approximately random walk process, i.e. a process in which the monument's displacement relative to an initial position is expected to grow as the square root of time.

Slide29

Time series noise 2/3

Noise processes are conveniently described in the frequency domain. In a log-log plot, the noise spectrum shows a general negative slope as frequency increases:

- for random walk noise the slope is −2,
- for an intermediate noise called flicker noise, the slope is −1,
- whereas for uncorrelated white noise the slope is 0.
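The random walk slope of −2 can be checked with a quick simulation: cumulatively summing white noise yields a random walk, whose periodogram should fall off roughly as f^-2 (this is an illustrative sketch, not a rigorous spectral estimator):

```python
import numpy as np

# A random walk (cumulative sum of white noise) has a power spectrum
# ~ f^-2, i.e. slope -2 in a log-log plot; white noise itself has slope 0.
rng = np.random.default_rng(42)
w = rng.normal(size=2**14)
x = np.cumsum(w)                          # random walk realization

# Crude periodogram and a log-log slope fit over mid frequencies.
X = np.fft.rfft(x)
f = np.fft.rfftfreq(x.size)
psd = np.abs(X) ** 2
sel = slice(10, 1000)                     # avoid DC and the noisy tail
slope, _ = np.polyfit(np.log(f[sel]), np.log(psd[sel]), 1)
print(slope)                              # should come out near -2
```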

Slide30

Time series noise 3/3

Different approaches have been proposed to cope with non-Gaussian noise (e.g. Zhang et al., 1997; Langbein & Johnson, 1997; Mao et al., 1999; Johnson & Agnew, 2000; Langbein, 2004; Williams et al., 2004; Beavan, 2005; Langbein, 2008). Mao et al. (1999) give an empirical expression to account for correlated noise in velocity error estimates, where:

- g is the number of measurements per year,
- T is the total time span in years,
- σw and σf are the magnitudes of white and flicker noise in mm,
- σrw is the random walk noise in mm/√yr,
- a and b are empirical constants.
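The white-noise-only piece of such velocity error expressions can be checked against the standard regression result: for N = gT equally spaced epochs, var(rate) = σw²/Σ(ti − t̄)² ≈ 12σw²/(gT³). This sketch covers only the white-noise term; the flicker and random walk terms of the full empirical expression are not reproduced here:

```python
import numpy as np

# Compare the exact rate standard error for evenly spaced white-noise data
# with the 12/(g T^3) approximation.
g, T = 365.0, 4.0                          # samples/year and span (invented)
sigma_w = 3.0                              # white noise magnitude in mm (invented)
t = np.linspace(0.0, T, int(g * T))

exact = sigma_w / np.sqrt(np.sum((t - t.mean()) ** 2))    # mm/yr
approx = sigma_w * np.sqrt(12.0 / (g * T ** 3))           # mm/yr
print(exact, approx)                       # nearly identical for large N
```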

Slide31

PPDS-toolbox (Project & Process Daily Solutions, ppds.m)

Here you can:
- perform daily Helmert transformations (snx projection)
- project the covariance matrix
- correct fiducial stations by constant offsets
- extract coordinates of a regional sub-network

Slide32

PPDS-toolbox: ViewTimeSeries.m

- Visualize projected time series
- Select fiducial stations
- Estimate offsets
- Set a new offset epoch
- Re-compute uploaded offsets
- Save offsets

Slide33

NEVE-toolbox (NEtwork VElocity): setup file

A comma-separated ASCII file containing all the instructions to estimate the velocity field. Each line gives the station name followed by the model terms to estimate (constant, rate, annual signal, and coordinate steps at given epochs), for example:

SITE, ---------BOR2, CONSTANT 080101, RATE, ANNUAL
SITE, 19515M001BORC, CONSTANT 080101, RATE, STEP C UEN 110530
SITE, ---------BORG, CONSTANT 080101, RATE, ANNUAL, STEP C UEN 100113
SITE, ---------BOSC, CONSTANT 080101, RATE, ANNUAL, STEP C UEN 070101, STEP C UEN 090101
SITE, ---------BOVE, CONSTANT 080101, RATE, STEP C UEN 080111

Slide34

NEVE-toolbox (NEtwork VElocity, neve.m): setup manager

- settings of model parameters
- input path: directory of input SINEX files
- time span: time series windowing
- editing: options for outlier detection
- output: output directory and plot types

Slide35

NEVE-toolbox (NEtwork VElocity): Functional Model

The Cartesian station coordinates k = 1, 2, 3 at time t are modelled as the sum of:
- a constant and a rate,
- the amplitude and phase of annual and/or semi-annual signals,
- offsets in the k-th coordinate, applied through the Heaviside step function.
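The terms above can be sketched for a single coordinate component. The function name and all parameter values are illustrative assumptions, not NEVE's actual interface:

```python
import numpy as np

# One-component coordinate model: constant + rate + annual signal +
# offsets applied through Heaviside steps at given epochs.
def coordinate_model(t, t0, x0, v, A_ann, phi_ann, offsets):
    """t, t0 in decimal years; offsets = [(epoch, size), ...]."""
    x = x0 + v * (t - t0) + A_ann * np.sin(2 * np.pi * (t - t0) + phi_ann)
    for epoch, size in offsets:
        x = x + size * np.heaviside(t - epoch, 1.0)   # step at the epoch
    return x

t = np.linspace(2008.0, 2012.0, 200)
x = coordinate_model(t, t0=2008.0, x0=0.0, v=2.5, A_ann=1.2, phi_ann=0.3,
                     offsets=[(2010.5, -4.0)])        # mm and mm/yr, invented
```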

Slide36

NEVE-toolbox: time series inspector (nevefigures.m)

The plot shows outliers (yellow bars), offsets (green bars), the fitted model (red curve) and the time span (magenta). Here you can:
- visualize absolute and residual time series
- visualize detrended and plate-reference time series
- set new offset bars and save them in the setup file