Presentation on theme: "Estimation & Value" — Presentation transcript

Slide1

Estimation & Value of Ambiguity in Ensemble Forecasts

Tony Eckel, National Weather Service Office of Science and Technology, Silver Spring, MD
Mark Allen, Air Force Weather Agency, Omaha, NE

Eckel, F.A., M.S. Allen, and M.C. Sittel, 2012: Estimation of ambiguity in ensemble forecasts. Weather and Forecasting, in press.
Allen, M.S. and F.A. Eckel, 2012: Value from ambiguity in ensemble forecasts. Weather and Forecasting, in press.

Slide2

Part I. Estimation of Ambiguity

Slide3

Ambiguity

Ambiguity -- 2nd-order uncertainty, or the uncertainty in a statement of uncertainty.

"Ambiguity is uncertainty about probability, created by missing information that is relevant and could be known."
-- Camerer and Weber (Journal of Risk and Uncertainty, 1992)

Risk: Probability of an unfavorable outcome from occurrence of a specific event.
Clear Risk: Probability (or uncertainty) of the event known precisely. EX: Betting at roulette
Ambiguous Risk: Probability (or uncertainty) of the event known vaguely. EX: Betting on a horse race

Slide4

Ambiguity in Ensemble Forecasts

[Figure: NCEP SREF 27-h forecast, 12Z, 8 Oct 2011, 2-m temperature (°F) near Spokane, WA. Panels: Ensemble Mean & Spread; Ensemble Standard Deviation (°F); Probability of Freezing @ Surface. Labeled values include temperatures of 28, 36, and 44 °F and freezing probabilities of 15%, 25%, and 35%.]

Slide5

Causes of Ambiguity
(the "...missing information that is relevant and could be known.")

1) Random PDF Error from Limited Sampling

[Figure: True forecast PDF vs. the ensemble's forecast PDF, estimated from the ensemble members, over wind speed 0-25 m/s.]

2) Random PDF Error from Ensemble Design Limitations
EX: Ensemble omits perturbations for soil moisture error
  1) Good IC and/or no error sensitivity → good forecast PDF
  2) Bad IC and/or high error sensitivity → underspread forecast PDF
Can't distinguish, so PDF error appears random.

Ambiguity → random error in the 1st-order uncertainty estimate

Not dependent on:
- How much 1st-order uncertainty exists
- Systematic error in the 1st-order uncertainty estimate, which 1 & 2 can also produce

Slide6

Shift & Stretch Calibration

STEP 1 (1st Moment Calibration): Shift each member (e_i) by a shift factor (opposite of the mean error in the ensemble mean) to correct for bias in PDF location, for i = 1...n members.

STEP 2 (2nd Moment Calibration): Stretch (compress) the shifted members about their mean by a stretch factor (inverse sqrt of the variance bias) to correct for small (large) ensemble spread, for i = 1...n members.

STEP 3 (Forecast): Use the adjusted members to calculate calibrated predictions (mean, spread, probability).

Slide7
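The two calibration steps above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the function name and the training statistics passed in (the mean error of the ensemble mean, and the variance bias, i.e. the ratio of ensemble variance to error variance) are assumptions.

```python
# Sketch of shift-and-stretch calibration of raw ensemble members e_i.
# mean_bias and var_bias are assumed to come from a training period.
from statistics import mean

def shift_and_stretch(members, mean_bias, var_bias):
    # STEP 1: shift every member opposite the mean error of the
    # ensemble mean, correcting the PDF's location.
    shifted = [e - mean_bias for e in members]
    # STEP 2: stretch (or compress) about the shifted mean by the
    # inverse square root of the variance bias, correcting the spread.
    m = mean(shifted)
    stretch = 1.0 / var_bias ** 0.5
    return [m + stretch * (e - m) for e in shifted]

# STEP 3: calibrated predictions come from the adjusted members.
raw = [10.0, 11.0, 12.0, 13.0, 14.0]
cal = shift_and_stretch(raw, mean_bias=2.0, var_bias=4.0)  # warm bias, overspread
print(cal)  # [9.0, 9.5, 10.0, 10.5, 11.0]
```

With a variance bias above 1 (overspread ensemble) the members are compressed about their mean; below 1, they are stretched.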

Raw vs. Conditionally Calibrated

[Figure: Reliability diagrams (observed relative frequency vs. forecast probability) with histograms of the number of forecasts (0 to 5.0E4).]

Raw: BSS = 0.764 (0.759...0.768), rel = 7.89E-4, res = 0.187, unc = 0.245
Conditionally Calibrated: BSS = 0.768 (0.764...0.772), rel = 4.20E-5, res = 0.188, unc = 0.245

Forecast Data:
- JMA 51-member ensemble, 12Z cycle
- 5-day, 2-m temperature, 1° × 1° over CONUS
- Independent: 1 – 31 Jan 2009
- Dependent: 15 Dec 2007 – 15 Feb 2008

Ground Truth:
- ECMWF global model analysis (0-h forecast)

Slide8
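The rel/res/unc values quoted with each reliability diagram are the terms of the standard Murphy decomposition of the Brier score (BS = rel - res + unc, and BSS = (res - rel)/unc against climatology). A minimal sketch of computing that decomposition from binned forecasts; the binning scheme and names here are my own, not the authors':

```python
# Murphy (1973) decomposition of the Brier score from binned forecasts.
def brier_decomposition(probs, outcomes, n_bins=10):
    n = len(probs)
    base_rate = sum(outcomes) / n            # sample climatology
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(probs, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, o))
    rel = res = 0.0
    for b in bins:
        if not b:
            continue
        n_k = len(b)
        p_k = sum(p for p, _ in b) / n_k     # mean forecast in bin
        o_k = sum(o for _, o in b) / n_k     # observed frequency in bin
        rel += n_k * (p_k - o_k) ** 2        # reliability (penalty)
        res += n_k * (o_k - base_rate) ** 2  # resolution (reward)
    rel, res = rel / n, res / n
    unc = base_rate * (1.0 - base_rate)      # uncertainty
    bss = (res - rel) / unc if unc > 0 else 0.0
    return rel, res, unc, bss
```

For perfectly sharp, perfectly reliable forecasts, rel = 0 and BSS = 1; the slide's calibrated ensemble mainly gains by shrinking rel.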

Estimating Ambiguity: Randomly Calibrated Resampling (RCR)
-- based on the bootstrap technique

1) From the original n members, generate the calibrated forecast probability (pe) for an event
2) Produce an alternative n members by sampling the originals with replacement
3) Apply random calibration (varied by the ensemble's error characteristics) to the resampled set, to account for the ensemble's insufficient simulation of uncertainty
4) Generate an alternative forecast probability for the event
Repeat 999 times to build a distribution of forecast probabilities.

[Figure: Frequency histograms of forecast probability (%), each with pe = 31.3%:
- Resampling Only: p5 = 23.7%, p95 = 37.3%
- RCR: p5 = 16.5% / 18.9% (both labeled), p95 = 44.2%
- CES: p5 = 17.0%, p95 = 46.5%]

Slide9
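The four RCR steps can be sketched as follows. Everything here is an illustrative assumption rather than the authors' exact implementation: the random-calibration spreads (shift_sd, stretch_sd) stand in for the ensemble's error characteristics described on the next slide, and a plain exceedance frequency stands in for the calibrated rank-method probability.

```python
# Bootstrap-based sketch of Randomly Calibrated Resampling (RCR).
import random
from statistics import mean

def event_probability(members, threshold):
    # Stand-in for the calibrated probability p_e of exceeding the threshold.
    return sum(m > threshold for m in members) / len(members)

def rcr_ambiguity(members, threshold, shift_sd=0.5, stretch_sd=0.1,
                  n_iter=999, seed=0):
    rng = random.Random(seed)
    probs = []
    for _ in range(n_iter):
        # 2) resample n members with replacement (bootstrap)
        alt = rng.choices(members, k=len(members))
        # 3) random calibration: random shift and stretch about the mean
        shift = rng.gauss(0.0, shift_sd)
        stretch = max(0.1, rng.gauss(1.0, stretch_sd))
        m = mean(alt)
        alt = [m + stretch * (x - m) + shift for x in alt]
        # 4) alternative forecast probability for the event
        probs.append(event_probability(alt, threshold))
    probs.sort()
    # 5th/95th percentiles bracket the ambiguity around p_e
    return probs[int(0.05 * n_iter)], probs[int(0.95 * n_iter)]

p5, p95 = rcr_ambiguity([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], threshold=5.5)
```

Resampling alone (steps 1-2, 4) gives the narrower "Resampling Only" interval; the random calibration in step 3 widens it toward the RCR interval in the figure.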

Solid: Raw JMA ensemble's error distributions, from which the error variance associated with random sampling of a 51-member ensemble (see below) is removed.
Dashed: Used to produce a random shift factor and stretch factor to randomly calibrate each of the 999 resampled forecast sets.

[Figure: Error distributions for 10, 20, 40, and 80 members -- (left) standardized error in ensemble mean; (right) fractional error in ensemble spread.]

The average error determines the primary shift factor and stretch factor.

Slide10

Part II. Application of Ambiguity

Slide11

Application of Ambiguity: Cost-Loss Decision Scenario

Cost (C) – Expense of taking protective action
Loss (L) – Expense of unprotected event occurrence
Probability (pe) – The risk, or chance of a bad-weather event

Take protective action whenever Risk > Risk Tolerance, or pe > C/L,
...since the expense of protecting is then less than the expected expense of getting caught unprotected: C < L·pe

[Figure: Ambiguity PDF (probability density vs. forecast probability, i.e. risk, from 0.0 to 1.0) with C/L = 0.35 (risk tolerance) marked. When the PDF lies well below C/L, the risk is acceptable; well above, too risky; when the PDF straddles C/L, the decision is unclear.]

But given ambiguity in the risk, the appropriate decision can be unclear.

Opposing Risk: Fraction of risk that goes against the normative decision.

Slide12
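The decision rule plus the opposing-risk idea can be sketched as below. The 30% "unclear" threshold and all names are illustrative assumptions, not settings from the experiment.

```python
# Cost-loss decision with an ambiguity check on the risk estimate.
def cost_loss_decision(ambiguity_samples, cost, loss, unclear_threshold=0.30):
    """ambiguity_samples: draws from the ambiguity PDF of p_e."""
    cl = cost / loss                                  # risk tolerance C/L
    n = len(ambiguity_samples)
    p_e = sum(ambiguity_samples) / n
    protect = p_e > cl                                # normative decision
    # Opposing risk: fraction of the ambiguity PDF on the other
    # side of C/L from the normative decision.
    if protect:
        opposing = sum(p <= cl for p in ambiguity_samples) / n
    else:
        opposing = sum(p > cl for p in ambiguity_samples) / n
    if opposing > unclear_threshold:
        return "unclear"
    return "protect" if protect else "do not protect"

print(cost_loss_decision([0.60, 0.62, 0.58, 0.65], cost=35, loss=100))
# risk clearly exceeds C/L = 0.35, so the decision is "protect"
```

When the ambiguity PDF straddles C/L, a large opposing risk flags the gray "decision unclear" zone in the figure, where a user may defensibly go either way.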

The Ulterior Motives Experiment

GOAL: Maintain primary value while improving 2nd-order criteria not considered in the primary risk analysis.

- Event: Freezing surface temperature
- 'User': A specific risk tolerance level (i.e., C/L value) at a specific location
- 2nd-Order Criterion: Keep users' trust by reducing repeat false alarms

Forecast Data:
- GFS ensemble forecast, 5-day, 2-m temperature, 1° × 1° over CONUS
- Independent: 12 UTC daily, 1 – 31 Jan 2009
- Dependent: 12 UTC daily, 15 Dec 2007 – 15 Feb 2008
- Ground Truth: ECMWF global model analysis (0-h forecast)

Black: Risk clearly exceeds tolerance → Prepare
White: Risk clearly acceptable → Do Not Prepare
Gray: Decision unclear → Preparation optional. Given the potential for a repeat false alarm, the user may go against the normative decision.

Slide13

8 User Behaviors

[Table: Behavior Name | Behavior Description]

Slide14

The 'Optimal' Behavior

[Figure (left): Threshold of Opposing Risk (%) vs. User C/L for the Ambiguity-Tolerant, Ambiguity-Sensitive, Backward, and Optimal behaviors.]
[Figure (right): Testing for C/L = 0.01 -- POD vs. the test value for the threshold of opposing risk, with the Control's POD marked.]

The lowest threshold that maintains POD gives the maximum number of chances to prevent a repeat false alarm.

Slide15

Measuring Primary Value

Value Score (or expense skill score):

VS = (Eclim - Efcst) / (Eclim - Eperf)

a = # of hits
b = # of false alarms
c = # of misses
d = # of correct rejections
Event base rate (climatological frequency) = (a+c) / (a+b+c+d)

Efcst = Expense from following the forecast
Eclim = Expense from following a climatological forecast
Eperf = Expense from following a perfect forecast

Slide16
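The value score can be sketched with the a/b/c/d contingency counts defined above, under a simple expense model: every protective action costs C, and every unprotected event costs L, with climatology choosing the cheaper of always or never protecting. The function name and the expense-model details are assumptions for illustration.

```python
# Value score (expense skill score) from contingency-table counts.
def value_score(a, b, c, d, C, L):
    n = a + b + c + d
    s = (a + c) / n                      # event base rate
    # Following the forecast: protect on hits and false alarms,
    # pay the loss on misses.
    e_fcst = (a + b) * C + c * L
    # Climatology: always protect if the base rate exceeds C/L,
    # otherwise never protect.
    e_clim = n * C if s > C / L else (a + c) * L
    # Perfect forecast: protect exactly when the event occurs.
    e_perf = (a + c) * C
    return (e_clim - e_fcst) / (e_clim - e_perf)

print(value_score(a=80, b=40, c=20, d=260, C=1, L=10))
```

VS = 1 for a perfect forecast and 0 for one no cheaper than climatology; the curves on the following slides plot this quantity across user C/L values.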

Measuring Primary Value

Deterministic – Normative decisions following GFS calibrated deterministic forecasts
Control – Normative decisions following GFS ensemble calibrated probability forecasts

[Figure: Value Score vs. User C/L for the Deterministic and Control strategies.]

Slide17

Losers (w.r.t. primary value)

[Figures: Value Score vs. User C/L for the Cynical and Fickle behaviors, each compared against the Control.]

Slide18

Marginal Performers (w.r.t. primary value)

[Figures: Value Score vs. User C/L for the Backward, Ambiguity-Sensitive, and Optimistic behaviors, each compared against the Control.]

Slide19

Winners (w.r.t. primary value)

[Figures: Value Score vs. User C/L for the Optimal and Ambiguity-Tolerant behaviors, each compared against the Control.]

Slide20

2nd-Order Value

[Figure: % Reduction in Repeat False Alarms vs. User C/L for the Optimal, Backward, Cynical, Fickle, Ambiguity-Sensitive, Ambiguity-Tolerant, and Optimistic behaviors.]

Slide21

Conclusions

- Ambiguity in ensemble forecasts can be effectively estimated.
- Users can benefit from ambiguity information through improvement of 2nd-order criteria, but that requires a lot of creativity.

Slide22

Backup Slides

Slide23

True Forecast PDF

True forecast PDF recipe for the current forecast cycle and lead time τ:
1) Look back through an infinite history of forecasts produced by the analysis/forecast system in a stable climate.
2) Pick out all instances with the same analysis (and resulting forecast) as the current forecast cycle. Note that each analysis, while the same, represents a different true initial state.
3) Pool all the different verifying true states at τ to construct the true distribution of possible states at time τ.

Analysis / Model → True Forecast PDF (at a specific τ):
- Perfect / Perfect: Only one possible true state, so the true PDF is a delta function.
- Perfect / Erred: While each matched analysis corresponds to only one true IC, the subsequent forecast can match many different true states due to grid averaging at τ = 0 and/or lack of diffeomorphism.
- Erred / Perfect: Each historical analysis match will correspond to a different true initial state, and a different true state at time τ.
- Erred / Erred: The combined effect creates a wider true PDF. An erred model also contributes to analysis error.

Perfect → exactly accurate (with infinite precision)
Erred → inaccurate, or accurate but discrete

Not absolute -- depends on uncertainty in the ICs and model (a better analysis/model means a sharper true forecast PDF).

Slide24

[Figure: (a) 1st Moment Bias Correction (°C) vs. Ensemble Mean (°C); (b) ln(2nd Moment Bias Correction) vs. ln(Ensemble Variance).]

Slide25

Forecast Probability (pe) by Rank Method

V: verification value
t: event threshold
n: number of members
xi: value of the ith member
G( ): Gumbel CDF
G'( ): reversed Gumbel CDF

- When t has a rank > 1 but < n: interpolate linearly between the two bracketing members.
- When t ranks above all members: extrapolate with the Gumbel CDF, G( ).
- When t ranks below all members: extrapolate with the reversed Gumbel CDF, G'( )...or, if x is positive definite, with an alternative form.

EX: TURB Fcsts (calibrated), sorted members: 5.8, 6.1, 7.3, 9.2, 9.8, 10.0, 10.1, 11.2, 13.8
With t = 9.0 (the MDT TURB threshold), pe is approximately 6/9 = 66.7%.
By the rank method: pe = 6/10 + [(9.2 - 9.0) / (9.2 - 7.3)] × 1/10 = 61.1%

Slide26
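The worked TURB example can be reproduced with a small routine implementing the interior-rank case (the Gumbel-tail extrapolation the slide mentions is omitted here, and the function name is my own):

```python
# Rank-method exceedance probability for a threshold t that falls
# strictly between the lowest and highest ensemble members.
def rank_method_pe(members, t):
    x = sorted(members)
    n = len(x)
    # index of the first member above the threshold
    # (assumes t is below the highest member)
    j = next(i for i, v in enumerate(x) if v > t)
    # members x[j:] exceed t, contributing (n - j)/(n + 1), plus a
    # linear share of the 1/(n + 1) bin that t falls inside
    frac = (x[j] - t) / (x[j] - x[j - 1])
    return (n - j) / (n + 1) + frac / (n + 1)

members = [5.8, 6.1, 7.3, 9.2, 9.8, 10.0, 10.1, 11.2, 13.8]
print(round(rank_method_pe(members, 9.0) * 100, 1))  # 61.1, as on the slide
```

The count-based estimate 6/9 = 66.7% ignores where t = 9.0 falls between the members 7.3 and 9.2; the rank method's interpolation reduces it to 61.1%.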

Estimating Ambiguity by CES (Calibrated Error Sampling)

Random error (i.e., ambiguity) in pe is tied to random error in any moment of the forecast PDF. A pe error for any value of the event threshold can be found if the true forecast PDF is known.

Eckel and Allen, 2011, WAF

Slide27

Estimating Ambiguity by CES

We never know the true forecast PDF, but we do know the range of possibilities for the true PDF based on the ensemble PDF's error characteristics:
- Each random draw from the ensemble's PDF errors generates a unique set of pe errors.
- Aggregate to form a distribution of pe errors, called an ambiguity PDF.

[Figure: Ambiguity PDFs for ensemble spread = 2.0 °C and spread = 6.0 °C.]

Slide28

Estimating Ambiguity by CES

Ambiguity PDFs follow the beta distribution.

Slide29
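Since ambiguity PDFs follow the beta distribution, a fitted beta gives a compact two-parameter summary of a sample of pe values. A method-of-moments sketch; the estimator choice and function name are mine, not the slides':

```python
# Method-of-moments beta fit for p_e samples on (0, 1).
from statistics import mean, pvariance

def fit_beta_moments(samples):
    m, v = mean(samples), pvariance(samples)
    # Valid only when the sample variance is below m*(1 - m),
    # as it always is for data strictly inside (0, 1).
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common   # (alpha, beta)
```

Samples symmetric about 0.5 yield alpha = beta; skew toward 0 or 1 shows up as unequal shape parameters, matching the asymmetric ambiguity PDFs in the earlier figures.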

[Figure: 2nd-Order Value -- % reduction and number of repeat false alarms vs. User C/L for the Optimal, Backward, Cynical, Fickle, Ambiguity-Sensitive, Ambiguity-Tolerant, and Optimistic behaviors, compared against the Control.]

Slide30

Slide31

Visualization of Ambiguity and Comparison of CES vs. RCR