Overall procedure of validation - PowerPoint Presentation
alida-meadow, uploaded on 2017-04-11
Presentation Transcript

Slide1

Overall procedure of validation

Figure 12.4 Validation, calibration, and prediction (Oberkampf and Barone, 2004).

Validation: model accuracy assessment by comparison of model outputs with experimental measurements.

Calibration (optional): adjusting physical modeling parameters in the model to improve agreement with experimental data.

If the model is calibrated by experiment, make a prediction at untried conditions and validate again. (Figure flow labels: New Experiment, Validated Model, Blind Prediction, Prediction.)

Slide2

Approaches for calibration

Traditional (deterministic) calibration: parameters are estimated as single values that minimize the squared error between the computer model and the experimental data. As a result, the model is given by a single function.

Statistical calibration: also called calibration under uncertainty (CUU). Parameters are estimated using statistical inference techniques that incorporate the uncertainty due to observation error. As a result, the model is given by confidence bounds.

Slide3

Approaches for statistical calibration

Based on section 13.5 of the Oberkampf textbook.

Frequentist approach: the parameter is constant but unknown because of limited data. The most popular method follows two steps:
1. Point-estimate the parameters by maximum likelihood estimation (MLE).
2. Draw samples of the parameters by the bootstrap technique.
The advantage is that frequentist methods are simpler and easier to use than Bayesian methods, but they are less often applied to calibration problems.

Bayesian approach: the parameter is treated as a random variable, characterized by a probability distribution conditional on the data. Also called Bayesian updating. Before updating, the distribution is the prior; after updating, it is the posterior.

Slide4

Calibration in order of complexity

Deterministic calibration: carry out parameter estimation using an optimization technique to obtain a single function as the calibrated model. The method is not very useful because uncertainty is not included; it is like using only the mean value of the uncertainty in a design decision.

Statistical calibration without discrepancy: carry out parameter estimation using a statistical technique to obtain confidence bounds on the calibrated model. The Bayesian approach is common, with MCMC as the technique for estimating the parameters probabilistically. Due to lack of knowledge, however, the model often differs inherently from reality: no matter how many data are used for calibration, they may fail to agree. Without accounting for this, i.e., assuming the model is correct, we end up with large errors and mistakenly attribute them to the experiments rather than to the model.

Slide5

Calibration in order of complexity

Statistical calibration with discrepancy: how do we model the discrepancy? Gaussian process regression (GPR) is employed to express the discrepancy in an approximate manner. The estimation then includes not only the calibration parameters but also the associated GPR parameters.

The discrepancy term has two purposes:
1. Close the gap between the model and reality, further improving the calibration.
2. Validate the model accuracy: if the discrepancy is small, the model is good.

Slide6
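One way to see the role of the discrepancy term is to fit a GPR to the residuals between the data and the calibrated model. The sketch below is a simplification: the GPR hyperparameters are fixed by hand rather than estimated jointly with the calibration parameter, and the true function, noise level, and calibrated value q ≈ 0.63 are assumptions carried over from the chemical-kinetics example on the later slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reality and data (assumptions from the motivating example).
truth = lambda x: 1.5 + 3.5 * np.exp(-1.7 * x)
x = np.repeat(np.linspace(0.0, 3.0, 11), 3)          # 3 replicates, 11 points
y = truth(x) + rng.normal(0.0, 0.3, size=x.size)

# Calibrated but misspecified computer model; q ~ 0.63 per slide 14.
model = lambda x, q: 5.0 * np.exp(-q * x)
q_hat = 0.63

resid = y - model(x, q_hat)                          # discrepancy + noise

# GPR on the residuals with a squared-exponential kernel. The variance,
# length scale, and noise level are fixed here for illustration instead
# of being estimated jointly with q.
def kern(a, b, var=1.0, ell=1.0):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = kern(x, x) + 0.3**2 * np.eye(x.size)             # observation noise on the diagonal
x_new = np.linspace(0.0, 3.0, 61)
delta = kern(x_new, x) @ np.linalg.solve(K, resid)   # posterior mean of the discrepancy

pred = model(x_new, q_hat) + delta                   # model + discrepancy
rmse_model = np.sqrt(np.mean((model(x_new, q_hat) - truth(x_new)) ** 2))
rmse_corrected = np.sqrt(np.mean((pred - truth(x_new)) ** 2))
```

The corrected prediction tracks the truth much more closely than the model alone (purpose 1), and the magnitude of the fitted discrepancy itself indicates how good the model is (purpose 2).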

Calibration in order of complexity

Statistical calibration with a surrogate model: during MCMC, thousands of model evaluations are needed. If the model is expensive, a surrogate model should be introduced. GPR is employed for this purpose, where the design of computer experiments (DACE) is critical in the process.

The estimation then includes three parts: the calibration parameters, the GPR parameters of the surrogate model, and the GPR parameters of the discrepancy. Efficiency decreases quickly as the number of parameters increases.

MLE plug-in approach: the surrogate GPR model is deterministic; its parameters are point-estimated, and only the others are estimated probabilistically.

Full Bayesian approach: includes all the parameters in the estimation. This is the ultimate complexity in calibration, and it is the topic that KOH (Kennedy and O'Hagan) addressed.

Slide7
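A GPR surrogate of the computer model itself can be sketched as follows. The design here is plain random sampling rather than a proper DACE design, and the kernel length scales are fixed by hand; both are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Computer model (cheap here, but standing in for an expensive one).
model = lambda x, q: 5.0 * np.exp(-q * x)

# Design of computer experiments over the joint (x, q) input space.
# Random sampling is used as a stand-in for an optimized DACE design.
m = 40
X = np.column_stack([rng.uniform(0.0, 3.0, m), rng.uniform(0.2, 1.5, m)])
Y = model(X[:, 0], X[:, 1])

# Squared-exponential kernel with hand-fixed length scales per input.
def kern(A, B, ell=np.array([1.0, 0.5])):
    d = (A[:, None, :] - B[None, :, :]) / ell
    return np.exp(-0.5 * (d**2).sum(axis=-1))

K = kern(X, X) + 1e-6 * np.eye(m)        # small jitter: model runs are noise-free
alpha = np.linalg.solve(K, Y)

def surrogate(x, q):
    """Cheap GPR approximation of the model, usable inside MCMC."""
    return float(kern(np.array([[x, q]]), X) @ alpha)
```

During MCMC, `surrogate(x, q)` replaces `model(x, q)` so that the thousands of required evaluations stay cheap; in the MLE plug-in approach the kernel parameters above would be point-estimated once, while the full Bayesian approach would sample them too.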

Outline of the calibration lecture

Motivating example
Deterministic calibration
Statistical calibration without discrepancy
- Bayesian approach
- Frequentist approach
Statistical calibration with discrepancy
- GPR revisited
Statistical calibration with surrogate model
- MLE plug-in approach
- Full Bayesian approach
Applications

Slide8

Motivating example

Problem addressed in:
- Loeppky, Jason L., Derek Bingham, and William J. Welch, "Computer model calibration or tuning in practice," Technometrics, submitted for publication (2006).
- Bayarri, Maria J., et al., "A framework for validation of computer models," Technometrics 49.2 (2007).
- Originally Fogler, H. S. (1999), Elements of Chemical Reaction Engineering, Prentice Hall.

Chemical kinetics model: describes a chemical reaction process with initial chemical concentration 5 and reaction rate 1.7. The amount of chemical remaining at time x is investigated.

Slide9

Motivating example

Chemical kinetics model: make virtual experimental (observation) data with noise. Three replicates are made at 11 equally spaced points in [0, 3]. (Repeated with the data for the right figure given in the notes page.)

Objective: find a computer model that simulates the observations as closely as possible.

Slide10
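The virtual data described above can be sketched in Python as a stand-in for the deck's MATLAB code. The true function 1.5 + 3.5·exp(-1.7x) is inferred from the parameter values quoted on slide 12, and the noise level 0.3 from the RMSE remark on slide 11; both are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed true process (slide 12 suggests q1 + q2*exp(-q3*x) with
# q1 = 1.5, q2 = 3.5, q3 = 1.7, so the initial concentration is 5).
truth = lambda x: 1.5 + 3.5 * np.exp(-1.7 * x)

# Three replicates at 11 equally spaced points in [0, 3], plus
# Gaussian observation noise with sigma = 0.3.
x_design = np.linspace(0.0, 3.0, 11)
x_obs = np.repeat(x_design, 3)
y_obs = truth(x_obs) + rng.normal(0.0, 0.3, size=x_obs.size)
```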

Simplest computer model

A somewhat wrong guess due to lack of knowledge. Calibrate q to minimize the SSE between the model and the data.

Optimum solution from MATLAB fminsearch. (Using nlinfit and nlparci with the second data set, but with n-1; see the notes page.)

Slide11
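The deterministic calibration of the one-parameter model y = 5·exp(-qx) can be sketched as follows. A dense grid search stands in for MATLAB's fminsearch, and the synthetic data repeat the earlier assumptions (true function from slide 12, σ = 0.3 from slide 11).

```python
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: 1.5 + 3.5 * np.exp(-1.7 * x)        # assumed true process
x_obs = np.repeat(np.linspace(0.0, 3.0, 11), 3)
y_obs = truth(x_obs) + rng.normal(0.0, 0.3, size=x_obs.size)

# Simplest computer model: initial concentration 5, unknown rate q.
model = lambda x, q: 5.0 * np.exp(-q * x)

# Minimize the SSE over q; a dense grid search stands in for fminsearch.
qs = np.linspace(0.05, 3.0, 2000)
sse = ((y_obs[:, None] - model(x_obs[:, None], qs)) ** 2).sum(axis=0)
q_hat = qs[np.argmin(sse)]
rmse = np.sqrt(sse.min() / x_obs.size)
```

Because the model form is wrong, the fitted RMSE comes out well above the pure-noise level of 0.3, which is exactly the clue discussed on the next slide.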

Discussion

If the error were due only to noise, we would have expected RMSE = 0.3. Given the true function, we can see that the model is not good. What are the clues without looking at the true function? What is the hierarchy of calibration methods without discrepancy?

Slide12

Calibration with improved models

Computer model and optimum solution: a two-parameter optimization problem. The solution is improved, but a substantial gap remains.

Computer model and optimum solution: a three-parameter optimization problem. Excellent match (true q1 = 1.5, q2 = 3.5, q3 = 1.7).

The model change was made on an ad-hoc basis; besides, the close match is undoubtedly just luck. Is this possible in real practice?

Slide13

Calibration under uncertainty

Bayesian approach: assume that the model is an accurate representation of reality, so the field data are given by the model output plus observation noise. The quantity of interest is the posterior distribution of the unknown parameters q and s².

Slide14

Calibration under uncertainty

Posterior samples after MCMC (N = 5e3), and the posterior prediction.

Means of q and s:
0.6295  0.5921
0.6235  0.5870
0.6249  0.5866
0.6269  0.5952

Slide15
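The Bayesian calibration above can be sketched with a random-walk Metropolis sampler. The flat priors, proposal step sizes, and burn-in length below are illustrative assumptions, not taken from the slides; the data-generating setup repeats the earlier assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: 1.5 + 3.5 * np.exp(-1.7 * x)        # assumed true process
x = np.repeat(np.linspace(0.0, 3.0, 11), 3)
y = truth(x) + rng.normal(0.0, 0.3, size=x.size)
n = x.size

model = lambda x, q: 5.0 * np.exp(-q * x)

def log_post(q, s2):
    # Gaussian likelihood; flat priors on q > 0 and s2 > 0 (an assumption).
    if q <= 0.0 or s2 <= 0.0:
        return -np.inf
    sse = np.sum((y - model(x, q)) ** 2)
    return -0.5 * n * np.log(s2) - 0.5 * sse / s2

# Random-walk Metropolis, N = 5e3 as on the slide.
q, s2 = 1.0, 1.0
lp = log_post(q, s2)
samples = np.empty((5000, 2))
for i in range(5000):
    q_p, s2_p = q + 0.05 * rng.normal(), s2 + 0.1 * rng.normal()
    lp_p = log_post(q_p, s2_p)
    if np.log(rng.uniform()) < lp_p - lp:              # accept/reject
        q, s2, lp = q_p, s2_p, lp_p
    samples[i] = q, s2

q_mean, s2_mean = samples[1000:].mean(axis=0)          # drop burn-in
```

Under these assumptions the posterior means should land near the values reported on the slide (q ≈ 0.63, s = √s² ≈ 0.59).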

Calibration under uncertainty

Frequentist approach: write the likelihood of the field data Y_F and obtain the parameters by maximum likelihood estimation.

Slide16

Calibration under uncertainty

Bootstrap sampling: make virtual experimental data by plugging the estimated parameters into the model and adding noise. Go through the MLE estimation using these data, and repeat N times to obtain samples of the parameters.
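The parametric bootstrap described above can be sketched for the chemical-kinetics example. The true function, noise level, and grid-search MLE are assumptions carried over from the earlier slides; the bootstrap size of 200 is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: 1.5 + 3.5 * np.exp(-1.7 * x)        # assumed true process
x = np.repeat(np.linspace(0.0, 3.0, 11), 3)
y = truth(x) + rng.normal(0.0, 0.3, size=x.size)

model = lambda x, q: 5.0 * np.exp(-q * x)
qs = np.linspace(0.05, 3.0, 1000)

def mle(yy):
    # MLE of q (least squares on a grid) and of sigma (sqrt of SSE/n).
    sse = ((yy[:, None] - model(x[:, None], qs)) ** 2).sum(axis=0)
    j = np.argmin(sse)
    return qs[j], np.sqrt(sse[j] / x.size)

q_hat, s_hat = mle(y)

# Parametric bootstrap: regenerate virtual data from the fitted model
# plus noise, re-run the MLE, and repeat to collect parameter samples.
boot = np.array([mle(model(x, q_hat) + rng.normal(0.0, s_hat, x.size))
                 for _ in range(200)])
q_lo, q_hi = np.percentile(boot[:, 0], [2.5, 97.5])
```

The percentiles of the bootstrap samples give the confidence bounds discussed on the next slide.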

Meeker, William Q., and Luis A. Escobar, Statistical Methods for Reliability Data, Vol. 314, Wiley, 1998.

Slide17

Calibration under uncertainty

Discussion

Confidence bounds on the model are now obtained; e.g., at x = 1.5 the bound is (0.75, 3.19). Because the model is incorrect, we end up with large bounds; however, this is the best available solution under these conditions.

Within these large bounds, not only the measurement error but also the model error is included. We need to account for this by introducing a discrepancy function.