
## Two objectives of surrogate fitting

To fit a surrogate we minimize an error measure, also called a "loss function." We also want the surrogate to be simple:

- Fewest basis functions
- Simplest basis functions
- Flatness is desirable (given y = 1 for x = i, i = 1,…,10, we would not fit a sine to the data. Why?)

## Support Vector Regression

- Combines the loss function and flatness into a single objective.
- Support vector machines were developed by Vapnik and coworkers for optical character recognition in Russia in the 1960s.
- They were first used for regression in 1997.
- Besides regression, they have become popular as classifiers that divide the design space into a feasible domain (where the constraints are satisfied) and an infeasible domain (where they are not).

## Epsilon-insensitive loss function

Support vector regression can use any loss function, but the one most often associated with it is the epsilon-insensitive loss: errors smaller than $\varepsilon$ in magnitude cost nothing, and larger errors cost $|e| - \varepsilon$. This makes it less sensitive to one bad data point.
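A minimal sketch of this loss in Python (variable names are illustrative):

```python
import numpy as np

def eps_insensitive_loss(errors, eps):
    """Epsilon-insensitive loss: zero inside the tube |e| <= eps,
    and linear, |e| - eps, outside it."""
    return np.maximum(np.abs(errors) - eps, 0.0)

# Errors inside the +/-2 tube cost nothing; larger errors cost |e| - 2.
print(eps_insensitive_loss(np.array([-1.5, 0.0, 2.5, -4.0]), eps=2.0))
```

Because the loss grows linearly rather than quadratically outside the tube, one large outlier pulls the fit far less than it would under a squared-error loss.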

Figures from Gunn's *Support Vector Machines for Classification and Regression*.

## Flatness measure

Take a surrogate $\hat{y}(x) = \sum_i b_i \xi_i(x)$, where the $\xi_i$ are shape functions and the $b_i$ are coefficients. A possible flatness measure is $F = \sum_i c_i b_i^2$, where $c_i$ is a measure of the curvature associated with the $i$th shape function. For example, the constant term should have a zero $c_i$.

## SVR optimization problem

The coefficients of the surrogate are found by solving the following optimization problem:

$$\min_{b}\; \sum_j L_\varepsilon\big(y_j - \hat{y}(x_j)\big) + \sum_i c_i b_i^2$$

This is a challenging optimization problem because the loss function is not a smooth function of the coefficients. A well-behaved constrained formulation is given in the notes.
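A sketch of this combined objective in Python (the basis matrix `Phi` and the other names are illustrative; a Matlab version would be analogous):

```python
import numpy as np

def svr_objective(b, Phi, y, eps, c):
    """Epsilon-insensitive loss plus quadratic flatness penalty.
    Phi: shape functions evaluated at the data points (one column per function),
    b: coefficients, c: per-coefficient curvature weights."""
    residual = y - Phi @ b
    loss = np.sum(np.maximum(np.abs(residual) - eps, 0.0))
    return loss + np.sum(c * b**2)

# Tiny check: a surrogate passing within eps of every point pays only flatness.
Phi = np.array([[1.0, 1.0], [2.0, 4.0]])  # basis [x, x^2] at x = 1, 2
print(svr_objective(np.array([1.0, 0.0]), Phi, np.array([1.5, 2.5]),
                    eps=2.0, c=np.array([0.0, 1.0])))  # prints 0.0
```

The `max(|e| - eps, 0)` term makes the objective non-smooth in `b`, which is why a derivative-free method is suggested below.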

## Example

Four stress-strain measurements are given: ε = [1, 2, 3, 4] millistrains, σ = [10, 22, 33, 35] ksi. Fit an SVR surrogate of the form $\hat{\sigma} = b_1\varepsilon + b_2\varepsilon^2$ to the data, assuming that the stress is accurate to within 2 ksi. Assign zero flatness to the linear coefficient and $c_2 = 1$ or $c_2 = 100$ to the quadratic coefficient. You may use Matlab's fminsearch for the optimization, since it is not a gradient-based method.
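The example can be sketched in Python with `scipy.optimize.minimize` using the Nelder-Mead simplex method, the counterpart of Matlab's fminsearch (the starting point and names are illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize

strain = np.array([1.0, 2.0, 3.0, 4.0])      # millistrains
stress = np.array([10.0, 22.0, 33.0, 35.0])  # ksi
EPS = 2.0                                    # stress accurate to within 2 ksi

def objective(b, c2):
    """Epsilon-insensitive loss plus a flatness penalty on the quadratic
    coefficient (the linear coefficient gets zero flatness weight)."""
    sigma_hat = b[0] * strain + b[1] * strain**2
    loss = np.sum(np.maximum(np.abs(stress - sigma_hat) - EPS, 0.0))
    return loss + c2 * b[1]**2

for c2 in (1.0, 100.0):
    res = minimize(objective, x0=[10.0, 0.0], args=(c2,), method="Nelder-Mead")
    print(f"c2 = {c2:g}: b1 = {res.x[0]:.3f}, b2 = {res.x[1]:.3f}")
```

As the flatness weight grows from 1 to 100, the quadratic coefficient is driven toward zero and the fit flattens toward a straight line through the origin, at the cost of larger errors at the data points.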

## Solution

The figure compares the two fits.

For $c_2 = 1$, $b_1 = 12.9$ and the loss function dominates; the errors at the data points are −2.00, −0.17, 2.50, −2.00. For $c_2 = 100$, $b_2 = -0.025$ and flatness is important; the errors at the data points are −0.025, 2.00, 3.075, −4.80. The Matlab commands are given in the notes.

## Problems

1. Fit a linear polynomial and a quadratic polynomial to the data of the example using linear regression (e.g., Matlab's polyfit or regress). Compare to the SVR fits and comment.
2. Assume that the exact stress-strain law is σ = … . Generate data at 11 equispaced points in [1, 4], contaminated by normal random noise of zero mean and standard deviation 1. Perform the SVR fits for the two values of the flatness weight and the two linear-regression fits. Compare and comment.
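The linear-regression part of problem 1 can be sketched with `numpy.polyfit`, the analogue of Matlab's polyfit (names are illustrative):

```python
import numpy as np

strain = np.array([1.0, 2.0, 3.0, 4.0])
stress = np.array([10.0, 22.0, 33.0, 35.0])

# Least-squares fits: degree 1 (linear) and degree 2 (quadratic).
lin = np.polyfit(strain, stress, 1)
quad = np.polyfit(strain, stress, 2)

print("linear coefficients (highest degree first):", lin)
print("quadratic coefficients (highest degree first):", quad)
print("linear residuals:", stress - np.polyval(lin, strain))
```

Note that these polynomials include an intercept term, unlike the SVR surrogate of the example, which passes through the origin; that difference is worth mentioning when comparing the fits.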
