ANOTHER LOOK AT FORECAST-ACCURACY METRICS FOR INTERMITTENT DEMAND
by Rob J. Hyndman

Preview: Some traditional measurements of forecast accuracy are unsuitable for intermittent-demand data because they can give infinite or undefined values. Rob Hyndman summarizes these forecast-accuracy metrics and explains their potential failings.

Introduction: Three Ways to Generate Forecasts

There are three ways we may generate forecasts of a quantity (Y) from a particular forecasting method:

1. We can compute forecasts from a common origin t (for example, the most recent month) for a sequence of forecast horizons.
2. We can vary the origin from which the forecasts are made, keeping the forecast horizon fixed (for example, one step ahead).
3. We can compute forecasts for many different series at a single forecast horizon.

There are four types of forecast-error metrics: metrics based on the absolute error, such as the mean absolute error (MAE or MAD), which are scale-dependent; percentage-error metrics such as the mean absolute percent error (MAPE); relative-error metrics, which average the ratios of the errors from a designated method to the errors of a naïve method; and scale-free error metrics, which express each error as a ratio to an average error from a benchmark method. Each metric can be computed in-sample, on the data used for fitting, or out-of-sample, on a hold-out set; in general, we would expect out-of-sample errors to be larger.

An Example of What Can Go Wrong

Consider the monthly sales series shown in Figure 1. These data were part of a consulting project I did for a major Australian lubricant manufacturer.

Figure 1. Three Years of Monthly Sales of a Lubricant Product Sold in Large Containers (vertical axis: sales of lubricant). Data source: Product C in Makridakis et al. (1998, chapter 1). The vertical dashed line indicates the end of the data used for fitting and the start of the holdout set used for out-of-sample forecasting.

Suppose we are interested in comparing the forecast accuracy of four methods: (1) the historical mean, (2) the naïve method, (3) simple exponential smoothing (SES), and (4) Croston's method for intermittent demands. I assessed the in-sample performance of these methods by varying the origin and generating a sequence of one-step-ahead forecasts, and the out-of-sample performance based on forecasting the data in the hold-out period, using information from the fitting period alone.

Table 1 shows some commonly used forecast-accuracy metrics applied to these data. The metrics are all defined in the next section. There are many infinite values in Table 1; these are caused by division by zero. The undefined values for the naïve method arise from the division of zero by zero. The only measurement that always gives sensible results is the MASE, or mean absolute scaled error; infinite, undefined, or zero values plague the other accuracy measurements.

Table 1. Forecast-Accuracy Metrics for Lubricant Sales

                                                 Mean         Naïve        SES          Croston
                                                 In     Out   In     Out   In     Out   In     Out
GMAE   Geometric Mean Absolute Error             1.65   0.96  0.00   0.00  1.33   0.09  0.00   0.99
MAPE   Mean Absolute Percentage Error            ∞      ∞     –      –     ∞      ∞     ∞      ∞
sMAPE  Symmetric Mean Absolute Percentage Error  1.73   1.47  –      –     1.82   1.42  1.70   1.47
MdRAE  Median Relative Absolute Error            0.95   ∞     –      –     0.98   ∞     0.93   ∞
GMRAE  Geometric Mean Relative Absolute Error    ∞      ∞     –      –     ∞      ∞     ∞      ∞
MASE   Mean Absolute Scaled Error                0.86   0.44  1.00   0.20  0.78   0.33  0.79   0.45

Measurement of Forecast Errors

We can measure and average forecast errors in several ways.

Scale-dependent errors
The forecast error is simply e_t = Y_t - F_t, regardless of how the forecast was produced. This is on the same scale as the data, applying to anything from ships to screws, and accuracy measurements based on e_t are therefore scale-dependent. The most common of these are the mean absolute error (MAE, often called the MAD, where the D stands for "deviation"), the geometric mean absolute error (GMAE), and the mean square error (MSE). The use of absolute values or squared values prevents negative and positive errors from offsetting each other. Because these measurements stay on the scale of the data, none of them is meaningful for assessing a method's accuracy across multiple series.
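Before turning to scale-free errors, a minimal sketch may help make the failures in Table 1 concrete. The sketch below is my own illustration, not taken from the article: it applies the definitions above, plus the MASE described in the next subsection, to a made-up intermittent series with in-sample one-step naïve forecasts. The series values and variable names are assumptions for illustration only.

    # Minimal sketch (illustration only, not from the article): one-step naive
    # forecasts for a made-up intermittent-demand series.
    import numpy as np

    y = np.array([0.0, 2.0, 0.0, 0.0, 7.0, 0.0, 0.0, 0.0, 3.0, 0.0])

    f = y[:-1]        # naive forecast: the previous observation
    a = y[1:]         # actual values aligned with those forecasts
    e = a - f         # forecast errors e_t = Y_t - F_t

    mae = np.mean(np.abs(e))

    with np.errstate(divide="ignore", invalid="ignore"):
        # The GMAE collapses to exactly zero as soon as any single error is zero.
        gmae = np.exp(np.mean(np.log(np.abs(e))))
        # The MAPE divides by the actuals: Y_t = 0 gives an infinite term and
        # 0/0 gives an undefined one, so the average is infinite or undefined.
        mape = 100 * np.mean(np.abs(e / a))
        # The sMAPE is undefined whenever actual and forecast are both zero.
        smape = 100 * np.mean(2 * np.abs(e) / (np.abs(a) + np.abs(f)))

    # MASE: scale each error by the in-sample MAE of one-step naive forecasts,
    # i.e. the mean of |Y_i - Y_(i-1)| over the fitting data.
    scale = np.mean(np.abs(np.diff(y)))
    mase = np.mean(np.abs(e) / scale)   # equals 1.00 for the naive method itself

    for name, value in [("MAE", mae), ("GMAE", gmae), ("MAPE", mape),
                        ("sMAPE", smape), ("MASE", mase)]:
        print(f"{name:6s} {value}")

Run as-is, this prints a zero GMAE, undefined (nan) MAPE and sMAPE, and a MASE of 1.0, which mirrors the zero, undefined, and sensible entries in the naïve column of Table 1.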
Scale-free errors
The mean absolute scaled error (MASE) scales each error by the in-sample mean absolute error of the one-step naïve forecasts and then averages the absolute values of the scaled errors. Scaled errors less than one arise from forecasts that are better, on average, than the in-sample one-step naïve forecasts. The in-sample MAE is used in the denominator because it is always available and it effectively scales the errors. The out-of-sample MAE is not a reliable base for scaling: if we compare forecast accuracy at one step ahead for ten different series, then we would have only one error for each series, and with intermittent data the out-of-sample MAE in this case may well be zero.

A related measurement is the MAD/Mean ratio, which divides the MAE by the mean of the series. This ratio also renders the errors scale free and is usable in all three situations described in the introduction. The advantage of the MASE, however, is that it is more widely applicable: the MAD/Mean ratio assumes that the mean is stable over time (technically, that the series is "stationary"), and this is not true for data which show trend, seasonality, or other patterns. While intermittent data is often quite stable, sometimes seasonality does occur, and such series are the most difficult to forecast. Typical one-step MASE values are less than one; multi-step MASE values are often larger than one, as it becomes more difficult to forecast as the horizon increases.

Table 2. Monthly Lubricant Sales and Naïve Forecasts, In-sample and Out-of-sample

Rob J. Hyndman
Monash University, Australia
Rob.Hyndman@buseco.monash.edu

References

Armstrong, J. S. & Collopy, F. (1992). Error measures for generalizing about forecasting methods: Empirical comparisons, International Journal of Forecasting.
Boylan, J. (2005). Intermittent and lumpy demand: A forecasting challenge, Foresight: The International Journal of Applied Forecasting.
Fildes, R. (1992). The evaluation of extrapolative forecasting methods, International Journal of Forecasting.
Hyndman, R. J. & Koehler, A. B. (2006). Another look at measures of forecast accuracy, International Journal of Forecasting, to appear.
Makridakis, S. & Hibon, M. (2000). The M3-Competition: Results, conclusions and implications, International Journal of Forecasting.
Makridakis, S. G., Wheelwright, S. C. & Hyndman, R. J. (1998). Forecasting: Methods and Applications, New York: John Wiley & Sons.
Syntetos, A. A. & Boylan, J. E. (2005). The accuracy of intermittent demand estimates, International Journal of Forecasting.
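To close, the following sketch (again my own illustration on assumed data, not part of the article) shows why the stationarity caveat above matters: on a trending series, the MAD/Mean ratio can make a very poor forecast look acceptable because the mean keeps growing, while the MASE, scaled by the one-step naïve errors of the fitting period, flags it clearly.

    # Minimal sketch (illustration only): MAD/Mean ratio versus MASE on a
    # made-up trending series, forecast with the in-sample mean.
    import numpy as np

    rng = np.random.default_rng(0)
    y = 10 + 2 * np.arange(60) + rng.normal(0, 1, 60)   # strong upward trend

    fit, hold = y[:48], y[48:]              # fitting period and hold-out period
    f = np.full_like(hold, fit.mean())      # forecast every month with the in-sample mean
    e = hold - f

    # MAD/Mean ratio (one common form): MAE divided by the mean level of the
    # actuals.  The growing mean inflates the denominator on trending data.
    mad_mean = np.mean(np.abs(e)) / np.mean(hold)

    # MASE: scale by the in-sample MAE of one-step naive forecasts, which tracks
    # the typical period-to-period change rather than the (unstable) mean level.
    scale = np.mean(np.abs(np.diff(fit)))
    mase = np.mean(np.abs(e) / scale)

    print(f"MAD/Mean = {mad_mean:.2f}   MASE = {mase:.1f}")

With this seed the MAD/Mean ratio comes out around 0.5 and looks moderate, while the MASE is far above one, correctly signalling forecasts much worse than a one-step naïve benchmark.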