Eco 6380 Predictive Analytics For Economists
Transcript: Eco 6380 Predictive Analytics For Economists
Eco 6380 Predictive Analytics For Economists
Spring 2016
Professor Tom Fomby, Department of Economics, SMU
Presentation 8: Ensemble Predictions, Bagging and Boosting
Classroom Notes

Benefits of Combining Forecasts
The accuracy of ensemble (combination) predictions is usually better than that of the individual predictions, especially if the individual methods making up the ensemble are pre-picked ("trimmed") and there are not too many of them (rule of thumb: 4 or fewer). This general result holds in both prediction and classification problems.

Ensemble Predictions for a Numeric Target Variable Based on M Competing Forecasts (Predictions)

Obtaining the Weights for the Ensemble Methods
The Nelson ensemble weights are obtained by regressing the actual values of the target variable in an independent data set on the M forecasts (predictions) of the target variable in that data set, the forecasts being produced by the M forecasting (prediction) models. This is a restricted regression: the intercept is set to zero and the weights (coefficients) applied to the forecasts are restricted to add to one. The Granger-Ramanathan weights are obtained as in the Nelson method, except that the intercept of the regression is not set to zero and the weights (coefficients) applied to the forecasts need not add to one. Some software packages provide the Simple Average ensemble, which is nothing more than the Nelson method with each of the forecast weights set to 1/M. The Simple Average ensemble is likely to be less accurate than the more sophisticated Nelson and Granger-Ramanathan methods, except when the forecasting methods making up the ensemble are approximately equally accurate. (A code sketch of these three weighting schemes follows the transcript.)

Classification Ensembles: The Majority Voting Rule
When there are M binary classification models, each predicting either 1 (a "success") or 0 (a "failure"), a majority voting rule can be used. If M is odd, the majority-voting ensemble prediction is the outcome that receives the majority of the votes. If M is even and there is a tie vote, a coin flip can be used to break the tie. It is also possible to use weighted voting rules, with the most accurate classification models carrying greater voting power. (A voting-rule sketch follows the transcript.)

Bagging Prediction and Classification Methods
"Bagging" stands for Bootstrap Aggregation. Prediction and classification models are often improved in terms of accuracy of prediction and classification if they are "bagged" (a bootstrap-aggregation sketch follows the transcript). Take, for example, Multiple Linear Regression (MLR) in the application of predicting a numeric target variable in an independent data set. More accurate prediction
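The slide "Ensemble Predictions for a Numeric Target Variable Based on M Competing Forecasts (Predictions)" is transcribed above by title only. For orientation (this equation is not taken from the slides, only implied by the weighting discussion), the combined forecast is a weighted combination of the M competing forecasts:

```latex
\hat{y}^{\,\text{ens}}_t \;=\; \sum_{m=1}^{M} w_m\, \hat{y}^{(m)}_t ,
\qquad \sum_{m=1}^{M} w_m = 1
\;\;\text{(imposed by the Nelson and Simple Average methods; the Granger-Ramanathan method drops this restriction and adds an intercept).}
```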
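The following is a minimal Python/NumPy sketch, not part of the original notes, of how the three weighting schemes under "Obtaining the Weights for the Ensemble Methods" could be computed on an independent (validation) data set. The function names and the simulated data are illustrative only.

```python
import numpy as np

def nelson_weights(y, F):
    """Nelson ensemble weights: regress the actuals y on the M forecasts in F
    (one column per method) with no intercept and with the weights constrained
    to sum to one.  The constraint is imposed by substituting
    w_M = 1 - (w_1 + ... + w_{M-1})."""
    z = y - F[:, -1]
    Z = F[:, :-1] - F[:, [-1]]                     # (f_m - f_M) regressors
    b, *_ = np.linalg.lstsq(Z, z, rcond=None)
    return np.append(b, 1.0 - b.sum())             # recover w_M from the constraint

def granger_ramanathan_weights(y, F):
    """Granger-Ramanathan weights: unrestricted OLS of y on an intercept and
    the M forecasts; the weights need not sum to one."""
    X = np.column_stack([np.ones(len(y)), F])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b                                       # b[0] is the intercept

# Illustrative validation data: three competing forecasts of y with different
# error variances (all names and numbers are made up for the example).
rng = np.random.default_rng(0)
y = rng.normal(size=200)
F = np.column_stack([y + rng.normal(scale=s, size=200) for s in (0.5, 1.0, 2.0)])

w_nelson = nelson_weights(y, F)
b_gr = granger_ramanathan_weights(y, F)
w_simple = np.full(F.shape[1], 1.0 / F.shape[1])   # Simple Average: w_m = 1/M

ens_nelson = F @ w_nelson
ens_gr     = b_gr[0] + F @ b_gr[1:]
ens_simple = F @ w_simple
```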
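Next, a sketch (again not from the slides) of the majority voting rule and a weighted variant for M binary classifiers. The coin-flip tie-break matches the rule described in the notes; the example votes and weights are hypothetical.

```python
import numpy as np

def majority_vote(votes, rng=None):
    """Majority-vote ensemble for M binary (0/1) classifiers.
    votes has shape (n_obs, M); ties, which are possible when M is even,
    are broken by a coin flip."""
    if rng is None:
        rng = np.random.default_rng()
    ones = votes.sum(axis=1)                 # number of 1-votes per observation
    zeros = votes.shape[1] - ones            # number of 0-votes per observation
    pred = (ones > zeros).astype(int)
    ties = ones == zeros
    pred[ties] = rng.integers(0, 2, size=ties.sum())   # coin flip on ties
    return pred

def weighted_vote(votes, weights):
    """Weighted voting: more accurate classifiers carry greater voting power.
    Predict 1 when the weighted share of 1-votes exceeds one half."""
    weights = np.asarray(weights, dtype=float)
    share = votes @ weights / weights.sum()
    return (share > 0.5).astype(int)

# Four classifiers (M even), three observations; the weights are hypothetical.
votes = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 0, 0, 1]])
print(majority_vote(votes))                        # first entry decided by coin flip
print(weighted_vote(votes, [0.4, 0.3, 0.2, 0.1]))  # -> [1 1 0]
```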
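Finally, a sketch of bagging applied to multiple linear regression, as the truncated last slide begins to describe: fit an MLR on each bootstrap resample of the training data and average the predictions. It assumes scikit-learn's LinearRegression; the number of bootstrap replications (n_boot) is an illustrative choice, not one taken from the notes.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def bagged_mlr_predict(X_train, y_train, X_new, n_boot=200, seed=0):
    """Bagging (Bootstrap Aggregation) of Multiple Linear Regression:
    fit an MLR on each bootstrap resample of the training data and
    average the resulting predictions for the new observations."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    preds = np.empty((n_boot, len(X_new)))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)            # draw rows with replacement
        fit = LinearRegression().fit(X_train[idx], y_train[idx])
        preds[b] = fit.predict(X_new)
    return preds.mean(axis=0)                       # the bagged prediction
```

scikit-learn's sklearn.ensemble.BaggingRegressor packages the same idea if a pre-built implementation is preferred.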