Modified Holt's Linear Trend Method
Guckan Yapar, Sedat Capar, Hanife Taylan Selamlar and Idil Yavuz

Department of Statistics, Faculty of Science, Dokuz Eylul University, Tinaztepe, Izmir, Turkey. Email: guckanyapar@deu.edu.tr (corresponding author).

Abstract. Exponential smoothing models are simple, accurate and robust forecasting models, and for these reasons they are widely applied in the literature. Holt's linear trend method is a valuable extension of exponential smoothing that helps deal with trending data. In this study we propose a modified version of Holt's linear trend method that eliminates the initialization issue faced when fitting the original model and simplifies the optimization process. The proposed method is compared empirically with the most popular forecasting algorithms based on exponential smoothing and Box-Jenkins ARIMA with respect to its predictive performance on the M3-Competition data set, and is shown to outperform its competitors.

Keywords: Exponential smoothing, Forecasting, Initial value, M3-Competition, Smoothing parameter

2000 AMS Classification:

1 Introduction

Forecasting is an essential activity in various branches of science and in many areas of industrial, commercial and economic activity. Forecasts can be obtained by using purely judgmental, explanatory or extrapolative methods, or any combination of these three, but extrapolative methods are reliable, objective, inexpensive, quick and easily automated. In recent decades, numerous time series forecasting models have been proposed; a review of the 25-year period up to 2005 can be found in De Gooijer and Hyndman (2005). Time series forecasting is still dominated by two major techniques: ARIMA (Box et al., 1970) and exponential smoothing (ES) (Brown, 1959). ES methods are the most widely used techniques in forecasting due to their simplicity, robustness and accuracy as automatic forecasting procedures (Goodwin et al., 2010). An excellent review of the ES literature can be found in Gardner (1985), later updated in Gardner (2006) and Hyndman et al. (2008). The popularity of ES in time series analysis is a result not only of its simplicity but also of its proven record against more sophisticated approaches (Makridakis et al., 1984; Makridakis and Hibon, 2000; Hyndman et al., 2002). In ES models, recent observations are given relatively more weight than older observations. ES is not a single model but rather a family of models. ES models assume that the time series has up to three underlying components: level, trend and seasonality. The goal of an ES model is to estimate the final values of the level, trend and seasonal pattern and then to use these final values to construct forecasts. Each model consists of one of five types of trend (none, additive, damped additive, multiplicative and damped multiplicative) and one of three types of seasonality (none, additive and multiplicative). Pegels (1969) proposed a taxonomy of ES methods, which was later extended and modified by Gardner Jr. and McKenzie (1985), Hyndman et al. (2002) and Taylor (2003). There are 15 different models, the best known of which are SES (no trend, no seasonality), Holt's linear model (additive trend, no seasonality) and the Holt-Winters additive model (additive trend, additive seasonality). Hyndman et al. (2008) proposed the ETS state space models to provide a solid theoretical foundation for ES. In that work, two possible innovations state space models are defined for each of the 15 exponential smoothing methods, one corresponding to a model with additive errors and the other to a model with multiplicative errors, resulting in 30 potential models.

There are many studies on the numerical and theoretical comparison of Box-Jenkins and ES methods. For several decades, ES was considered an ad hoc approach to forecasting, with no proper underlying stochastic formulation. The state space framework described by Hyndman et al. (2002) brings exponential smoothing into the same class as ARIMA models; ES is widely applicable and has a sound stochastic model behind the forecast, so the ad hoc argument is no longer true. Hyndman et al. (2002) introduced a state space framework that subsumes all the exponential smoothing models and allows for the computation of prediction intervals, likelihood and model selection criteria. They also proposed an automatic forecasting strategy based on this model framework. However, the main competition between these two major forecasting methods is on their post-sample forecasting accuracy. The 1001 and 3003 time series used, respectively, in the M-competition (Makridakis et al., 1982) and the M3-competition (Makridakis and Hibon, 2000) have become recognized collections of test data for the evaluation of forecasting methods. The first conclusion of these competitions is that statistically sophisticated or complex methods did not provide more accurate forecasts than simpler ones; some standard and simple combinations of ES methods were used in these competitions, and their performance verified this result. According to statements by the participants of the M-competition, the Box-Jenkins methodology (ARIMA models) required the most time (on average more than one hour per series). Proposing a simple, accurate, robust and automatic forecasting method as an alternative to ES methods is no easy task after the results of Hyndman et al. (2002). In that study, an automatic forecasting strategy was applied to the M-competition data (Makridakis et al., 1982) and the M3-competition data (Makridakis and Hibon, 2000): the procedure tries 24 of the 30 state space models on a given time series and selects the best model using the AIC. They show that the methodology is particularly good at short-term forecasts (up to about six periods ahead), and especially accurate for seasonal short-term series, beating all other methods in the competitions.

Even though ES models are well developed, they still have some important shortcomings that affect the quality of the forecasts obtained from them. The most important issues faced when building ES models are related to initialization and optimization. For example, Makridakis and Hibon (1991) showed that, even though for other exponential smoothing models the choice of initialization and loss function did not cause significant changes in post-sample forecasting accuracy, for Holt's linear trend model it was very influential, especially for long forecasting horizons. Even if this were not the case, the fact that searching for an optimal initial value both complicates and prolongs the optimization process cannot be overlooked. The modified simple exponential smoothing model proposed in Yapar (2016) helps deal with these problems, but still lacks the ability to handle possible trending behavior in the data. In this study we modify Holt's linear trend model using the ideas in Yapar (2016). The performance of the proposed method is tested empirically on the M3-competition data set mentioned above.

2 Modified Holt's Linear Trend Method

Modified exponential smoothing (MES) methods can be adapted for each of the 30 ES models classified by Hyndman et al. (2008), but only one form of MES is given in this study, which we call the modified Holt's linear trend method (MHES). The goal is again to forecast future values of a time series from its own current and past values. That is, given a series of equally spaced observations X_t, t = 1, 2, ..., n, of some quantity, forecasts for h = 1, 2, ... are required. Let us first consider the standard Holt's linear trend method (Holt, 1957), given in equations (2.1)-(2.3). The method estimates the local growth, T_t, by smoothing successive differences, (S_t - S_{t-1}), of the local level, S_t. The forecast function is the sum of level and projected growth:

(2.1) S_t = α X_t + (1 - α)(S_{t-1} + T_{t-1}),
(2.2) T_t = β (S_t - S_{t-1}) + (1 - β) T_{t-1},
(2.3) X̂_t(h) = S_t + h T_t,

where X̂_t(h) is the h-step-ahead forecast and α and β are smoothing parameters with 0 < α, β < 1. There are two smoothing parameters to estimate, and starting values must be provided for both the level and the trend. An initial smoothed value can be chosen using various techniques; the most common are least squares estimation, backcasting, using a training set, using convenient initial values (e.g., the first data value to initialize the level, or the difference between the first two data values to initialize the trend) and setting the initial values to zero (Makridakis and Hibon, 1991). The parameters can be estimated by minimizing the one-step-ahead MSE, MAE, MAPE or some other criterion measuring in-sample forecast error.
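As a reference point, the recursions (2.1)-(2.3) can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation; the convenient initialization S_1 = X_1 and T_1 = X_2 - X_1 used here is just one of the options listed above.

```python
import numpy as np

def holt_forecast(x, alpha, beta, h):
    """Standard Holt linear trend method, equations (2.1)-(2.3).

    Initialization is a modelling choice; here the level starts at the
    first observation and the trend at the first difference (an
    assumption, one of several options discussed in the text).
    """
    s, t = x[0], x[1] - x[0]
    for xt in x[1:]:
        s_prev = s
        s = alpha * xt + (1 - alpha) * (s + t)    # level update (2.1)
        t = beta * (s - s_prev) + (1 - beta) * t  # trend update (2.2)
    # forecast function (2.3): level plus h times the final growth
    return np.array([s + (k + 1) * t for k in range(h)])
```

On an exactly linear series this initialization makes the recursions reproduce the line, so the forecasts continue it for any choice of α and β.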

Now, Holt's linear trend method is modified as follows:

(2.4) S_t = (p/t) X_t + ((t - p)/t)(S_{t-1} + T_{t-1}),
(2.5) T_t = (q/t)(S_t - S_{t-1}) + ((t - q)/t) T_{t-1},
(2.6) X̂_t(h) = S_t + h T_t,

where n ≥ p ≥ q ≥ 0, with p ∈ {1, ..., n} and q ∈ {0, 1, ..., n}. Unlike other approaches, the proposed model avoids potential initialization problems by letting S_t = X_t for t ≤ p, T_t = X_t - X_{t-1} for t ≤ q, and T_1 = 0. This model will simply be notated MHES(p, q). Note that there are two smoothing parameters (p and q) to estimate, but no starting values are needed for the level and trend. Also notice that for q = 0 the model MHES(p, 0) defined by equations (2.4)-(2.6) reduces to the modified simple exponential smoothing (MSES(p)) model (Yapar, 2016):

S_t = (p/t) X_t + ((t - p)/t) S_{t-1},  for t > p,
S_t = X_t,                              for t ≤ p.

3 Application to M3-competition Data

To test the performance of the proposed modification, the method is applied to the M3-competition data (Makridakis and Hibon, 2000), since this collection is the most recent and comprehensive time series collection available and its results have been verified. The M3-competition collection consists of 3003 time series, each with a pre-determined number of data points reserved for testing the out-of-sample performance of the competing methods: the last 6 data points for yearly data, the last 8 for quarterly data and the last 18 for monthly data are withheld while estimating the smoothing parameters, and the models are tested on these out-of-sample points. After the smoothing parameters are estimated using the in-sample data points of each series (as specified in the M3-competition), forecasts up to 18 steps ahead (the number of out-of-sample points specified in the M3-competition) are computed. Then the symmetric mean absolute percentage errors (sMAPE) for all forecast horizons are computed and averaged across all 3003 series, to stay consistent with the rest of the literature. The one-step-ahead sMAPE can be defined as

sMAPE = 200 · mean( |X_t - X̂_t| / (X_t + X̂_t) ),

where X_t is the actual value and X̂_t is the one-step-ahead forecasted value. Results from four different versions of the proposed modification will be given here. For all versions the data are first deseasonalized, if necessary, by the classical decomposition method of ratio-to-moving averages. The first version is the modified simple exponential smoothing model MHES(p, 0), where p is optimized by minimizing the in-sample one-step-ahead sMAPE while holding q = 0.
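The recursions (2.4)-(2.6), the sMAPE criterion and a sequential parameter search of the kind just described can be sketched as follows. This is a minimal illustration under stated assumptions: the function names are ours, and the exhaustive integer search over p and then q ≤ p is an assumed implementation of the sequential optimization, not the authors' code.

```python
import numpy as np

def mhes_filter(x, p, q):
    """Run the MHES(p, q) recursions (2.4)-(2.5) over the series.

    For t <= p the level is the observation itself, and for t <= q the
    trend is the last first difference (with T_1 = 0), so no initial
    values are needed.  Returns the final level and trend plus the
    in-sample one-step-ahead predictions (S_{t-1} + T_{t-1} for x_t).
    """
    s = trend = 0.0
    fitted = []
    for t, xt in enumerate(x, start=1):
        fitted.append(s + trend)            # one-step-ahead prediction of x_t
        s_prev = s
        s = xt if t <= p else (p / t) * xt + ((t - p) / t) * (s + trend)
        if t == 1:
            trend = 0.0                     # T_1 = 0
        elif t <= q:
            trend = xt - x[t - 2]           # early trend: last first difference
        else:
            trend = (q / t) * (s - s_prev) + ((t - q) / t) * trend
    return s, trend, np.array(fitted)

def mhes_forecast(x, p, q, h):
    """h-step-ahead forecasts X_hat(h) = S_n + h * T_n, equation (2.6)."""
    s, trend, _ = mhes_filter(x, p, q)
    return s + trend * np.arange(1, h + 1)

def smape(actual, forecast):
    """Symmetric MAPE as used in the M3-competition literature."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return 200.0 * np.mean(np.abs(actual - forecast) / (actual + forecast))

def fit_mhes_sequential(x):
    """Assumed sketch of the sequential optimization: choose p minimizing
    the in-sample one-step-ahead sMAPE with q = 0, then choose q <= p the
    same way, by exhaustive search over the admissible integers."""
    n = len(x)
    def insample(pp, qq):
        _, _, fitted = mhes_filter(x, pp, qq)
        return smape(x[1:], fitted[1:])     # skip t = 1 (no prior forecast)
    p = min(range(1, n + 1), key=lambda pp: insample(pp, 0))
    q = min(range(0, p + 1), key=lambda qq: insample(p, qq))
    return p, q
```

Setting q = 0 makes the trend recursion collapse to zero, so `mhes_forecast(x, p, 0, h)` produces the flat forecasts of the MSES(p) special case, as in the reduction noted above.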

The second version considered here is MHES(p, 1), where q = 1 and p is optimized as before. Even though this may look like a mere reparametrization of the model, this special case is of particular relevance since it allows all the trend components over time to receive equal weight, which cannot be achieved with other ES methods. For the third version we perform a simple model selection over the previous two versions: the better of the two, i.e. the one yielding the smaller in-sample forecast error for each series, is chosen and used for prediction. This version of the model will be denoted MHES select in the tables that follow. The final version, MHES(p, q), is the one where both parameters are optimized sequentially: first the optimal value of p is obtained by minimizing the in-sample one-step-ahead sMAPE while holding q = 0, and then the optimal value of q (q ≤ p) is found by minimizing the in-sample one-step-ahead sMAPE again. Reseasonalized forecasts are produced for all versions for as many steps ahead as required. It is worth noting that the out-of-sample data points of each series are left out while estimating the smoothing parameters, and only the errors on the out-of-sample points (the forecast horizons determined in the M3-competition) are reported here.

The results from the proposed models, along with results for some of the methods from the M3-competition (Makridakis and Hibon, 2000), are given in Tables 1-7. To stay consistent with other research on this data set, symmetric MAPEs are averaged across the series for each forecast horizon. Table 1 contains the results for all 3003 series of the M3-competition. It can be seen that MHES(p, 0) performs better than its competitor, the single exponential smoothing model, for all forecasting horizons, both individually and averaged. MHES(p, q) and MHES select both perform better than Holt's linear trend model, Box-Jenkins and ETS for all forecasting horizons, again individually and averaged. It is worth noting that even the single model MHES(p, 1) outperforms both the ETS and Box-Jenkins based models for relatively short forecast horizons (horizons 1-8). It is also worth underlining that the model selection for MHES select is carried out over only two parameterizations of one model, compared with the 24 alternative models considered by its competitor ETS. These results can be seen as evidence of the strong positive effect this modification can have on forecasting accuracy. The results can be studied more closely in Tables 2 and 3 for seasonal and non-seasonal data. From these tables it can be seen that MHES(p, 0) again outperforms single exponential smoothing on both data sets for all forecasting horizons. MHES(p, q) and MHES select both perform better than Holt's linear trend model and Box-Jenkins for both the seasonal and the non-seasonal data. ETS performs slightly better than the proposed modifications for seasonal data, due to the fact that it allows for various seasonal components (additive and multiplicative); however, for non-seasonal data MHES(p, q) and especially MHES select outperform ETS on all forecasting horizons. Tables 4-7 summarize the results for the annual, quarterly, monthly and other data sets, respectively. MHES(p, 0) produced results comparable to single exponential smoothing for the quarterly and other data sets, while single exponential

smoothing performed slightly better for the annual data and MHES(p, 0) performed much better for the monthly data.

Table 1. Average symmetric MAPE across different forecast horizons: all 3003 series

Method         1     2     3     4     5     6     8     12    15    18    1-4    1-6    1-8    1-12   1-15   1-18
Naive2         10.5  11.3  13.6  15.1  15.1  15.9  14.5  16.0  19.3  20.7  12.62  13.57  13.76  14.24  14.81  15.47
Single         9.5   10.6  12.7  14.1  14.3  15.0  13.3  14.5  18.3  19.4  11.73  12.71  12.84  13.13  13.67  14.32
Holt           9.0   10.4  12.8  14.5  15.1  15.8  13.9  14.8  18.8  20.2  11.67  12.93  13.11  13.42  13.95  14.60
B-J automatic  9.2   10.4  12.2  13.9  14.0  14.6  13.0  14.1  17.8  19.3  11.42  12.39  12.52  12.78  13.33  13.99
ETS            8.8   9.8   12.0  13.5  13.9  14.7  13.0  14.1  17.6  18.9  11.04  12.13  12.32  12.66  13.14  13.77
MHES(p, 0)     8.9   10.0  12.1  13.7  13.9  14.7  12.8  13.9  17.3  18.9  11.16  12.21  12.34  12.64  13.13  13.77
MHES(p, 1)     8.4   9.7   11.5  12.9  13.6  14.2  12.9  15.4  18.9  20.9  10.64  11.72  11.94  12.66  13.32  14.09
MHES select    8.7   9.6   11.5  12.9  13.1  13.7  12.1  13.7  17.3  18.7  10.69  11.58  11.69  12.09  12.64  13.28
MHES(p, q)     8.6   9.7   11.7  13.5  13.7  14.5  12.5  13.6  17.3  18.7  10.89  11.96  12.08  12.37  12.87  13.50

For the annual data in Table 4, MHES(p, q) and MHES select both outperformed Holt, Box-Jenkins and ETS for all forecasting horizons. The superiority of MHES select for annual data is not only over these three models: for annual data it was able to outperform all competitors of the M3-competition, including the best-performing Theta method (Assimakopoulos and Nikolopoulos, 2000). It is worth noticing that for the same annual data ETS is unable to produce better forecasts than the naive approach. For the quarterly series in Table 5, MHES select and MHES(p, q) produced better forecasts than Holt, Box-Jenkins and ETS when the averaged sMAPEs are studied, for both short- and long-term forecasting horizons. For monthly data the results can be seen in Table 6: here MHES(p, q) and MHES select outperform Holt and Box-Jenkins consistently, while ETS performs slightly better for short horizons and the proposed approaches perform better for long horizons, so they produce comparable results. Finally, for the other series in Table 7, the proposed modified approaches MHES select and MHES(p, q) both perform better than Holt, Box-Jenkins and ETS for all forecasting horizons, individually and when averaged over both short and long horizons.

4 Conclusion

The modification to Holt's linear trend method proposed in this paper is a simple, fast, computationally inexpensive and accurate alternative extrapolative forecasting method. Four versions of the proposed model (two special parameterizations, a model selection strategy over these two special cases, and a fully optimized version) were applied to the M3-competition data sets, and it was shown that the proposed modification can perform as well as, and in most cases much better than, models based on the two major forecasting approaches:

Box-Jenkins and exponential smoothing.

Table 2. Average symmetric MAPE across different forecast horizons: 862 seasonal series

Method         1     2     3     4     5     6     8     12    15    18    1-4    1-6    1-8    1-12   1-15   1-18
Naive2         8.0   8.1   9.5   9.5   9.9   11.5  12.1  11.0  14.0  15.5  8.77   9.41   10.12  10.54  10.91  11.40
Single         7.1   7.4   8.8   8.7   9.3   10.9  11.3  10.7  13.1  14.6  8.02   8.71   9.42   9.78   10.13  10.62
Holt           6.5   6.9   8.2   8.4   9.4   10.6  11.2  11.5  13.2  15.3  7.50   8.33   9.15   9.66   10.09  10.67
B-J automatic  7.1   7.4   8.0   8.8   9.2   10.3  10.5  10.5  13.3  14.5  7.82   8.46   9.03   9.31   9.79   10.37
ETS            6.2   6.4   7.7   8.2   8.9   10.2  10.6  10.1  12.0  14.0  7.12   7.93   8.67   9.01   9.35   9.87
MHES(p, 0)     6.5   6.9   8.0   8.3   9.2   10.9  11.1  10.7  12.3  14.2  7.45   8.30   9.09   9.44   9.75   10.22
MHES(p, 1)     6.2   7.0   7.7   8.0   9.1   10.2  10.2  10.2  12.3  14.0  7.23   8.03   8.69   9.20   9.59   10.09
MHES select    6.5   6.9   7.8   8.0   8.7   10.2  10.0  10.3  11.7  13.5  7.28   8.01   8.64   9.15   9.51   10.02
MHES(p, q)     6.4   6.8   7.8   8.1   8.9   10.6  10.6  10.4  11.8  14.0  7.26   8.10   8.85   9.20   9.49   9.96

Table 3. Average symmetric MAPE across different forecast horizons: 2141 nonseasonal series

Method         1     2     3     4     5     6     8     12    15    18    1-4    1-6    1-8    1-12   1-15   1-18
Naive2         11.5  12.6  15.3  17.3  17.1  17.5  15.9  19.2  22.8  24.1  14.17  15.22  15.32  15.97  16.73  17.54
Single         10.4  11.9  14.3  16.3  16.3  16.5  14.5  17.0  21.8  22.5  13.22  14.28  14.30  14.70  15.40  16.21
Holt           10.0  11.9  14.8  17.4  18.1  18.5  16.2  17.8  23.5  24.8  13.52  15.10  15.23  15.69  16.43  17.26
B-J automatic  10.0  11.6  13.9  15.9  16.0  16.4  14.4  16.4  20.7  22.4  12.87  13.97  14.04  14.43  15.09  15.85
ETS            9.9   11.2  13.7  15.6  15.9  16.6  14.4  16.7  21.3  22.2  12.61  13.83  13.91  14.39  15.03  15.77
MHES(p, 0)     9.8   11.3  13.7  15.8  15.8  16.3  13.8  16.0  20.6  21.9  12.66  13.79  13.76  14.16  14.81  15.58
MHES(p, 1)     9.3   10.8  13.1  14.9  15.4  15.8  14.4  18.4  23.2  25.4  12.02  13.21  13.36  14.30  15.17  16.15
MHES select    9.6   10.7  13.0  14.9  14.8  15.0  13.3  15.9  21.0  22.2  12.06  13.01  13.02  13.54  14.28  15.09
MHES(p, q)     9.5   10.9  13.3  15.7  15.7  16.1  13.6  15.7  20.8  21.8  12.34  13.52  13.49  13.87  14.54  15.31

An Excel file containing the optimum parameter values for the proposed methods for each M3-competition data set, along with the forecasts and errors, is provided as supplementary material and can be accessed at the journal's website. The models' success can be attributed to the fact that they are less dependent on the initial values and are more flexible. When the smoothing parameters are equal to 1, i.e. p = 1 and q = 1, the method assigns equal weights to all past observations when estimating the level and trend, which is a very intuitive starting point assuming the future is represented by the average of the past. From the tables in Section 3 it can be clearly seen how important it is to be able to assign

equal weights to past observations, which cannot be achieved by other exponential smoothing models.

Table 4. Average symmetric MAPE across different forecast horizons: 645 annual series

Method         1     2     3     4     5     6     1-4    1-6
Naive2         8.5   13.2  17.8  19.9  23.0  24.9  14.85  17.88
Single         8.5   13.3  17.6  19.8  22.8  24.8  14.82  17.82
Holt           8.3   13.7  19.0  22.0  25.2  27.3  15.77  19.27
B-J automatic  8.6   13.0  17.5  20.0  22.8  24.5  14.78  17.73
ETS            9.3   13.6  18.3  20.8  23.4  25.8  15.48  18.53
MHES(p, 0)     9.1   13.5  17.6  19.9  22.8  25.1  15.04  18.00
MHES(p, 1)     8.3   12.2  16.8  18.6  21.5  23.3  13.95  16.78
MHES select    8.3   11.5  15.6  17.7  20.5  22.0  13.28  15.94
MHES(p, q)     8.3   12.4  17.0  20.0  23.1  25.1  14.40  17.62

Table 5. Average symmetric MAPE across different forecast horizons: 756 quarterly series

Method         1     2     3     4     5     6     8     1-4    1-6    1-8
Naive2         5.4   7.4   8.1   9.2   10.4  12.4  13.7  7.55   8.82   9.95
Single         5.3   7.2   7.8   9.2   10.2  12.0  13.4  7.38   8.63   9.72
Holt           5.0   6.9   8.3   10.4  11.5  13.1  15.6  7.67   9.21   10.67
B-J automatic  5.5   7.4   8.4   9.9   10.9  12.5  14.2  7.79   9.10   10.26
ETS            5.0   6.6   7.9   9.7   10.9  12.1  14.2  7.32   8.71   9.94
MHES(p, 0)     5.2   7.1   7.8   9.7   10.1  11.8  13.5  7.45   8.62   9.71
MHES(p, 1)     5.3   6.8   7.6   9.1   9.9   11.0  12.4  7.19   8.28   9.24
MHES select    5.2   7.0   7.7   9.2   9.6   11.1  12.3  7.28   8.30   9.21
MHES(p, q)     5.0   6.9   7.6   9.5   10.2  11.9  13.7  7.23   8.51   9.69

Note that we have not done any preprocessing of the data, identification of outliers or level shifts, or used any other strategy designed to improve the forecasts; these results are based on simple applications of the algorithms to the data. The results of the modification should be expected to improve further if sophisticated data preprocessing techniques are used, as was done by some of the competitors in the M3-competition. It is clear that the data sets include other types of trend (multiplicative, damped, multiplicative damped) and seasonal components (additive, multiplicative), in addition to different types of errors (additive and multiplicative), and when these components are incorporated into the proposed method, the forecasting performance of the proposed method

will improve further.

Table 6. Average symmetric MAPE across different forecast horizons: 1428 monthly series

Method         1     2     3     4     5     6     8     12    15    18    1-4    1-6    1-8    1-12   1-15   1-18
Naive2         15.0  13.5  15.7  17.0  14.9  14.4  15.6  16.0  19.3  20.7  15.30  15.08  15.26  15.55  16.16  16.89
Single         13.0  12.1  14.0  15.1  13.5  13.1  13.8  14.5  18.3  19.4  13.53  13.44  13.60  13.83  14.51  15.32
Holt           12.2  11.6  13.4  14.6  13.6  13.3  13.7  14.8  18.8  20.2  12.95  13.11  13.33  13.77  14.51  15.36
B-J automatic  12.3  11.7  12.8  14.3  12.7  12.3  13.0  14.1  17.8  19.3  12.78  12.70  12.86  13.19  13.95  14.80
ETS            11.5  10.6  12.3  13.4  12.3  12.3  13.2  14.1  17.6  18.9  11.93  12.05  12.43  12.96  13.64  14.45
MHES(p, 0)     11.5  10.8  12.6  13.8  12.6  12.5  12.9  13.9  17.3  18.9  12.20  12.33  12.78  12.98  13.67  14.49
MHES(p, 1)     11.0  10.9  12.2  13.4  12.8  12.8  13.8  15.4  18.9  20.9  11.86  12.16  13.19  14.07  15.01  15.33
MHES select    11.6  11.0  12.6  13.8  12.4  12.3  12.7  13.7  17.3  18.7  12.22  12.26  12.65  12.84  13.53  14.31
MHES(p, q)     11.6  10.9  12.5  13.8  12.4  12.3  12.7  13.6  17.3  18.7  12.18  12.23  12.42  12.79  13.47  14.27

Table 7. Average symmetric MAPE across different forecast horizons: 174 other series

Method         1     2     3     4     5     6     8     1-4    1-6    1-8
Naive2         2.2   3.6   5.4   6.3   7.8   7.6   9.2   4.38   5.49   6.30
Single         2.1   3.6   5.4   6.3   7.8   7.6   9.2   4.36   5.48   6.29
Holt           1.9   2.9   3.9   4.7   5.8   5.6   7.2   3.32   4.13   4.81
B-J automatic  1.8   3.0   4.5   4.9   6.1   6.1   7.5   3.52   4.38   5.06
ETS            2.0   3.0   4.0   4.4   5.4   5.1   6.3   3.37   3.99   4.51
MHES(p, 0)     2.1   3.5   5.4   6.3   7.8   7.5   9.1   4.34   5.45   6.26
MHES(p, 1)     1.9   2.9   4.1   4.8   6.0   5.7   7.1   3.46   4.26   4.87
MHES select    1.8   2.8   4.0   4.6   5.7   5.3   6.6   3.30   4.03   4.60
MHES(p, q)     1.7   2.6   3.7   4.3   5.4   4.9   6.2   3.09   3.77   4.30

In this study the main focus was on obtaining point forecasts only. Further work will cover the computation of prediction intervals and the extension of this concept to other ES models.

References

Assimakopoulos, V., Nikolopoulos, K., 2000. The theta model: a decomposition approach to forecasting. International Journal of Forecasting 16 (4), 521-530.
Box, G. E., Jenkins, G. M., Reinsel, G., 1970. Time Series Analysis: Forecasting and Control.
Brown, R. G., 1959. Statistical Forecasting for Inventory Control. McGraw-Hill.

De Gooijer, J. G., Hyndman, R. J., 2005. 25 years of IIF time series forecasting: a selective review. Tinbergen Institute Discussion Paper No. TI 05-068.
Gardner, E. S., 1985. Exponential smoothing: The state of the art. Journal of Forecasting 4 (1), 1-28.
Gardner, E. S., 2006. Exponential smoothing: The state of the art, Part II. International Journal of Forecasting 22 (4), 637-666.
Gardner Jr., E. S., McKenzie, E., 1985. Forecasting trends in time series. Management Science 31 (10), 1237-1246.
Goodwin, P., et al., 2010. The Holt-Winters approach to exponential smoothing: 50 years old and going strong. Foresight 19, 30-33.
Holt, C., 1957. Forecasting trends and seasonal by exponentially weighted averages. International Journal of Forecasting 20 (1), 5-10.
Hyndman, R., Koehler, A. B., Ord, J. K., Snyder, R. D., 2008. Forecasting with Exponential Smoothing: The State Space Approach. Springer-Verlag.
Hyndman, R. J., Koehler, A. B., Snyder, R. D., Grose, S., 2002. A state space framework for automatic forecasting using exponential smoothing methods. International Journal of Forecasting 18 (3), 439-454.
Makridakis, S., Andersen, A., Carbone, R., Fildes, R., Hibon, M., Lewandowski, R., Newton, J., Parzen, E., Winkler, R., 1982. The accuracy of extrapolation (time series) methods: Results of a forecasting competition. Journal of Forecasting 1 (2), 111-153.
Makridakis, S., Hibon, M., 1991. Exponential smoothing: The effect of initial values and loss functions on post-sample forecasting accuracy. International Journal of Forecasting 7 (3), 317-330.
Makridakis, S., Hibon, M., 2000. The M3-Competition: results, conclusions and implications. International Journal of Forecasting 16 (4), 451-476.
Makridakis, S. G., Andersen, A., Carbone, R., Fildes, R., Hibon, M., Lewandowski, R., Newton, J., Parzen, E., Winkler, R., 1984. The Forecasting Accuracy of Major Time Series Methods. Wiley.
Pegels, C. C., 1969. On startup or learning curves: An expanded view. AIIE Transactions 1 (3), 216-222.
Taylor, J. W., 2003. Exponential smoothing with a damped multiplicative trend. International Journal of Forecasting 19 (4), 715-725.
Yapar, G., 2016. Modified simple exponential smoothing. Hacettepe University Journal of Mathematics and Statistics, early access.