Bachelor thesis. Forecasting inflation in Sweden - A univariate approach. Eva Huselius and Linn Wallén


Bachelor thesis, Department of Statistics
Kandidatuppsats, Statistiska Institutionen, Nr 2014:11

Forecasting inflation in Sweden - A univariate approach

Eva Huselius and Linn Wallén

Bachelor thesis, 15 HE credits
Statistics III, Spring 2014
Supervisors: Pär Stockhammar and Gebrenegus Ghilagaber


Abstract

State space models are dynamic models that take into account how the unobserved components describing a time series develop over time. This leads to estimation of fewer parameters and smaller specification errors. The aim of this study was to evaluate univariate time series methods based on an underlying state space model for predicting the Swedish inflation rate. Exponential smoothing and ARIMA models, both in their regular form and derived from an underlying state space model, were fitted, and the forecasts were compared with NIER's. The results showed that a state space MA(9) performed best relative to NIER and had smaller specification errors. In times of a varying pattern, an original ARMA(1,11) model, with and without a seasonality of 12, often performed well but at too high a level. In times of stagnation the state space exponential smoothing (ETS) models performed well by capturing the accurate level. The conclusion is that different univariate models can perform well in different economic cycles, but multivariate state space models would probably be better over longer periods.

Table of contents

1 Introduction
  1.1 Background
  1.2 Measures of inflation in Sweden
  1.3 Aim of the study
2 Modelling time series data
  2.1 Exponential smoothing
  2.2 Autoregressive Integrated Moving Average (ARIMA) Models
  2.3 The state space approach
    2.3.1 Introduction to the state space approach
    2.3.2 Kalman filtering
    2.3.3 Exponential smoothing in state space form
    2.3.4 ARIMA in state space form
3 Illustration with Swedish inflation data
  3.1 The data set
    3.1.1 Test for heteroscedasticity
    3.1.2 Test for stationarity
  3.2 Model evaluation measures
  3.3 Forecast accuracy measures
4 Results
  4.1 Model fit and split half method
    Model fit Exponential smoothing
    Model fit ARIMA
    Model fit Exponential smoothing in state space form
    Model fit ARIMA in state space form
  4.2 Forecast evaluation
  4.3 Forecast interpretation
    Forecast …
    Forecast …
    Forecast …
    Forecast …
5 Discussion
References
Appendix A
Appendix B
Appendix C

1 Introduction

1.1 Background

In different areas, such as economics, there is an interest in modelling data over time as well as in forecasting future outcomes. Inflation is one of those outcomes, since inflation forecasts play a crucial role in monetary policy analysis (Manopimoke, 2013). The Swedish inflation development is forecasted by the National Institute of Economic Research, NIER, and the forecasts are then used for decision making about economic policy. There are several ways of modelling data over time, and it can be done with either one or several predictor variables. The univariate methods most often considered are exponential smoothing and the Box-Jenkins (ARIMA) methods. When ARIMA models were fitted to GDP data for the Philippines, the model containing one autoregressive and one moving average term, ARMA(1,1), appeared to have the best fit according to the Schwarz information criterion, SIC, and adjusted R-square (Tamayo, Cuizon, & Zagpang, 2014). The inflation of Ghana, on the other hand, has been modelled by different exponential smoothing methods, and the damped trend model came out as the most appropriate, both with data transformations, using log and square root, and without transformations (Ofori & Ephraim, 2012). The inflation rate of Nepal has earlier been considered to follow a random walk, an AR(1) with a time-independent coefficient, but according to Koirala (2013) this is an invalid presumption in reality: changing inflation expectations have a larger impact than previously assumed, which increases the gap between actual and target inflation. The Phillips curve, the negative relationship between inflation and unemployment (Phillips, 1958), has often been used for modelling and forecasting inflation, since it has appeared stable, reliable and accurate in comparison with other alternatives. The inflation of Hong Kong, for example, was modelled using unobserved components analysis in accordance with the Phillips curve.
The price dynamics in the short and long run were predicted by the US trend inflation and the US and China output gaps (Manopimoke, 2013). The conclusion from that study was that these types of models can yield fruitful results, since they carry more economic content than the univariate approach. The unobserved components analysis, or state space approach, can be described as a way of gathering all available information relevant to the future. This method has also been successful in forecasting other economic quantities, such as the money reserve (Pandher, 2007). Given a certain model and information about the last observation, the state space framework makes it possible to forecast as far into the future as preferred. An example is from early space shuttle mission landings, when the control engineers were heard to say "Your state vector is looking good!" This indicated that all relevant measurements were continuously calculated, and from them the coming flight path was computed and updated (Brocklebank & Dickey, 2003). This repeated updating is the basic idea behind the state space forecasting approach. When modelling export data, state space models have appeared to perform better than original ARIMA models according to the information criteria AIC and SBC, as well as RMSE and MAE (Ravichandran, 2001).

The objective of this thesis is to apply the state space framework to univariate forecasting methods on Swedish inflation data and to evaluate how well it works in practice compared with the common univariate methods. The forecasts will be compared with those made by NIER.

1.2 Measures of inflation in Sweden

Inflation is defined as a sustained rise in the general price level paid by consumers. When the price level rises by, say, 0.2 percent in one year, we say that the inflation is 0.2 percent. The wage-price spiral captures, and to some extent explains, the mechanism behind the rising prices. When firms have to pay higher wages they need to set higher prices, and the price level increases. In response to the higher price level, workers ask for higher wages, and again the higher wages make the firms increase their prices, so the price level rises even more. We end up with a steady inflation as long as prices keep rising; if prices start to fall there is instead deflation, a negative inflation rate (Sveriges Riksbank, 2011). The Swedish price-level change is measured by the consumer price index, CPI, which is computed from the price changes in the same basket of goods and services each month. Price changes for the items in the predetermined basket are averaged, weighted according to their importance. The predicted rate of inflation comes from the expected price changes in the different groups of goods and services (Statistiska Centralbyrån, 2013). There are two further measures, KPIF and KPIX, which capture the underlying CPI. KPIF is the CPI computed with the interest rate on households' homes held constant. KPIX excludes households' interest costs for their homes, and also excludes the direct effects of changes in taxes and subsidies. The effects of interest rates are excluded because this makes it easier to explain why an increased interest rate otherwise also increases the measured inflation.
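The basket logic above can be sketched numerically. A minimal sketch, where the product groups, expenditure weights and price changes are invented for illustration and are not SCB's actual basket:

```python
# Toy consumer price index: each product group's price relative is weighted by
# its share of household expenditure, and the weighted average is then turned
# into an inflation rate. Groups and weights are invented, not SCB's basket.

def price_index(weights, price_relatives):
    """Weighted average of price relatives (current price / base price)."""
    assert abs(sum(weights) - 1.0) < 1e-9   # expenditure shares must sum to 1
    return sum(w * r for w, r in zip(weights, price_relatives))

weights = [0.4, 0.35, 0.25]        # expenditure shares of three product groups
relatives = [1.02, 1.00, 1.04]     # price change factor for each group

index = price_index(weights, relatives)
inflation_pct = (index - 1.0) * 100   # percentage change in the price level
```

The same weighted-average idea carries over to the real index, where the weights are updated as household consumption patterns change.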
All inflation measures are calculated and published by Statistics Sweden (Statistiska Centralbyrån, SCB) on behalf of the National Bank of Sweden (Sveriges Riksbank, 2011). Inflation forecasts are currently produced by the National Institute of Economic Research, as mentioned previously. Expected price changes in different groups of goods and services are weighted together to get an estimate of the general price development.

1.3 Aim of the study

The aim of this study is to evaluate univariate time series methods based on an underlying state space model for predicting the Swedish inflation rate. The thesis is organized as follows. In section 2, different univariate methods for time series modelling and forecasting are considered and the state space approach for dynamic time series modelling is introduced. Subsequently, in section 3, an empirical application follows where the different methods are applied in order to fit a model to the annual inflation rates; the inflation rates are tested for stationarity and heteroscedasticity. In section 4 the different methods are first fitted to the inflation data and the best models are then used for forecasting. Diagnostic tests will be applied

to the chosen models, and forecasting accuracy will be tested both in sample (nlag = 0) and out of sample on forecasting horizons of 6 and 17 months. The forecasts will be compared with the inflation forecasts made by the National Institute of Economic Research (NIER). A discussion and conclusion follow in section 5.

2 Modelling time series data

2.1 Exponential smoothing

Exponential smoothing is a rather simple forecasting method. It separates the signal, a deterministic component, from the noise, a stochastic component, to obtain the pattern of a time series; the smoother then creates an estimate of the signal. The basic exponential smoothing methods are of first and second order, but the smoothing process can also be applied in higher orders. In first-order, or simple, exponential smoothing (SES), also called Brown's method, the smoothed value is a linear combination of the current observation and the smoothed level at the previous point in time. This is an appropriate smoother when the process is considered to be constant, i.e. expected to vary around a certain level with stochastic fluctuations (Brown, 1956). An important assumption for the exponential smoothers is that the random noise consists of independent shocks to the process, in other words no serial autocorrelation.

The exponentially smoothed observation:

ỹ_t = α y_t + (1 − α) ỹ_{t−1}   (i)

When the time series is defined by a level only:

l_t = α y_t + (1 − α) l_{t−1}   (ii)   (2.1)

The second-order smoother, Holt's method or double exponential smoothing (DES), is a better alternative when the time series shows a linear trend over time. The first-order smoother would in this case consistently underestimate the data and therefore be biased. A simple exponential smoother with a large discount factor could also do the job, but it would at the same time fail to smooth the series, which defeats the purpose (Montgomery, Jennings, & Kulahci, 2008).
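A minimal sketch of the recursion in (2.1) in pure Python; the series and the smoothing constant α = 0.3 are illustrative only, not values used in the thesis:

```python
# Simple (first-order) exponential smoothing, eq. (2.1): the smoothed level is
# a weighted average of the newest observation and the previous level.

def ses(y, alpha, l0=None):
    """Return the smoothed levels l_t = alpha*y_t + (1-alpha)*l_{t-1}."""
    level = y[0] if l0 is None else l0    # simple choice of initial level
    levels = []
    for obs in y:
        level = alpha * obs + (1 - alpha) * level
        levels.append(level)
    return levels

series = [2.0, 2.1, 1.9, 2.2, 2.0, 2.3]
levels = ses(series, alpha=0.3)
forecast = levels[-1]   # flat forecast: every future point equals the last level
```

Note that the point forecast is flat, which is exactly why SES is only suited to series fluctuating around a constant level.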
The level at time t:
l_t = α y_t + (1 − α)(l_{t−1} + b_{t−1})   (i)
Trend at time t, additive:
b_t = β (l_t − l_{t−1}) + (1 − β) b_{t−1}   (ii)
Trend at time t, multiplicative:
b_t = β (l_t / l_{t−1}) + (1 − β) b_{t−1}   (iii)   (2.2)

Exponential smoothing can also be useful for seasonal series; with both seasonality and trend it is called Winters' method. The seasonality can be either additive or multiplicative. The additive type is appropriate if the amplitude of the seasonal pattern remains roughly constant over time. The restriction of this model is that the

seasonal adjustments add to zero over one period. If the amplitude of the seasonal pattern changes in proportion to the average level of the series, a multiplicative model is a better choice; the restriction is then that the seasonal adjustments sum to the length of the seasonal period (Winters, 1960). For all the exponential smoothers the assumptions of uncorrelated errors and homoscedastic variance have to be met.

The level at time t:
l_t = α (y_t − s_{t−m}) + (1 − α)(l_{t−1} + b_{t−1})   (i)
Trend at time t, additive:
b_t = β (l_t − l_{t−1}) + (1 − β) b_{t−1}   (ii)
Seasonality at time t, additive:
s_t = γ (y_t − l_{t−1} − b_{t−1}) + (1 − γ) s_{t−m}   (iii)
Trend at time t, multiplicative:
b_t = β (l_t / l_{t−1}) + (1 − β) b_{t−1}   (iv)
Seasonality at time t, multiplicative:
s_t = γ (y_t / (l_{t−1} b_{t−1})) + (1 − γ) s_{t−m}   (v)   (2.3)

For an additive damped trend, b_{t−1} is multiplied by a damping parameter, ϕ. For a multiplicative damped trend, b_{t−1} is instead raised to the power of the damping parameter, ϕ. If there are changes in the behaviour of a time series, such as changes in the trend, an adaptive rather than a fixed discount factor can be applied. This does not necessarily improve forecasts, though; the true effect may be the reverse (Gardner, 1985). When using exponential smoothing, the initial values of level and trend are often obtained by fitting a linear regression: the intercept is then taken as the initial level and the slope as the initial trend (Hyndman, Koehler, Ord, & Snyder, 2008). If higher-order exponential smoothers seem to be required, an ARIMA model should rather be considered, since the former become quite complicated. Brown's original work on models underlying the exponential smoothing technique has since been developed, and the trend component has often turned out to be too optimistic or pessimistic.
When a time series exhibits a trend over time, Holt's model is to be preferred to Brown's, and when a linear trend is present it should be damped when forecasting over a long time period, more than 3 to 4 periods ahead. The damping parameter modifies the linear trend to become more realistic and also makes it easier to difference the series in order to achieve stationarity (McKenzie & Gardner, 2010). The exponential smoothing methods suffer from not having any procedure for objective statistical identification, nor a diagnostic system for evaluating the goodness of competing exponential models. The fit is not based on any hypothesis testing for the parameters or checks concerning the error terms, which is why these have been considered ad hoc statistical models (Hyndman, Koehler, Ord, & Snyder, 2008). Since most of the exponential models are special cases of the more general ARIMA models, they have been considered less useful.
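The level, trend and damping recursions of eq. (2.2) with an additive damped trend can be sketched as follows; this is one common damped-trend formulation, the parameter values are illustrative, and ϕ = 1 recovers the ordinary linear trend:

```python
# Holt's linear-trend method with an additive damped trend:
#   level  l_t = alpha*y_t + (1-alpha)*(l_{t-1} + phi*b_{t-1})
#   trend  b_t = beta*(l_t - l_{t-1}) + (1-beta)*phi*b_{t-1}
#   h-step forecast: l_t + (phi + phi^2 + ... + phi^h) * b_t

def damped_holt(y, alpha, beta, phi):
    level, trend = y[0], y[1] - y[0]      # simple initial level and trend
    for obs in y[2:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (prev_level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    return level, trend

def damped_forecast(level, trend, phi, h):
    damp = sum(phi ** i for i in range(1, h + 1))
    return level + damp * trend

level, trend = damped_holt([2.0, 2.2, 2.4, 2.5, 2.7], alpha=0.5, beta=0.3, phi=0.9)
two_step = damped_forecast(level, trend, 0.9, 2)
```

With ϕ < 1 the forecast trend flattens out instead of extrapolating linearly forever, which is the "more realistic" long-horizon behaviour referred to above.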

2.2 Autoregressive Integrated Moving Average (ARIMA) Models

When modelling with ARIMA the time series needs to be stationary, which means it should exhibit a similar behaviour over time in the form of a constant expected value and autocovariance. ARIMA models are used to model the serial dependence in a time series, where the AR terms model the interdependency in Y and the MA terms describe how Y depends on previous error terms (Montgomery, Jennings, & Kulahci, 2008).

Y_t = δ + ε_t + Φ_1 Y_{t−1} + Φ_2 Y_{t−2} + … + Φ_p Y_{t−p}   (2.4)
Y_t = ε_t + Θ_1 ε_{t−1} + Θ_2 ε_{t−2} + … + Θ_q ε_{t−q}   (2.5)

Equation (2.4) represents an autoregressive model of order p, AR(p), and (2.5) is a moving average process of order q, MA(q). The appropriate numbers of AR and MA terms can be obtained from the ACF and PACF plots of the time series: the lag at which the PACF cuts off gives the order of the AR terms, and the lag at which the ACF cuts off gives the order of the MA terms appropriate for modelling the series (Montgomery, Jennings, & Kulahci, 2008). When there are both AR and MA components, both the ACF and PACF decay exponentially or exhibit a damped sinusoid pattern. Seasonality can be observed by looking for a recurring pattern in the correlation functions: a seasonal pattern in the PACF indicates a seasonal pattern in the AR term, and if it appears in the ACF it hints at seasonality in the MA term, accordingly.

2.3 The state space approach

2.3.1 Introduction to the state space approach

The state space approach, also called dynamic linear modelling, applied to the univariate methods does not lead to any new techniques for forecasting, but it makes it possible to use a common mathematical framework in the model development (Hyndman, Koehler, Ord, & Snyder, 2008). When a model can be put in state space form, many powerful statistical properties follow. It is simply a way to put all processes in a common form.
Optimal forecasts can be made through the Kalman filter, explained in section 2.3.2, and optimal estimates of unobserved components can be achieved through smoothing (Harvey, 1984). The unobserved variables are the state variables, and estimates of the model parameters as well as of unobserved expectations are obtained simultaneously (Burmeister & Wall, 1982). It is important to distinguish the method from the model: the method can be seen as an algorithm producing point forecasts, a point-forecasting equation, while the stochastic state space framework is an underlying model. The state space expression makes the models easier to interpret, not least when the parameter values are extreme. Most importantly, the state space models often lead to smaller specification errors (Hyndman, Koehler, Ord, & Snyder, 2008). The state space models work in two steps. First, the state vector is created, where significant components are added until it is complete. The main purpose of state space

modeling is to find the state vector, which is the smallest vector that summarizes the past of the system in full (Brocklebank & Dickey, 2003). The state space approach then expresses the smoothing in two linear equations, see (2.6). Eq. (i) is the observation equation, which describes the relationship between the unobserved states and the current observation. Eq. (ii), the state equation, describes the evolution of the states over time, which can be understood as the inter-temporal dependencies between values in the series. Through the state equation, the state vector is continuously updated. The state vector contains unobserved components, such as the level, trend and seasonality, or the AR and MA components of a time series.

y_t = w′x_{t−1} + ɛ_t   (i)
x_t = F x_{t−1} + g ɛ_t   (ii)   (2.6)

w = vector of coefficients (effect of past components on the current observation y_t)
F = matrix of coefficients (effect of the past state on the current state x_t)
g = vector of coefficients (effect of the error on the current state, the persistence vector)
x_t = the state vector, a vector of unobserved components at time t (such as y_t, ɛ_t, l_t, b_t and s_t)
ɛ_t = white noise

The transition matrix F determines the dynamic properties of the state space model, and the coefficient vector g gives the variance structure of the state equation. In forecasting mode, the observation and state equations become:

y_{t+1} = w′x_t + ɛ_{t+1}   (i)
x_{t+1} = F x_t + g ɛ_{t+1}   (ii)   (2.7)

Applying dynamic modelling techniques has appeared to give better performance than the original forecasting techniques (Ravichandran, 2001). For the AR(1) model all forecasts are linear combinations of Y_t, which gives a state vector defined just by (Y_t). But when the model is an AR(2) the state vector will be (Y_t, Y_{t+1|t}), since the forecast of Y_{t+1} involves the observation Y_{t−1} and this value cannot be determined from Y_t alone.
When forecasting Y for time t+2, the forecast is a linear combination of the elements in the state vector, in other words of Y_t and Y_{t+1|t}. The forecasts then look like:

Y_{t+1|t} = α_1 Y_t + α_2 Y_{t−1}
Y_{t+2|t} = α_1 Y_{t+1|t} + α_2 Y_t
Y_{t+3|t} = α_1 Y_{t+2|t} + α_2 Y_{t+1|t}
         = α_1 (α_1 Y_{t+1|t} + α_2 Y_t) + α_2 (α_1 Y_t + α_2 Y_{t−1})

         = α_1 (α_1 (α_1 Y_t + α_2 Y_{t−1}) + α_2 Y_t) + α_2 (α_1 Y_t + α_2 Y_{t−1})   (2.8)

The logic behind state space modeling is that the auxiliary variables, which may or may not be observable, are continuously updated as soon as a new observation is available. The expansion above shows how the forecasted values are created from the components in the state vector: the forecasted values are linear combinations of linear combinations. At the end of period t, Y_t has been observed and goes from random to fixed. But how do we know what to put in the state vector in the first place? This is done by calculating the covariances between Y_t and Y_{t+j}; the AR case will serve as the example here. The components in the state vector, and therefore the variables between which the covariances are estimated, depend on the components that describe the method; in the exponential smoothing case these would be level, trend and seasonality. For an AR(2) process, Y_t = A Y_{t−2} + B Y_{t−1} + e_t, the state equation in companion form is

( Y_{t−1} )   ( 0  1 ) ( Y_{t−2} )   (  0  )
(  Y_t    ) = ( A  B ) ( Y_{t−1} ) + ( e_t )

where A and B are the coefficients of Y_{t−2} and Y_{t−1}, respectively. For j > 0 the autocovariances satisfy

Γ(j) = E(Y_t Y_{t−j}) = A E(Y_{t−2} Y_{t−j}) + B E(Y_{t−1} Y_{t−j}) = A Γ(j − 2) + B Γ(j − 1)

so that, using Γ(−1) = Γ(1),

Γ(1) = A Γ(1) + B Γ(0)

while for j = 0 the covariance becomes

Γ(0) = A Γ(2) + B Γ(1) + σ²_e   (2.9)

The covariances in (2.9) constitute the Yule-Walker equations, where the largest lag, or the total number of lags, is chosen by the minimum AIC value. It is now possible to compute the covariance matrix M between the vector of current and lagged Y:s, (Y_t, Y_{t−1}, Y_{t−2}), which gives the columns of M, and the row vector of current and future Y:s, (Y_t, Y_{t+1}, Y_{t+2}). The Yule-Walker equations build up the F matrix, which contains the AR coefficients. The covariances must first be estimated: an autoregressive model (in the univariate case) is fitted to the data, from which M is estimated. The matrix M is created by putting the covariance matrices together. This matrix is successively expanded.
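The recursion in (2.8) can be sketched directly: each forecast is a linear combination of the two most recent state vector elements, and every new forecast re-enters the state. The coefficients and starting values below are illustrative:

```python
# Recursive AR(2) forecasting: forecasts are linear combinations of linear
# combinations, exactly as in the expansion of Y_{t+3|t} above.

def ar2_forecasts(y_t, y_tm1, a1, a2, steps):
    """Forecast Y_{t+1}, ..., Y_{t+steps} from the last two observations."""
    state = [y_tm1, y_t]                  # (Y_{t-1}, Y_t)
    out = []
    for _ in range(steps):
        nxt = a1 * state[-1] + a2 * state[-2]
        out.append(nxt)
        state.append(nxt)                 # forecasts re-enter the recursion
    return out

forecasts = ar2_forecasts(1.0, 0.5, 0.6, 0.3, 3)
```

Unwinding the loop for three steps reproduces the nested expression for Y_{t+3|t}, which is why the state vector (Y_t, Y_{t+1|t}) is all that needs to be carried forward.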
The state space model can be called a canonical representation of a univariate or multivariate time series process, in this case a univariate process. The canonical correlation is calculated in order to find if there is a relationship between the two sets of variables: past and future. It works through adding significant variables to the state vector and exclude those with insignificant canonical correlations, i.e. those that don t give any significant added correlation between past and future. This is the way to

determine the state vector, and when this is done the state space model can be fitted to the data.

      ( Γ(0)  Γ(1)  Γ(2) )
M:    ( Γ(1)  Γ(2)  Γ(3) )
      ( Γ(2)  Γ(3)  Γ(4) )

Figure 1. The M matrix.

From the M matrix it can be seen whether the covariance between two variables is a linear combination of the covariances, with significant canonical correlations, already added to the M matrix. Covariances in column 4 of an expanded M would be linear combinations of the covariances in the previous columns 1-3. The coefficients give a row of the F matrix: the effect of previous states on the current state, or the current state's effect on future states, respectively. In the exponential smoothing case the canonical correlation is based on the level, trend and seasonality states, and in ARIMA it is based on previous Y:s and error terms, as already mentioned. The build-up and forecasting processes work simultaneously: any prediction of Y a certain number of steps into the future is a linear combination of the previous sequence of state vector elements. The state vector is constructed by successively including forecasts j steps ahead until it is complete, which happens when the first forecast that is linearly dependent on forecasts already in the state vector is reached. At this point the expansion of the state vector is stopped. When the state vector is constructed, previous observations of the state elements are put into the vector, which builds it up, and in forecasting mode the state vector is continuously updated by each new forecasted value of Y. When the time series is defined by an MA process, it is the error terms that explain the observed and future values of Y. In the same way as the Y:s in the expressions above are already observed, the error terms already observed are known in this case. When forecasting with an MA(2), the state vector again contains the current observation together with the forecasts driven by past errors.
x_t = (Y_t, Y_{t−1})′   (i)

For the MA process the state vector (i) must be reformulated to contain the lagged error terms instead of lagged Y:s:

x_t = (Y_t, −θ_1 ɛ_t − θ_2 ɛ_{t−1}, −θ_2 ɛ_t)′
x_{t+1} = (Y_{t+1}, −θ_1 ɛ_{t+1} − θ_2 ɛ_t, −θ_2 ɛ_{t+1})′   (ii)

Y_t = ɛ_t − θ_1 ɛ_{t−1} − θ_2 ɛ_{t−2}
Y_{t+1|t} = −θ_1 ɛ_t − θ_2 ɛ_{t−1}
Y_{t+2|t} = −θ_2 ɛ_t
Y_{t+3|t} = 0

(2.10)

It can be seen that for an MA(2) model, forecasts more than 2 steps ahead, more than q steps ahead in the general case, are linear combinations of state vector elements equal to zero. With the state vector written as x_t = (ɛ_t, ɛ_{t−1})′, the equations become:

Observation equation:
y_t = w′x_{t−1} + ɛ_t,  with w′ = (−θ_1, −θ_2)   (i)

State equation:
( ɛ_t     )   ( 0  0 ) ( ɛ_{t−1} )   ( 1 )
( ɛ_{t−1} ) = ( 1  0 ) ( ɛ_{t−2} ) + ( 0 ) ɛ_t   (ii)   (2.11)

For the MA process the innovations thus show up in both the state equation and the observation equation. The advantage of the state space approach is that the time dependency of the underlying parameters is taken into account. What makes state space modelling useful is that the specification of the parameter structure becomes more flexible: by dividing large and complex models into smaller parts, the chance of specification errors is reduced and a more systematic model selection is enabled (Hyndman, Koehler, Ord, & Snyder, 2008).

2.3.2 Kalman filtering

Modern work with state space models began with Kalman (1960). The state space forecasts are obtained through the Kalman filtering technique (Harvey, 1984), whose importance is accentuated by Ravichandran (2001). It is the Kalman filter that updates the knowledge of the system each time a new observation is brought in, which minimizes the error terms. It yields the MMSE (minimum mean squared error) estimate of the state vector, or of the estimated parameters, given the information available at time t. The best estimate is found by filtering out the noise and projecting the measurements onto the state estimate, which updates the state vector. The filter is an optimal estimator that separates the parameters of interest from inaccurate measurements. It progressively revises the moments of the distributions of the states and the currently unobserved time series values. The filter is an algorithm used to solve linear state space models; the prediction and updating equations together constitute the Kalman filter. The Kalman filter residuals are the innovation terms, which represent the new part of Y that is not explained by the past.
We can say that it maintains the estimates of the states as well as the error covariance matrix of the state estimate (Harvey, 1984). The estimates of the states and their covariances are denoted:

x̂(k|k)   — an estimate of x(k) given measurements up to y(k)
x̂(k+1|k) — an estimate of x(k+1) given measurements up to y(k)   (i)
P(k|k)   — the covariance of x(k) given measurements up to y(k)   (ii)
P(k+1|k) — an estimate of the covariance of x(k+1) given measurements up to y(k)

If we assume that x̂(k|k), u(k) and P(k|k) are known and the new observation is y(k+1), the first step is to predict the new state and measurement:

x̂(k+1|k) = F(k) x̂(k|k) + G(k) u(k)   (iii)
ŷ(k+1|k) = w(k+1)′ x̂(k+1|k)   (iv)

From this information the measurement residual, the innovation, can be calculated:

v(k+1) = y(k+1) − ŷ(k+1|k)   (v)

and that is all that is needed for updating the state estimate:

x̂(k+1|k+1) = x̂(k+1|k) + K(k+1) v(k+1)   (vi)

where K(k+1) is called the Kalman gain. The state prediction covariance is

P(k+1|k) = F(k) P(k|k) F(k)′ + Q(k)   (vii)

where Q(k) is the covariance of the state noise. The measurement prediction covariance is

S(k+1) = w(k+1)′ P(k+1|k) w(k+1) + R(k+1)   (viii)

The Kalman gain then becomes

K(k+1) = P(k+1|k) w(k+1) S(k+1)⁻¹   (ix)

and the state covariance can finally be updated:

P(k+1|k+1) = P(k+1|k) − K(k+1) S(k+1) K(k+1)′   (x)   (2.12)

2.3.3 Exponential smoothing in state space form

In the state space approach for the exponential smoothing models it is the error terms that are smoothed; the corresponding models are called innovation state space models. The error term is the residual from the previous point in time, which is smoothed to get an accurate estimate of the future values of level, trend and seasonality. For each of the exponential methods there are two possible underlying state space models, depending on whether the error terms are assumed to act additively or multiplicatively. For the exponential smoothing methods the state vector x_t is built from, for example, level (l_t), growth (b_t) and seasonality (s_t) components. The state vector is minimized by only including components that are reachable and observable. This means that the M matrix is expanded as long as the canonical correlation between past and future components (level, trend and seasonality) increases significantly; when an insignificant component is added, the added rows of w, g and F will consist only of zeros.
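A minimal sketch of the recursions in (2.12) for the simplest possible case, a scalar local level (random walk plus noise) model, where F, w and G all collapse to 1 and u(k) = 0; the noise variances q and r and the observations are illustrative:

```python
# One Kalman step for the local level model x_{k+1} = x_k + w_k, y_k = x_k + v_k.
# All matrices are scalars here: q is the state noise variance Q, r the
# measurement noise variance R.

def kalman_step(x, p, y_new, q, r):
    # prediction
    x_pred = x                 # F = 1, no input term
    p_pred = p + q             # state prediction covariance (vii)
    # updating
    s = p_pred + r             # measurement prediction covariance (viii)
    k = p_pred / s             # Kalman gain (ix)
    v = y_new - x_pred         # innovation (v)
    x_new = x_pred + k * v     # state update (vi)
    p_new = p_pred - k * s * k  # covariance update (x), equals (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                # diffuse-ish start: vague prior about the level
for y in [1.2, 0.9, 1.1, 1.0]:
    x, p = kalman_step(x, p, y, q=0.01, r=0.1)
```

In this local level case the gain k plays exactly the role of the smoothing constant α in SES, which is the sense in which an innovation state space model can be said to underlie exponential smoothing.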

The F matrix, or transition matrix, is built from the significant covariances between elements in the state vector; where there is no significant covariance, the entry is set to zero. This is the same for all state space modelling, no matter which method it is applied to. Again, the state space equations are:

y_t = w′x_{t−1} + ɛ_t   (i)  Observation equation with additive errors
x_t = F x_{t−1} + g ɛ_t   (ii)  State equation
x_t = (l_t, b_t, s_t, s_{t−1}, …, s_{t−m+1})′  The state vector of unobserved states   (2.13)

Exponential smoothing has been considered a rather simple forecasting approach, but in a state space framework it has appeared to have advantages compared with models like ARIMA (Hyndman, Koehler, Snyder, & Grose, 2002), as the methodology is useful for different types of data as well as under different types of specification error (Gardner, 1987). The exponential smoothing methods have been considered ad hoc methods since no underlying model is defined (Chatfield, Koehler, Ord, & Snyder, 2001). But the state space models can be seen as underlying the exponential methods, and for each method there are then two state space models depending on whether the errors are additive or multiplicative. These models are denoted ETS, describing their error, trend and seasonality components (Hyndman, Koehler, Ord, & Snyder, 2008).

2.3.4 ARIMA in state space form

The observation and state equations can also be applied to the ARIMA models, but the components differ from those of the exponential state space models. The F matrix consists of calculated covariances between current and lagged Y:s when describing an AR process. When fitting these state space models the response variable, in this case the inflation rate, is mean-corrected by default. The state vector is, as always, constructed by sequentially including forecasts of Y until the first forecast that is linearly dependent on forecasts already in the vector is reached.
Putting all ARMA processes in canonical form eliminates the problem of identifying the appropriate autoregressive and moving average orders. The problem is instead to decide, from the observed data, which elements are needed to construct the state vector. This is achieved by looking at the canonical correlations and choosing the number of lags that gives the lowest AIC value. All possible ARMA processes of any dimension can be put into state space form. The following expresses an ARMA(p,q) in vector form:

x_t = A x_{t−1} − B ɛ_{t−1} + ɛ_t   (2.14)

This can be rewritten in state space form, like all ARMA models:

State space equations:

y_t = w′x_{t−1} + ɛ_t   (i)
x_t = F x_{t−1} + g ɛ_t   (ii)   (2.15)

The ARMA method works by explaining a variable through positive autoregressive and negative error terms (2.14), but the state equation, expression (ii), looks this way regardless of which method is used.

x_t = (X_t, …, X_{t−p+1})′ for an AR(p) process
x_t = (ɛ_t, …, ɛ_{t−q+1})′ for an MA(q) process
x_t = (X_t, …, X_{t−p+1}, ɛ_t, …, ɛ_{t−q+1})′ for an ARMA(p,q) process   (2.16)

In the next section the inflation data to which these methods will be applied are presented and tested for important assumptions. Evaluation measures for the fitted models are presented, as well as measures for assessing the different models' forecasting ability.

3 Illustration with Swedish inflation data

3.1 The data set

The data used are monthly observations of KPIF from January 1987 to December 2013, calculated by SCB (Statistiska Centralbyrån) on behalf of NIER (National Institute of Economic Research), in total 352 observations. When the models were then fitted and forecasts were made, however, only the rates were included (228 observations).

Figure 1. KPIF (1980=100).

From the KPIF data the levels were used to calculate monthly and annual rates of inflation. To calculate the annual rates, the percentage change in the KPIF level from a certain month in one year to the same month the year before was calculated. Monthly rates were

calculated in the same way, but with the KPIF level for one month relative to the month before.

Monthly inflation rates: (KPIF_t - KPIF_{t-1}) / KPIF_{t-1} * 100 = (KPIF_t / KPIF_{t-1} - 1) * 100

Annual inflation rates: (KPIF_t / KPIF_{t-12} - 1) * 100

Figure 2. Annual inflation rates. Figure 3. Annual inflation rates.

When visually inspecting the monthly inflation rates over longer periods there didn't seem to be any seasonal pattern, but the picture changed somewhat when looking at the years separately (see Appendix A). There is a recurring pattern defined by a dip between June and August, and also at the end of each year. Visual inspection of the plot also suggests a cyclical pattern over time, which explains the rise after January seen in the plots. But the seasonal differences are too small to be significant when formal tests are performed.

Test for heteroscedasticity

An important issue in time series modelling is whether heteroscedasticity is present; if it is, certain kinds of models must be considered. To check for heteroscedasticity the Breusch-Pagan test (Engle, 1982) was conducted, in order to find out whether there was serial autocorrelation between the squared residuals at different lags (k) in the time series. For a homoscedastic time series the error variance can be stated as follows:

var(ε_t) = E(ε_t²) = E(ε_{t-k}²) = E(ε_{t+k}²) = σ²   (3.1)

H_0: Homoscedasticity

This test has been considered the most appropriate for detecting serial autocorrelation in dynamic models (Rois, Basak, Rahman, & Majumder, 2012). It is an LM test, where autocorrelation in the error terms of a considered regression model is tested. A linear regression for the y-variable is considered, where the residuals, ε_t, may follow an autoregressive scheme.

y_t = β_0 + β_1 t + ε_t   (i)

ε_t = ρ_1 ε_{t-1} + ... + ρ_k ε_{t-k} + e_t   (ii)
LM = n · R² ~ χ²_{α,h}   (iii)   (3.2)

LM = 240 · 0,… = …,09 (p < 0,08)

The R² is obtained from fitting the auxiliary equation (ii) above. The presence of homoscedasticity can therefore not be rejected at alpha level 0,05, and the methods can be further applied.

Test for stationarity

In ARIMA modelling the time series needs to be stationary, which can be visually inspected by looking at the autocorrelation function, ACF (see Appendix A). The plots indicated stationarity, since the ACF was sharply decaying. But the Augmented Dickey-Fuller test (Dickey & Fuller, 1979) can also be conducted; it is called Augmented since there is autocorrelation in the error terms. Since the Breusch-Pagan-Godfrey test couldn't reject the null hypothesis of no serial correlation, the Dickey-Fuller test was conducted to test for stationarity. The null hypothesis is that the process is non-stationary, which is the same as saying there is a unit root, δ = 0, i.e. the model is a random walk. The model is described in (i) and the test statistic is presented in (ii).

y_t - y_{t-1} = δ y_{t-1} + ε_t   (i)
τ = δ̂ / se(δ̂)   (ii)   (3.3)

The test showed that the null hypothesis of a unit root, i.e. that the annual rates form a non-stationary process, could be rejected (p = 0,004). The ARIMA models could therefore be fitted.

3.2 Model evaluation measures

There are several ways to evaluate the fit of a model. A common way to compare model fit is to use an information criterion. There are several different criteria, but the one used in this study is the Akaike Information Criterion (Engel, 2010). When choosing between different exponential smoothing methods, AIC has appeared to perform slightly better than the other criteria (Billah, King, Snyder, & Koehler, 2006). The AIC gives the relative performance of a statistical model for a given data set; the criterion has to be interpreted in relation to another model to be useful.
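A minimal, hypothetical sketch of the Dickey-Fuller regression (i) and statistic (ii), without the augmentation lags and applied to simulated data rather than the inflation series:

```python
import random

# Minimal Dickey-Fuller sketch (no constant, no lag augmentation):
# regress y_t - y_{t-1} on y_{t-1} and form tau = delta_hat / se(delta_hat).

def df_tau(y):
    x = y[:-1]                                        # y_{t-1}
    d = [b - a for a, b in zip(y[:-1], y[1:])]        # first differences
    sxx = sum(v * v for v in x)
    delta = sum(u * v for u, v in zip(x, d)) / sxx    # OLS estimate of delta
    resid = [u - delta * v for u, v in zip(d, x)]
    s2 = sum(r * r for r in resid) / (len(d) - 1)     # residual variance
    return delta / (s2 / sxx) ** 0.5                  # tau statistic

random.seed(1)
rw, ar = [0.0], [0.0]
for _ in range(500):
    e = random.gauss(0, 1)
    rw.append(rw[-1] + e)          # random walk: delta = 0 (unit root)
    ar.append(0.2 * ar[-1] + e)    # stationary AR(1): delta = -0.8

print(df_tau(ar) < df_tau(rw))  # True: the stationary series rejects much harder
```

In practice τ is compared against Dickey-Fuller critical values rather than the usual t-distribution; the sketch only shows where the statistic comes from.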
AIC = T ln(ε̂_t²) + 2n   (3.4)

But a low AIC value doesn't say anything about forecasting ability; it is based on the fit to previous observations. The model also needs to be evaluated with respect to its

forecasting performance. This is investigated with the split half method, where one can look at the MSE and MAPE.

MAD = (1/n) Σ_{t=1}^{n} |e_t|   (3.5)
MSE = (1/n) Σ_{t=1}^{n} e_t²   (3.6)
MAPE = (1/n) Σ_{t=1}^{n} (|e_t| / |y_t|) · 100   (3.7)

The residuals also need to be investigated with respect to autocorrelation and normality. If these assumptions are met, the error terms constitute what is called white noise. The autocorrelation was tested using the Durbin-Watson statistic (Durbin & Watson, 1971).

DW = Σ_{t=2}^{n} (e_t - e_{t-1})² / Σ_{t=1}^{n} e_t²   (3.8)

To test for normality the Anderson-Darling statistic was used. This test is based on the deviation of the cumulative distribution of a variable, in this case the model residuals, from the normal cumulative distribution (Anderson & Darling, 1976).

AD = -n - (1/n) Σ_{i=1}^{n} (2i - 1)[ln F(Y_i) + ln(1 - F(Y_{n+1-i}))]   (3.9)

3.3 Forecast accuracy measures

Predictive quality, but also the adequacy of different models, are important aspects in policy analysis. Summary forecast statistics like MSE, MAD and MAPE are usually assessed for evaluating and comparing the predictive accuracy of different models, but there are also formal tests of forecasting ability. The test conducted here is the Diebold-Mariano (DM) test, which tests relative predictive accuracy under the null hypothesis of no difference between two methods (Diebold & Mariano, 1995). In this statistic g(e_it) is the loss function of NIER and g(e_jt) is the loss function corresponding to each method. When interpreting the statistic, positive values indicate that our model has better forecasting accuracy, and vice versa; a negative statistic indicates that NIER performs better.
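The measures (3.5)-(3.7) are simple averages of the forecast errors e_t; a minimal sketch with made-up numbers:

```python
# MAD, MSE and MAPE as in (3.5)-(3.7); e are forecast errors, y actual values.
# The numbers below are illustrative only, not thesis data.

def mad(e):
    return sum(abs(v) for v in e) / len(e)

def mse(e):
    return sum(v * v for v in e) / len(e)

def mape(e, y):
    return 100 * sum(abs(v) / abs(a) for v, a in zip(e, y)) / len(e)

errors = [1.0, -2.0, 3.0]
actual = [10.0, 20.0, 30.0]
print(mad(errors), mse(errors), mape(errors, actual))
# MAD = 2.0, MSE = 14/3 ≈ 4.667, MAPE ≈ 10.0 (each error is 10 % of the actual)
```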
The loss function: g(e_t) = |e_t|   (i)
The loss differential between method i and j: d_t = g(e_it) - g(e_jt)   (ii)

d̄_t denotes the mean loss differential and std(d̄_t) is its estimated standard deviation, calculated from formula (iii), where h - 1 is the number of covariances.

std(d̄_t) = √((γ(0) + 2 Σ_{k=1}^{h-1} γ(k)) / (T - 1))   (iii)
γ(k) = (1/T) Σ_{t=k+1}^{T} (d_t - d̄_t)(d_{t-k} - d̄_t)   (iv)
DM = d̄_t / std(d̄_t) ~ t(T-1)   (v)   (3.10)

The loss function is defined as the absolute error, since the direction of the forecasting error doesn't matter. Positive and negative forecasting errors would otherwise cancel each other out, which could lead to a false acceptance of the null hypothesis, a so-called type II error. An advantage of the DM test is that it is model-free and directly applicable even when the loss function isn't quadratic. It works for multi-period forecasts, and even when the errors have non-zero mean, are serially correlated and non-Gaussian (Diebold & Mariano, 1995).

4 Results

4.1 Model fit and split half method

When fitting models the first step is to check whether the coefficients are significant; the models chosen for further exploration were those where this was the case. To further evaluate model fit we also studied the DW statistic, to see if there was autocorrelation in the residuals, and the Anderson-Darling statistic, to see if the residuals from the model were normally distributed. The next step was to conduct the split half method, where the last 24 observations were removed and then forecasted, to evaluate out-of-sample forecasting performance. In this section decisions are made about which models to use for the forecasts.

Model fit Exponential smoothing

When fitting exponential smoothing models to the annual rates, the double exponential smoothing with smoothing constants of 0,3 had a better fit than the others according to the information criterion AIC as well as to the MSE, MAPE and SSE (see equation 4.1 below). Looking at the plotted data there seems to be a seasonal pattern, but adding a seasonal term resulted in higher AIC values and larger error terms; the seasonal pattern may be too small to be significant. According to the split half method, however, the SES seemed better.

Table 1.
Comparison of exponential smoothing methods. * indicates significant positive autocorrelation (p>0,05).

Model | AIC | MSE | MAPE | SSE | DW
SES α=0,1 | -242,796 | 0,… | …,365 | 77,919 |
SES α=0,2 | -326,436 | 0,238 | 92,055 | 53,992 |

SES α=0,3 | -385,329 | 0,184 | 75,692 | 41,701 | 1,918
Split half SES α=0,3 | -326,019 | 0,067 | 28,… | |
DES α=0,1 β=0,1 | …,623 | 0,273 | 87,990 | 61,… |
DES α=0,2 β=0,2 | …,162 | 0,185 | 92,055 | 41,… |
DES α=0,3 β=0,3 | …,492 | 0,144 | 47,470 | 32,596 | 0,990*
Split half DES α=0,3 β=0,3 | -377,846 | 3,… | … | |
Winter no trend, s=6, α=0,1 β=0,1 | …,105 | 0,… | …,883 | 77,… |
Winter no trend, s=6, α=0,2 γ=0,2 | …,655 | 0,… | …,105 | 89,… |
Winter no trend, s=6, α=0,3 γ=0,3 | …,015 | 0,253 | 90,609 | 56,023 |

The following shows how the smoothed time series is created when applying double exponential smoothing with α = 0,3 and β = 0,3. When the estimated initial values of level and trend are put in, the following equations show how the smoothed time series is estimated.

l_t = λ y_t + (1 - λ)(l_{t-1} + b_{t-1})
b_t = β(l_t - l_{t-1}) + (1 - β) b_{t-1}
l_1 = λ y_1 + (1 - λ)(l_0 + b_0) = 0,3 y_1 + 0,7(0,75 - 0,02)
b_1 = β(l_1 - l_0) + (1 - β) b_0 = 0,3(l_1 - 0,75) + 0,7 · (-0,02)   (4.1)

The Winters method was also performed with trend and seasonality, but since the error terms and the AIC were greater, we chose not to continue with these. The Durbin-Watson test indicated no serial autocorrelation, neither positive nor negative, for the simple exponential smoothing model. For the double exponential smoothing model, though, there was positive autocorrelation in the error terms. The starting values l_0 and b_0 were estimated by fitting a linear regression to the first few observations, where the intercept corresponds to the initial level, l_0, and the slope gives the initial trend, b_0, as recommended by SAS Institute (2014). When applying the split half method, excluding the last 24 observations, for the simple as well as the double exponential smoothing, the simple turned out to be better.

Model fit ARIMA

When fitting ARIMA models to the annual rates the ARIMA(1,0,11) had a better fit than the others, but the seasonal model also had a good fit; according to MAPE the latter also had smaller errors.

Table 2. Comparison of ARIMA models.

Model | AIC | MSE | MAPE | MAD | DW
ARIMA(1,0,1) | 110,365 | | | |
ARIMA(1,0,2) | 112,249 | | | |
ARIMA(1,0,3) | 113,712 | | | |
ARIMA(1,0,4) | 115,708 | | | |
ARIMA(1,0,5) | 117,648 | | | |
ARIMA(1,0,11) | 15,8387 | 0,056 | 31,790 | 0,183 | 1,928
Split half ARIMA(1,0,11) | 20,644 | 0,… | …,709 | |
ARIMA(1,0,11)(1,0,0)s=12 | 16,6676 | 0,056 | 31,580 | 0,183 | 1,925
Split half ARIMA(1,0,11)(1,0,0)s=12 | 54,698 | 0,526 | 80,136 | |

The ARIMA(1,0,11) is shown in scalar form in equation (4.2), with estimated coefficients.

y_t = 1,… + 0,17y_{t-1} - 0,84e_{t-1} - 0,86e_{t-2} - 0,85e_{t-3} - 0,84e_{t-4} - 0,84e_{t-5} - 0,83e_{t-6} - 0,79e_{t-7} - 0,77e_{t-8} - 0,84e_{t-9} - 0,85e_{t-10} - 0,84e_{t-11}   (4.2)

The DW statistic indicated no evidence of serial autocorrelation. ARIMA(1,0,11) and ARIMA(1,0,11)(1,0,0)s=12 were tested with the split half method, where the last 24 observations were removed and then forecasted; the seasonal model performed better. The residuals seemed normally distributed judging from histograms of their distribution (see Appendix A). The seasonal ARIMA gave the lowest MSE.

Model fit Exponential smoothing in state space form

When fitting an exponential smoothing model with a state space approach, the ETS(A,N,N), which has neither trend nor seasonality but an additive error term, was found to have the best fit to the data. This smoothing doesn't work in the same way as regular exponential smoothing: here the error terms are smoothed, not the time series components level, trend and seasonality. The DW statistics indicated no serial autocorrelation among the residuals of the ETS models.

Table 3. Comparison of exponential smoothing models in state space form

Model | AIC | MSE | MAPE | MAD | DW
ETS(A,N,N) | 704,693 | 0,095 | 38,619 | 0,233 | 1,825
Split half ETS(A,N,N) | 620,579 | 0,176 | 40,223 | |
ETS(A,A,N) | 708,587 | 0,095 | 38,227 | 0,233 | 1,827
Split half ETS(A,A,N) | 624,538 | 0,293 | 69,739 | |
ETS(A,Ad,N) | 710,050 | 0,095 | 38,107 | 0,232 | 1,832
Split half ETS(A,Ad,N) | 627,530 | 0,228 | 47,388 | |
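The recursion behind the local level model ETS(A,N,N) in Table 3 is short enough to sketch directly; made-up numbers are used, and α = 1 is chosen so the behaviour is easy to verify (the thesis estimate α = 0,99 is close to this naive case):

```python
# ETS(A,N,N) innovations form: y_t = l_{t-1} + e_t,  l_t = l_{t-1} + alpha*e_t.
# Illustrative sketch only; the series and l0 are made-up numbers.

def ets_ann_forecasts(y, alpha, l0):
    """One-step-ahead forecasts: the forecast of y_t is the level l_{t-1}."""
    level = l0
    fcasts = []
    for obs in y:
        fcasts.append(level)                   # y_hat_t = l_{t-1}
        level = level + alpha * (obs - level)  # l_t = l_{t-1} + alpha*e_t
    return fcasts

series = [2.1, 1.8, 1.9, 2.3, 2.0]
print(ets_ann_forecasts(series, alpha=1.0, l0=2.0))
# [2.0, 2.1, 1.8, 1.9, 2.3]: with alpha = 1 each forecast is the last observation
```

The update l_t = l_{t-1} + αe_t is algebraically the same as the familiar l_t = αy_t + (1 - α)l_{t-1}, which is why simple exponential smoothing and ETS(A,N,N) produce the same point forecasts.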

ETS(A,N,N)

This state space model is defined by the following scalar equations:

y_t = l_{t-1} + ε_t
l_t = l_{t-1} + α ε_t

This is the simplest exponential smoothing method: there is a level, but neither trend nor seasonality, and the error terms are defined as additive. In matrix form the expressions are:

x_t = l_t,  w = 1,  F = 1,  g = α
l_1 = l_0 + α ε_1 = 2,… + 0,99 ε_1,  ε_1 ~ N(0; 0,…)   (4.3)

Model fit ARIMA in state space form

Table 4. Comparison of ARIMA models in state space form

Model | AIC | MSE | MAPE | MAD | DW
AR(1) | -525,172 | 0,09 | 43,267 | 0,232 | 1,918
Split half AR(1) | -453,184 | 0,321 | 62,321 | |
MA(9) | -516,48 | 0,094 | 41,113 | 0,235 | 1,903
Split half MA(9) | Not possible | | | |
ARIMA(1,0,0)(1,0,0)s=12 | -275,89 | 0,265 | 77,219 | 0,403 |

When modelling the Box-Jenkins ARIMA, an ARMA model appeared to have a better fit. When using the state space model, on the other hand, the MA(9) had the best fit, and the residuals were normally distributed (see Appendix A), also according to the Anderson-Darling test (p>0,05). The split half method wasn't possible to conduct for the state space MA(9), since when 24 observations were removed the time series couldn't be fitted by a significant model. The seasonal ARIMA model had a higher AIC and larger error terms, which made us drop this model. The information criterion for the different models indicated that the state vector should include Y_t, Y_{t+1|t} and Y_{t+2|t}. In this model, though, the coefficient for Y_{t+2|t} wasn't significant and it was therefore excluded. The best model became an AR(1) (see 4.4). Neither of the state space ARIMA models had autocorrelated residuals. The coefficient F(2,1) wasn't significant, and the model was restricted by leaving out this term. When estimating the restricted model, the matrices turned out to be:

F = [ 0  1
      0  0,87 ]

g = [ 1
      1,02 ]

x_t = (Y_t, Y_{t+1|t})'    x_{t-1} = (Y_{t-1}, Y_{t|t-1})'

In this model all parameters are significant at alpha level 0,05, and the forecasting equation becomes, in accordance with the state equation in (2.7):

(Y_t, Y_{t+1|t})' = [0 1; 0 0,87] (Y_{t-1}, Y_{t|t-1})' + (1, 1,02)' ε_t

And in scalar form:

Y_{t+1|t} = 0,87 Y_t + 1,02 ε_t   (4.4)

4.2 Forecast evaluation

The forecasting ability was evaluated on a forecasting horizon of 17 months for all models. It can be seen that NIER's forecasting errors are small compared to the others for all but the first and the last forecasting period. The first table (Table 5) shows forecasts for the 17 months June 2009 to October 2010. For the second table (Table 6) the forecasted period is June 2010 to October 2011. The third table (Table 7) shows forecasts for June 2011 to October 2012, and the last one (Table 8) has forecasts for June 2012 to October 2013.

Table 5. Forecasts h=17, fitted to data up to May 2009. * indicates a significant difference from NIER in predictive accuracy (p<0,05).

Model | MSE | MAPE | MAD | DM (t)
NIER | 0,422 | 33,362 | 0,575 |
SES α=0,3 | 0,172 | 22,668 | 0,351 | 1,06
DES α=0,3 β=0,3 | 2,436 | 70,402 | 1,337 | -3,263*
ARIMA(1,0,11) | 0,217 | 22,772 | 0,41 | 1,443
ARIMA(1,0,11)(1,0,0)s=12 | 0,219 | 18,949 | 0,337 | 12,111*
ETS(A,N,N) α=0,99 | 0,347 | 25,344 | 0,551 | 0,148
ETS(A,A,N) α=0,99 β=0,0001 | 0,436 | 29,287 | 0,617 | -0,501
ETS(A,Ad,N) α=0,99 β=0,046 Φ=0,8 | 0,513 | 31,758 | 0,666 | -0,934
State space AR(1) | 0,185 | 17,256 | 0,408 | 1,246
State space MA(9) | 0,177 | 17,177 | 0,321 | 1,229
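The DM statistics reported in these tables follow (3.10) with absolute-error loss; a minimal sketch with illustrative error series (not the thesis data):

```python
# Diebold-Mariano sketch, following (3.10): d_t = |e_it| - |e_jt|,
# DM = dbar / sqrt((gamma(0) + 2*sum_{k<h} gamma(k)) / (T - 1)).
# Positive DM favours method j; the error series below are toy numbers.

def dm_stat(e_i, e_j, h=1):
    T = len(e_i)
    d = [abs(a) - abs(b) for a, b in zip(e_i, e_j)]   # loss differentials
    dbar = sum(d) / T

    def gamma(k):  # lag-k autocovariance of the loss differential, eq. (iv)
        return sum((d[t] - dbar) * (d[t - k] - dbar) for t in range(k, T)) / T

    var = (gamma(0) + 2 * sum(gamma(k) for k in range(1, h))) / (T - 1)
    return dbar / var ** 0.5

nier_err  = [0.9, -1.2, 1.1, -0.8, 1.0, -1.3]
model_err = [0.2, -0.4, 0.3, -0.1, 0.5, -0.2]   # smaller errors than NIER's
print(dm_stat(nier_err, model_err) > 0)  # True: positive DM favours our model
```

Swapping the two argument series flips the sign of every d_t and hence of the statistic, which matches the sign convention used in the tables.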

Figure 6. Forecasts ARIMA, June 2009 to October 2010. ARIMA s=12 stands for ARIMA(1,0,11)(1,0,0)s=12.

For this forecasting period it can be seen that the seasonal ARIMA was the only method performing statistically better than NIER, while the others couldn't be distinguished from NIER's forecasts. The AR(1) and MA(9) models in state space form seemed good as well, according to the forecasting errors, but still not significantly better than NIER.

Figure 7. Forecasts Exponential Smoothing, June 2009 to October 2010.

The simple exponential smoothing also had small forecasting errors, but these weren't statistically distinguishable from the accuracy of NIER's forecasts. The double exponential smoothing performed worse, though.

Table 6. Forecasts h=17, fitted to data up to May 2010. * indicates a significant difference from NIER in predictive accuracy (p<0,05).

Model | MSE | MAPE | MAD | DM (t)
NIER | 0,119 | 15,583 | 0,265 |
SES α=0,3 | 0,479 | 26,055 | 0,406 | -4,034*
DES α=0,3 β=0,3 | 0,528 | 41,660 | 0,650 | -4,391*
ARIMA(1,0,11) | 0,129 | 18,583 | 0,307 | -1,163
ARIMA(1,0,11)(1,0,0)s=12 | 0,171 | 22,516 | 0,366 | -2,025
ETS(A,N,N) α=0,99 | 0,292 | 33,114 | 0,494 | -4,123*
ETS(A,A,N) α=0,99 β=0,0001 | 0,370 | 37,219 | 0,555 | -4,256*
ETS(A,Ad,N) α=0,99 β=0,025 Φ=0,8 | 0,301 | 33,629 | 0,502 | -4,209*
State space AR(1) | 0,120 | 19,777 | 0,296 | -0,374
State space MA(9) | 0,116 | 18,246 | 0,281 | -0,053

Figure 8. Forecasts ARIMA, June 2010 to October 2011. ARIMA s=12 stands for ARIMA(1,0,11)(1,0,0)s=12.

Also for the next period the ARIMA methods from the underlying state space model had small forecasting errors, but here the original ARIMA(1,0,11), with and without seasonality, had small forecasting errors as well. Looking at the DM statistics, though, no model performed better than NIER.

Figure 9. Forecasts Exponential Smoothing, June 2010 to October 2011.

Here it can be seen that the forecasts of all the exponential smoothing models are significantly worse than NIER's. All our models overestimate the level, and they are also unable to catch the varying pattern. NIER, on the other hand, underestimates the level but follows the pattern well.

Table 7. Forecasts h=17, fitted to data up to May 2011. * indicates a significant difference from NIER in predictive accuracy (p<0,05).

Model | MSE | MAPE | MAD | DM (t)
NIER | 0,143 | 36,602 | 0,318 |
SES α=0,3 | 0,428 | 65,761 | 0,585 | -9,165*
DES α=0,3 β=0,3 | 0,297 | 54,488 | 0,476 | -4,804*
ARIMA(1,0,11) | 0,741 | 84,374 | 0,827 | -15,506*
ARIMA(1,0,11)(1,0,0)s=12 | 0,816 | 89,480 | 0,867 | -18,689*
ETS(A,N,N) α=0,99 | 0,492 | 70,954 | 0,637 | -10,575*
ETS(A,A,N) α=0,99 β=0,0001 | 0,433 | 66,581 | 0,596 | -8,677*
ETS(A,Ad,N) α=0,99 β=0,0053 Φ=0,8 | 0,493 | 71,032 | 0,638 | -10,595*
State space AR(1) | 0,476 | 69,640 | 0,624 | -10,233*
State space MA(9) | 0,056 | 3,182 | 0,055 | -8,277*

Figure 10. Forecasts ARIMA, June 2011 to October 2012. ARIMA s=12 stands for ARIMA(1,0,11)(1,0,0)s=12.

Here it can be seen that NIER performs significantly better than all of our fitted models. ARIMA(1,0,11), with and without seasonality, seems to catch the pattern but overestimates the level.

Figure 11. Forecasts Exponential Smoothing, June 2011 to October 2012.

It's easy to conclude visually that all the exponential smoothing methods are worse than NIER for this forecasting period. None of them is able to forecast the varying pattern; they only estimate a level, and it's too high.

Table 8. Forecasts h=17. * indicates a significant difference from NIER in predictive accuracy (p<0,05).

Model fitted to data | MSE | MAPE | MAD | DM (t)
NIER | 0,143 | 36,602 | 0,318 |
SES α=0,3 | 0,037 | 20,092 | 0,153 | 5,231*
DES α=0,3 β=0,3 | 0,170 | 32,001 | 0,353 | 1,459
ARIMA(1,0,11) | 0,873 | 96,284 | 0,881 | -4,554*
ARIMA(1,0,11)(1,0,0)s=12 | 0,522 | 75,064 | 0,669 | -3,351*
ETS(A,N,N) α=0,99 | 0,034 | 18,390 | 0,141 | 6,551*
ETS(A,A,N) α=0,99 β=0,0001 | 0,037 | 17,895 | 0,153 | 5,966*
ETS(A,Ad,N) α=0,99 β=0,0042 Φ=0,8 | 0,034 | 18,270 | 0,140 | 6,557*
State space AR(1) | 0,348 | 67,310 | 0,524 | -2,882*
State space MA(9) | 0,422 | 69,066 | 0,534 | -1,612

Figure 12. Forecasts ARIMA, June 2012 to October 2013. ARIMA s=12 stands for ARIMA(1,0,11)(1,0,0)s=12.

From the DM statistics it can be concluded that our ARIMA models, except the MA(9), perform significantly worse than NIER. The state space MA(9) appears to do equally well as NIER according to the DM statistic, although its forecasting errors are much larger. The ARIMA(1,0,11), with and without seasonality, seems to capture the movements of the annual rates but at a level that's too high.

Figure 13. Forecasts Exponential Smoothing, June 2012 to October 2013.

The last table shows the result when the models were fitted to data up to May 2012 and the following months were forecasted. Here we can see that the exponential smoothing methods, both the original ones and those fitted from a state space model, perform much better than the National Institute of Economic Research. This isn't hard to understand looking at the diagram: NIER's error terms are small, but its forecasts overestimate the level, while all our ETS models, which simply forecast a level, do significantly better, with smaller residuals.

The best performing models in each period were again used for forecasts over a shorter horizon of 6 months. The models were then evaluated and tested against NIER's forecasts. None of them performed better than the National Institute of Economic Research, but all of them did equally well, except one.

Table 9. Forecasts h=6. * indicates a significant difference from NIER in predictive accuracy (p<0,05).

Model fitted to data (2009 period) | MSE | MAPE | MAD | DM
NIER | 0,133 | 0,263 | 0,355 |
ARIMA s=12 | 0,013 | 0,051 | 0,077 | 0,727
State space AR(1) | 0,070 | 0,144 | 0,203 | 0,479
State space MA(9) | 0,098 | 0,182 | 0,250 | 0,363

Model fitted to data (2010 period)
NIER | 0,034 | 0,070 | 0,127 |
ARIMA s=12 | 0,274 | 0,494 | 0,463 | -2,435*
ARIMA(1,0,11) | 0,052 | 0,642 | 0,217 | -0,520
State space AR(1) | 0,100 | 0,162 | 0,267 | -0,830
State space MA(9) | 0,112 | 0,167 | 0,279 | -0,917

Model fitted to data (2011 period)
NIER | 0,098 | 0,320 | 0,263 |
State space MA(9) | 0,485 | 0,734 | 0,550 | -2,037

Model fitted to data (2012 period)
NIER | 0,200 | 0,435 | 0,410 |
SES | 0,017 | 0,117 | 0,103 | 0,760
ETS(ANN) | 0,017 | 0,107 | 0,103 | 0,766
ETS(AAN) | 0,024 | 0,131 | 0,131 | 0,723
ETS(AAdN) | 0,017 | 0,107 | 0,103 | 0,…

4.3 Forecast interpretation

Starting with the forecasts on the 17 month horizon, there are univariate forecasting methods performing better than the National Institute of Economic Research for each of the time periods except one. For that period it could neither be rejected that the univariate models were equally good, nor concluded that they performed worse than NIER (see Table 6).

Forecast 2009-2010

When forecasting 2009-2010 the seasonal ARIMA was significantly better than NIER. The estimated ARIMA with a seasonal pattern of 12 follows the actual rates very well for the first 7 months (until the end of 2009), but from the beginning of 2010 it under- and overestimates the real level. NIER, on the other hand, overestimates for the first 7 months and underestimates for the rest of the period. The ARIMA(1,0,11) underestimates the first 12 rates and overestimates for the last months.

We can see that all the estimated ARIMA models (state space and original) picked up the pattern of the annual rates better than the forecasts by NIER. This may be because 2008 and the following years are considered years of crisis: while the univariate models are based on previous observations only, the forecasts by NIER were probably affected by the expected downturn in the economy (see Figure 6).

In Figure 7 the forecasts by the exponential smoothing methods are presented. None of them follows the real pattern; they choose a middle road. The inflation rate goes up and down, but the SES forecast is constant. At several points, though, they coincide, since the actual rate crosses the level forecasted by SES, which generates several zero residuals. The DM statistic indicates that the damped trend model is equally good as NIER.

Forecast 2010-2011

During this period there is a negative trend with a peak in December. The inflation rates go steeply down at the end, probably as a result of the renewed and prolonged crisis in the Euro area, as the magnitude of the economic problems in Greece and Portugal became clear. The ARIMA models performed equally well as NIER, with very low errors. ARIMA(1,0,11) and NIER forecasted the same pattern, but where NIER underestimates the level, the ARIMA overestimates it. The ARIMA with seasonality follows the ups and downs of the annual rates at the end of the period, but in the beginning its estimated peak is in August, while the real peak comes in December 2010. The state space AR(1) catches the trend, but at a level that is too high. The state space moving average, on the other hand, exhibits a better fit to the varying pattern and its residuals are smaller. Exponential smoothing did consistently worse and overestimated the level. This period has a negative trend, with a sharp dip at the end, after September/October. The forecasts by NIER underestimate for almost the entire period, except at the end.
The real pattern has a peak in December 2010 and goes steeply down in January.

Forecast 2011-2012

Even though all the estimated models performed worse than NIER, we can again conclude that ARIMA(1,0,11), with and without seasonality, was able to capture the pattern, but at a level that is too high. The state space models didn't perform well when forecasting this period. As usual, the exponential smoothing methods didn't capture the real level of the pattern. The forecasts are made from data up to May 2011, where the rate was 2,1; that is also the level forecasted by the exponential smoothing methods for the following 17 months.

Forecast 2012-2013

Not surprisingly, the ARIMA models are at a too high level, but again they follow the peaks in the annual rates. For the last period, forecasting December 2012 to September 2013, the ETS models did a good job. The end of 2012 and the beginning of 2013 was a period of stagnation in the economy, which the simpler ETS models seemed to handle well. The fluctuations were smaller than in previous periods where the

pattern was more volatile. The ETS with additive trend seemed to catch the downward trend at the end of the period. The simple exponential smoothing was also good at forecasting these months, but with larger residuals than the state space models.

The MA(9) process seemed to have a good fit to past data and also made accurate forecasts for several of the periods. This means that the rate of inflation can be explained by a series of random error shocks up to 9 months back in time. Adding more lagged error terms, or taking some of them away (changing the size of the F matrix), wouldn't give any better forecasts. When there is a moving average process, the errors in the measurement equation are no longer serially independent, which makes estimation of the parameters more difficult. It seems a bit strange: why should the inflation rate 9 months ago have an impact on the inflation rate today? Six or twelve months would have seemed more natural. But we can see that when this dependence is taken into account, the residuals are uncorrelated and normally distributed.

5 Discussion

The aim of this study was to evaluate univariate time series methods from an underlying state space model to predict the Swedish inflation rate. The intention was to fit a model that could forecast the inflation development in contrast to the forecasting performed today. This was done by first estimating exponential smoothing and ARIMA models. Thereafter exponential models as well as ARIMAs were fitted in the state space framework, and the best performing models were selected by looking at error terms and an information criterion. The best performing models were then used to forecast the annual inflation rate 17 months ahead, and the forecasts were compared to those by the National Institute of Economic Research. It could have been a good idea to compute the DM statistics to compare the fitted models with each other, but here the focus was on comparing the univariate models to NIER's forecasts.
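The MA(q) forecast behaviour discussed in section 4.3, where only the last q estimated shocks matter and longer-horizon forecasts revert to the mean, can be sketched as follows; an MA(2) with made-up coefficients stands in for the MA(9):

```python
# Hypothetical sketch: h-step forecast of an MA(q) process
# y_t = mu + e_t + theta_1*e_{t-1} + ... + theta_q*e_{t-q}.
# Future shocks have expectation zero, so beyond horizon q the forecast is mu.

def ma_forecast(mu, thetas, last_shocks, h):
    """last_shocks = [e_t, e_{t-1}, ...]; returns the h-step-ahead forecast."""
    q = len(thetas)
    if h > q:
        return mu
    # shocks dated t, t-1, ... enter the horizon-h forecast shifted by h
    return mu + sum(thetas[k] * last_shocks[k - h + 1] for k in range(h - 1, q))

mu, thetas = 1.5, [0.6, 0.3]      # made-up MA(2) coefficients
shocks = [0.4, -0.2]              # estimated e_t and e_{t-1}
print(ma_forecast(mu, thetas, shocks, 1))  # ≈ 1.68 (= mu + 0.6*0.4 + 0.3*(-0.2))
print(ma_forecast(mu, thetas, shocks, 3))  # 1.5: beyond q = 2 it reverts to mu
```

For the state space MA(9) this means the forecast function can react to recent shocks for up to nine months before flattening out at the mean level.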
The forecasts were then also performed on a 6 month horizon. It couldn't be rejected that they performed equally well as NIER, except for one. The state space modelling in SAS works in a slightly different way than in R. When performing the state space procedure the mean is subtracted by default, which we didn't know at first. This means that the state vector here didn't contain the rates, but the mean corrected rates. When we tried to add a seasonal component in the form of an ETS(A,A,A), the output in R was simply "Non seasonal data", which meant the model couldn't even be estimated. Different trend models, on the other hand, could be estimated even though the trend component didn't add any significant canonical correlation between past and future observations. This seems a bit strange, since the seasonal ARIMA model performed well in several periods, on both the 6 and the 17 month horizons. There are probably ways to get around this problem in R, but we were unable to find them.

The exponential smoothing and ARIMA modelling aren't equivalent to the state space approaches, since the procedures work in different ways. When running the state space modelling procedure in SAS the state vector is created automatically, as previously described. For the state space ARIMA models we would have wanted to perform the split half method as we did to evaluate the other models. But since the logic behind the state space procedure is that the state vector is always updated with the observations available at the moment, it became modified when the last 24 observations were removed. As a result, the same model couldn't be fitted to both the whole and the reduced data set.

The inflation rates are assumed to be integrated of order one, which we tried to apply to this time series, but the models didn't seem to fit. Looking at the ACF, the data were already stationary before any such transformation. Studies from the US and the UK have shown that the inflation rate can move between integration of order 0 and 1 over time (Halunga, Osborn, & Sensier, 2009). A similar study could be of interest regarding the Swedish inflation rates.

It would have been of interest to compare the univariate and NIER's forecasts on more than a 17 month horizon, but all data files from NIER had a forecasting horizon of 17 months. Forecasts for later periods were based on updated data sets, which made it impossible to put them together to get a longer forecasting horizon for comparison.

The ARIMA models with seasonality performed well for every forecasting period with respect to the pattern, even though the estimated level was consistently too high. It's a bit peculiar that when fitting seasonal exponential smoothing models, none of them seemed applicable. The plotted annual inflation rates seemed to exhibit a seasonal pattern of 6 months, but both the ACF and the PACF indicated a possible seasonal pattern of 11 or 12.
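The split half evaluation used in section 4, holding out the last observations, forecasting them and scoring the result, can be sketched as follows; a naive flat forecast on made-up numbers stands in for the fitted models:

```python
# Split-half sketch: fit on all but the last `holdout` points, forecast the
# holdout, score with MSE. A naive "last value" forecast replaces the models.

def split_half_mse(series, holdout):
    train, test = series[:-holdout], series[-holdout:]
    forecast = train[-1]                         # flat out-of-sample forecast
    return sum((y - forecast) ** 2 for y in test) / holdout

series = [1.2, 1.4, 1.3, 1.5, 1.6, 1.4, 1.3, 1.2]  # made-up annual rates
print(round(split_half_mse(series, holdout=3), 4))  # ≈ 0.0967
```

In the thesis the holdout was 24 observations; the difficulty noted above is that the SAS state space procedure rebuilds the state vector on the reduced sample, so the held-out model is not the same model.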
When fitting ARIMA models with a seasonality of 6, they didn't seem to fit the data, judging from high error terms and a higher AIC value. Maybe the reason is that the seasonal changes are so small; the annual rates fluctuated only between 0,5 and 2. The results showed that for some periods the univariate methods performed significantly better than NIER, but most of the time their forecasts were equally good or better. The methods based on the state space model often performed better than the others. Unfortunately, different models performed best in different forecasting periods. This means that even though the automatic forecasting methods could be applicable, using them in practice seems problematic, since we would need information about coming economic business cycles, i.e. whether there is a forthcoming boom or recession. A solution to this problem, when it comes to forecasting GDP, has been to use multivariate unobserved component analysis with several inflation measures (Kuttner, 1994).

The local level model and the state space AR(1) often resulted in accurate forecasts in this study. This is consistent with the fact that autoregressive models have appeared to perform well in modelling and forecasting inflation in the US, and they are therefore often used as benchmarks when forming more complicated models (Pandher, 2007). When comparing the ARIMA models over the four periods, the AR(1) in state space form had consistently lower or equally low forecasting errors as the other

models, except for the MA(9), whose errors were often even lower. This is consistent with the theory behind the state space approach (Hyndman, Koehler, Ord, & Snyder, 2008). Previously, autoregressive models have been considered good at forecasting future inflation rates; this study showed that a moving average model can perform as well as or better than NIER in several of the periods under study. Both the state space MA(9) and the original ARIMA(1,0,11), with and without seasonality, performed well. This indicates that the series of annual inflation rates can also be explained by previous error terms, often even better. When the ARIMA model was fitted from a state space model, the eleventh lag did not contribute significantly to the state vector, and only lags up to 9 were included. One explanation could be that the sensitivity is higher in the state space framework, as a result of the updating process, so that a model with fewer parameters to estimate, and therefore a more robust one, can be fitted. This agrees with the fact that state space models decompose complex models into smaller parts, which reduces specification errors (Hyndman, Koehler, Ord, & Snyder, 2008).

The very simple SES did well at forecasting the inflation rate in periods of stagnation, where it seemed to catch an appropriate level. The exponential smoothing and ETS models performed well in certain years, 2009 among them; what characterises these periods is that the fluctuations are very small, whereas in years with larger fluctuations SES and ETS were not appropriate. Even though there has been a negative trend in inflation rates over the last four years, the changes seem too small to make a trend model appropriate: Holt's method did not fit well, and the smoothing parameters for the level and trend error terms in the trend ETS were composed in such a way that we again ended up with a local level model.
The trend component is, in other words, insignificant and adds no further explanation of the inflation rates, since we obtained an alpha of 0.999 and a beta of 0.001. When the local trend model with damped trend was fitted in the state space framework, the damping parameter that gave the smallest error terms was very high, which indicates that the model should exhibit a linear trend. This, however, contradicts the fact that the local level model performed better than the one with a linear trend.

In times of fluctuations in the economy, the ARIMA models picked up the pattern of peaks and valleys in the inflation rate. Like simple exponential smoothing, the state space models forecast a suitably smoothed line through the time series. While the exponential smoothing models, in state space form or not, forecast a straight line into the future, the state space ARIMA models formed a smoothed curve, but neither was able to catch the larger fluctuations. The ARIMA models underestimated the level of the time series. An explanation for this could be the rather volatile pattern: had the variance been smaller, the estimate of the level would probably have been better. SES and ETS are very sensitive to the data available at the moment; the last available observation is what matters when the future level is forecast.

The univariate methods could be useful for forecasting the inflation rate, at least on a horizon of one and a half years. What we found, though, was that different methods performed well in different forecasting periods. The economy fluctuates by nature, and the univariate approach does not seem able to handle the variations and

level at the same time; the models pick up either the correct level or the varying pattern. One year proved really difficult to forecast: all models except the MA(9) struggled with that period. The MA(9) in state space form had good forecasting accuracy over all periods and performed on a par with NIER in all but one. The economy does not seem consistently predictable from previous observations through any single univariate forecasting method.

Even though the state space models gave small residuals, a multivariate model of this kind could probably yield better results, since many factors affect the economy. The state space approach has previously shown good results when there are multiple predictor variables, and this is how it has more commonly been used. The univariate models handle neither variables such as unemployment nor the effects of human behaviour. The Phillips curve has been used in state space form to forecast inflation, and the expanded Phillips curve could also be of interest. Human psychology, for example in the form of self-fulfilling prophecies, plays a crucial part in economic development; if changing expectations could somehow serve as a predictor variable, the inflation forecasts from a state space model could be even better. Multivariate state space models have proved good at forecasting inflation and other economic factors, and many variables could be of interest for the development of inflation. The question for further investigation is therefore which set of variables could constitute a complete explanatory set.

References

Anderson, T. W., & Darling, D. A. (1952). Asymptotic theory of certain "goodness of fit" criteria based on stochastic processes. Annals of Mathematical Statistics, 23.

Billah, B., King, M. L., Snyder, R. D., & Koehler, A. B. (2006). Exponential smoothing model selection for forecasting. International Journal of Forecasting, 22.

Brocklebank, J. C., & Dickey, D. A. (2003). SAS for Forecasting Time Series. Cary, NC: SAS Institute Inc.

Brown, R. G. (1956). Exponential Smoothing for Predicting Demand. Cambridge, MA: Arthur D. Little, Inc.

Burmeister, E., & Wall, K. D. (1982). Kalman filtering estimation of unobserved rational expectations with an application to the German hyperinflation. Journal of Econometrics, 20.

Chatfield, C., Koehler, A. B., Ord, J. K., & Snyder, R. D. (2001). A new look at models for exponential smoothing. The Statistician, 50.

Durbin, J., & Watson, G. S. (1971). Testing for serial correlation in least squares regression. Biometrika, 58.

Engle, R. F. (1982). A general approach to Lagrange multiplier model diagnostics. Journal of Econometrics, 20.

Gardner, E. S. (1985). Exponential smoothing: the state of the art. Journal of Forecasting, 4.

Halunga, A., Osborn, D. R., & Sensier, M. (2009). Changes in the order of integration of US and UK inflation. Economics Letters, 102.

Harvey, A. C. (1984). A unified view of statistical forecasting procedures. Journal of Forecasting, 3.

Hyndman, R. J., & Athanasopoulos, G. (2014). Forecasting: Principles and Practice (e-book).

Hyndman, R. J., Koehler, A. B., Ord, J. K., & Snyder, R. D. (2008). Forecasting with Exponential Smoothing: The State Space Approach. Berlin: Springer.

Hyndman, R. J., Koehler, A. B., Snyder, R. D., & Grose, S. (2002). A state space framework for automatic forecasting using exponential smoothing methods. International Journal of Forecasting, 18.

Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82.

Koirala, T. P. (2013). Time-varying parameters of the inflation model in Nepal: state space modelling. NRB Economic Review, 25.

Kuttner, K. N. (1994). Estimating potential output as a latent variable. Journal of Business & Economic Statistics, 12.

Manopimoke, P. (2013). Hong Kong inflation dynamics: trend and cycle relationships with the USA and China. In Y. Zeng & S. Wu (Eds.), State Space Models: Applications in Economics and Finance. New York, NY: Springer.

McKenzie, E., & Gardner, E. S. (2010). Damped trend exponential smoothing: a modelling viewpoint. International Journal of Forecasting, 26.

Montgomery, D. C., Jennings, C. L., & Kulahci, M. (2008). Introduction to Time Series Analysis and Forecasting. Hoboken, NJ: Wiley.

Ofori, T., & Ephraim, L. (2012). Vagaries of the Ghanaian inflation rates: application of exponential smoothing technique. International Journal of Research in Environmental Science and Technology, 2.

Pandher, G. S. (2007). Modelling & controlling monetary and economic identities with constrained state space models. International Statistical Review, 75.

Phillips, A. W. H. (1958). The relation between unemployment and the rate of change of money wage rates in the United Kingdom, 1861-1957. Economica, 25.

Ravichandran, S. (2001). State space modelling versus ARIMA time-series modelling. Journal of the Indian Society of Agricultural Statistics, 54.

Rois, R., Basak, T., Rahman, M. M., & Majumder, A. K. (2012). Modified Breusch-Godfrey test for restricted higher order autocorrelation in dynamic linear model: a distance based approach. International Journal of Business and Management, 17.

SAS Institute (2014). Forecasting methods. SAS/ETS documentation (etsug.forecast.startvalues).

Statistiska Centralbyrån (2013). Konsumentprisindex (KPI).

Sveriges Riksbank (2011). Hur mäts inflation?

Tamayo, A., Cuizon, R., & Sagpang, A. (2014). Gross domestic product using Box-Jenkins methodology. Social Science Electronic Publishing.

Winters, P. R. (1960). Forecasting sales by exponentially weighted moving averages. Management Science, 6.

Appendix A

Figures (not reproduced here):

- Monthly inflation rates
- Autocorrelation and partial autocorrelation plots for annual rates
- Residual diagnostics, ARIMA(1,0,11)
- Distribution of residuals, ARIMA(1,0,11)
- Residual diagnostics, ARIMA(1,0,11)(1,0,0)s=12
- Distribution of residuals: SES; DES; ETS(A,N,N); ETS(A,A,N); ETS(A,Ad,N); state space AR(1)


More information

Circle the single best answer for each multiple choice question. Your choice should be made clearly.

Circle the single best answer for each multiple choice question. Your choice should be made clearly. TEST #1 STA 4853 March 6, 2017 Name: Please read the following directions. DO NOT TURN THE PAGE UNTIL INSTRUCTED TO DO SO Directions This exam is closed book and closed notes. There are 32 multiple choice

More information

Problem Set 2: Box-Jenkins methodology

Problem Set 2: Box-Jenkins methodology Problem Set : Box-Jenkins methodology 1) For an AR1) process we have: γ0) = σ ε 1 φ σ ε γ0) = 1 φ Hence, For a MA1) process, p lim R = φ γ0) = 1 + θ )σ ε σ ε 1 = γ0) 1 + θ Therefore, p lim R = 1 1 1 +

More information

Estimation and application of best ARIMA model for forecasting the uranium price.

Estimation and application of best ARIMA model for forecasting the uranium price. Estimation and application of best ARIMA model for forecasting the uranium price. Medeu Amangeldi May 13, 2018 Capstone Project Superviser: Dongming Wei Second reader: Zhenisbek Assylbekov Abstract This

More information

Econ 423 Lecture Notes: Additional Topics in Time Series 1

Econ 423 Lecture Notes: Additional Topics in Time Series 1 Econ 423 Lecture Notes: Additional Topics in Time Series 1 John C. Chao April 25, 2017 1 These notes are based in large part on Chapter 16 of Stock and Watson (2011). They are for instructional purposes

More information

Forecasting using exponential smoothing: the past, the present, the future

Forecasting using exponential smoothing: the past, the present, the future Forecasting using exponential smoothing: the past, the present, the future OR60 13th September 2018 Marketing Analytics and Forecasting Introduction Exponential smoothing (ES) is one of the most popular

More information

DEPARTMENT OF ECONOMICS AND FINANCE COLLEGE OF BUSINESS AND ECONOMICS UNIVERSITY OF CANTERBURY CHRISTCHURCH, NEW ZEALAND

DEPARTMENT OF ECONOMICS AND FINANCE COLLEGE OF BUSINESS AND ECONOMICS UNIVERSITY OF CANTERBURY CHRISTCHURCH, NEW ZEALAND DEPARTMENT OF ECONOMICS AND FINANCE COLLEGE OF BUSINESS AND ECONOMICS UNIVERSITY OF CANTERBURY CHRISTCHURCH, NEW ZEALAND Testing For Unit Roots With Cointegrated Data NOTE: This paper is a revision of

More information

Forecasting the Prices of Indian Natural Rubber using ARIMA Model

Forecasting the Prices of Indian Natural Rubber using ARIMA Model Available online at www.ijpab.com Rani and Krishnan Int. J. Pure App. Biosci. 6 (2): 217-221 (2018) ISSN: 2320 7051 DOI: http://dx.doi.org/10.18782/2320-7051.5464 ISSN: 2320 7051 Int. J. Pure App. Biosci.

More information

Some Time-Series Models

Some Time-Series Models Some Time-Series Models Outline 1. Stochastic processes and their properties 2. Stationary processes 3. Some properties of the autocorrelation function 4. Some useful models Purely random processes, random

More information

BUSI 460 Suggested Answers to Selected Review and Discussion Questions Lesson 7

BUSI 460 Suggested Answers to Selected Review and Discussion Questions Lesson 7 BUSI 460 Suggested Answers to Selected Review and Discussion Questions Lesson 7 1. The definitions follow: (a) Time series: Time series data, also known as a data series, consists of observations on a

More information

Trending Models in the Data

Trending Models in the Data April 13, 2009 Spurious regression I Before we proceed to test for unit root and trend-stationary models, we will examine the phenomena of spurious regression. The material in this lecture can be found

More information

A Diagnostic for Seasonality Based Upon Autoregressive Roots

A Diagnostic for Seasonality Based Upon Autoregressive Roots A Diagnostic for Seasonality Based Upon Autoregressive Roots Tucker McElroy (U.S. Census Bureau) 2018 Seasonal Adjustment Practitioners Workshop April 26, 2018 1 / 33 Disclaimer This presentation is released

More information

Autoregressive Moving Average (ARMA) Models and their Practical Applications

Autoregressive Moving Average (ARMA) Models and their Practical Applications Autoregressive Moving Average (ARMA) Models and their Practical Applications Massimo Guidolin February 2018 1 Essential Concepts in Time Series Analysis 1.1 Time Series and Their Properties Time series:

More information

Econometrics of financial markets, -solutions to seminar 1. Problem 1

Econometrics of financial markets, -solutions to seminar 1. Problem 1 Econometrics of financial markets, -solutions to seminar 1. Problem 1 a) Estimate with OLS. For any regression y i α + βx i + u i for OLS to be unbiased we need cov (u i,x j )0 i, j. For the autoregressive

More information

Automatic Forecasting

Automatic Forecasting Automatic Forecasting Summary The Automatic Forecasting procedure is designed to forecast future values of time series data. A time series consists of a set of sequential numeric data taken at equally

More information

Time Series Analysis -- An Introduction -- AMS 586

Time Series Analysis -- An Introduction -- AMS 586 Time Series Analysis -- An Introduction -- AMS 586 1 Objectives of time series analysis Data description Data interpretation Modeling Control Prediction & Forecasting 2 Time-Series Data Numerical data

More information

Univariate, Nonstationary Processes

Univariate, Nonstationary Processes Univariate, Nonstationary Processes Jamie Monogan University of Georgia March 20, 2018 Jamie Monogan (UGA) Univariate, Nonstationary Processes March 20, 2018 1 / 14 Objectives By the end of this meeting,

More information

TERMS OF TRADE: THE AGRICULTURE-INDUSTRY INTERACTION IN THE CARIBBEAN

TERMS OF TRADE: THE AGRICULTURE-INDUSTRY INTERACTION IN THE CARIBBEAN (Draft- February 2004) TERMS OF TRADE: THE AGRICULTURE-INDUSTRY INTERACTION IN THE CARIBBEAN Chandra Sitahal-Aleong Delaware State University, Dover, Delaware, USA John Aleong, University of Vermont, Burlington,

More information

Oil price and macroeconomy in Russia. Abstract

Oil price and macroeconomy in Russia. Abstract Oil price and macroeconomy in Russia Katsuya Ito Fukuoka University Abstract In this note, using the VEC model we attempt to empirically investigate the effects of oil price and monetary shocks on the

More information

Implementation of ARIMA Model for Ghee Production in Tamilnadu

Implementation of ARIMA Model for Ghee Production in Tamilnadu Inter national Journal of Pure and Applied Mathematics Volume 113 No. 6 2017, 56 64 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Implementation

More information

Testing for Unit Roots with Cointegrated Data

Testing for Unit Roots with Cointegrated Data Discussion Paper No. 2015-57 August 19, 2015 http://www.economics-ejournal.org/economics/discussionpapers/2015-57 Testing for Unit Roots with Cointegrated Data W. Robert Reed Abstract This paper demonstrates

More information

Lecture 4a: ARMA Model

Lecture 4a: ARMA Model Lecture 4a: ARMA Model 1 2 Big Picture Most often our goal is to find a statistical model to describe real time series (estimation), and then predict the future (forecasting) One particularly popular model

More information

Econ 623 Econometrics II Topic 2: Stationary Time Series

Econ 623 Econometrics II Topic 2: Stationary Time Series 1 Introduction Econ 623 Econometrics II Topic 2: Stationary Time Series In the regression model we can model the error term as an autoregression AR(1) process. That is, we can use the past value of the

More information

Forecasting Area, Production and Yield of Cotton in India using ARIMA Model

Forecasting Area, Production and Yield of Cotton in India using ARIMA Model Forecasting Area, Production and Yield of Cotton in India using ARIMA Model M. K. Debnath 1, Kartic Bera 2 *, P. Mishra 1 1 Department of Agricultural Statistics, Bidhan Chanda Krishi Vishwavidyalaya,

More information

Output correlation and EMU: evidence from European countries

Output correlation and EMU: evidence from European countries 1 Output correlation and EMU: evidence from European countries Kazuyuki Inagaki Graduate School of Economics, Kobe University, Rokkodai, Nada-ku, Kobe, 657-8501, Japan. Abstract This paper examines the

More information

Automatic forecasting with a modified exponential smoothing state space framework

Automatic forecasting with a modified exponential smoothing state space framework ISSN 1440-771X Department of Econometrics and Business Statistics http://www.buseco.monash.edu.au/depts/ebs/pubs/wpapers/ Automatic forecasting with a modified exponential smoothing state space framework

More information

Inflation and inflation uncertainty in Finland

Inflation and inflation uncertainty in Finland Mat-2.4108 Independent Research Projects in Applied Mathematics Inflation and inflation uncertainty in Finland 1985 2008 Matti Ollila 13.4.2009 HELSINKI UNIVERSITY OF TECHNOLOGY Faculty of Information

More information

Problem set 1 - Solutions

Problem set 1 - Solutions EMPIRICAL FINANCE AND FINANCIAL ECONOMETRICS - MODULE (8448) Problem set 1 - Solutions Exercise 1 -Solutions 1. The correct answer is (a). In fact, the process generating daily prices is usually assumed

More information

Econometrics II Heij et al. Chapter 7.1

Econometrics II Heij et al. Chapter 7.1 Chapter 7.1 p. 1/2 Econometrics II Heij et al. Chapter 7.1 Linear Time Series Models for Stationary data Marius Ooms Tinbergen Institute Amsterdam Chapter 7.1 p. 2/2 Program Introduction Modelling philosophy

More information

Tourist Arrivals in Sri Lanka: A Comparative Study of Holt- Winter s versus Box- Jenkin s Modeling Methods

Tourist Arrivals in Sri Lanka: A Comparative Study of Holt- Winter s versus Box- Jenkin s Modeling Methods RESEARCH ARTICLE OUSL Journal, 2018 DOI: http://doi.org/10.4038/ouslj.v13i1.7395 Vol. 13, No. 01 (pp. 65-91) Tourist Arrivals in Sri Lanka: A Comparative Study of Holt- Winter s versus Box- Jenkin s Modeling

More information

Econometrics. 9) Heteroscedasticity and autocorrelation

Econometrics. 9) Heteroscedasticity and autocorrelation 30C00200 Econometrics 9) Heteroscedasticity and autocorrelation Timo Kuosmanen Professor, Ph.D. http://nomepre.net/index.php/timokuosmanen Today s topics Heteroscedasticity Possible causes Testing for

More information

10. Time series regression and forecasting

10. Time series regression and forecasting 10. Time series regression and forecasting Key feature of this section: Analysis of data on a single entity observed at multiple points in time (time series data) Typical research questions: What is the

More information

data lam=36.9 lam=6.69 lam=4.18 lam=2.92 lam=2.21 time max wavelength modulus of max wavelength cycle

data lam=36.9 lam=6.69 lam=4.18 lam=2.92 lam=2.21 time max wavelength modulus of max wavelength cycle AUTOREGRESSIVE LINEAR MODELS AR(1) MODELS The zero-mean AR(1) model x t = x t,1 + t is a linear regression of the current value of the time series on the previous value. For > 0 it generates positively

More information