Forecasting of the Austrian Inflation Rate

Case Study for the Course of Econometric Forecasting, Winter Semester 2007
by Nadir Shahzad Virkun and Tomas Sedliacik

Goal setting and Data selection

The goal of our project is to forecast the Austrian inflation rate and to find the procedure that best fits the actual development of the underlying inflation series. Both model-free and model-based procedures are used, and we discuss the forecastability of the inflation rate series as well as the accuracy of the applied forecasting procedures. For this purpose a series of monthly inflation rates for the period January 1987 until December 2006 is analyzed. These 240 observations were calculated as logarithmic changes in the monthly level of the Austrian Consumer Price Index P_t. Thus, for each observation,

X_t = log(P_t / P_{t-1}) = log(P_t) - log(P_{t-1}).

In order to match the nature of information availability in the real world, we check the accuracy of the forecasting procedures out of sample: the future predictions are calculated only from past and current information, while procedure selection and parameter estimation are always carried out in sample. For this purpose the available data series has to be divided into two groups, the estimation set and the testing set. We chose the following separation of the data series:

Estimation set: Jan 1987 - Dec 2001 (180 values)
Testing set: Jan 2002 - Dec 2006 (60 values)
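The construction of the series and the sample split described above could be reproduced with a minimal Python/pandas sketch such as the one below. The file name austrian_cpi.csv and the column name cpi are hypothetical, since the paper does not specify its data source.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly Austrian CPI level series (Dec 1986 - Dec 2006, so that
# 240 monthly log changes are available, as in the paper).
cpi = pd.read_csv("austrian_cpi.csv", index_col=0, parse_dates=True)["cpi"]

# Monthly inflation as the logarithmic change of the price level:
# X_t = log(P_t) - log(P_{t-1})
infl = np.log(cpi).diff().dropna()

# Split into the estimation (in-sample) and testing (out-of-sample) sets.
train = infl["1987-01":"2001-12"]   # 180 values
test  = infl["2002-01":"2006-12"]   # 60 values
```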

Specifying the objective function

In order to evaluate the accuracy of the forecasting methods to be applied, one has to decide on the objective criteria: what is a good forecast? Usually some loss function is chosen to answer this question. We treat the very common concept of minimizing the out-of-sample mean squared error (MSE) as our objective function. In addition, criteria that rather measure the utilization of the information set, namely the Akaike information criterion (AIC) and the Schwarz criterion (SIC), are taken into account, at least for the model-based procedures. Some important characteristics of the resulting prediction errors are also discussed, in particular the mean, the autocorrelation and the distribution of the error series, since these properties are useful indicators of model accuracy.

Forecasting

For the forecasting itself one has to decide which method should be used in order to obtain a good forecast for the underlying data series. Both model-based and model-free procedures are analyzed. Because of their relative simplicity we first discuss the model-free forecasting methods, which do not explicitly assume any particular model to be responsible for having generated the original data series. For the choice of a particular forecasting method one should look at the characteristics of the observed data series itself. Apart from the many statistical criteria used for this purpose, a purely visual inspection of the time-plotted series can offer valuable insight into its actual form and dynamics; for example, a potential trend or seasonality of the series can often be detected from a simple visual inspection. Figure 1 shows the in-sample data series, i.e. the monthly inflation rate for the period Jan 1987 - Dec 2001.

Figure 1. The inflation series in-sample

The histogram of the data is shown in figure 2. In order to check the normality of the data set, the Jarque-Bera test statistic has been calculated; a sketch of this check appears below. Its value of 8.869 corresponds to a p-value of about 1.19%, so the null hypothesis of normally distributed data can be rejected at the 5% significance level, though not quite at the 1% level.

Figure 2. Distribution of the series
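A minimal sketch of the normality check, using scipy on the in-sample series train from the earlier sketch (the variable name is an assumption carried over from that sketch):

```python
from scipy import stats

# Jarque-Bera test of the null hypothesis that the in-sample inflation
# series is normally distributed (the paper reports JB = 8.869).
jb_stat, p_value = stats.jarque_bera(train)
print(f"Jarque-Bera statistic: {jb_stat:.3f}, p-value: {p_value:.4f}")
```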

Model free procedures

The simplest model-free methods are the exponential smoothers, e.g. simple and double exponential smoothing. According to the time plot of the inflation series in figure 1, these smoothing methods are not expected to provide a good forecast for this data series. Even a quick look at the time plot reveals an obvious seasonality, at least for the first ten years (120 observations), i.e. the first two thirds of the in-sample data. After 1997 the regularity of the fluctuations seems to be strongly reduced, though the fluctuations around the mean level remain quite large. This change in price behaviour might be partly explained by the stricter EU-oriented political strategy after the Austrian EU entry in 1995. Neither simple nor double exponential smoothing is able to capture seasonal elements of the analyzed data series, so neither can provide an accurate fit to the underlying inflation series. Nevertheless, to see whether our expectations are right, and for comparison with the procedures used later, we estimated the optimal smoothing parameter α for the in-sample inflation series. The calculation was carried out in EViews, which uses the mean of the first half of the series as the starting value. Since the optimal α equals 0.011, which corresponds to very strong smoothing, the smoothed series is similar to a flat line. As expected, the method is unable to capture the seasonality of the data, which results in a very poor fit. Because of the form of the smoothing equation, simple exponential smoothing only offers a constant forecast when extrapolating any k-step prediction into the future. According to our objective function, the method leads to a root mean squared error of just above 0.004 (precisely 0.004033) for the in-sample set (1987-2001) and of 0.00212 out of sample (2002-2006). At the current stage this can hardly be evaluated properly; this will be done after some of the more accurate procedures have been fitted to the underlying data series and the future (out-of-sample) forecasts have been calculated. It is worth mentioning that the out-of-sample error is even higher than the standard deviation of the data itself, which is not surprising: the SES delivers a constant level for any k-step-ahead forecast, and this constant usually differs from the mean of the series, which among all constant forecasts has the minimal standard error. Graphically, the results of the simple exponential smoothing together with the original inflation series are shown in figure 3; a sketch of the estimation follows below.

Figure 3. Simple exponential smoothing
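The paper uses EViews for this estimation; the following statsmodels sketch is a rough equivalent (train and test come from the first sketch, and statsmodels' initialization differs slightly from the EViews starting value described above, so the optimal α will not match exactly):

```python
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

# Fit simple exponential smoothing on the estimation set, letting the
# optimizer choose the smoothing parameter alpha (EViews found ~0.011).
ses_fit = SimpleExpSmoothing(train).fit(optimized=True)
print("optimal alpha:", ses_fit.params["smoothing_level"])

# SES produces a flat k-step-ahead forecast; compare it with the test set.
ses_forecast = ses_fit.forecast(len(test))
rmse_oos = np.sqrt(np.mean((test.values - ses_forecast.values) ** 2))
print("out-of-sample RMSE:", rmse_oos)
```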

Since double exponential smoothing is only advantageous in that it can capture a potential trend in a data series, there is no point in using it here: the presence of a trend in the underlying inflation series seems negligible, and the method would only result in an even smoother series. The method is therefore ignored. It is much more plausible to apply a method that can also exploit the information in the seasonal development of the series.

For this purpose the seasonal Holt-Winters methods offer a good alternative to the previously discussed exponential smoothers. There are two variants of the seasonal Holt-Winters procedure, a multiplicative and an additive one. They differ only in the way the seasonal factor enters the smoothing equation: it is either added to or multiplied with the smoothed variable in order to obtain the forecast. Unfortunately the multiplicative version cannot treat data series that take both positive and negative values (negative inflation, i.e. deflation), so we use the additive one to analyze the underlying inflation series. Since we use monthly observations, a seasonal cycle of 12 is the most relevant for the series. This is also supported by visual inspection of the time plot in figure 1: due to production and consumption cycles in the economy and yearly recurring events such as summer holidays, new year's price resetting, etc., the price level tends to undergo similar shifts at certain periods within a year. The first ten years of our series show this property most clearly; the following years show some decline in these seasonal regularities, but they may still be present even if visually less observable. Executing the additive seasonal Holt-Winters method in EViews results in the following optimal values for the three estimation parameters: α = 0.05, β = 0, γ = 0.6101. The influence of the current seasonally adjusted observation is again very small, which implies very strong smoothing. There is essentially no trend, which seems plausible for this data series, and all of the relatively strong fluctuations are driven by the seasonal factor. Its value of 0.61 implies a strong seasonality of the data series, which supports our expectations; a sketch of the estimation follows below.
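A minimal statsmodels sketch of the additive seasonal Holt-Winters fit (again train and test are the sample splits from the first sketch; the exact parameter values will differ from the EViews output quoted above because the optimizers and initializations differ):

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Additive seasonal Holt-Winters with an additive trend term and a
# seasonal cycle of 12 months, as in the specification described above.
hw_fit = ExponentialSmoothing(
    train, trend="add", seasonal="add", seasonal_periods=12
).fit()
print(hw_fit.params)   # smoothing parameters (level, trend, seasonal)

# Out-of-sample forecast for the 60 months of the testing set.
hw_forecast = hw_fit.forecast(len(test))
rmse_oos = np.sqrt(np.mean((test.values - hw_forecast.values) ** 2))
print("out-of-sample RMSE:", rmse_oos)
```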

To evaluate the accuracy of this forecasting method, let us now look at our objective criterion. For the estimation set the root mean squared error has a value of 0.002565, a solid improvement compared to the 0.004033 of the simple exponential smoothing. The out-of-sample result, however, is less convincing: there the error equals 0.002222, which is even worse than the constant future forecast of the SES. It seems as if the fluctuations of the forecast series caused by the seasonal factor often move in the opposite direction to the actual out-of-sample inflation development. This inconsistency might also be caused by the rather abrupt change in behaviour in the late nineties; the regular pattern of price changes in the preceding decade may therefore not be an accurate basis for forecasting the later inflation development. The graphical result of the additive Holt-Winters method together with the original series can be found in figure 4.

Figure 4. Seasonal Holt-Winters Additive (cycle: 12)

Model based procedures

A new insight into the forecastability of the inflation series might be gained through the application of model-based procedures. In such a case one tries to find a model that might be responsible for having generated the data, or at least one as similar as possible to the original data-generating process. Each model corresponds to data series that fulfil specific characteristics. Thus, before selecting a potential model, let us first look more closely at the existing inflation series.

Most of all, the stationarity of the data (the independence of its first two moments of time) and the autocorrelation (the dependencies between the lags of the series) have to be discussed in order to gain some insight into the data-generating process. For example, autoregressive or moving average models require stationary data with specific autocorrelation or partial autocorrelation functions if they are considered responsible for the generating process. To check stationarity one usually applies the Augmented Dickey-Fuller (ADF) test to the underlying data series, which is a test for a unit root in a time series sample. By definition, the more negative the test statistic, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence (Econterms); the null hypothesis simply assumes that there is a unit root, and this assumption may or may not be rejected at the chosen confidence level. The ADF statistic (with 4 lagged differences) of the in-sample inflation series is -7.218. Since the corresponding 1% critical value equals -3.469, we can clearly reject the presence of a unit root at a confidence level of over 99%. Regarding the first moment of the series this is not very surprising, since the mean of the series visually appears to stay constant over time, which was also supported by the trend parameter of 0 estimated in the Holt-Winters procedure. The inflation series is in a sense already a differenced version of the original price level series, which would certainly not be stationary given its steadily changing mean. On the other hand, the time dependence of the second moment would have been much harder to judge by pure visual inspection. Altogether, however, the Augmented Dickey-Fuller test indicates that the analyzed in-sample inflation series does not have a unit root and is therefore consistent with a stationary process at a high level of confidence. Hence we will treat it as stationary for the purposes of model selection.
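A sketch of the unit-root check with statsmodels (the lag length of 4 mirrors the ADF specification reported above; train is the in-sample series from the first sketch):

```python
from statsmodels.tsa.stattools import adfuller

# Augmented Dickey-Fuller test with 4 lagged differences; the null
# hypothesis is that the series contains a unit root.
result = adfuller(train, maxlag=4, autolag=None)
adf_stat, p_value = result[0], result[1]
crit_values = result[4]
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.4f}")
print("critical values:", crit_values)
```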

Another important property of the data series, very useful for selecting an accurate forecasting model, is the autocorrelation of the observed variable, i.e. the correlation between its different lags. The autocorrelation function (ACF) and the partial autocorrelation function (PACF) both offer valuable insight into the linear dependencies between the lags of the variable, which can guide the inclusion of explanatory lags in the model equation. Both the ACF and the PACF of the in-sample inflation series are portrayed in figure 5; a sketch of their computation follows below.

Figure 5. ACF and PACF of the inflation series

The oscillating form of the ACF again points to the seasonality of the data series. Both ACF and PACF make it obvious that the strongest linear dependence lies between lags 12 apart, which for monthly data is to be expected. The PACF implies that a model of maximum order 12 might be sufficient to exploit most of the information contained in the autoregressive process, though perhaps not all twelve lags need to be included in the equation. A less heavily parameterized model might offer a more efficient utilization of the information set and simultaneously produce a quite good forecast according to our objective criteria.
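A minimal sketch for reproducing the correlograms, using statsmodels' plotting helpers (train is again the in-sample series from the first sketch):

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# ACF and PACF of the in-sample inflation series up to 36 lags,
# enough to show the seasonal spikes at lags 12, 24 and 36.
fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(train, lags=36, ax=axes[0])
plot_pacf(train, lags=36, ax=axes[1])
plt.tight_layout()
plt.show()
```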

ARMA Model

Autoregressive moving average (ARMA) models combine an autoregressive part of order p with a moving average part of order q. To shed some light on this type of model, an AR(p) process is of the form

X_t = c + φ_1 X_{t-1} + φ_2 X_{t-2} + ... + φ_p X_{t-p} + ε_t,

where ε_t is a white noise process with zero mean and constant variance; the assumption of i.i.d. (normally distributed) errors is a basic building block of time series analysis. An MA(q) model represents a moving average of order q of the white noise errors:

X_t = μ + ε_t + θ_1 ε_{t-1} + θ_2 ε_{t-2} + ... + θ_q ε_{t-q}.

Given a time series X_t, the ARMA model is a tool for understanding and, perhaps, predicting future values of this series. The model consists of two parts, an autoregressive (AR) part and a moving average (MA) part, and is referred to as the ARMA(p,q) model, where p is the order of the autoregressive part and q is the order of the moving average part. For the ARMA model to function properly, the roots of the AR(p) polynomial should imply stationarity and the roots of the MA(q) polynomial should imply invertibility. The ARMA(p,q) model is then written as

X_t = c + φ_1 X_{t-1} + ... + φ_p X_{t-p} + ε_t + θ_1 ε_{t-1} + ... + θ_q ε_{t-q},

i.e. a model with p autoregressive terms and q moving average terms, which contains the AR(p) and MA(q) models as special cases. After choosing p and q, ARMA models are generally fitted by least squares regression to find the parameter values that minimize the error term. It is considered good practice to find the smallest values of p and q that provide an acceptable fit to the data.

ARIMA Processes

Before proceeding to the empirical part of this project, we briefly note that an autoregressive integrated moving average (ARIMA) model is a generalization of the autoregressive moving average model. These models are fitted to time series data either to better understand the data or to predict future points in the series. The model is generally referred to as an ARIMA(p,d,q) model, where p, d and q are integers greater than or equal to zero denoting the order of the autoregressive, integrated and moving average parts of the model, respectively. Since the AR and MA parts have already been discussed, the integrated aspect refers to differencing the series d times in order to obtain a stationary series to which an ARMA process is fitted; together these are referred to as ARIMA.

Empirical Testing

We use the inflation (log price change) data from January 1987 until December 2001 to estimate the projections for the following five years. As the model-free procedures suggest, the data exhibit strong seasonality, so a model meant to capture the data-generating process should include a seasonal component as well. We tested many basic kinds of autoregressive and moving average models, but the best fit on the basis of the aforementioned objective functions, namely the minimized sum of squared errors, the AIC and the SIC, is achieved by an ARMA(4,4) with seasonal autoregressive and moving average components of order 12. The achieved model fit for the inflation series confirms, as in the model-free part, that there is strong seasonality in the series: the adjusted R² of the unreported ARMA(4,4) model without the seasonal component is only 0.46, compared with 0.69 for its seasonal counterpart, and the reported AIC and SIC criteria are likewise better for the model that takes care of the seasonality, along with the minimized sum of squared errors.
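The paper estimates this specification in EViews; a rough statsmodels equivalent is sketched below, an ARMA(4,4) with a multiplicative seasonal ARMA(1,1) component at lag 12, fitted by maximum likelihood rather than EViews' least-squares ARMA routine, so the estimates will not match Table 1 exactly (train is the in-sample series from the first sketch):

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

# ARMA(4,4) with a seasonal AR and MA component of order 12, i.e.
# SARIMA(4,0,4)x(1,0,1)_12, including a constant term.
model = SARIMAX(train, order=(4, 0, 4), seasonal_order=(1, 0, 1, 12),
                trend="c", enforce_stationarity=True, enforce_invertibility=True)
fit_res = model.fit(disp=False)
print(fit_res.summary())                 # coefficient table comparable to Table 1
print("AIC:", fit_res.aic, "BIC:", fit_res.bic)
```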

Variable     Coefficient   Std. Error   t-statistic    Prob.
C              0.001954     0.000525      3.722416     0.0003
AR(1)          0.021279     0.021791      0.976493     0.3304
AR(2)         -1.008557     0.023636    -42.67107      0.0000
AR(3)         -0.001112     0.021432     -0.051894     0.9587
AR(4)         -0.889765     0.023488    -37.88097      0.0000
SAR(12)        0.763888     0.053404     14.30396      0.0000
MA(1)          0.005743     0.013980      0.410785     0.6818
MA(2)          1.023708     0.011223     91.21573      0.0000
MA(3)          0.003572     0.012814      0.278782     0.7808
MA(4)          0.958545     0.011078     86.52594      0.0000
SMA(12)       -0.356582     0.095878     -3.719127     0.0003

R-squared            0.705042    Mean dependent var      0.001973
Adjusted R-squared   0.685763    S.D. dependent var      0.004076
S.E. of regression   0.002285    Akaike info criterion  -9.260207
Sum squared resid    0.000799    Schwarz criterion      -9.052289
Log likelihood     770.3369      F-statistic            36.57174
Durbin-Watson stat   2.023842    Prob(F-statistic)       0.000000

Table 1. ARMA(4,4) with a Seasonal AR and MA Component of Order 12

As the EViews results in the table show, the seasonal autoregressive and moving average coefficients are significant at the 1% level, with impacts of 0.76 and -0.36 on the log price changes, while from the ARMA part the coefficients of AR(2), MA(2), AR(4) and MA(4) are also significant at the 1% level, with impacts of -1.01, 1.02, -0.89 and 0.96 respectively. The remaining coefficients of the applied model are insignificant and also of minimal magnitude compared with the listed significant coefficients. We also note that the model provides a sufficient fit, since the AIC and SIC decrease for the model with seasonal components; this suggests the absence of over-parameterization, because these criteria penalize each estimated parameter and their values tend to increase when a model is over-specified.
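The residual diagnostics discussed next (Durbin-Watson statistic, residual correlogram, Q-statistics) could be reproduced along the following lines, assuming fit_res is the fitted result from the previous sketch:

```python
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.graphics.tsaplots import plot_acf

resid = fit_res.resid

# Durbin-Watson statistic (values near 2 indicate no first-order autocorrelation).
print("Durbin-Watson:", durbin_watson(resid))

# Ljung-Box Q-statistics; with ten estimated ARMA terms the p-values are
# only meaningful from lag 11 onwards (degrees-of-freedom adjustment).
print(acorr_ljungbox(resid, lags=[12, 24, 36], model_df=10))

# Residual correlogram with the +-2/sqrt(T) bounds.
plot_acf(resid, lags=36)
```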

The Durbin-Watson statistic is roughly 2.02 for the fitted ARMA model with seasonal component. Examining the residuals through the correlogram suggests that there are no remaining interdependencies in the residuals, i.e. they are white noise. The ACF and PACF both remain within the bound of two standard errors, computed as ±2/√T, implying that the autocorrelation is not significantly different from zero at (approximately) the 5% significance level. In the correlogram the Q-statistics are adjusted for the number of estimated parameters (ten ARMA terms), so they are interpreted from the 11th lag onwards. The coefficients at the 34th and 35th lags suggest some negative correlation, but it is statistically insignificant.

Figure 6. Actual, Fitted Model and Residual Series

The decay in both functions does not appear exponential, as theory would suggest for a true ARMA data-generating process; this may be a distortion caused by the strong seasonal component, but otherwise the residuals appear to be clean white noise.

Also, as figure 6 shows, the model appears to track the seasonal pattern in the right direction, though not fully.

Table 2. Correlogram Q-Statistics for the Fitted Model

The residuals after the mid-nineties, however, are more volatile and move outside the bounds more frequently than in the earlier years. This can be attributed to Austria joining the European Union in 1995. The visual inspection of figure 1 also shows some change in the inflation series from 1996 onwards compared to the period

before it; the actual series even appears smoother than in the prior years. It is reasonable to argue that this reflects Austria's joining of the EU, which also led to participation in the EMU (European Economic and Monetary Union). The EMU requires strict price stability as the primary objective of the central bank, with the policy's further contributions including a "high level of employment" and "sustainable and non-inflationary growth".

Figure 7. Actual, Fitted Model and Residual Series

Since joining the EU there has therefore been a change in the behaviour of the inflation series. It can be argued that, because the EMU requires stringent monetary policy restrictions, the period from 1996 until 2001 was marked by the monetary preparations required of the participating countries, which shifted the dynamics of the participating markets away from their previous inflation dynamics; the residuals are therefore more random during this period than before. We regard this as one reason for the poorer fit of the model from 1996 onwards.

If we examine figure 6 for the actual series and the model's in-sample fit, the model appears to give a reasonable fit until about 1995-96, with only a few residual values outside or touching the bound of two standard errors. The aim of this study is to check the out-of-sample forecasting ability of model-free and model-based procedures for the inflation series over the five-year period 2002:1 until 2006:12, using parameters estimated from the 15-year sample 1987:1 to 2001:12. Having fitted a model whose residuals are white noise, we treat it as the benchmark data-generating process.

Figure 8. The 5 Year out of sample Forecast
(Forecast: INFLF, Actual: INFL, forecast sample 2002M01-2006M12, 60 observations.
Root Mean Squared Error 0.002183, Mean Absolute Error 0.001842, Mean Abs. Percent Error 93.75106,
Theil Inequality Coefficient 0.488646; Bias Proportion 0.031906, Variance Proportion 0.489950, Covariance Proportion 0.478144.)

The forecast given by the fitted model stays within the designated confidence interval, but as the comparison in figure 8 shows, the predicted series is far too smooth to be regarded as a true forecast; figure 8 makes it clear that the forecast from the fitted model performs poorly.
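A sketch of how the five-year out-of-sample forecast and the error measures reported with figure 8 could be produced (fit_res and test are carried over from the earlier sketches; the Theil coefficient formula used below is the standard U1 definition and is an assumption about how EViews computes it):

```python
import numpy as np

# 60-step-ahead forecast for Jan 2002 - Dec 2006.
forecast = fit_res.forecast(steps=len(test))

err = test.values - forecast.values
rmse = np.sqrt(np.mean(err ** 2))
mae = np.mean(np.abs(err))
theil_u = rmse / (np.sqrt(np.mean(test.values ** 2)) +
                  np.sqrt(np.mean(forecast.values ** 2)))

print(f"RMSE: {rmse:.6f}  MAE: {mae:.6f}  Theil U: {theil_u:.4f}")
```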

Figure 9. Actual and Forecast Series for 2002 till 2006

In order to improve the out-of-sample performance we also estimated a model with autoregressive terms at the 2nd and 6th lags, moving average terms at the 2nd and 6th lags, and seasonal autoregressive and moving average terms at the 12th lag, which gives a better fit than the benchmark model in terms of AIC and SIC, though a lower R². Since it improves on the information criteria, it seemed interesting to examine its out-of-sample forecast, but like the benchmark model it also flattens out and appears unable to predict the changing dynamics of the inflation series. The reported table estimates the parameters from the series starting in 1995:1 until 2001:12 and gives the out-of-sample forecast; in terms of the loss function it is practically the same as the benchmark model, and its out-of-sample forecast is even flatter than that of the benchmark model, as can be seen by comparing figure 10 with figure 9. (A sketch of this restricted-lag specification appears below, after Table 3.)

Variable     Coefficient   Std. Error   t-statistic    Prob.
C              0.001276     0.000353      3.614223     0.0004
AR(2)         -0.133194     0.026905     -4.950586     0.0000
AR(6)          0.829862     0.022861     36.29961      0.0000
SAR(12)        0.787796     0.040020     19.68507      0.0000
MA(2)          0.126430     0.010671     11.84788      0.0000
MA(6)         -0.899451     0.010052    -89.48176      0.0000
SMA(12)       -0.439886     0.074814     -5.879752     0.0000

R-squared            0.658355    Mean dependent var      0.001837
Adjusted R-squared   0.648821    S.D. dependent var      0.003646
S.E. of regression   0.002161    Akaike info criterion  -9.405584
Sum squared resid    0.001004    Schwarz criterion      -9.298292
Log likelihood    1051.020       F-statistic            69.05144
Durbin-Watson stat   2.096504    Prob(F-statistic)       0.000000

Table 3. Model with Lesser Parameters

Figure 10. Actual and Forecast from Model with Lesser Parameters
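A sketch of the restricted-lag specification described above; it assumes that statsmodels' SARIMAX accepts explicit lag lists for the AR and MA polynomials, and the 1995-2001 estimation window follows the description accompanying Table 3 (infl is the full inflation series from the first sketch):

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

# AR terms at lags 2 and 6, MA terms at lags 2 and 6, plus seasonal
# AR and MA terms at lag 12, estimated on the 1995:1-2001:12 window.
train_short = infl["1995-01":"2001-12"]
restricted = SARIMAX(train_short,
                     order=([2, 6], 0, [2, 6]),
                     seasonal_order=(1, 0, 1, 12),
                     trend="c")
restricted_res = restricted.fit(disp=False)
print(restricted_res.summary())
```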

Multivariate Modeling

Even though the univariate models were able to explain the data-generating process with R² values of nearly 0.70 and 0.66 respectively, the out-of-sample forecasts based on these models did not perform very well according to our objectives. Partly for this reason, but also out of curiosity to compare the accuracy of different modeling strategies, we decided to apply a few multivariate models to the underlying inflation series. For this purpose two additional variables are used: the monthly 12-month Euribor rate and monthly values of the Austrian production index. Unfortunately neither variable was available over such a long period, so we had to shorten the sample accordingly, which finally resulted in an in-sample set of only 60 observations (Jan 1999 - Dec 2004) for model selection and parameter estimation and an out-of-sample set consisting of the remaining 24 monthly observations (Jan 2005 - Dec 2006). Both of the additional data series were classified as non-stationary when checking for a unit root, so we used their first differences for modeling purposes, since these turned out to be stationary under the same ADF testing procedure. Using the differenced series seems quite reasonable for both the Euribor and the production index: changes in the interest rate as well as production growth are sound and plausible factors for predicting price changes. Before the proper model selection we looked at the cross-correlations between the inflation series and the two independent variables. The results are shown in figure 11 below.

Figure 11. Cross-correlation of inflation with the differenced Euribor and production index, respectively

From the form of the correlograms it is clear that, in order to judge the linear dependence of the inflation series on previous lags of the change in the Euribor rate or the production index, one has to look at the right-hand part of each table. At first sight it is quite clear that the actual correlations are not very high. The alternating sign of the correlations with the lags of the differenced production index is probably caused by the strong seasonality of economic activity during the year. The correlations with the Euribor changes in figure 11 are more surprising, since stronger interest rate increases appear to be followed by higher price growth within the following 12 months than smaller or negative shifts in the interest rate. Although the differences between the correlations are relatively small, with no clear dominance, it seems reasonable to include at least the first six, perhaps even twelve, lags of the variables in the model.
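The VAR-type specifications described next were estimated in EViews. As an illustration, a bivariate VAR in the inflation rate and the differenced Euribor could be fitted with statsmodels along the following lines; the series name d_euribor and the 6-lag choice are assumptions mirroring the var6_deur6 specification in the Appendix, and a system VAR is only an approximation of the single-equation models used in the paper.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical aligned monthly data: inflation and the first-differenced
# 12-month Euribor over the shortened sample Jan 1999 - Dec 2004.
data = pd.concat([infl, d_euribor], axis=1).dropna()
data.columns = ["infl", "d_euribor"]
train_mv = data["1999-01":"2004-12"]

var_model = VAR(train_mv)
var_res = var_model.fit(maxlags=6)       # 6 lags of both variables
print(var_res.summary())

# 24-step-ahead forecast for the out-of-sample period Jan 2005 - Dec 2006.
fc = var_res.forecast(train_mv.values[-var_res.k_ar:], steps=24)
```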

We first modeled inflation based only on the previous lags of the interest rate and the production index separately, not including the autoregressive part of the inflation itself. As expected, the results were very poor, ending up with an R² of less than 20%. In the following models we therefore also included an autoregressive part consisting of the first six lags of the inflation series itself; the inclusion of further lags did not improve our objectives in any significant way. Then VAR-type models including each of the two factors separately, as well as models containing both explanatory variables, were run. Afterwards, to obtain a more accurate forecast, moving average terms were introduced; these so-called VARMA-models were also treated separately for each variable and in an aggregate model combining both. The summarized results of the models selected as possibly the best in each category are shown in table 4. In order to judge the accuracy of the multivariate models, the two most successful univariate models from our analysis are also presented, so that the strengths and weaknesses of both uni- and multivariate models applied to the underlying inflation rate series can be compared. The structure of each model is described in the Appendix.

Model                   OOS-MSE   Akaike IC   R-squared   norm. of err.
var6_deur6              0.00239     -9.39       0.26         0.65
var6_dindex12           0.00247     -9.71       0.52         0.92
var6_deur12_dindex12    0.00292     -9.60       0.67         0.75
varma_deur              0.00237    -10.11       0.65         0.85
varma_dindex            0.00223    -10.07       0.52         0.42
varma_deur_dindex       0.00248     -9.83       0.50         0.68
arma(1,2,6,12)          0.00211     -9.78       0.42         0.93
arma_special            0.00213    -10.01       0.62         0.43

Table 4. Resulting criteria for both uni- and multivariate models

Since the results are rather self-explanatory, we will not go into a detailed interpretation at this stage. For each criterion the three best outcomes were highlighted in the original table. None of the models dominates the others on every criterion.

Based on these results one could hardly defend the argument that multivariate models provide a more accurate forecast for the underlying series. Regarding the out-of-sample mean squared error they were not able to compete with the much simpler univariate procedures, while lying at a very similar level according to the other three criteria. Figure 12 gives a good insight into the actual out-of-sample prediction fit for one univariate and one multivariate candidate.

Figure 12. Out of sample forecasts for the best uni- and multivariate model

Figure 12 makes it visually clear how much better the univariate model performs in the out-of-sample forecast, at least within the first half of the set. For the estimation set, i.e. in sample, rather the opposite seems to be true, as can be seen from figures 13 and 14 below.

Figure 13. Actual, fitted and residual series for the ARMA(1,2,6,12)-Model

Figure 14. Actual, fitted and residual series for the VARMA_dEUR-Model

Conclusion

Based on the experience and the results of our analysis we would like to conclude this paper by emphasizing two important points. First, great difficulties can arise in the model fitting and forecasting procedure if the data series itself is divided into differently behaving subsets and thus lacks a certain level of homogeneity over the aggregate sample. Usually external effects are responsible for such properties, and if these cannot be captured by the information used, the resulting predictions may be very poor. In our case the series lost its previously quite homogeneous seasonal behaviour somewhere around 1996/97. We assume that this effect might have been caused by the political and economic process of EU and, even more, EMU integration in which Austria has taken part. Second, at least for the shorter inflation series, which we also analyzed with the help of other variables, it has been shown that introducing additional variables into the modeling process does not necessarily improve the accuracy of the forecast. A multivariate model does not automatically produce a better forecast than a univariate one. This is an important point that was clearly manifested in our analysis. However, the series used for this purpose seems too short for any reliable conclusions; a longer data series would have offered much more relevant insight into the forecasting process.

Appendix

Specification of the variables involved in the models:

Var6_dEur6
Autoregressive lags: 1, 2, 3, 4, 5, 6
Lags of the differenced Euribor series: 1, 2, 3, 4, 5, 6

Var6_dIndex12
Autoregressive lags: 1, 2, 3, 4, 5, 6
Lags of the differenced Production Index series: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

Var6_dEur12_dIndex12
Autoregressive lags: 1, 2, 3, 4, 5, 6
Lags of the differenced Euribor series: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
Lags of the differenced Production Index series: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

Varma_dEur
Autoregressive lags: 1, 2, 6, 12
Moving Average Terms: 1, 2, 6, 12
Lags of the differenced Euribor series: 1, 2, 4, 6, 10

Varma_dIndex
Autoregressive lags: 1, 2, 6, 12
Moving Average Terms: 1, 2, 6, 12
Lags of the differenced Production Index series: 1, 2, 6, 8

Varma_dEur_dIndex
Autoregressive lags: 1, 2, 6, 12
Moving Average Terms: 1, 2, 6, 12
Lags of the differenced Euribor series: 1, 2, 4, 6, 10

Lags of the differenced Production Index series: 1, 2, 6, 8

Arma(1,2,6,12)
Autoregressive lags: 1, 2, 6, 12
Moving Average Terms: 1, 2, 6, 12

Arma_Special
Autoregressive lags: 2, 3, 5, 6, 9, 11, 12
Moving Average Terms: 1, 2, 3, 4, 5, 6, 12