AMERICAN JOURNAL OF SCIENTIFIC AND INDUSTRIAL RESEARCH, 2014, Science Huβ, http://www.scihub.org/ajsir
ISSN: 2153-649X, doi:10.5251/ajsir.2014.5.6.185.194

Time Series Forecasting: A Tool for Out-Sample Model Selection and Evaluation

I. U. Moffat & E. A. Akpan
Department of Mathematics and Statistics, University of Uyo, Nigeria

ABSTRACT

In time series analysis, several competing models may adequately represent a given set of data, and choosing the best among them may be easy or difficult. Many model selection criteria are available: in-sample criteria (which include the Akaike information criterion, the Schwarz information criterion and the Hannan-Quinn information criterion) and out-sample forecast evaluation (often accomplished by using the first portion of the dataset for model estimation and the remaining portion as a holdout for forecast evaluation). Our motivation for this study was that in-sample selection criteria are deficient in providing information about future observations, which argues for out-sample forecasting, since it is informative about future observations. An added motivation was the fact that forecasting is useful not only for predicting future observations but also for model selection and evaluation when several competing models adequately represent a given series. Accordingly, we considered data on the number of persons injured in road traffic accidents in Akwa Ibom State, as recorded by the Federal Road Safety Commission, State Command. We found that four competing models, ARMA(1, 0), ARMA(0, 1), ARMA(1, 2) and ARMA(2, 2), adequately represent the series. In terms of in-sample selection, their information criteria were almost the same. The graphs of the forecasts from these models indicated that the forecasts were close to the actual values, all of which lie within the 95% interval forecasts.
However, on the basis of out-sample model selection and evaluation, the ARMA(1, 2) model appeared to be the best performing out-sample forecasting model, with the smallest mean square error. The implication of this study is that time series forecasting is meant not only for predicting future observations but also for aiding model selection and evaluation.

Keywords: Akwa Ibom State; ARMA model; In-sample forecasting; Model selection and evaluation; Out-sample forecasting; Traffic accident

INTRODUCTION

In time series, ARIMA modeling provides a very powerful and flexible model-based approach to forecasting. Building an ARIMA model involves Box and Jenkins' three-stage iterative procedure of identification, estimation and diagnostic checking (see Pankratz, 1983; Wei, 2006; Box, Jenkins and Reinsel, 2008). Although the essence of modeling in time series is to use the model to forecast the future, forecasting is also useful in model selection and evaluation when there are several competing models. These competing models are compared in terms of goodness-of-fit (in-sample fit) and forecasting power (out-sample forecast). The in-sample fit criteria for model selection are the Akaike information criterion (AIC), the Schwarz information criterion and the Hannan-Quinn information criterion (Wei, 2006). On the other hand, Chatfield (2000) provides an excellent opportunity to look at what is called the out-sample behavior of a time series: the series provides forecasts of new future observations which can be checked against what is actually observed. The motivation for this study is that model selection depends not only on the goodness-of-fit of a model to the data but also on the out-sample forecasting ability of the model. Moreover, one big drawback of in-sample fit is that it is not informative about out-sample forecasting ability (Rossi and Sekhposyan, 2010).
Meanwhile, an out-sample forecast is accomplished when the data used for constructing the model differ from those used for forecast evaluation. That is, the data are divided into two portions: the first portion is used for model construction and the second for evaluating forecasting performance, with the possibility of forecasting new future observations which can be checked against what is actually observed (Chatfield, 2000; Wei, 2006). Hence, this study is built towards model selection based on the performance and the ability of the model to forecast new future observations using

the data on road traffic accidents in Akwa Ibom State, Nigeria. In Nigeria, the studies of Aworemi, Abdul-Azeez and Olabode (2010), Atubi (2013) and Salako, Adegoke and Akanmu (2014), among others, have considered road traffic accidents within the framework of time series modeling.

METHODOLOGY

Information Criteria: Several information criteria are available to determine the order p of an AR process and the order q of an MA process, all of them likelihood based. The well-known Akaike information criterion (AIC) (Akaike, 1973) is defined as

AIC = -(2/T) ln(likelihood) + (2/T)(number of parameters)    (1)

where the likelihood function is evaluated at the maximum likelihood estimates and T is the sample size. For a Gaussian AR(p) model, the AIC reduces to

AIC(p) = ln(σ̂²_p) + 2p/T    (2)

where σ̂²_p is the maximum likelihood estimate of σ²_a, the variance of a_t. The first term in equation (2) measures the goodness-of-fit of the AR(p) model to the data, whereas the second term is called the penalty function of the criterion because it penalizes the chosen model by the number of parameters used. Different penalty functions result in different information criteria. The next commonly used criterion is the Schwarz information criterion (SIC) (Schwarz, 1978). For a Gaussian AR(p) model, the criterion is

SIC(p) = ln(σ̂²_p) + p ln(T)/T    (3)

Another commonly used criterion is the Hannan-Quinn information criterion (HQIC) (Hannan and Quinn, 1979). For a Gaussian AR(p) model, the criterion is

HQIC(p) = ln(σ̂²_p) + p ln{ln(T)}/T    (4)

The penalty for each parameter used is 2 for AIC, ln(T) for SIC and ln{ln(T)} for HQIC. These penalty functions help to ensure the selection of parsimonious models and to avoid choosing models with too many parameters.
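To make the penalty comparison concrete, the three criteria in equations (2)-(4) can be computed directly from an estimated innovation variance. The sketch below is illustrative only: the variance value and the function names are our own, and T = 108 matches the estimation sample used later in the paper.

```python
import math

def aic(sigma2_hat, p, T):
    # Equation (2): log of the ML innovation-variance estimate plus 2p/T
    return math.log(sigma2_hat) + 2 * p / T

def sic(sigma2_hat, p, T):
    # Equation (3): penalty ln(T) per parameter
    return math.log(sigma2_hat) + p * math.log(T) / T

def hqic(sigma2_hat, p, T):
    # Equation (4): penalty ln(ln(T)) per parameter
    return math.log(sigma2_hat) + p * math.log(math.log(T)) / T

# With the same fitted variance, the per-parameter penalties order as
# ln(ln(108)) < 2 < ln(108), so HQIC is the most lenient and SIC the
# strictest for any p >= 1.
T, sigma2 = 108, 75.0  # hypothetical values, for illustration only
print(aic(sigma2, 2, T), sic(sigma2, 2, T), hqic(sigma2, 2, T))
```

Minimizing each criterion over p trades goodness-of-fit against parsimony exactly as described above.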
The AIC criterion asymptotically overestimates the order with positive probability, whereas the SIC and HQIC criteria estimate the order consistently under fairly general conditions.

Autoregressive Moving Average (ARMA) Process: A natural extension of the pure autoregressive and pure moving average processes is the mixed autoregressive moving average (ARMA) process, which includes both as special cases (Wei, 2006). A stochastic process {X_t} is an ARMA(p, q) process if {X_t} is stationary and if for every t,

φ(B)X_t = θ(B)ε_t    (5)

where φ(B) = 1 - φ_1B - φ_2B² - ... - φ_pB^p is the autoregressive coefficient polynomial and θ(B) = 1 - θ_1B - θ_2B² - ... - θ_qB^q is the moving average coefficient polynomial.

Autoregressive Integrated Moving Average (ARIMA) Model: Box, Jenkins and Reinsel (2008) considered the extension of the ARMA model in (5) to homogeneous non-stationary time series in which X_t itself is non-stationary but its d-th difference is a stationary ARMA process. In terms of the d-th difference of X_t, the model is

φ(B)(1 - B)^d X_t = θ(B)ε_t    (6)

where φ(B)(1 - B)^d is the nonstationary autoregressive operator, d of whose roots are unity with the remainder lying outside the unit circle, and φ(B) is a stationary autoregressive operator.

Forms for the Forecast: According to Box, Jenkins and Reinsel (2008), the three different ways of expressing the forecasts are:

i. Forecasts from the difference equation:

[X_{t+l}] = X̂_t(l) = φ_1[X_{t+l-1}] + ... + φ_{p+d}[X_{t+l-p-d}] - θ_1[a_{t+l-1}] - ... - θ_q[a_{t+l-q}] + [a_{t+l}]    (7)

ii. Forecasts in integrated form:

X_{t+l} = Σ_{j=0}^∞ ψ_j a_{t+l-j}    (8)

where ψ_0 = 1 and the ψ weights may be obtained by equating coefficients in

φ(B)(1 + ψ_1B + ψ_2B² + ...) = θ(B)    (9)

iii. Forecasts as a weighted average of previous observations and of forecasts made at previous lead times from the same origin:

The forecast as a weighted average of previous observations and of forecasts made at previous lead times from the same origin is expressed as follows:

[X_{t+l}] = X̂_t(l) = Σ_{j=1}^∞ π_j[X_{t+l-j}] + [a_{t+l}]    (10)

Measure of Predictive Accuracy: The common measure of predictive accuracy used to evaluate the forecast performance of a single model in statistics is the mean square error (MSE). The use of MSE implies an underlying quadratic loss function, and this is the assumed loss function for which the conditional mean is the best forecast, meaning that it is the minimum MSE forecast (Hamilton, 1994; Chatfield, 2000; Tsay, 2010).

Minimum Mean Square Error Forecasts for ARMA: As detailed in Wei (2006), the stationary ARMA model in equation (5) can be rewritten in its moving average representation:

X_t = ψ(B)ε_t = ε_t + ψ_1ε_{t-1} + ψ_2ε_{t-2} + ...    (11)

where ψ(B) = Σ_{j=0}^∞ ψ_jB^j = θ(B)/φ(B) and ψ_0 = 1. For t = n + l, we obtain

X_{n+l} = Σ_{j=0}^∞ ψ_j ε_{n+l-j}

Suppose that at time t = n we have the observations X_n, X_{n-1}, X_{n-2}, ... and wish to forecast the l-step-ahead value X_{n+l} as a linear combination of these observations. Since X_t for t = n, n-1, n-2, ... can all be written in the form of (11), we can let the minimum mean square error forecast X̂_n(l) of X_{n+l} be

X̂_n(l) = ψ*_l ε_n + ψ*_{l+1} ε_{n-1} + ψ*_{l+2} ε_{n-2} + ...    (12)

where the weights ψ*_{l+j} are to be determined. The mean square error of the forecast is

E(X_{n+l} - X̂_n(l))² = σ²_ε Σ_{j=0}^{l-1} ψ²_j + σ²_ε Σ_{j=0}^∞ (ψ_{l+j} - ψ*_{l+j})²    (13)

which is easily seen to be minimized when ψ*_{l+j} = ψ_{l+j}. Hence

X̂_n(l) = ψ_l ε_n + ψ_{l+1} ε_{n-1} + ψ_{l+2} ε_{n-2} + ...    (14)

Forecast Confidence Intervals: According to Pankratz (1983), if the random shocks are normally distributed and an appropriate ARIMA model has been estimated on a sufficiently large sample, forecasts from that model are approximately normally distributed.
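Equations (9) and (11)-(14) translate into a short computation: the ψ weights come from equating coefficients in φ(B)(1 + ψ_1B + ψ_2B² + ...) = θ(B), and the minimized l-step forecast-error variance is σ²_ε Σ_{j=0}^{l-1} ψ²_j, from which the familiar normal-theory 95% band follows. This is a minimal sketch with illustrative coefficients; the function names are ours.

```python
import math

def psi_weights(phi, theta, n):
    # Equate coefficients in phi(B)(1 + psi_1 B + ...) = theta(B), with
    # theta(B) = 1 - theta_1 B - ... - theta_q B^q, giving psi_0 = 1 and
    # psi_j = sum_i phi_i * psi_{j-i} - theta_j (theta_j = 0 for j > q).
    psi = [1.0]
    for j in range(1, n + 1):
        val = -theta[j - 1] if j <= len(theta) else 0.0
        for i in range(1, min(j, len(phi)) + 1):
            val += phi[i - 1] * psi[j - i]
        psi.append(val)
    return psi

def forecast_error_variance(sigma2, psi, l):
    # Equation (13) at its minimum: sigma^2 * (psi_0^2 + ... + psi_{l-1}^2)
    return sigma2 * sum(w * w for w in psi[:l])

def interval_95(point_forecast, sigma2, psi, l):
    # Normal-theory 95% band around the l-step point forecast
    half = 1.96 * math.sqrt(forecast_error_variance(sigma2, psi, l))
    return point_forecast - half, point_forecast + half

# AR(1) illustration with phi = 0.5: psi_j = 0.5**j, so the 2-step
# error variance with unit innovation variance is 1 + 0.25 = 1.25.
psi = psi_weights([0.5], [], 20)
lo, hi = interval_95(46.6, 1.0, psi, 2)
```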
Thus, an approximate 95% interval forecast is given by

X̂_t(l) ± 1.96 σ[e_t(l)]    (15)

where e_t(l) = X_{t+l} - X̂_t(l). By interpretation, the 95% confidence interval implies that one is 95% confident that the interval will contain the observed value X_{t+l}.

DATA ANALYSIS AND DISCUSSION

To achieve our goal in this study, we employ data on road traffic accidents in Akwa Ibom State. Specifically, we dwell on the number of persons injured (denoted by X_t) from January 1998 to December 2007. The plot of X_t in Figure I appears to fluctuate around a common mean, implying that the series is stationary.

Figure I: Number of persons injured, January 1998 - December 2007 [time plot omitted]

Also, the augmented Dickey-Fuller (ADF) unit root test confirms that the series is stationary: from Table I, the ADF test statistic exceeds all the critical values in absolute value at the 1%, 5% and 10% levels.

Table I: Output of the Augmented Dickey-Fuller Test for Number of Persons Injured
Null Hypothesis: PERSON_INJURED has a unit root
Exogenous: Constant, Linear Trend
Lag Length: (Automatic - based on SIC, max = 12)

                                           t-Statistic    Prob.*
Augmented Dickey-Fuller test statistic     -7.596866      0.0000
Test critical values:    1% level          -4.036983
                         5% level          -3.448021
                         10% level         -3.149135
*MacKinnon (1996) one-sided p-values.

ARMA Modeling of X_t: ARMA(1, 0), ARMA(0, 1), ARMA(1, 2) and ARMA(2, 2) models were fitted tentatively to X_t, as shown in Tables II, III, IV and V respectively. The fact that the coefficients of the ACF and PACF of the residuals of the fitted models in Figures II, III, IV and V are approximately zero, that is, they all fall within the significance bounds, indicates that the models are adequate.

Table II: Output of the ARMA(1, 0) Model
Model 1: ARMA, using observations 1998:01-2006:12 (T = 108)
Dependent variable: Personsinjured
Standard errors based on Hessian

            Coefficient   Std. Error    z          p-value
const       46.656        1.14384       40.7887    <0.0001   ***
phi_1        0.264653     0.0929637      2.8468     0.0044   ***

Mean dependent var     46.6237      S.D. dependent var     9.137289
Mean of innovations     0.0167      S.D. of innovations    8.769324
Log-likelihood       -387.7777      Akaike criterion     781.5554
Schwarz criterion     789.6018      Hannan-Quinn         784.8179
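The ADF decision applied to Table I can be sketched as a one-line rule: reject the unit-root null at a given level when the test statistic lies below (is more negative than) the left-tail critical value. The helper name is ours; the numbers are those reported in Table I.

```python
def rejects_unit_root(adf_stat, critical_values):
    # Left-tailed test: reject H0 (unit root) when the statistic is more
    # negative than the critical value at each significance level.
    return {level: adf_stat < cv for level, cv in critical_values.items()}

# Statistic and critical values as reported in Table I
decision = rejects_unit_root(
    -7.596866,
    {"1%": -4.036983, "5%": -3.448021, "10%": -3.149135},
)
print(decision)  # stationarity supported: rejection at all three levels
```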

Figure II: ACF and PACF of residuals of the ARMA(1, 0) model [correlogram omitted]

Table III: Output of the ARMA(0, 1) Model
Model 2: ARMA, using observations 1998:01-2006:12 (T = 108)
Dependent variable: Personsinjured
Standard errors based on Hessian

            Coefficient   Std. Error    z          p-value
const       46.6374       1.04397       44.6731    <0.0001   ***
theta_1      0.235395     0.082016       2.8701     0.0041   ***

Mean dependent var     46.6237      S.D. dependent var     9.137289
Mean of innovations     0.0148      S.D. of innovations    8.797413
Log-likelihood       -388.1153      Akaike criterion     782.2306
Schwarz criterion     790.2770      Hannan-Quinn         785.4931

Figure III: ACF and PACF of residuals of the ARMA(0, 1) model [correlogram omitted]

Table IV: Output of the ARMA(1, 2) Model
Model 7: ARMA, using observations 1998:01-2006:12 (T = 108)
Dependent variable: Personsinjured

Standard errors based on Hessian

            Coefficient   Std. Error    z           p-value
const       46.6364       1.0999        42.4007     <0.0001   ***
phi_1       -0.92781      0.0721495    -12.8594     <0.0001   ***
theta_1      1.2311       0.114351      10.7660     <0.0001   ***
theta_2      0.342535     0.098422       3.777       0.0002   ***

Mean dependent var     46.6237      S.D. dependent var     9.137289
Mean of innovations     0.02866     S.D. of innovations    8.582873
Log-likelihood       -385.5706      Akaike criterion     781.1411
Schwarz criterion     794.5518      Hannan-Quinn         786.5787

Figure IV: ACF and PACF of residuals of the ARMA(1, 2) model [correlogram omitted]

Table V: Output of the ARMA(2, 2) Model
Model 3: ARMA, using observations 1998:01-2006:12 (T = 108)
Dependent variable: Personsinjured
Standard errors based on Hessian

            Coefficient   Std. Error    z           p-value
const       46.6313       0.958837      48.6332     <0.0001   ***
phi_1        0.622462     0.053440      11.6479     <0.0001   ***
phi_2        0.87144      0.051367      16.9671     <0.0001   ***
theta_1      0.486298     0.0268653     18.1013     <0.0001   ***
theta_2      1.5495                     18.1984     <0.0001   ***

Mean dependent var     46.6237      S.D. dependent var     9.137289
Mean of innovations     0.0383      S.D. of innovations    8.226924
Log-likelihood       -383.3052      Akaike criterion     778.6105
Schwarz criterion     794.7033      Hannan-Quinn         785.1355
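The adequacy check applied to Figures II-V (residual ACF and PACF coefficients falling within the significance bounds) can be sketched as follows; the bound ±1.96/√T is the usual large-sample approximation, the sample correlations are illustrative, and the function name is ours.

```python
import math

def within_significance_bounds(correlations, T):
    # Approximate 95% significance bounds for residual ACF/PACF
    # coefficients: +/- 1.96 / sqrt(T). All coefficients inside the
    # bounds is taken as evidence of model adequacy.
    bound = 1.96 / math.sqrt(T)
    return all(abs(r) <= bound for r in correlations)

# Illustrative residual autocorrelations for the T = 108 estimation sample
print(within_significance_bounds([0.05, -0.12, 0.08], 108))  # -> True
```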

Figure V: ACF and PACF of residuals of the ARMA(2, 2) model [correlogram omitted]

Model Selection through Forecasting: So far, we have four competing models that are adequate, and our interest is in selecting the model that best represents the series in terms of forecasting new future observations (out-sample forecasting). Given that the series X_t consists of 120 observations, we split it into two portions: the first 108 observations are used for estimating the four models and the remaining 12 observations are used for the out-sample forecast evaluation. Moreover, the graphs of the forecasts from the ARMA(1, 0), ARMA(0, 1), ARMA(1, 2) and ARMA(2, 2) models, shown in Figures VI, VII, VIII and IX respectively, indicate that the forecasts are close to the actual values, all of which lie within the 95% interval forecasts.

Figure VI: Forecast of the ARMA(1, 0) model, with actual values and 95% forecast interval [plot omitted]
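The estimation/holdout design just described (120 monthly observations, 108 for estimation, 12 held out) can be sketched as below. The least-squares AR(1) fit is a simplified stand-in for the maximum likelihood estimation reported in Tables II-V, and all function names and the demo series are ours.

```python
def train_test_split(series, n_holdout=12):
    # Keep the last n_holdout observations for out-sample evaluation
    return series[:-n_holdout], series[-n_holdout:]

def fit_ar1(x):
    # Least-squares AR(1) with intercept: x_t = c + phi * x_{t-1} + a_t
    y, z = x[1:], x[:-1]
    n = len(y)
    zbar, ybar = sum(z) / n, sum(y) / n
    num = sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
    den = sum((zi - zbar) ** 2 for zi in z)
    phi = num / den
    return ybar - phi * zbar, phi

def forecast_ar1(c, phi, origin_value, steps):
    # Recursive multi-step forecasts from the forecast origin
    out, x = [], origin_value
    for _ in range(steps):
        x = c + phi * x
        out.append(x)
    return out

# Demo on a noiseless AR(1) path (c = 2, phi = 0.5), 120 observations
series = [0.0]
for _ in range(119):
    series.append(2.0 + 0.5 * series[-1])
train, hold = train_test_split(series, n_holdout=12)
c, phi = fit_ar1(train)
forecasts = forecast_ar1(c, phi, train[-1], len(hold))
```

On the noiseless demo series the fit recovers c and phi essentially exactly, so the holdout forecasts track the held-out values; with real data the same split feeds the MSE comparison.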

Figure VII: Forecast of the ARMA(0, 1) model, with actual values and 95% forecast interval [plot omitted]

Figure VIII: Forecast of the ARMA(1, 2) model, with actual values and 95% forecast interval [plot omitted]

Figure IX: Forecast of the ARMA(2, 2) model, with actual values and 95% forecast interval [plot omitted]
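The comparison reported in Table VI amounts to computing the mean square error over the 12 holdout observations for each model and picking the smallest. A minimal sketch (the mse() helper is ours; the four MSE values are those reported in Table VI):

```python
def mse(actual, forecast):
    # Mean square error over the holdout period
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

# Out-sample MSEs as reported in Table VI
table_vi = {
    "ARMA(1, 0)": 132.38,
    "ARMA(0, 1)": 131.12,
    "ARMA(1, 2)": 128.46,
    "ARMA(2, 2)": 139.98,
}
best = min(table_vi, key=table_vi.get)
print(best)  # -> ARMA(1, 2)
```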

The next step is to compare the out-sample forecasts of the competing models using the MSE as the measure of accuracy; the model with the minimum MSE is the best performing forecasting model. From Table VI, the ARMA(1, 2) model has the minimum MSE and thus gives the best representation of the out-sample forecast of X_t. The estimated ARMA(1, 2) model is as follows:

X_t = 46.6364 - 0.9278X_{t-1} + a_t + 1.2311a_{t-1} + 0.3425a_{t-2}    (16)
        s.e.:  (1.0999)   (0.0721)     (0.1144)    (0.0984)
     z-value:  (42.4007)  (-12.8594)   (10.7660)   (3.777)
     p-value:  (<0.0001)  (<0.0001)    (<0.0001)   (0.0002)
[Excerpts from Table IV]

Table VI: Measure of Predictive Accuracy

Model   ARMA(1, 0)   ARMA(0, 1)   ARMA(1, 2)   ARMA(2, 2)
MSE     132.38       131.12       128.46       139.98

CONCLUSION

It is often argued that a model that is best in in-sample fitting may not necessarily give more genuine forecasts, since the same set of data used for model estimation is also used for forecast evaluation. By implication, model selection based on in-sample criteria such as the Akaike, Schwarz and Hannan-Quinn information criteria may not provide more genuine forecasts. Also, a model selected on the basis of in-sample criteria gives no information about future observations. Hence, in this study we rely entirely on the performance of the out-sample forecasts to guide model selection. Out-sample forecasting is advantageous over in-sample forecasting in that model selection is based on how well the forecasts perform and on their ability to provide information about future observations. Furthermore, to measure the accuracy of the out-sample forecasts of the competing models, we apply the mean square error: the smaller the error, the better the forecasting power of the model according to the MSE criterion.
In relation to this study, we have four competing models, ARMA(1, 0), ARMA(0, 1), ARMA(1, 2) and ARMA(2, 2), that are adequate, and in terms of in-sample selection their information criteria are almost the same. The graphs of the forecasts of these models indicate that the forecasts are close to the actual values, all of which lie within the 95% interval forecasts. However, on the basis of forecast evaluation, the ARMA(1, 2) model appears to be the best performing out-sample forecasting model, with the smallest MSE. The implication of this study is that time series forecasting is meant not only for predicting future observations but also for aiding model selection and evaluation. In addition, this study can be extended to cover the number of persons killed in road traffic accidents.

REFERENCES

Akaike, H. (1973). A New Look at the Statistical Model Identification. IEEE Transactions on Automatic Control, 19(6): 716-723.

Atubi, A. O. (2013). Time Series and Trend Analysis of Fatalities from Road Traffic Accident in Lagos State, Nigeria. Mediterranean Journal of Social Sciences, 4(1): 251.

Aworemi, J. R., Abdul-Azeez, I. A. and Olabode, S. O. (2010). Analytical Studies of the Causal Factors of Road Traffic Crashes in South Western Nigeria. International Research Journals, 1(4): 118-124.

Box, G. E. P., Jenkins, G. M. and Reinsel, G. C. (2008). Time Series Analysis: Forecasting and Control. 3rd ed., New Jersey: John Wiley & Sons.

Chatfield, C. (2000). Time Series Forecasting. 5th ed., New York: Chapman and Hall/CRC.

Hamilton, J. D. (1994). Time Series Analysis. New Jersey: Princeton University Press, pp. 440-443.

Hannan, E. and Quinn, B. (1979). The Determination of the Order of an Autoregression. Journal of the Royal Statistical Society, Series B, 41: 190-195.

Pankratz, A. (1983). Forecasting with Univariate Box-Jenkins Models: Concepts and Cases. 1st ed., New York: John Wiley & Sons.

Rossi, B. and Sekhposyan, T. (2010). Understanding Models' Forecasting Performance. Available at public.econ.duke.edu/~brossi/NBERNSF/sekhposyan.pdf.

Salako, R. J., Adegoke, B. O. and Akanmu, T. A. (2014). Time Series Analysis for Modeling and Detecting Seasonality Pattern of Auto-crash Cases Recorded at Federal Road Safety Commission, Osun Sector Command (RS111), Osogbo. International Journal of Engineering and Advanced Technology Studies, 2(4): 25-34.

Schwarz, G. (1978). Estimating the Dimension of a Model. Annals of Statistics, 6(2): 461-464.

Tsay, R. S. (2010). Analysis of Financial Time Series. 3rd ed., New York: John Wiley & Sons.

Wei, W. W. S. (2006). Time Series Analysis: Univariate and Multivariate Methods. 2nd ed., New York: Addison-Wesley.