Periodic autoregressive stochastic volatility


MPRA
Munich Personal RePEc Archive

Periodic autoregressive stochastic volatility

Abdelhakim Aknouche
University of Science and Technology Houari Boumediene, Qassim University

June 2013

Online at https://mpra.ub.uni-muenchen.de/697/
MPRA Paper No. 697, posted 8 February 2016 UTC

Periodic autoregressive stochastic volatility

Abdelhakim Aknouche*

October

Abstract

This paper proposes a stochastic volatility model (PAR-SV) in which the log-volatility follows a first-order periodic autoregression. This model aims at representing time series with volatility displaying a stochastic periodic dynamic structure, and may then be seen as an alternative to the familiar periodic GARCH process. The probabilistic structure of the proposed PAR-SV model, such as periodic stationarity and the autocovariance structure, is first studied. Then, parameter estimation is examined through the quasi-maximum likelihood (QML) method, where the likelihood is evaluated using the prediction error decomposition approach and Kalman filtering. In addition, a Bayesian MCMC method is also considered, where the posteriors are given from conjugate priors using the Gibbs sampler, in which the augmented volatilities are sampled from the Griddy Gibbs technique in a single-move way. As a by-product, period selection for the PAR-SV is carried out using the (conditional) Deviance Information Criterion (DIC). A simulation study is undertaken to assess the performance of the QML and Bayesian Griddy Gibbs estimates. Applications of Bayesian PAR-SV modeling to daily, quarterly and monthly S&P 500 returns are considered.

Keywords and phrases: Periodic stochastic volatility, periodic autoregression, QML via prediction error decomposition and Kalman filtering, Bayesian Griddy Gibbs sampler, single-move approach, DIC.

Mathematics Subject Classification: AMS Primary 62M10; Secondary 62F99.

Proposed running head: Periodic AR Stochastic Volatility.

1. Introduction

Over the past three decades, stochastic volatility (SV) models introduced by Taylor (1982) have played an important role in modelling financial time series which are characterized by a time-varying volatility feature. This class of models is often viewed as a better formal alternative to ARCH-type models because the

*Faculty of Mathematics, University of Science and Technology Houari Boumediene, Algiers, Algeria, e-mail: aknouche_ab@yahoo.com.

volatility is itself driven by an exogenous innovation, a fact that is consistent with finance theory, although it makes the model relatively more difficult to estimate. Several extensions of the original SV formulation have been proposed in the literature to account for further volatility features such as long memory, simultaneous dependence, excess kurtosis, leverage effects and changes in regime (e.g. Harvey et al, 1994; Ghysels et al, 1996; Breidt, 1997; Breidt et al, 1998; So et al, 1998; Chib et al, 2002; Carvalho and Lopes, 2007; Omori et al, 2007; Nakajima and Omori, 2009). However, it seems that most of the proposed formulations have been devoted to time-invariant volatility parameters, and hence they cannot meaningfully explain time series whose volatility structure changes over time, in particular volatility displaying a stochastic periodic pattern that cannot be accounted for by time-invariant SV-type models.

In order to describe periodicity in the volatility, Tsiakas (2006) proposed various interesting and parsimonious time-varying stochastic volatility models in which the volatility parameters are expressed as deterministic periodic functions of time with appropriate exogenous variables. The proposed models, called "periodic stochastic volatility" (PSV), have been successfully applied to model the evolution of daily S&P 500 returns. This is evidence that a periodically changing structure may characterize time series volatility. However, the PSV formulations are by definition especially well adapted to a kind of deterministic periodicity in the second moments, and hence they might neglect a possible stochastic periodicity in these moments (see e.g. Ghysels and Osborn, 2001 for the difference between deterministic and stochastic periodicity). A complementary approach which seems to be appropriate for capturing stochastic periodicity in the volatility is to consider a linear time-invariant representation for the volatility equation involving seasonal lags, leading to a seasonal SV specification (see e.g. Ghysels et al, 1996). However, because of the time-invariance of the volatility parameters, the seasonal SV model may be too restrictive in representing periodicity, and a model with periodically time-varying parameters seems to be more relevant. Indeed, as pointed out by Bollerslev and Ghysels (1996), many financial time series encountered in practice are such that neglecting periodic time-variation in the corresponding volatility equation gives rise to a loss in forecast efficiency, which is more severe in the GARCH model than in linear ARMA models. This has motivated Bollerslev and Ghysels (1996) to propose the periodic GARCH (P-GARCH) formulation, in which the parameters vary periodically over time in order to capture a stochastic periodicity pattern in the conditional second moment. At present the P-GARCH model is among the most important models for describing periodic time series volatility (see e.g. Bollerslev and Ghysels, 1996; Taylor, 2006; Koopman et al, 2007; Osborn et al, 2008; Regnard and Zakoïan; Sigauke and Chikobvu; Aknouche and Al-Eid). However, despite the recognized relevance of the P-GARCH model, an alternative periodic SV model for stochastic periodicity is in fact needed, for several reasons. First, it is well known that an SV-like model is more flexible than a GARCH-type model because the volatility in the latter is only driven by the past of the observed process, which constitutes a restrictive limitation. Second, compared to SV-type models, the probability structure of P-GARCH models

is relatively more complex to obtain (Aknouche and Bibi, 2009). Finally, compared to the P-GARCH, the PAR-SV easily allows simple multivariate generalizations.

In this paper we propose to model stochastic periodicity in the volatility through a model that generalizes the standard SV equation so that the parameters vary periodically over time. Thus, in the proposed model, termed periodic autoregressive stochastic volatility (PAR-SV), the log-volatility process follows a first-order periodic autoregression, and the model may be generalized so as to have any linear periodic representation. This model may be seen as an extension of the models of Tsiakas (2006) to include a periodic feature in the autoregressive dynamics of the log-volatility equation. The structure and probability properties of the proposed model, such as periodic stationarity, the autocovariance structure and the relationship with multivariate stochastic volatility models, are first studied. In particular, periodic ARMA (PARMA) representations for the logarithm of the squared PAR-SV process are proposed. Then, parameter estimation is conducted via the quasi-maximum likelihood (QML) method, properties of which are discussed. In addition, a Bayesian estimation approach using Markov chain Monte Carlo (MCMC) techniques is also considered. Specifically, a Gibbs sampler is used to estimate the joint posterior distribution of the parameters and the augmented volatility, while calling on the Griddy Gibbs procedure when estimating the conditional posterior distribution of the augmented parameters. In addition, selection of the period of the PAR-SV model is carried out using the (conditional) Deviance Information Criterion (DIC). Simulation experiments are undertaken to assess the finite-sample performance of the QMLE and the Bayesian Griddy Gibbs methods. Moreover, empirical applications to modeling series of daily, quarterly and monthly S&P 500 returns are conducted in order to appreciate the usefulness of the proposed PAR-SV model. In the particular daily return case, a variant of the PAR-SV model with missing values, dealing with the "day-of-the-week" effect, is applied.

The rest of this paper proceeds as follows. Section 2 proposes the PAR-SV model and studies its main probabilistic properties. In Section 3, the quasi-maximum likelihood method via prediction error decomposition and Kalman filtering is adopted. Moreover, a single-move Bayesian approach by means of the Bayesian Griddy Gibbs (BGG) sampler is proposed. In particular, some MCMC diagnostic tools are presented and period selection in PAR-SV models is carried out using the DIC. Through a simulation study, Section 4 examines the behavior of the QML and BGG methods in finite samples. Section 5 applies the PAR-SV specification to model daily, quarterly and monthly S&P 500 returns using the Bayesian Griddy Gibbs method. Finally, Section 6 concludes.

2. The PAR-SV model and its main probabilistic properties

In this paper, we say that a stochastic process {ε_t, t ∈ Z} has a periodic autoregressive stochastic volatility representation with period S (PAR-SV_S in short) if it is given by

  ε_t = √(h_t) η_t,
  log(h_t) = α_t + β_t log(h_{t−1}) + σ_t e_t,  t ∈ Z,   (2.1a)

where the parameters α_t, β_t and σ_t are S-periodic over t (i.e. α_t = α_{t+Sn} for all n ∈ Z, and so on) and the period S is the smallest positive integer verifying the latter relationship; {(η_t, e_t)′, t ∈ Z} is a sequence of independent and identically distributed (i.i.d.) random vectors with mean (0, 0)′ and covariance matrix I_2 (I_2 stands for the identity matrix of dimension 2). We have called model (2.1a) periodic autoregressive stochastic volatility rather than simply periodic stochastic volatility because the log-volatility is driven by a first-order periodic autoregression, and also in order to make a distinction between model (2.1a) and the periodic stochastic volatility (PSV) model proposed by Tsiakas (2006). In fact, the PAR-SV model (2.1a) may be generalized so that h_t satisfies any stable periodic ARMA (henceforth PARMA) representation. Note that when β_t = 0, model (2.1a) reduces to Tsiakas's (2006) model if we take α_t to be an appropriate deterministic periodic function of time. In that case, the effect of any current shock in the innovation e_t influences only the present volatility and does not affect its future evolution. This is the case of what is called deterministic periodicity. If, in contrast, β_t ≠ 0 for some t, the log-volatility equation involves lagged values of the log-volatility process. Therefore, the log-volatility consists at any time of an accumulation of past shocks, so that present shocks affect, more or less, the future log-volatility evolution, depending on the stability of the log-volatility equation (see the periodic stationarity condition (2.5) below). This case is commonly named stochastic periodicity in the volatility.

It should be noted that although h_t is conventionally called the volatility, it is not the conditional variance of the observed process given its past information in the familiar sense, as in ARCH-type models. This is because h_t is not F_{t−1}-measurable, and so E(ε_t²|F_{t−1}) = E(h_t|F_{t−1}) ≠ h_t, where F_t is the σ-algebra generated by {ε_u, u ≤ t}. Nevertheless, E(h_t) = E(ε_t²) and E(ε_t²|h_t) = h_t as in the ARCH-type case.

To emphasize the periodicity of the model, let t = nS + v for n ∈ Z and 1 ≤ v ≤ S. Then model (2.1a) may be written as follows:

  ε_{nS+v} = √(h_{nS+v}) η_{nS+v},
  log(h_{nS+v}) = α_v + β_v log(h_{nS+v−1}) + σ_v e_{nS+v},  n ∈ Z, 1 ≤ v ≤ S,   (2.1b)

where by season v (1 ≤ v ≤ S) we mean the channel {v, v + S, v + 2S, ...} with corresponding parameters α_v, β_v and σ_v. From (2.1b) the log-volatility appears to be a Markov chain, which is not homogeneous as in time-invariant stochastic volatility models, but is rather periodically homogeneous due to the periodic time-variation of the parameters.
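To make (2.1a)-(2.1b) concrete, the following minimal Python sketch simulates a Gaussian PAR-SV_S path. The function name, starting value and parameter values are our own illustrative choices, not part of the paper.

```python
import numpy as np

def simulate_par_sv(alpha, beta, sigma, T, seed=None):
    """Simulate eps_t = sqrt(h_t)*eta_t, where log h_t follows the S-periodic
    AR(1): log h_t = alpha_v + beta_v*log h_{t-1} + sigma_v*e_t (Gaussian)."""
    rng = np.random.default_rng(seed)
    S = len(alpha)
    logh = np.empty(T)
    prev = 0.0  # arbitrary start for log h_0; a burn-in would remove its effect
    for t in range(T):
        v = t % S  # 0-based season index (season v+1 in the paper's notation)
        prev = alpha[v] + beta[v] * prev + sigma[v] * rng.standard_normal()
        logh[t] = prev
    eps = np.exp(logh / 2.0) * rng.standard_normal(T)
    return eps, np.exp(logh)

# Example with S = 2: an individual beta_v may exceed one as long as the
# product over a full period stays below one in modulus (condition (2.5)).
eps, h = simulate_par_sv([-0.2, 0.1], [1.05, 0.9], [0.2, 0.3], T=1000, seed=0)
```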

This may relatively complicate the study of the probabilistic structure of the PAR-SV model. As is common in periodic time-varying modeling, a routine approach is to write (2.1b) as a time-invariant multivariate SV model by embedding the seasons v = 1, ..., S (see e.g. Gladyshev, 1961 and Tiao and Grupe, 1980 for periodic linear models) and then to study the properties of the latter. More precisely, define the S-variate sequences {H_n, n ∈ Z} and {ε_n, n ∈ Z} by H_n = (h_{nS+1}, ..., h_{nS+S})′ and ε_n = (ε_{nS+1}, ..., ε_{nS+S})′. Then model (2.1b) may be cast in the following multivariate SV form:

  ε_n = diag(√H_n) η_n,
  log H_n = B log H_{n−1} + ξ_n,  n ∈ Z,   (2.2)

where η_n = (η_{nS+1}, ..., η_{nS+S})′ and diag(a) stands for the diagonal matrix formed by the entries of the vector a in the given order. The notations √H_n and log H_n denote the S-vectors defined respectively by √H_n(v) = √(h_{nS+v}) and log H_n(v) = log(h_{nS+v}) (1 ≤ v ≤ S). The matrix B and the vector ξ_n in (2.2) are given by

  B(v, S) = ∏_{k=1}^{v} β_k,  B(v, j) = 0 for 1 ≤ j < S,
  ξ_n(v) = Σ_{k=1}^{v} ( ∏_{i=k+1}^{v} β_i ) δ_{nS+k},  1 ≤ v ≤ S,

with δ_{nS+v} = α_v + σ_v e_{nS+v} (1 ≤ v ≤ S); that is, all columns of B are null except the last one, whose v-th entry is ∏_{k=1}^{v} β_k. However, this approach has the main drawback that available methods for analyzing multivariate SV models do not take into account the particular structure of the coefficients in (2.2), and it may be difficult to draw conclusions on model (2.1) through model (2.2). Thus, studying the probabilistic and statistical properties of model (2.1) directly may be simpler and better than studying them through model (2.2). This implies that periodic stochastic volatility modelling cannot be trivially deduced from existing multivariate SV analysis. In the sequel, we study the structure of model (2.1) using mainly the direct approach.

Throughout this paper, we frequently use solutions of the following ordinary difference equation

  u_t = a_t + b_t u_{t−1},  t ∈ Z,   (2.3a)

with S-periodic coefficients a_t and b_t. Recall that the solution is given, under the requirement that |∏_{v=1}^{S} b_v| < 1, by

  u_{nS+v} = (1 − ∏_{k=1}^{S} b_k)^{−1} Σ_{j=0}^{S−1} ( ∏_{i=0}^{j−1} b_{v−i} ) a_{v−j},  1 ≤ v ≤ S, n ∈ Z.   (2.3b)

First, we have the following result, which provides a necessary and sufficient condition for strict periodic stationarity (see Aknouche and Bibi, 2009 for the definition of strict periodic stationarity).

Theorem 2.1 (Strict periodic stationarity)

The PAR-SV equation given by (2.1) admits a unique (nonanticipative) strictly periodically stationary and periodically ergodic solution, given for n ∈ Z and 1 ≤ v ≤ S by

  ε_{nS+v} = η_{nS+v} exp( (1/2) [ (1 − ∏_{k=1}^{S} β_k)^{−1} Σ_{j=0}^{S−1} ( ∏_{i=0}^{j−1} β_{v−i} ) α_{v−j} + Σ_{j=0}^{∞} ( ∏_{i=0}^{j−1} β_{v−i} ) σ_{v−j} e_{nS+v−j} ] ),   (2.4)

where the series in (2.4) converges almost surely, if and only if

  | ∏_{v=1}^{S} β_v | < 1.   (2.5)

Proof. The result obviously follows from standard linear periodic autoregression (PAR) theory while using (2.3) (see e.g. Aknouche and Bibi, 2009), so details are omitted.

From Theorem 2.1 we see that the monodromy coefficient ∏_{v=1}^{S} β_v is the analog of the persistence parameter β^S of the time-invariant SV and standard GARCH cases. If, however, |∏_{v=1}^{S} β_v| ≥ 1, then there does not exist a nonanticipative strictly periodically stationary solution of (2.1) of the form (2.4). Other properties, such as periodic geometric ergodicity and strong mixing, are obvious. Let us first say that a strictly periodically stationary stochastic process {ε_t, t ∈ Z} is called geometrically periodically ergodic if and only if the corresponding multivariate strictly stationary process {ε_n, n ∈ Z} given by ε_n = (ε_{nS+1}, ..., ε_{nS+S})′ is geometrically ergodic in the classical sense (see e.g. Meyn and Tweedie, 2009 for the definition of geometric ergodicity).

Theorem 2.2 (Geometric periodic ergodicity)

Under the condition |∏_{v=1}^{S} β_v| < 1, the process {ε_t, t ∈ Z} defined by (2.1) is geometrically periodically ergodic. Moreover, if initialized from its invariant measure, then {log h_t, t ∈ Z} and hence {ε_t, t ∈ Z} are periodically β-mixing with exponential decay.

Proof. The result follows from the geometric ergodicity of the vector autoregression {log H_n, n ∈ Z} given by (2.2), which may easily be established using Meyn and Tweedie's (2009) results (see also Davis and Mikosch, 2009).

Given the form of the periodically stationary solution (2.4), it is easy to give its second-order properties, assuming the following condition:

  ∏_{j=0}^{∞} λ_{v,j} < ∞ for all 1 ≤ v ≤ S,   (2.6)

where

  λ_{v,j} = E[ exp( ( ∏_{i=0}^{j−1} β_{v−i} ) σ_{v−j} e_{v−j} ) ].

We then have the following result.

Theorem 2.3 (Second-order periodic stationarity)

Under conditions (2.5) and (2.6), the series in (2.4) converges in the mean square sense and the process given by (2.4) is also second-order periodically stationary.

Proof. Routine computation shows that under (2.5) and (2.6) the series in (2.4),

  Σ_{j=0}^{∞} ( ∏_{i=0}^{j−1} β_{v−i} ) σ_{v−j} e_{nS+v−j},

converges in mean square. Moreover, under these conditions, it is clear that {ε_t, t ∈ Z} given by (2.4) is a periodic white noise with periodic variance, since E(ε_t) = 0, E(ε_t ε_{t−h}) = 0 (h > 0) and, while using (2.3),

  Var(ε_{nS+v}) = E[ exp( μ_v + Σ_{j=0}^{∞} ( ∏_{i=0}^{j−1} β_{v−i} ) σ_{v−j} e_{nS+v−j} ) ] = exp(μ_v) ∏_{j=0}^{∞} λ_{v,j},  1 ≤ v ≤ S,   (2.7)

where μ_v = (1 − ∏_{k=1}^{S} β_k)^{−1} Σ_{j=0}^{S−1} ( ∏_{i=0}^{j−1} β_{v−i} ) α_{v−j}.

In the case of Gaussian log-volatility innovations {e_t, t ∈ Z} (i.e. e_t ~ N(0, 1)), it is also possible to obtain more explicit results while weakening the assumptions of Theorem 2.3. Using the fact that if X ~ N(0, 1) then E(exp(aX)) = exp(a²/2) for any real constant a, we obtain

  λ_{v,j} = exp( (σ²_{v−j}/2) ∏_{i=0}^{j−1} β²_{v−i} ),   (2.8)

and condition (2.6) of finiteness of ∏_{j=0}^{∞} λ_{v,j} reduces to ∏_{v=1}^{S} β_v² < 1, i.e. to the periodic stationarity condition (2.5). Moreover, using (2.8) and (2.3), the variance (2.7) of the process may be expressed more explicitly as follows:

  Var(ε_{nS+v}) = exp( μ_v + (1/2) (1 − ∏_{k=1}^{S} β_k²)^{−1} Σ_{j=0}^{S−1} ( ∏_{i=0}^{j−1} β²_{v−i} ) σ²_{v−j} ).   (2.9)
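To make (2.9) concrete, here is a small sketch (our own code, not from the paper) that evaluates the seasonal variances; it can be checked against the seasonal sample variances of a long path produced by the simulator given earlier.

```python
import numpy as np

def par_sv_variances(alpha, beta, sigma):
    """Var(eps_{nS+v}) from (2.9) for Gaussian e_t; returns one value per
    season v = 1, ..., S (stored at 0-based index v-1)."""
    alpha = np.asarray(alpha, float)
    beta = np.asarray(beta, float)
    sigma2 = np.asarray(sigma, float) ** 2
    S = len(alpha)
    pb, pb2 = np.prod(beta), np.prod(beta ** 2)
    var = np.empty(S)
    for v in range(S):
        mu = s2 = 0.0
        for j in range(S):
            pprod = np.prod([beta[(v - i) % S] for i in range(j)])
            mu += pprod * alpha[(v - j) % S]          # numerator of mu_v
            s2 += pprod ** 2 * sigma2[(v - j) % S]    # numerator of s_v^2
        var[v] = np.exp(mu / (1 - pb) + s2 / (2 * (1 - pb2)))
    return var

print(par_sv_variances([-0.2, 0.1], [1.05, 0.9], [0.2, 0.3]))
```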

For example, the variance Var(ε_{nS+v}) of the process is given respectively, for S = 2 and S = 3, by

  Var(ε_{2n+1}) = exp( (α_1 + β_1 α_2)/(1 − β_1 β_2) + (σ_1² + β_1² σ_2²)/(2(1 − β_1² β_2²)) ),
  Var(ε_{2n+2}) = exp( (α_2 + β_2 α_1)/(1 − β_1 β_2) + (σ_2² + β_2² σ_1²)/(2(1 − β_1² β_2²)) ),

and

  Var(ε_{3n+1}) = exp( (α_1 + β_1 α_3 + β_1 β_3 α_2)/(1 − β_1 β_2 β_3) + (σ_1² + β_1² σ_3² + β_1² β_3² σ_2²)/(2(1 − β_1² β_2² β_3²)) ),
  Var(ε_{3n+2}) = exp( (α_2 + β_2 α_1 + β_2 β_1 α_3)/(1 − β_1 β_2 β_3) + (σ_2² + β_2² σ_1² + β_2² β_1² σ_3²)/(2(1 − β_1² β_2² β_3²)) ),
  Var(ε_{3n+3}) = exp( (α_3 + β_3 α_2 + β_3 β_2 α_1)/(1 − β_1 β_2 β_3) + (σ_3² + β_3² σ_2² + β_3² β_2² σ_1²)/(2(1 − β_1² β_2² β_3²)) ).

Next, the autocovariance function of the squared process {ε_t², t ∈ Z} is provided. It is useful for identifying the model and for deriving certain estimation methods such as simple and generalized methods of moments. Let γ_{ε²}^{(v)}(h) = E(ε²_{nS+v} ε²_{nS+v−h}) − E(ε²_{nS+v}) E(ε²_{nS+v−h}). To shorten the expressions below, write c_{v,j} = ( ∏_{i=0}^{j−1} β_{v−i} ) σ_{v−j}, so that, from (2.4), log(h_{nS+v}) = μ_v + Σ_{j=0}^{∞} c_{v,j} e_{nS+v−j} and λ_{v,j} = E(exp(c_{v,j} e_t)).

Theorem 2.4 (Autocovariance structure of {ε_t², t ∈ Z})

i) Under (2.5), (2.6) and the conditions ∏_{j=0}^{∞} E[exp((c_{v,h+j} + c_{v−h,j}) e_t)] < ∞ and E(η⁴) < ∞, we have

  γ_{ε²}^{(v)}(0) = exp(2 μ_v) ( E(η⁴) ∏_{j=0}^{∞} E[exp(2 c_{v,j} e_t)] − ( ∏_{j=0}^{∞} λ_{v,j} )² ),   (2.10a)

  γ_{ε²}^{(v)}(h) = exp(μ_v + μ_{v−h}) ( ∏_{j=0}^{h−1} λ_{v,j} ∏_{j=0}^{∞} E[exp((c_{v,h+j} + c_{v−h,j}) e_t)] − ∏_{j=0}^{∞} λ_{v,j} ∏_{j=0}^{∞} λ_{v−h,j} ),  h > 0.   (2.10b)

Proof. Using (2.4), direct calculation gives

  E(ε²_{nS+v} ε²_{nS+v−h}) = exp(μ_v + μ_{v−h}) E[ exp( Σ_{j=0}^{h−1} c_{v,j} e_{nS+v−j} + Σ_{j=0}^{∞} (c_{v,h+j} + c_{v−h,j}) e_{nS+v−h−j} ) ] E(η²_{nS+v} η²_{nS+v−h}),   (2.11)

under finiteness of the latter expectations. When in particular h = 0, combining (2.7) and (2.11) we get (2.10a) under finiteness of E(η⁴).

For h > 0, because of the independence structure of {η_t, t ∈ Z}, E(η²_{nS+v} η²_{nS+v−h}) = 1, and one obtains

  E(ε²_{nS+v} ε²_{nS+v−h}) = exp(μ_v + μ_{v−h}) ∏_{j=0}^{h−1} λ_{v,j} ∏_{j=0}^{∞} E[exp((c_{v,h+j} + c_{v−h,j}) e_t)],

giving (2.10b).

Expressions for the S kurtoses Kurt(v) (1 ≤ v ≤ S) of the PAR-SV_S model may be obtained from (2.7) and (2.11):

  Kurt(v) = E(ε⁴_{nS+v}) / (E(ε²_{nS+v}))² = E(η⁴) ∏_{j=0}^{∞} E[exp(2 c_{v,j} e_t)] / ( ∏_{j=0}^{∞} λ_{v,j} )²,  1 ≤ v ≤ S.   (2.12)

By the Cauchy-Schwarz inequality (which gives (E[exp(c_{v,j} e_t)])² ≤ E[exp(2 c_{v,j} e_t)]), this clearly shows that the PAR-SV is characterized by excess kurtosis for all channels v ∈ {1, ..., S}. In particular, under the normality assumption on the innovations, second-order periodic stationarity reduces to E(η⁴) < ∞ and ∏_{v=1}^{S} β_v² < 1, and from (2.8) expression (2.12) reduces to

  Kurt(v) = E(η⁴) exp(s_v²),  1 ≤ v ≤ S,

where s_v² = Σ_{j=0}^{∞} c²_{v,j} = (1 − ∏_{k=1}^{S} β_k²)^{−1} Σ_{j=0}^{S−1} ( ∏_{i=0}^{j−1} β²_{v−i} ) σ²_{v−j} is the variance of log(h_{nS+v}).

The autocovariance function also has a more explicit form in the case of Gaussian {e_t, t ∈ Z}.

Corollary 2.1 (Autocovariance structure of {ε_t², t ∈ Z} under normality of {e_t, t ∈ Z})

Under the same assumptions as Theorem 2.4, if {e_t, t ∈ Z} is Gaussian then

  γ_{ε²}^{(v)}(0) = exp(2 μ_v + s_v²) ( E(η⁴) exp(s_v²) − 1 ),   (2.13a)

  γ_{ε²}^{(v)}(h) = exp( μ_v + μ_{v−h} + (s_v² + s²_{v−h})/2 ) ( exp( ( ∏_{i=0}^{h−1} β_{v−i} ) s²_{v−h} ) − 1 ),  h > 0.   (2.13b)

Proof. For Gaussian innovations, we use again the fact that if X ~ N(0, 1) then E(exp(aX)) = exp(a²/2). Therefore, (2.13a) follows from (2.10a) and (2.9). For h > 0, noting that c_{v,h+j} = ( ∏_{i=0}^{h−1} β_{v−i} ) c_{v−h,j}, we have

  E(ε²_{nS+v} ε²_{nS+v−h}) = exp( μ_v + μ_{v−h} + (1/2) Σ_{j=0}^{h−1} c²_{v,j} + (1/2) Σ_{j=0}^{∞} ( c_{v,h+j} + c_{v−h,j} )² ).

After a tedious but straightforward calculation, using Σ_{j=0}^{∞} c²_{v,h+j} = ( ∏_{i=0}^{h−1} β²_{v−i} ) s²_{v−h} and Σ_{j=0}^{h−1} c²_{v,j} = s_v² − ( ∏_{i=0}^{h−1} β²_{v−i} ) s²_{v−h}, the autocovariance function at lag h (h > 0) simplifies for Gaussian innovations to (2.13b).

It is worth noting that, expanding the exponential function in (2.13b) under the periodic stationarity condition (2.5), the autocovariance function γ_{ε²}^{(v)}(h) of the squared process has the following equivalent form as h → ∞:

  γ_{ε²}^{(v)}(h) ≈ K ∏_{i=0}^{h−1} β_{v−i} ≈ K′ ( ∏_{v=1}^{S} β_v )^{h/S},

and so γ_{ε²}^{(v)}(h) converges geometrically to zero as h → ∞, where K and K′ are appropriate real constants. However, this decay of γ_{ε²}^{(v)}(h) is not compatible with the recurrence equation satisfied by periodic ARMA

(PARMA) autocovariances, and we can conclude that the squared process {ε_t², t ∈ Z} does not admit a PARMA autocovariance representation. Nevertheless, the logarithm of the squared process, {log ε_t², t ∈ Z}, does have a PARMA autocovariance structure. Consider the following notations: Y_t = log ε_t², X_t = log h_t, u_t = log η_t², μ_u = E(log η_t²) and σ_u² = Var(log η_t²). We have from (2.1)

  Y_t = X_t + u_t.   (2.14)

Theorem 2.5 (PARMA(1,1) representation of {log ε_t², t ∈ Z})

Under assumption (2.5) and finiteness of σ_u², the process {Y_t, t ∈ Z} has a PARMA_S(1,1) representation given by

  Y_{nS+v} − μ_Y^{(v)} = β_v ( Y_{nS+v−1} − μ_Y^{(v−1)} ) + ξ_{nS+v} − θ_v ξ_{nS+v−1},  1 ≤ v ≤ S, n ∈ Z,   (2.15a)

where μ_Y^{(v)} = E(Y_{nS+v}),

  θ_v = [ (1 + β_v²) σ_u² + σ_v² − √( ((1 + β_v)² σ_u² + σ_v²)((1 − β_v)² σ_u² + σ_v²) ) ] / (2 β_v σ_u²)  if β_v σ_u² ≠ 0,
  θ_v = 0  if β_v σ_u² = 0,   (2.15b)

and {ξ_t, t ∈ Z} is a periodic white noise with periodic variance

  σ²_{ξ,v} = Var(ξ_{nS+v}) = β_v σ_u² / θ_v  if θ_v ≠ 0,
  σ²_{ξ,v} = σ_v² + (1 + β_v²) σ_u²  if θ_v = 0,  1 ≤ v ≤ S.   (2.15c)
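The root selection in (2.15b) is easy to code; below is a sketch of ours (not the paper's code), assuming Gaussian η, for which σ_u² = π²/2.

```python
import numpy as np

def parma_theta(beta, sigma, sigma_u2=np.pi ** 2 / 2):
    """MA coefficients theta_v of the PARMA(1,1) form (2.15a), computed
    season by season from (2.15b); picks the root with |theta_v| < 1."""
    beta = np.asarray(beta, float)
    s2 = np.asarray(sigma, float) ** 2
    theta = np.zeros_like(beta)
    ok = beta * sigma_u2 != 0.0            # theta_v = 0 when beta_v*sigma_u2 = 0
    b = beta[ok]
    num = ((1 + b ** 2) * sigma_u2 + s2[ok]
           - np.sqrt(((1 + b) ** 2 * sigma_u2 + s2[ok])
                     * ((1 - b) ** 2 * sigma_u2 + s2[ok])))
    theta[ok] = num / (2 * b * sigma_u2)
    return theta

# sanity check: theta/(1 + theta^2) reproduces the ratio in (2.15e)
th = parma_theta([1.05, 0.9], [0.2, 0.3])
```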

Clearly, the process fy t ; t Zg has a P ARMA representation since Y (h) = Y (h ), for h >. To identify the parameters of its representation we use expressions of Y (h) for h = ;. If fy t ; t Zg has a P ARMA representation (:a) then for all S Y () = Y () + ; ( + ( )) Y () = Y () ;: (:d) Hence, if u 6= we hae for all S; + ( ) = Y () Y () Y () Y () Y () = X () X () + u) X () = Y () X () u = Y () Y () u u = + u : (:e) u The latter equation admits, for all S, two solutions one of which is with modulus less than (j j < ) and is gien by (:b). Such a choice clearly ensures that S Q j j <, but it is not unique. Moreoer, when S Q showing (:c). = 6= using (:d), the ariance of f t ; t Zg is ; = Y () Y () = X () + u = Y () ; If, howeer, u = the relationship Y (h) = Y (h ) also holds for h = and so the process fy t ; t Zg is a pure rst-order periodic autoregression (P AR()) with = for all. When S Q =, the process fy t ; t Zg is a strong periodic white noise (an independent and periodically distributed, i:p:d: sequence) and so = for all (see also Francq and Zakoïan, 6 for the particular non-periodic case S = ). It is worth noting that representation (:a) is not unique. = Indeed, in contrast with time-inariant ARM A models for which an ARM A process may be uniquely identi ed from its autocoariance function (see Brockwell and Dais, 99), it is not always possible to build a unique P ARMA model from an autocoariance function haing P ARM A structure. Howeer, we may enumerate all possible representations from soling (:d) and choosing the best one tting the obsered series. The resulting representation will

be called, somewhat abusively, the PARMA representation. Such a representation is useful for obtaining predictions of the process {log ε_t², t ∈ Z}. It may also be used to obtain approximate predictions for the squared process {ε_t², t ∈ Z}, as the latter does not admit a PARMA representation (see Section 4). If we denote by ε̂²_{t+h|t} = E(ε²_{t+h} | ε_t, ε_{t−1}, ...) the mean-square prediction of ε²_{t+h} based on ε_t, ε_{t−1}, ..., then ε̂²_{t+h|t} may be approximated by

  C exp( Ŷ_{t+h|t} ),

where Ŷ_{t+h|t} = E( log ε²_{t+h} | log ε_t², log ε²_{t−1}, ... ) and C is a normalization factor. The constant C is introduced to reduce the bias due to incorrectly using exp(Ŷ_{t+h|t}) as the prediction of exp(Y_{t+h}), since we know from Jensen's inequality that the two do not coincide. Typically, one can take C based on the sample variance of log ε_t², t = 1, ..., T.

3. Parameter estimation of the PAR-SV model

In this section we consider two estimation methods for the PAR-SV model. The first one is a QML method based on the prediction-error decomposition of a corresponding linear periodic state-space model. This method, which uses Kalman filtering to obtain linear predictors and prediction error variances, is used as a benchmark for the second proposed method, which is based on the Bayesian approach. In the latter, from given conjugate priors, the conditional posteriors are obtained from the Gibbs sampler, in which the conditional posterior of the augmented volatilities is drawn via the Griddy-Gibbs technique.

In the rest of this section we consider a series ε = (ε_1, ..., ε_T)′ generated from model (2.1), with sample size T = NS supposed, without loss of generality, to be a multiple of the period S. The vector of model parameters is denoted by θ = (ω′, λ′)′, where ω = (ω_1′, ω_2′, ..., ω_S′)′, ω_v = (α_v, β_v)′ and λ = (σ_1², σ_2², ..., σ_S²)′.

3.1 QMLE via prediction error decomposition and Kalman filtering

Taking in (2.1) the logarithm of the square of ε_t, we obtain the following linear periodic state-space model:

  Y_{nS+v} = μ_u + X_{nS+v} + ũ_{nS+v},
  X_{nS+v} = α_v + β_v X_{nS+v−1} + σ_v e_{nS+v},  n ∈ Z, 1 ≤ v ≤ S,   (3.1)

where, as above, Y_{nS+v} = log ε²_{nS+v}, X_{nS+v} = log(h_{nS+v}), u_{nS+v} = log η²_{nS+v}, μ_u = E(u_{nS+v}), ũ_{nS+v} = u_{nS+v} − μ_u and σ_u² = Var(u_{nS+v}). When {η_t, t ∈ Z} is standard Gaussian, the mean and variance

of log η²_{nS+v} can be accurately approximated by ψ(1/2) + log 2 ≈ −1.27 and π²/2, respectively, where ψ(·) = Γ′(·)/Γ(·) and Γ(·) is the gamma function. Note, however, that the linear state-space model (3.1) is not Gaussian, unless (i) e is Gaussian, (ii) e and η are independent, and (iii) η has the same distribution as exp(X/2) for some Gaussian X with mean zero. In what follows we assume, for simplicity of exposition, that η is standard Gaussian, but the QML method we present below remains valid when η is not Gaussian and even when μ_u and σ_u² are unknown.

Let Y = (Y_1, ..., Y_T)′ be the series of log-squares corresponding to ε = (ε_1, ..., ε_T)′ (i.e. Y_t = log ε_t², 1 ≤ t ≤ T), generated from (3.1) with true parameter θ_0. The quasi-likelihood function L_Q(θ; Y), evaluated at a generic parameter θ, may be written via the prediction error decomposition as follows:

  log L_Q(θ; Y) = −(T/2) log(2π) − (1/2) Σ_{t=1}^{T} ( log F_t + (Y_t − Ŷ_{t|t−1})² / F_t ),   (3.2)

where Ŷ_{t|t−1} = X̂_{t|t−1} + μ_u, X̂_{t|t−1} is the best linear predictor of the state X_t based on the observations Y_1, ..., Y_{t−1}, with mean square errors P_{t|t−1} = E(X_t − X̂_{t|t−1})² and F_t = E(Y_t − Ŷ_{t|t−1})².

A QML estimate θ̂_QML of the true θ_0 is a maximizer of log L_Q(θ; Y) over some compact parameter space, where L_Q(θ; Y) is evaluated as if the linear state-space model (3.1) were Gaussian. The best state predictor X̂_{t|t−1} and the state prediction error variance P_{t|t−1} may be recursively computed using the Kalman filter, which in the context of model (3.1) is described by the following recursions:

  X̂_{t|t−1} = α_t + β_t X̂_{t−1|t−2} + β_t P_{t−1|t−2} F_{t−1}^{−1} ( Y_{t−1} − X̂_{t−1|t−2} − μ_u ),
  P_{t|t−1} = β_t² ( P_{t−1|t−2} − P²_{t−1|t−2} F_{t−1}^{−1} ) + σ_t²,
  F_t = P_{t|t−1} + σ_u²,  1 < t ≤ T,   (3.3a)

while remembering that α_t, β_t and σ_t are S-periodic over t. The start-up values of (3.3a) are calculated on the basis of X̂_{1|0} = E(X_1) and P_{1|0} = Var(X_1). Using the results of Section 2, we then get

  X̂_{1|0} = (1 − ∏_{v=1}^{S} β_v)^{−1} Σ_{j=0}^{S−1} ( ∏_{i=0}^{j−1} β_{1−i} ) α_{1−j}  and  P_{1|0} = (1 − ∏_{v=1}^{S} β_v²)^{−1} Σ_{j=0}^{S−1} ( ∏_{i=0}^{j−1} β²_{1−i} ) σ²_{1−j}.   (3.3b)

Recursions (3.3) may also be used in a reverse form for smoothing purposes, i.e. to obtain the best linear predictor X̃_t of X_t based on Y_1, ..., Y_T, from which we get estimates of the unobserved volatilities h_t (1 ≤ t ≤ T).
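For illustration, here is a compact sketch of the filter (3.3a) with start-up (3.3b), returning the quasi-log-likelihood (3.2). This is our own rendering with a 0-based season convention (season of Y[t] is t % S), not code from the paper.

```python
import numpy as np

def qml_loglik(alpha, beta, sigma2, Y, mu_u=-1.27, sigma_u2=np.pi ** 2 / 2):
    """Evaluate log L_Q(theta; Y) of (3.2) for Y_t = log eps_t^2 via the
    periodic Kalman recursions (3.3a)."""
    alpha, beta, sigma2 = map(lambda a: np.asarray(a, float),
                              (alpha, beta, sigma2))
    S = len(alpha)
    # start-up values (3.3b): periodically stationary mean/variance of X_1
    x = sum(np.prod([beta[-i % S] for i in range(j)]) * alpha[-j % S]
            for j in range(S)) / (1 - np.prod(beta))
    P = sum(np.prod([beta[-i % S] for i in range(j)]) ** 2 * sigma2[-j % S]
            for j in range(S)) / (1 - np.prod(beta ** 2))
    ll = 0.0
    for t in range(len(Y)):
        F = P + sigma_u2                    # F_t = P_{t|t-1} + sigma_u^2
        e = Y[t] - x - mu_u                 # one-step prediction error
        ll -= 0.5 * (np.log(2 * np.pi) + np.log(F) + e ** 2 / F)
        v1 = (t + 1) % S                    # season of time t+1
        x = alpha[v1] + beta[v1] * (x + P / F * e)
        P = beta[v1] ** 2 * (P - P ** 2 / F) + sigma2[v1]
    return ll
```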

Consistency and asymptotic normality of the QML estimate may be established using the standard theory of linear (non-Gaussian) signal-plus-noise models with time-invariant parameters (Dunsmuir, 1979). For this, we invoke the corresponding multivariate time-invariant model (2.2), which we transform to the linear form

  Y_n = log H_n + ζ_n,
  log H_n = B log H_{n−1} + ξ_n,  n ∈ Z,   (3.4)

where Y_n and ζ_n are S-vectors such that Y_n(v) = Y_{nS+v} and ζ_n(v) = ũ_{nS+v} (1 ≤ v ≤ S), and where log H_n, B and ξ_n are given by (2.2). Using (3.4), we can call on the theory in Dunsmuir (1979) to obtain the asymptotic variance of the QMLE under finiteness of the moment E(Y⁴_{nS+v}) (see also Ruiz, 1994 and Harvey et al, 1994). Of course, the QMLE would be asymptotically efficient if we assumed that e_t is Gaussian, that e and η are independent, and that η has the same distribution as exp(X/2) with X Gaussian. In that case, log η² would be Gaussian and the linear state space (3.1) would also be Gaussian; the QMLE would then reduce to the exact maximum likelihood estimate (MLE). However, the assumption that log η² is Gaussian seems to be of little interest in practice.

3.2 Bayesian inference via Gibbs sampling

Adopting the Bayesian approach, the parameter vector θ of the model and the unobserved volatilities h = (h_1, h_2, ..., h_T)′, which are also considered as augmented parameters, are viewed as random with a certain prior distribution f(θ, h). Given a series ε = (ε_1, ..., ε_T)′ generated from the PAR-SV_S model (2.1) with Gaussian innovations, the goal is to make inference about the joint posterior distribution f(θ, h|ε) of (θ, h) given ε. Because of the periodic structure of the PAR-SV model, it is natural to assume that the parameters ω, σ_1², σ_2², ..., σ_S² are independent of each other. Thus, the joint posterior distribution f(θ, h|ε) = f(ω, λ, h|ε) can be estimated using Gibbs sampling, provided we can draw samples from any of the S + 2 conditional posterior distributions f(ω|ε, λ, h), f(σ_v²|ε, ω, λ_{−v}, h) (1 ≤ v ≤ S) and f(h|ε, ω, λ), where x_{−t} denotes the vector obtained from x after removing its t-th component x_t. Since the posterior distribution of the volatility, f(h|ε, ω, λ), has a rather complicated expression, we sample it element by element, as done by Jacquier et al (1994). Thus, the Gibbs sampler for the joint posterior distribution f(ω, λ, h|ε) reduces to drawing samples from any of the T + S + 1 conditional posterior distributions f(ω|ε, λ, h), f(σ_v²|ε, ω, λ_{−v}, h) (1 ≤ v ≤ S) and f(h_t|ε, ω, λ, h_{−t}) (1 ≤ t ≤ T). Under normality of the volatility proxies and using standard linear regression theory, with an appropriate adaptation to the PAR form of the log-volatility equation (2.1), the conditional posteriors f(ω|ε, λ, h) and f(σ_v²|ε, ω, λ_{−v}, h) (1 ≤ v ≤ S) may be determined directly from given conjugate priors f(ω) and f(σ_v²) (1 ≤ v ≤ S). However, as in the non-periodic SV case (Jacquier et al, 1994), direct draws from the distribution f(h_t|ε, ω, λ, h_{−t}) are not possible because it has an unusual form. Nevertheless, unlike Jacquier et al (1994), who used a Metropolis-Hastings chain after determining the form of f(h_t|ε, ω, λ, h_{−t}) up to a scaling factor, we use the Griddy-Gibbs procedure, as in Tsay (2005), because in our periodic context its implementation seems much simpler.

3.2.1 Prior and posterior sampling analysis

a) Sampling the log-volatility periodic autoregressive parameter ω

Before giving the conditional posterior distribution f(ω|ε, λ, h) through conjugate prior distributions and linear regression theory, we first write the PAR log-volatility equation as a standard linear regression. Setting H_{nS+v} = (0, ..., 0, 1, log(h_{nS+v−1}), 0, ..., 0)′, where the pair (1, log(h_{nS+v−1})) is preceded by 2(v − 1) zeros and followed by 2(S − v) zeros, model (2.1b) for t = 1, ..., NS may be rewritten as the periodically heteroskedastic linear regression

  log(h_{nS+v}) = H′_{nS+v} ω + σ_v e_{nS+v},  1 ≤ v ≤ S, 0 ≤ n ≤ N − 1,   (3.5a)

or also, after dividing by σ_v, as a standard regression

  log(h_{nS+v})/σ_v = (H_{nS+v}/σ_v)′ ω + e_{nS+v},  1 ≤ v ≤ S, 0 ≤ n ≤ N − 1,   (3.5b)

with i.i.d. Gaussian errors. Assuming the variances σ_v² (1 ≤ v ≤ S) and the initial observation h_0 known, the least squares estimate ω̂_WLS of ω based on (3.5b) (which is just the weighted least squares estimate of ω based on (3.5a)) has the following form:

  ω̂_WLS = ( Σ_{n=0}^{N−1} Σ_{v=1}^{S} σ_v^{−2} H_{nS+v} H′_{nS+v} )^{−1} Σ_{n=0}^{N−1} Σ_{v=1}^{S} σ_v^{−2} H_{nS+v} log(h_{nS+v}),

and is normally distributed with mean ω and covariance matrix

  Σ_WLS = ( Σ_{n=0}^{N−1} Σ_{v=1}^{S} σ_v^{−2} H_{nS+v} H′_{nS+v} )^{−1}.   (3.6)

Under assumption (3.5b), the information of the data about ω is contained in the weighted least squares estimate ω̂_WLS of ω. To get a closed-form expression for the conditional posterior f(ω|ε, λ, h), we use a conjugate prior for ω. This prior distribution is Gaussian, i.e. ω ~ N(ω_0, Σ_0), where the hyperparameters ω_0 and Σ_0 are known and are fixed so as to have a reasonably diffuse yet informative prior. Thus, using standard regression theory (Box and Tiao, 1973; Tsay, 2005), the conditional posterior distribution of ω given ε, λ, h is

  ω|ε, λ, h ~ N(ω_*, Σ_*),   (3.7a)

where

  Σ_*^{−1} = Σ_{n=0}^{N−1} Σ_{v=1}^{S} σ_v^{−2} H_{nS+v} H′_{nS+v} + Σ_0^{−1},   (3.7b)

  ω_* = Σ_* ( Σ_{n=0}^{N−1} Σ_{v=1}^{S} σ_v^{−2} H_{nS+v} log(h_{nS+v}) + Σ_0^{−1} ω_0 ).   (3.7c)
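In code, one full draw from (3.7a) can be organized as below. This is our own sketch with illustrative function and argument names; it accumulates (3.7b)-(3.7c) observation by observation.

```python
import numpy as np

def draw_omega(logh, sigma2, S, omega0, Sigma0_inv, rng):
    """One Gibbs draw of omega = (alpha_1, beta_1, ..., alpha_S, beta_S)'
    from the Gaussian conditional posterior (3.7a)-(3.7c). logh[t-1] is the
    regressor for logh[t], so the loop starts at t = 1 (h_0 held fixed)."""
    k = 2 * S
    A = Sigma0_inv.copy()            # accumulates Sigma_*^{-1}, eq. (3.7b)
    b = Sigma0_inv @ omega0          # accumulates the bracket in (3.7c)
    for t in range(1, len(logh)):
        v = t % S                    # 0-based season of observation t
        H = np.zeros(k)
        H[2 * v], H[2 * v + 1] = 1.0, logh[t - 1]
        A += np.outer(H, H) / sigma2[v]
        b += H * (logh[t] / sigma2[v])
    Sigma_star = np.linalg.inv(A)
    omega_star = Sigma_star @ b
    return omega_star + np.linalg.cholesky(Sigma_star) @ rng.standard_normal(k)
```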

Some remarks are in order:

i) The matrix given by (3.6) is block diagonal. So if we assume that Σ_0 is also block diagonal, then we obtain the same result as if we assumed that the seasonal parameters ω_1, ω_2, ..., ω_S are independent of each other, each one having a conjugate prior with hyperparameters, say, ω_{0,v} and Σ_{0,v} (1 ≤ v ≤ S), which are the appropriate components of ω_0 and Σ_0.

ii) Faster and more stable computation of ω_* and Σ_* in (3.7), involving no matrix inversion (in contrast with (3.7b)), may be obtained by setting ω_* = ω_{(NS)} and Σ_* = Σ_{(NS)} and computing the latter quantities with the well-known recursive least squares (RLS) algorithm (see Ljung and Söderström, 1983), which is given by

  ω_{(nS+v)} = ω_{(nS+v−1)} + Σ_{(nS+v−1)} H_{nS+v} ( log(h_{nS+v}) − H′_{nS+v} ω_{(nS+v−1)} ) / ( σ_v² + H′_{nS+v} Σ_{(nS+v−1)} H_{nS+v} ),
  Σ_{(nS+v)} = Σ_{(nS+v−1)} − Σ_{(nS+v−1)} H_{nS+v} H′_{nS+v} Σ_{(nS+v−1)} / ( σ_v² + H′_{nS+v} Σ_{(nS+v−1)} H_{nS+v} ),  1 ≤ v ≤ S, 0 ≤ n ≤ N − 1,   (3.8a)

with starting values

  ω_{(0)} = ω_0 and Σ_{(0)} = Σ_0.   (3.8b)

This may improve the numerical stability and the computation time of the whole estimation method, especially for a large period S.

b) Sampling the log-volatility periodic variance parameters σ_v², 1 ≤ v ≤ S

We also use conjugate priors for σ_v² (1 ≤ v ≤ S) to get a closed-form expression for the conditional posterior of σ_v² given the data and the other parameters λ_{−v}. Such priors are provided by the inverted chi-squared distribution:

  a_0 λ_{0,v} / σ_v² ~ χ²_{a_0},  1 ≤ v ≤ S,   (3.9a)

where a_0 and λ_{0,v} (1 ≤ v ≤ S) are known hyperparameters. Given the parameters ω and h, if we define

  e_{nS+v} = log(h_{nS+v}) − α_v − β_v log(h_{nS+v−1}),  1 ≤ v ≤ S, 0 ≤ n ≤ N − 1,   (3.9b)

then e_v, e_{v+S}, ..., e_{v+(N−1)S} are i.i.d. N(0, σ_v²) for each 1 ≤ v ≤ S. From standard Bayesian linear regression theory (see e.g. Tsay, 2005), the conditional posterior distribution of σ_v² (1 ≤ v ≤ S), given the data and the remaining parameters, is an inverted chi-squared distribution with a_0 + N degrees of freedom, that is,

  ( a_0 λ_{0,v} + Σ_{n=0}^{N−1} e²_{nS+v} ) / σ_v² | ε, ω, λ_{−v}, h ~ χ²_{a_0+N},  1 ≤ v ≤ S.   (3.9c)
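A matching sketch for (3.9a)-(3.9c) (ours; the hyperparameter names a0 and lam0 are illustrative):

```python
import numpy as np

def draw_sigma2(logh, omega, S, a0, lam0, rng):
    """One Gibbs draw of each sigma_v^2 from the inverted chi-squared
    posterior (3.9c), using the residuals e_{nS+v} of (3.9b)."""
    alpha, beta = omega[0::2], omega[1::2]
    sigma2 = np.zeros(S)
    for v in range(S):
        start = v if v > 0 else S        # season-v times having a lagged value
        idx = np.arange(start, len(logh), S)
        e = logh[idx] - alpha[v] - beta[v] * logh[idx - 1]
        sigma2[v] = (a0 * lam0[v] + e @ e) / rng.chisquare(a0 + len(e))
    return sigma2
```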

c) Sampling the augmented volatility parameters h = (h_1, h_2, ..., h_T)′

Now, it remains to sample from the conditional posterior distribution f(h_t|ε, θ, h_{−t}), t = 1, 2, ..., T. Let us first give the expression of this distribution (up to a multiplicative constant); we will then show how to (indirectly) draw samples from it using the Griddy Gibbs technique. Because of the Markovian (but non-homogeneous) structure of the volatility process {h_t, t ∈ Z} and the conditional independence of ε_t and h_{t−h} (h ≠ 0) given h_t, it follows that for any 0 < t < T:

  f(h_t|ε, θ, h_{−t}) = f(h_t|h_{t−1}, θ) f(h_{t+1}|h_t, θ) f(ε_t|θ, h_t) / ( f(h_{t+1}|h_{t−1}, θ) f(ε_t|θ, h_{t−1}, h_{t+1}) )
                      ∝ f(h_t|h_{t−1}, θ) f(h_{t+1}|h_t, θ) f(ε_t|θ, h_t).   (3.10)

Using the facts that ε_t|θ, h_t ≡ ε_t|h_t ~ N(0, h_t), that log(h_t) | θ, log(h_{t−1}) ~ N(α_t + β_t log(h_{t−1}), σ_t²), and that d log(h_t) = h_t^{−1} dh_t, formula (3.10) becomes

  f(h_t|ε, θ, h_{−t}) ∝ (1/√(h_t³)) exp( −ε_t²/(2 h_t) − (log(h_t) − μ_t)²/(2 ν_t²) ),  0 < t < T,   (3.11a)

where

  μ_t = ( σ²_{t+1} (α_t + β_t log(h_{t−1})) + σ_t² β_{t+1} (log(h_{t+1}) − α_{t+1}) ) / ( σ²_{t+1} + β²_{t+1} σ_t² ),   (3.11b)

  ν_t² = σ_t² σ²_{t+1} / ( σ²_{t+1} + β²_{t+1} σ_t² ).   (3.11c)

Note that in (3.11a) we have used the well-known formula (see Box and Tiao, 1973)

  A(x − a)² + B(x − b)² = (x − c)²(A + B) + (a − b)² AB/(A + B),  where c = (Aa + Bb)/(A + B), provided that A + B ≠ 0.

For the two end-points h_0 and h_T, we may simply use a naive approach, which consists of assuming h_0 fixed, so that the sampling starts at t = 1, and of using the fact that log(h_T) | θ, log(h_{T−1}) ~ N(α_T + β_T log(h_{T−1}), σ_T²). Alternatively, we may use a forecast of h_{T+1} and a backward prediction of h_0 and employ again formula (3.10) for 0 < t < T + 1. In that case, we forecast h_{T+1} on the basis of the log-volatility equation of model (2.1), using the one-step-ahead forecast of log(h_{T+1}) at origin T, which is given from (2.1) by

  α_{T+1} + β_{T+1} log(h_T).

The backward forecast of h_0 is obtained using a one-step-ahead backward forecast on the basis of the backward periodic autoregression (Sakai and Ohno, 1997) associated with the PAR log-volatility.

Once the conditional posterior f(h_t|ε, θ, h_{−t}) is determined up to a scale factor, we may use an indirect sampling algorithm to draw the volatility h_t. Jacquier et al (1994) used a rejection Metropolis-Hastings algorithm. Alternatively, following Tsay (2005), we call on the Griddy-Gibbs technique (Ritter and Tanner, 1992), which consists in:

i) choosing a grid of m points from a given interval [h_{t1}, h_{tm}] of h_t, h_{t1} ≤ h_{t2} ≤ ... ≤ h_{tm}, then evaluating the conditional posterior f(h_t|ε, θ, h_{−t}) via (3.11) (ignoring the normalization constant) at each one of these points, giving f_{ti} = f(h_{ti}|ε, θ, h_{−t}), i = 1, ..., m;

ii) building from the values f_{t1}, f_{t2}, ..., f_{tm} the discrete distribution p(·) defined at h_{ti} (1 ≤ i ≤ m) by p(h_{ti}) = f_{ti} / Σ_{j=1}^{m} f_{tj}, which may be seen as an approximation to the inverse cumulative distribution of f(h_t|ε, θ, h_{−t});

iii) generating a number from the uniform distribution on (0, 1) and transforming it using the discrete distribution p(·) obtained in ii) to get a random draw for h_t.

It is worth noting that the choice of the grid [h_{t1}, h_{tm}] is crucial for the efficiency of the Griddy algorithm. We follow here a device similar to that of Tsay (2005), which consists of taking the range of h_t at the l-th Gibbs iteration to be [h_t^m, h_t^M], where

  h_t^m = 0.6 max( h_t^{(0)}, h_t^{(l−1)} ),  h_t^M = 1.4 min( h_t^{(0)}, h_t^{(l−1)} ),   (3.12)

h_t^{(l−1)} and h_t^{(0)} being, respectively, the estimate of h_t at the (l − 1)-th iteration and its initial value.

3.2.2 Bayes Griddy Gibbs sampler for PAR-SV

The following algorithm summarizes the Gibbs sampler for drawing from the conditional posterior distribution f(θ, h|ε) given ε. For l = 0, 1, ..., M, consider the notation h^{(l)} = (h_1^{(l)}, ..., h_T^{(l)})′, ω^{(l)} = (α_1^{(l)}, β_1^{(l)}, ..., α_S^{(l)}, β_S^{(l)})′ and λ^{(l)} = (σ_1^{2(l)}, ..., σ_S^{2(l)})′.

Algorithm 3.1

Step 0. Specify starting values h^{(0)}, ω^{(0)} and λ^{(0)}.
Step 1. Repeat for l = 0, 1, ..., M − 1:
  Draw ω^{(l+1)} from f(ω|ε, λ^{(l)}, h^{(l)}) using (3.7a) and (3.8).
  Draw λ^{(l+1)} from f(λ|ε, ω^{(l+1)}, h^{(l)}) using (3.9b) and (3.9c).
  Repeat for t = 1, 2, ..., T = NS:
    Griddy Gibbs:
      Select a grid of m points h_{ti}^{(l+1)}: h_{t1}^{(l+1)} ≤ h_{t2}^{(l+1)} ≤ ... ≤ h_{tm}^{(l+1)}.
      For 1 ≤ i ≤ m, calculate f_{ti}^{(l+1)} = f(h_{ti}^{(l+1)}|ε, θ^{(l+1)}, h_{−t}^{(l)}) from (3.11).
      Define the discrete distribution p(h_{ti}^{(l+1)}) = f_{ti}^{(l+1)} / Σ_{j=1}^{m} f_{tj}^{(l+1)}, 1 ≤ i ≤ m.
      Generate a number u from the uniform (0, 1) distribution.
      Transform u using the distribution p(·) to get h_t^{(l+1)}, which is considered as a draw from f(h_t|ε, θ^{(l+1)}, h_{−t}^{(l)}).
Step 2. Return the values h^{(l)}, ω^{(l)} and λ^{(l)}, l = 1, ..., M.
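The inner Griddy-Gibbs move of Algorithm 3.1, evaluating (3.11a) on a grid built with (3.12) and inverting the resulting discrete distribution, can be sketched as follows (our code; all names are illustrative, and the fallback for a degenerate grid range is our own addition):

```python
import numpy as np

def draw_ht_griddy(eps_t, logh_prev, logh_next, par_t, par_next,
                   h0_t, hprev_t, m, rng):
    """One Griddy-Gibbs draw of h_t (0 < t < T) from (3.11a)-(3.11c).
    par_t = (alpha_t, beta_t, sigma2_t); par_next the same for season t+1;
    h0_t and hprev_t feed the grid range rule (3.12)."""
    a, b, s2 = par_t
    a1, b1, s21 = par_next
    lo = 0.6 * max(h0_t, hprev_t)          # grid bounds, eq. (3.12)
    hi = 1.4 * min(h0_t, hprev_t)
    if lo >= hi:                           # degenerate range: widen it
        lo, hi = 0.6 * min(h0_t, hprev_t), 1.4 * max(h0_t, hprev_t)
    grid = np.linspace(lo, hi, m)
    mu = (s21 * (a + b * logh_prev)
          + s2 * b1 * (logh_next - a1)) / (s21 + b1 ** 2 * s2)   # (3.11b)
    nu2 = s2 * s21 / (s21 + b1 ** 2 * s2)                        # (3.11c)
    logf = (-1.5 * np.log(grid) - eps_t ** 2 / (2 * grid)
            - (np.log(grid) - mu) ** 2 / (2 * nu2))              # log (3.11a)
    w = np.exp(logf - logf.max())
    cdf = np.cumsum(w) / w.sum()           # steps i)-ii): discrete inverse cdf
    return grid[np.searchsorted(cdf, rng.uniform())]  # step iii)
```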

3.2.3 Inference and prediction using the Gibbs sampler for PAR-SV

Once samples from the posterior distribution f(θ, h|ε) are available, statistical inference for the PAR-SV model may easily be made. The Bayes Griddy-Gibbs parameter estimate θ̂_BGG of θ is taken to be the posterior mean θ̄ = E(θ|ε), which, under the Markov chain ergodic theorem, is approximated with any desired degree of accuracy by

  θ̂_BGG = (1/M) Σ_{l=l_0+1}^{l_0+M} θ^{(l)},

where θ^{(l)} is the l-th draw of θ from f(θ, h|ε) given by Algorithm 3.1, l_0 is the burn-in size, i.e. the number of initial draws discarded, and M is the number of retained draws. Smoothed and forecasted volatilities are obtained as a by-product of the Bayes Griddy-Gibbs method. The smoothed value h̄_t = E(h_t|ε) of h_t (1 ≤ t ≤ T) is obtained while sampling from the distribution f(h_t|ε), which in turn is a marginal of the posterior distribution f(θ, h|ε). So E(h_t|ε) may be accurately approximated by (1/M) Σ_{l=l_0+1}^{l_0+M} h_t^{(l)}, where h_t^{(l)} is the l-th draw of h_t from f(θ, h_t|ε). Forecasts of future values h_{T+1}, h_{T+2}, ..., h_{T+k} are obtained either, as above, from the log-volatility equation evaluated at the Bayes parameter estimates, or directly while sampling from the predictive distribution f(h_{T+1}, h_{T+2}, ..., h_{T+k}|ε) (see also Jacquier et al, 1994).

3.2.4 MCMC diagnostics

It is important to discuss the numerical properties of the proposed BGG method, in which the volatilities are sampled element by element. Despite its ease of implementation, it is well documented that the main drawback of the single-move approach (e.g. Kim et al, 1998) is that the posterior draws are often highly correlated, resulting in slow mixing and hence slow convergence. Among several MCMC diagnostic measures, we consider here the Relative Numerical Inefficiency (RNI) (e.g. Geweke, 1989; Geyer, 1992), which is given by

  RNI = 1 + 2 Σ_{k=1}^{B} K(k/B) ρ̂_k,

where B = 500 is the bandwidth, K(·) is the Parzen kernel (e.g. Kim et al, 1998) and ρ̂_k is the sample autocorrelation at lag k of the BGG parameter draws. The RNI indicates the inefficiency due to the serial correlation of the BGG draws (see also Geweke, 1989; Tsiakas, 2006). Another MCMC diagnostic measure (Geweke, 1989) we use here is the Numerical Standard Error (NSE), which is the square root of the estimated asymptotic variance of the MCMC estimator. The NSE is given by

  NSE = √( (1/M) [ γ̂_0 + 2 Σ_{k=1}^{B} K(k/B) γ̂_k ] ),

where γ̂_k is the sample autocovariance at lag k of the BGG parameter draws and M is the number of draws.
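Both diagnostics are easy to compute from a stored chain; here is a sketch of ours with the Parzen kernel and B = 500:

```python
import numpy as np

def parzen(x):
    """Parzen lag window K on [0, 1]."""
    x = abs(x)
    if x <= 0.5:
        return 1.0 - 6.0 * x ** 2 + 6.0 * x ** 3
    return 2.0 * (1.0 - x) ** 3 if x <= 1.0 else 0.0

def rni_nse(draws, B=500):
    """Relative numerical inefficiency and numerical standard error of a
    scalar chain of M posterior draws."""
    d = np.asarray(draws, float)
    d = d - d.mean()
    M = len(d)
    B = min(B, M - 1)
    gam = np.array([d[: M - k] @ d[k:] / M for k in range(B + 1)])
    w = np.array([parzen(k / B) for k in range(1, B + 1)])
    rni = 1.0 + 2.0 * np.sum(w * gam[1:]) / gam[0]   # autocorrelations
    nse = np.sqrt((gam[0] + 2.0 * np.sum(w * gam[1:])) / M)
    return rni, nse
```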

3.2.5 Period selection via the Deviance Information Criterion

An important issue in PAR-SV modeling is the selection of the period S. This problem is especially pronounced when modeling daily returns, because their periodicity is not as obvious as in quarterly or monthly data. Although many authors (e.g. Franses and Paap, 2000; Tsiakas, 2006) have emphasized the day-of-the-week effect in daily stock returns, which often entails a period of S = 5, the period selection problem in periodic volatility models remains challenging. Standard order selection measures such as the AIC and BIC, which require the specification of the number of free parameters in each model, are not applicable for comparing complex Bayesian hierarchical models like the PAR-SV model. This is because in the PAR-SV model the number of free parameters, which is augmented by the latent volatilities (themselves not independent but Markovian), is not well defined (cf. Berg et al, 2004). For a long time, the Bayes factor has been viewed as the best way to carry out Bayesian model comparison. However, its calculation, based on evaluating the marginal likelihood, requires extremely high-dimensional integration, and this would be especially computationally demanding for the PAR-SV model, which involves a large number of parameters, augmented by the volatilities, exceeding the sample size. In this paper, we will instead carry out period selection using the Deviance Information Criterion (DIC), which may be viewed as a trade-off between model adequacy and model complexity (Spiegelhalter et al, 2002). Such a criterion, which represents a Bayesian generalization of the AIC, is easily obtained from MCMC draws, requiring no extra calculation. The (conditional) DIC, as introduced by Spiegelhalter et al (2002), is defined in the context of the PAR-SV_S model as

  DIC(S) = −4 E_{θ,h|ε}( log f(ε|θ, h) ) + 2 log f(ε|θ̄, h̄),

where f(ε|θ, h) is the (conditional) likelihood of the PAR-SV model for a given period S and (θ̄, h̄) = E((θ, h)|ε) is the posterior mean of (θ, h). From the Griddy-Gibbs draws, the expectation E_{θ,h|ε}(log f(ε|θ, h)) can be estimated by averaging the conditional log-likelihood log f(ε|θ, h) over the posterior draws of (θ, h). Further, the joint posterior mean of (θ, h) can be approximated by the mean of the posterior draws (θ^{(l)}, h^{(l)}). Using the fact that

  log f(ε|θ, h) := log f(ε|h) = −(1/2) Σ_{t=1}^{T} ( log(2π h_t) + ε_t²/h_t ),

DIC(S) is estimated by

  (2/M) Σ_{l=l_0+1}^{l_0+M} Σ_{t=1}^{T} ( log(2π h_t^{(l)}) + ε_t²/h_t^{(l)} ) − Σ_{t=1}^{T} ( log(2π h̄_t) + ε_t²/h̄_t ),

where h_t^{(l)} denotes the l-th BGG draw of h_t, M is the number of draws, l_0 is the burn-in size and h̄_t := E(h_t|ε) is estimated by (1/M) Σ_{l=l_0+1}^{l_0+M} h_t^{(l)} (1 ≤ t ≤ T). Of course, a model is preferred if it has the smallest DIC value.
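Given the stored volatility draws, the DIC estimate above takes a few lines (our sketch; post-burn-in draws are assumed to be the rows of h_draws):

```python
import numpy as np

def conditional_dic(eps, h_draws):
    """Estimate the conditional DIC from BGG volatility draws:
    -4 * mean_l log f(eps | h^(l)) + 2 * log f(eps | h_bar)."""
    eps = np.asarray(eps, float)

    def loglik(h):
        return -0.5 * np.sum(np.log(2.0 * np.pi * h) + eps ** 2 / h)

    mean_ll = np.mean([loglik(h) for h in h_draws])
    h_bar = np.mean(h_draws, axis=0)     # posterior mean volatility h_bar_t
    return -4.0 * mean_ll + 2.0 * loglik(h_bar)
```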

Since the DIC is random and, for the same fitted series, may change value from one set of MCMC draws to another, it is useful to obtain its corresponding numerical standard error. However, as pointed out by Berg et al (2004), no efficient method has been developed for calculating reasonably accurate Monte Carlo standard errors of the DIC. Nevertheless, following the recommendation of Zhu and Carlin, we simply replicate the calculation of the DIC some G times and estimate Var(DIC) by its sample variance, giving a broad indication of the implied variability of the DIC. Note, finally, that for the class of latent variable models to which the PAR-SV belongs, there are in fact several alternative definitions of the DIC, depending on the concept of likelihood used (complete, observed, conditional); the one we work with here is the conditional DIC, as categorized by Celeux et al (2006). We have avoided using the observed DIC because, like the Bayes factor, it is based on evaluating the marginal likelihood, whose computation is typically very time-consuming.

4. Simulation study: finite-sample performance of the QML and BGG estimates

In this section, a simulation study is undertaken to assess the performance of the QML and BGG Bayes estimates in finite samples. Three instances of the Gaussian PAR-SV model with period S = 2 are considered; they are reported in Tables 4.1, 4.2 and 4.3, respectively. The parameters θ = (α_1, α_2, β_1, β_2, σ_1², σ_2²)′ are chosen for each instance so as to be in accordance with empirical evidence. In particular, for the three instances the persistence parameter β_1β_2 equals 0.90, 0.95 and 0.99, respectively. We have also set small values for σ_1² and σ_2², because this is a critical case for the performance of the QMLE, as pointed out by Ruiz (1994) and Harvey et al (1994) in the standard SV case. The choice of S = 2 is only motivated by computational and time considerations. For each instance, we have considered replications of PAR-SV series of a common sample size, for which we calculated the QML and Bayes estimates. The means of the estimates (θ̂_QML and θ̂_BGG) and their standard deviations (Std) over the replications are reported in Tables 4.1-4.3. For the QML method a nonlinear optimization routine is required; we applied a Gauss-Newton type algorithm started from different values of the parameter estimate. For the Bayes Griddy Gibbs estimate, we have taken the same prior distributions for ω = (α_1, β_1, α_2, β_2)′ across instances, ω ~ N(ω_0, Σ_0) with ω_0 = (0, 0, 0, 0)′ and Σ_0 diagonal, which are quite diffuse, but proper. Concerning initial parameter values, the initial volatility h^{(0)} in the

Gibbs sampler is taken to be the volatility generated by the fitted GARCH(1,1), that is, h^{(0)} = h^G, where

  ε_t = √(h_t^G) η_t,
  h_t^G = c_0 + c_1 ε²_{t−1} + c_2 h^G_{t−1},  t ∈ Z,

while the initial log-volatility parameter estimate ω^{(0)} is taken to be the ordinary least-squares estimate of ω based on the series log h^{(0)}. Furthermore, in the Griddy Gibbs iteration, h_t is generated using a grid of m points, with the range of h_t at the l-th Gibbs iteration taken as in (3.12). Finally, the Gibbs sampler is run for M iterations, from which the first l_0 iterations are discarded.

  True value   .       .      .9      .       .3
  QMLE         .       .78    .36     .9348   .69     .37
  Std          (.374)  (.69)  (.743)  (.38)   (.96)   (.836)
  BGG          .4      .9979  .98     .96     .3      .964
  Std          (.373)  (.8)   (.4)    (.7)    (.7)    (.6)

Table 4.1: Instance 1 - Simulation results for QML and BGG on a Gaussian PAR-SV_2.

  True value   .       .      .9      .       .
  QMLE         .8      .799   .7      .9849   .394    .7
  Std          .396    .87    .643    .3      .697    .48
  BGG          .4.939  .999   .3      .9      .4      .69
  Std          .33     .66    .3      .      .3      .93

Table 4.2: Instance 2 - Simulation results for QML and BGG on a Gaussian PAR-SV_2.

  True value   .       .      .99     .       .
  QMLE         .3      .384   .773    .93     .       .68
  Std          (.733)  (.6)   (.66)   (.64)   (.487)  (.493)
  BGG          .36     .      .99     .9767   .66     .8
  Std          (.3)    (.)    (.)     (.3)    (.)     (.3)

Table 4.3: Instance 3 - Simulation results for QML and BGG on a Gaussian PAR-SV_2.

It can be observed that the parameters are quite well estimated by the two methods, with an obvious superiority of the Bayes estimate over the QMLE. Indeed, in all instances the BGG estimate (BGGE) greatly dominates the QMLE, in the sense that it has smaller bias and smaller standard deviations. We also observe that the QMLE provides poor estimates when the variance parameters σ_1² and σ_2² are small. From a theoretical point of view, it would be interesting to compare the QMLE and BGGE when log η² is Gaussian, i.e. when η has the distribution of exp(X/2) with X Gaussian. In that case, as emphasized in Section 3, the QMLE reduces to the MLE, and it would be (asymptotically) more efficient than the BGGE. Thus, in simulations, the QMLE would in principle perform better than the BGGE for PAR-SV series with a quite large sample size. However, the BGG method would have to be adapted to that distribution of η, which may entail considerable effort for a distribution that seems to have little interest in practice.

5. Application to S&P 500 returns

For the sake of illustration, we propose to fit Gaussian PAR-SV models (2.1) with various periods to the returns on the S&P 500 (closing value) index. In order to highlight several possible values of the PAR-SV period, three types of datasets are considered, namely daily, quarterly and monthly S&P 500 returns. For the three series considered, we use the Bayes Griddy Gibbs estimate, given its good finite-sample properties, with M iterations and an appropriate burn-in. As in Section 4, we take the initial volatility h^{(0)} to be the volatility generated by the fitted GARCH(1,1), while the initial log-volatility parameter estimate ω^{(0)} is taken to be the ordinary least-squares estimate of ω based on the series log h^{(0)}. We have in fact avoided using the volatility fitted by the periodic GARCH (P-GARCH(1,1)) model as the initial value h^{(0)}, because of some numerical difficulties in the corresponding QML estimation when S becomes large (once S ≥ 3). In the Gibbs step, the volatility h^{(l)} is drawn across PAR-SV models using the Griddy-Gibbs technique