Periodic autoregressive stochastic volatility
MPRA Munich Personal RePEc Archive

Periodic autoregressive stochastic volatility

Abdelhakim Aknouche
University of Science and Technology Houari Boumediene; Qassim University

June 2013

Online at MPRA Paper No. 697, posted 8 February 2016, UTC
Periodic autoregressive stochastic volatility

Abdelhakim Aknouche

October

Abstract. This paper proposes a stochastic volatility model (PAR-SV) in which the log-volatility follows a first-order periodic autoregression. This model aims at representing time series with volatility displaying a stochastic periodic dynamic structure, and may then be seen as an alternative to the familiar periodic GARCH process. The probabilistic structure of the proposed PAR-SV model, such as periodic stationarity and the autocovariance structure, is first studied. Then, parameter estimation is examined through the quasi-maximum likelihood (QML) method, where the likelihood is evaluated using the prediction error decomposition approach and Kalman filtering. In addition, a Bayesian MCMC method is also considered, where the posteriors are given from conjugate priors using the Gibbs sampler, in which the augmented volatilities are sampled from the Griddy Gibbs technique in a single-move way. As a by-product, period selection for the PAR-SV is carried out using the (conditional) Deviance Information Criterion (DIC). A simulation study is undertaken to assess the performance of the QML and Bayesian Griddy Gibbs estimates. Applications of Bayesian PAR-SV modeling to daily, quarterly and monthly S&P returns are considered.

Keywords and phrases: Periodic stochastic volatility, periodic autoregression, QML via prediction error decomposition and Kalman filtering, Bayesian Griddy Gibbs sampler, single-move approach, DIC.

Mathematics Subject Classification: AMS Primary 62M10; Secondary 62F99.

Proposed running head: Periodic AR Stochastic Volatility.

1. Introduction

Over the past three decades, stochastic volatility (SV) models introduced by Taylor (1982) have played an important role in modelling financial time series, which are characterized by a time-varying volatility feature.
(Faculty of Mathematics, University of Science and Technology Houari Boumediene, Algiers, Algeria, aknouche_ab@yahoo.com.)

This class of models is often viewed as a better formal alternative to ARCH-type models because the
volatility is itself driven by an exogenous innovation, a fact that is consistent with finance theory, although it makes the model relatively more difficult to estimate. Several extensions of the original SV formulation have been proposed in the literature to account for further volatility features such as long memory, simultaneous dependence, excess kurtosis, leverage effect and change in regime (e.g. Harvey et al., 1994; Ghysels et al., 1996; Breidt, 1997; Breidt et al., 1998; So et al., 1998; Chib et al.; Carvalho and Lopes, 2007; Omori et al., 2007; Nakajima and Omori, 2009). However, it seems that most of the proposed formulations have been devoted to time-invariant volatility parameters, and hence they could not meaningfully explain time series whose volatility structure changes over time, in particular volatility displaying a stochastic periodic pattern that cannot be accounted for by time-invariant SV-type models. In order to describe periodicity in the volatility, Tsiakas (2006) proposed various interesting and parsimonious time-varying stochastic volatility models in which the volatility parameters are expressed as deterministic periodic functions of time with appropriate exogenous variables. The proposed models, called "periodic stochastic volatility" (PSV), have been successfully applied to model the evolution of daily S&P returns. This is evidence that a periodically changing structure may characterize time series volatility. However, the PSV formulations are by definition especially well adapted to a kind of deterministic periodicity in the second moment, and hence they might neglect a possible stochastic periodicity in these moments (see e.g. Ghysels and Osborn for the difference between deterministic and stochastic periodicity). A complementary approach which seems to be appropriate for capturing stochastic periodicity in the volatility is to consider a linear time-invariant representation for the volatility equation involving seasonal lags, leading to a seasonal SV specification (see e.g.
Ghysels et al., 1996). However, because of the time-invariance of the volatility parameters, the seasonal SV model may be too restrictive in representing periodicity, and a model with periodic time-varying parameters seems to be more relevant. Indeed, as pointed out by Bollerslev and Ghysels (1996, p. 4), many financial time series encountered in practice are such that neglecting periodic time-variation in the corresponding volatility equation gives rise to a loss in forecast efficiency, which is more severe in the GARCH model than in linear ARMA models. This has motivated Bollerslev and Ghysels (1996) to propose the periodic GARCH (P-GARCH) formulation, in which the parameters vary periodically over time in order to capture the stochastic periodicity pattern in the conditional second moment. At present the P-GARCH model is among the most important models for describing periodic time series volatility (see e.g. Bollerslev and Ghysels, 1996; Taylor, 2006; Koopman et al., 2007; Osborn et al., 2008; Regnard and Zakoïan; Sigauke and Chikobvu; Aknouche and Al-Eid). However, despite the recognized relevance of the P-GARCH model, an alternative periodic SV for stochastic periodicity is in fact needed for many reasons. First, it is well known that an SV-like model is more flexible than a GARCH-type model because the volatility in the latter is only driven by the past of the observed process, which constitutes a restrictive limitation. Second, compared to SV-type models, the probability structure of P-GARCH models
is relatively more complex to obtain (Aknouche and Bibi, 2009). Finally, compared to the P-GARCH, the PAR-SV easily allows simple multivariate generalizations. In this paper we propose to model stochastic periodicity in the volatility through a model that generalizes the standard SV equation so that the parameters vary periodically over time. Thus, in the proposed model, termed periodic autoregressive stochastic volatility (PAR-SV), the log-volatility process follows a first-order periodic autoregression, and it may be generalized so as to have any linear periodic representation. This model may be seen as an extension of the models of Tsiakas (2006) to include a periodic feature in the autoregressive dynamics of the log-volatility equation. The structure and probability properties of the proposed model, such as periodic stationarity, the autocovariance structure and the relationship with multivariate stochastic volatility models, are first studied. In particular, periodic ARMA (PARMA) representations for the logarithm of the squared PAR-SV process are proposed. Then, parameter estimation is conducted via the quasi-maximum likelihood (QML) method, properties of which are discussed. In addition, a Bayesian estimation approach using Markov Chain Monte Carlo (MCMC) techniques is also considered. Specifically, a Gibbs sampler is used to estimate the joint posterior distribution of the parameters and the augmented volatility, calling on the Griddy Gibbs procedure when estimating the conditional posterior distribution of the augmented parameters. On the other hand, selection of the period of the PAR-SV model is carried out using the (conditional) Deviance Information Criterion (DIC). Simulation experiments are undertaken to assess the finite-sample performance of the QMLE and the Bayesian Griddy Gibbs methods. Moreover, empirical applications to modeling series of daily, quarterly and monthly S&P returns are conducted in order to appreciate the usefulness of the proposed PAR-SV model.
In the particular daily return case, a variant of the PAR-SV model with missing values, dealing with the "day-of-the-week" effect, is applied. The rest of this paper proceeds as follows. Section 2 proposes the PAR-SV model and studies its main probabilistic properties. In Section 3, the quasi-maximum likelihood method via prediction error decomposition and Kalman filtering is adopted. Moreover, a single-move Bayesian approach by means of the Bayesian Griddy Gibbs (BGG) sampler is proposed. In particular, some MCMC diagnostic tools are presented, and period selection in PAR-SV models is carried out using the DIC. Through a simulation study, Section 4 examines the behavior of the QML and BGG methods in finite samples. Section 5 applies the PAR-SV specification to model daily, quarterly and monthly S&P returns using the Bayesian Griddy Gibbs method. Finally, Section 6 concludes.
2. The PAR-SV and its main probabilistic properties

In this paper, we say that a stochastic process {ε_t, t ∈ Z} has a periodic autoregressive stochastic volatility representation with period S (PAR-SV_S in short) if it is given by

  ε_t = √(h_t) η_t,
  log(h_t) = γ_t + φ_t log(h_{t−1}) + σ_t e_t,  t ∈ Z,  (2.1a)

where the parameters γ_t, φ_t and σ_t > 0 are S-periodic over t (i.e. γ_t = γ_{t+Sn} for all n ∈ Z, and so on) and the period S is the smallest positive integer verifying the latter relationship. {(η_t, e_t)', t ∈ Z} is a sequence of independent and identically distributed (i.i.d.) random vectors with mean (0, 0)' and covariance matrix I_2 (I_2 stands for the identity matrix of dimension 2). We have called model (2.1a) periodic autoregressive stochastic volatility rather than simply periodic stochastic volatility because the log-volatility is driven by a first-order periodic autoregression, and also in order to distinguish model (2.1a) from the periodic stochastic volatility (PSV) model proposed by Tsiakas (2006). In fact, the PAR-SV model (2.1a) may be generalized so that h_t satisfies any stable periodic ARMA (henceforth PARMA) representation. Note that when φ_t = 0, model (2.1a) reduces to Tsiakas's (2006) model if we take γ_t to be an appropriate deterministic periodic function of time. In that case, the effect of any current shock in the innovation e_t influences only the present volatility and does not affect its future evolution. This is the case of what is called deterministic periodicity. If, in contrast, φ_t ≠ 0 for some t, the log-volatility equation involves lagged values of the log-volatility process. Therefore, the log-volatility consists at any time of an accumulation of past shocks, so that present shocks affect more or less the future log-volatility evolution, depending on the stability of the log-volatility equation (see the periodic stationarity condition (2.5) below). This case is commonly named stochastic periodicity in the volatility.
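To fix ideas, the data-generating process above can be simulated directly season by season. The sketch below is illustrative only (function and variable names are mine, not the paper's); it assumes Gaussian (η_t, e_t) and writes gamma, phi, sigma for the S-periodic intercept, autoregressive and volatility-of-volatility parameters.

```python
import numpy as np

def simulate_par_sv(gamma, phi, sigma, n_cycles, seed=0):
    """Simulate eps_t = sqrt(h_t)*eta_t with
    log h_t = gamma_v + phi_v*log h_{t-1} + sigma_v*e_t, season v = t mod S."""
    rng = np.random.default_rng(seed)
    S, T = len(gamma), len(gamma) * n_cycles
    eps, log_h, x = np.empty(T), np.empty(T), 0.0
    for t in range(T):
        v = t % S
        # log-volatility recursion, then the return given the volatility
        x = gamma[v] + phi[v] * x + sigma[v] * rng.standard_normal()
        log_h[t] = x
        eps[t] = np.exp(x / 2.0) * rng.standard_normal()
    return eps, log_h

# period S = 2: a persistent season followed by a less persistent, noisier one
eps, log_h = simulate_par_sv([0.05, -0.1], [0.9, 0.5], [0.2, 0.4], n_cycles=500)
```

The product of the autoregressive coefficients (here 0.45) is what governs stability, in line with the periodic stationarity condition discussed below.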
It should be noted that although h_t is conventionally called the volatility, it is not the conditional variance of the observed process given its past information in the familiar sense of ARCH-type models. This is because h_t is not F_{t−1}-measurable, so that E(ε_t² | F_{t−1}) = E(h_t | F_{t−1}) ≠ h_t, where F_t is the σ-algebra generated by {ε_u, u ≤ t}. Nevertheless, E(h_t) = E(ε_t²) and E(ε_t² | h_t) = h_t as in the ARCH-type case. To emphasize the periodicity of the model, let t = nS + v for n ∈ Z and 1 ≤ v ≤ S. Then model (2.1a) may be written as follows:

  ε_{nS+v} = √(h_{nS+v}) η_{nS+v},
  log(h_{nS+v}) = γ_v + φ_v log(h_{nS+v−1}) + σ_v e_{nS+v},  n ∈ Z, 1 ≤ v ≤ S,  (2.1b)

where by season v (1 ≤ v ≤ S) we mean the channel {v, v + S, v + 2S, ...} with corresponding parameters γ_v, φ_v and σ_v. From (2.1b) the log-volatility appears to be a Markov chain, which is not homogeneous as in time-invariant stochastic volatility models, but is rather periodically homogeneous due to the periodic time-variation of
parameters. This may relatively complicate the study of the probabilistic structure of the PAR-SV model. As is common in periodic time-varying modeling, a routine approach is to write (2.1b) as a time-invariant multivariate SV model by embedding the seasons v = 1, ..., S (see e.g. Gladyshev, 1961, and Tiao and Grupe, 1980, for periodic linear models) and then to study the properties of this latter. More precisely, define the S-variate sequences {H_n, n ∈ Z} and {ε_n, n ∈ Z} by H_n = (h_{nS+1}, ..., h_{nS+S})' and ε_n = (ε_{nS+1}, ..., ε_{nS+S})'. Then model (2.1b) may be cast in the following multivariate SV form:

  ε_n = diag(H_n^{1/2}) η_n,
  log H_n = B log H_{n−1} + ξ_n,  n ∈ Z,  (2.2)

where η_n = (η_{nS+1}, ..., η_{nS+S})' and diag(a) stands for the diagonal matrix formed by the entries of the vector a in the given order. The notations H_n^{1/2} and log H_n denote the S-vectors defined respectively by H_n^{1/2}(v) = √(h_{nS+v}) and log H_n(v) = log(h_{nS+v}) (1 ≤ v ≤ S). In (2.2), the only nonzero entries of the S × S matrix B lie in its last column, B(v, S) = ∏_{k=1}^{v} φ_k (1 ≤ v ≤ S), while the v-th entry of ξ_n collects the within-cycle accumulation of intercepts and shocks,

  ξ_n(v) = ∑_{j=0}^{v−1} (∏_{i=0}^{j−1} φ_{v−i}) (γ_{v−j} + σ_{v−j} e_{nS+v−j}).

However, this approach has the main drawback that available methods for analyzing multivariate SV models do not take into account the particular structure of the coefficients in (2.2), so it may be difficult to draw conclusions on model (2.1) from model (2.2). Thus, studying the probabilistic and statistical properties of model (2.1) directly may be simpler and better than studying them through model (2.2). This implies that periodic stochastic volatility modelling cannot be trivially deduced from existing multivariate SV analysis. In the sequel, we study the structure of model (2.1) mainly using the direct approach. Throughout this paper, we frequently use solutions of the following ordinary difference equation:

  u_t = a_t + b_t u_{t−1},  t ∈ Z,  (2.3a)

with S-periodic coefficients a_t and b_t. Recall that the solution is given, under the requirement that |∏_{v=1}^{S} b_v| < 1, by

  u_{nS+v} = (1 − ∏_{v=1}^{S} b_v)^{−1} ∑_{j=0}^{S−1} (∏_{i=0}^{j−1} b_{v−i}) a_{v−j},  1 ≤ v ≤ S, n ∈ Z.  (2.3b)
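The special structure of the embedding matrix B in the S-variate form log H_n = B log H_{n−1} + ξ_n is easy to exploit numerically: only its last column is nonzero, so its spectral radius equals |φ_1 ⋯ φ_S|, the monodromy coefficient that appears in the stationarity condition below. A small illustrative sketch (names are mine):

```python
import numpy as np

def embedding_matrix(phi):
    """B in log H_n = B log H_{n-1} + xi_n: row v of the last column
    holds phi_1*...*phi_{v+1} (0-based v); every other entry is zero."""
    S = len(phi)
    B = np.zeros((S, S))
    B[:, -1] = np.cumprod(phi)
    return B

phi = [0.5, -0.8, 0.9]
B = embedding_matrix(phi)
# rank-one matrix: S-1 zero eigenvalues plus phi_1*...*phi_S
spectral_radius = max(abs(np.linalg.eigvals(B)))
```

Since B is rank one, checking periodic stationarity through the embedded model amounts to checking the single product of seasonal coefficients.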
First, we have the following result, which provides a necessary and sufficient condition for strict periodic stationarity (see Aknouche and Bibi, 2009, for the definition of strict periodic stationarity).

Theorem 2.1 (Strict periodic stationarity)
The PAR-SV equation given by (2.1) admits a unique (nonanticipative) strictly periodically stationary and periodically ergodic solution, given for n ∈ Z and 1 ≤ v ≤ S by

  ε_{nS+v} = η_{nS+v} exp( (1/2) ∑_{j=0}^{∞} (∏_{i=0}^{j−1} φ_{v−i}) (γ_{v−j} + σ_{v−j} e_{nS+v−j}) ),  (2.4)

where the series in (2.4) converges almost surely, if and only if

  |∏_{v=1}^{S} φ_v| < 1.  (2.5)

Proof. The result obviously follows from standard linear periodic autoregression (PAR) theory while using (2.3) (see e.g. Aknouche and Bibi, 2009), so details are omitted.

From Theorem 2.1 we see that the monodromy coefficient ∏_{v=1}^{S} φ_v is the analog of the persistence parameter φ^S in the case of time-invariant SV and standard GARCH models. If, however, |∏_{v=1}^{S} φ_v| ≥ 1, then clearly there does not exist a nonanticipative strictly periodically stationary solution of (2.1) like (2.4). Other properties such as periodic geometric ergodicity and strong mixing are obvious. Let us first say that a strictly periodically stationary stochastic process {ε_t, t ∈ Z} is called geometrically periodically ergodic if and only if the corresponding multivariate strictly stationary process {ε_n, n ∈ Z} given by ε_n = (ε_{nS+1}, ..., ε_{nS+S})' is geometrically ergodic in the classical sense (see e.g. Meyn and Tweedie, 2009, for the definition of geometric ergodicity).

Theorem 2.2 (Geometric periodic ergodicity)
Under the condition |∏_{v=1}^{S} φ_v| < 1, the process {ε_t, t ∈ Z} defined by (2.1) is geometrically periodically ergodic. Moreover, if initialized from its invariant measure, then {log h_t, t ∈ Z} and hence {ε_t, t ∈ Z} are periodically β-mixing with exponential decay.

Proof. The result follows from the geometric ergodicity of the vector autoregression {log H_n, n ∈ Z} given by (2.2), which may easily be established using Meyn and Tweedie's (2009) results (see also Davis and Mikosch, 2009).

Given the form of the periodically stationary solution (2.4), it is easy to give its second-order properties. Assume the following condition:

  ∏_{j=0}^{∞} ψ_{v,j} < ∞  for all 1 ≤ v ≤ S,  (2.6)

where

  ψ_{v,j} = E( exp( (∏_{i=0}^{j−1} φ_{v−i}) σ_{v−j} e_{v−j} ) ).

Then we have the following result.
8 Theorem.3 (Second-order periodic stationarity) Under conditions (:) and (:6), the series in (:4) conerges in the mean square sense and the process gien by (:4) is also second-order periodically stationary. Proof Routine computation shows that under (:) and (:6) the series in (:4); X jy i j e ns+ j ; j= i= conerges in mean square. Moreoer, under these conditions, it is clear that f" t ; t Zg gien by (:4) is a periodic white noise with periodic ariance since E (" t ) =, E (" t " t h ) = (h > ) and, while using (:3), P j S Q j= i j V ar (" ns+ ) = E exp X jy i= B SQ i j e ns+ j CC AA j= i= = exp P j S j= i= = Q i j Q C SQ ;j ; S : (:7) A j= = In the case of Gaussian log-olatility innoations fe t ; t Zg, (i.e. e t N(; )) it is also possible to obtain more explicit results while reducing assumptions of Theorem.3. Using the fact that if X N(; ) then E(exp(X)) = exp( ) for all non null real constant, we obtain! ;j = exp jy j i ; (:8) and condition (:6) of niteness of Q j= SQ ;j reduces to the periodic stationarity condition (:): < : = Moreoer, using (:8) and (:3) the ariance of the process gien by (:7) may be expressed more explicitly as follows V ar (" ns+ ) = exp = exp = exp P j S Q j= i= P j S j= i= SP j= i= S = i= i j Q C exp jy Q A j j= i= Q i j C SQ A X jy i A j j= i= jq = i j SQ = + SP jq j= i= i! i j C SQ A : (:9) = 7
For example, the variance Var(ε_{nS+v}) of the process is given, for S = 2, by

  Var(ε_{2n+1}) = exp( (γ_1 + φ_1 γ_2)/(1 − φ_1 φ_2) + (σ²_1 + φ²_1 σ²_2)/(2(1 − φ²_1 φ²_2)) ),
  Var(ε_{2n+2}) = exp( (γ_2 + φ_2 γ_1)/(1 − φ_1 φ_2) + (σ²_2 + φ²_2 σ²_1)/(2(1 − φ²_1 φ²_2)) ),

and, for S = 3, with indices taken modulo 3, by

  Var(ε_{3n+v}) = exp( (γ_v + φ_v γ_{v−1} + φ_v φ_{v−1} γ_{v−2})/(1 − φ_1 φ_2 φ_3) + (σ²_v + φ²_v σ²_{v−1} + φ²_v φ²_{v−1} σ²_{v−2})/(2(1 − φ²_1 φ²_2 φ²_3)) ),  v = 1, 2, 3.

Next, the autocovariance of the squared process {ε²_t, t ∈ Z} is provided. This one is useful in identifying the model and in deriving certain estimation methods such as simple and generalized methods of moments. Let γ_{ε²,v}(h) = E(ε²_{nS+v} ε²_{nS+v−h}) − E(ε²_{nS+v}) E(ε²_{nS+v−h}).

Theorem 2.4 (Autocovariance structure of {ε²_t, t ∈ Z})
Write μ_v = (1 − ∏_{k=1}^{S} φ_k)^{−1} ∑_{j=0}^{S−1} (∏_{i=0}^{j−1} φ_{v−i}) γ_{v−j} for the mean of log h_{nS+v}, and a_{v,j} = (∏_{i=0}^{j−1} φ_{v−i}) σ_{v−j} for the coefficients of the moving-average representation of log h_{nS+v}.

i) Under (2.5), (2.6) and the conditions ∏_{j=0}^{∞} E exp(2 a_{v,j} e_{v−j}) < ∞ and E(η⁴) < ∞, we have

  γ_{ε²,v}(0) = exp(2μ_v) ( E(η⁴) ∏_{j=0}^{∞} E exp(2 a_{v,j} e_{v−j}) − ( ∏_{j=0}^{∞} ψ_{v,j} )² ).  (2.10a)

ii) For h > 0,

  γ_{ε²,v}(h) = exp(μ_v + μ_{v−h}) ( ∏_{j=0}^{h−1} ψ_{v,j} ∏_{j=0}^{∞} E exp( (1 + ∏_{i=0}^{h−1} φ_{v−i}) a_{v−h,j} e_{v−h−j} ) − ∏_{j=0}^{∞} ψ_{v,j} ∏_{j=0}^{∞} ψ_{v−h,j} ).  (2.10b)
Proof. Using (2.4), direct calculation gives

  E(ε²_{nS+v} ε²_{nS+v−h}) = exp(μ_v + μ_{v−h}) E exp( ∑_{j=0}^{∞} a_{v,j} e_{nS+v−j} + ∑_{j=0}^{∞} a_{v−h,j} e_{nS+v−h−j} ) E(η²_{nS+v} η²_{nS+v−h}),  (2.11)

under finiteness of the latter expectations. When in particular h = 0, combining (2.7) and (2.11) we get (2.10a) under finiteness of E(η⁴). For h > 0, because of the independence structure of {η_t, t ∈ Z}, one obtains (2.10b) after collecting the shocks e_{nS+v−h−j} common to both sums, whose combined coefficient is (1 + ∏_{i=0}^{h−1} φ_{v−i}) a_{v−h,j}.

Expressions for the S kurtoses Kurt(v) (1 ≤ v ≤ S) of the PAR-SV_S model may be given from (2.9) and (2.11) by

  Kurt(v) = E(η⁴) ∏_{j=0}^{∞} E exp(2 a_{v,j} e_{v−j}) / ( ∏_{j=0}^{∞} ψ_{v,j} )²,  1 ≤ v ≤ S.  (2.12)

By the Cauchy-Schwarz inequality, this clearly shows that the PAR-SV is characterized by excess kurtosis for all channels v ∈ {1, ..., S}. In particular, under the normality assumption on the innovations, second-order periodic stationarity reduces to E(η⁴) < ∞ and ∏_{v=1}^{S} φ²_v < 1, so from (2.8) expression (2.12) reduces to

  Kurt(v) = E(η⁴) exp( (1 − ∏_{v=1}^{S} φ²_v)^{−1} ∑_{j=0}^{S−1} (∏_{i=0}^{j−1} φ²_{v−i}) σ²_{v−j} ),  1 ≤ v ≤ S.

The autocovariance function also has a more explicit form in the case of Gaussian {e_t, t ∈ Z}.

Corollary 2.1 (Autocovariance structure of {ε²_t, t ∈ Z} under normality of {e_t, t ∈ Z})
Under the same assumptions as Theorem 2.4, if {e_t, t ∈ Z} is Gaussian then, writing s²_v = (1 − ∏_{k=1}^{S} φ²_k)^{−1} ∑_{j=0}^{S−1} (∏_{i=0}^{j−1} φ²_{v−i}) σ²_{v−j} for the variance of log h_{nS+v},

  γ_{ε²,v}(0) = exp(2μ_v + s²_v) ( E(η⁴) exp(s²_v) − 1 ),  (2.13a)
and, for h > 0,

  γ_{ε²,v}(h) = exp( μ_v + μ_{v−h} + (s²_v + s²_{v−h})/2 ) ( exp( (∏_{i=0}^{h−1} φ_{v−i}) s²_{v−h} ) − 1 ).  (2.13b)

Proof. For Gaussian innovations, we use again the fact that if X ~ N(0, 1) then E(exp(aX)) = exp(a²/2). Therefore, (2.13a) follows from (2.10a) and (2.9). For h > 0, the two exponents in (2.11) are jointly Gaussian with covariance Cov(log h_{nS+v}, log h_{nS+v−h}) = (∏_{i=0}^{h−1} φ_{v−i}) s²_{v−h}, since the log-volatility follows a first-order periodic autoregression. After tedious but straightforward calculation, the autocovariance function at lag h (h > 0) then simplifies for Gaussian innovations to (2.13b).

It is worth noting that, expanding the exponential function in (2.13b) under the periodic stationarity condition (2.5), the autocovariance function γ_{ε²,v}(h) of the squared process {ε²_t, t ∈ Z} has the following equivalent form as h → ∞:

  γ_{ε²,v}(h) ≈ K ∏_{i=0}^{h−1} φ_{v−i} ≈ K ( ∏_{k=1}^{S} φ_k )^{h/S},

and so γ_{ε²,v}(h) converges geometrically to zero as h → ∞, where K is an appropriate real constant. However, this decrease of γ_{ε²,v}(h) is not compatible with the recurrence equation satisfied by periodic ARMA
(PARMA) autocovariances, and we can conclude that the squared process {ε²_t, t ∈ Z} does not admit a PARMA autocovariance representation. Nevertheless, the logarithm of the squared process, {log ε²_t, t ∈ Z}, does have a PARMA autocovariance structure. Considering the notations Y_t = log ε²_t, X_t = log h_t, u_t = log η²_t, μ_u = E(log η²_t) and σ²_u = Var(log η²_t), we have from (2.1)

  Y_t = X_t + u_t.  (2.14)

Theorem 2.5 (PARMA(1,1) representation of {log ε²_t, t ∈ Z})
Under assumption (2.5) and finiteness of σ²_u, the process {Y_t, t ∈ Z} has a PARMA_S(1,1) representation given by

  Y_{nS+v} − μ_{Y,v} = φ_v (Y_{nS+v−1} − μ_{Y,v−1}) + ξ_{nS+v} − θ_v ξ_{nS+v−1},  1 ≤ v ≤ S, t ∈ Z,  (2.15a)

where μ_{Y,v} = E(Y_{nS+v}), the moving-average coefficient θ_v is, when σ²_u ≠ 0, the solution with modulus less than one of a quadratic moment equation given by (2.15b) (with θ_v = 0 when σ²_u = 0), and {ξ_t, t ∈ Z} is a periodic white noise with periodic variance given by (2.15c).

Proof. The second-order structure of {X_t, t ∈ Z} is given from (2.1) while using (2.3):

  μ_{X,v} = E(X_{nS+v}) = γ_v + φ_v μ_{X,v−1} = (1 − ∏_{k=1}^{S} φ_k)^{−1} ∑_{j=0}^{S−1} (∏_{i=0}^{j−1} φ_{v−i}) γ_{v−j},
  γ_{X,v}(0) = Var(X_{nS+v}) = (1 − ∏_{k=1}^{S} φ²_k)^{−1} ∑_{j=0}^{S−1} (∏_{i=0}^{j−1} φ²_{v−i}) σ²_{v−j},
  γ_{X,v}(h) = Cov(X_{nS+v}, X_{nS+v−h}) = φ_v γ_{X,v−1}(h − 1),  h > 0.

Therefore, using (2.14), we have

  μ_{Y,v} = E(Y_{nS+v}) = μ_{X,v} + μ_u,
  γ_{Y,v}(0) = Var(Y_{nS+v}) = γ_{X,v}(0) + σ²_u,
  γ_{Y,v}(h) = γ_{X,v}(h) = φ_v γ_{X,v−1}(h − 1) = ⋯ = φ_v φ_{v−1} ⋯ φ_{v−h+1} γ_{X,v−h}(0),  h > 0.
Clearly, the process {Y_t, t ∈ Z} has a PARMA representation, since γ_{Y,v}(h) = φ_v γ_{Y,v−1}(h − 1) for h > 1. To identify the parameters of this representation we use the expressions of γ_{Y,v}(h) for h = 0, 1. If {Y_t, t ∈ Z} has the PARMA representation (2.15a), then for all 1 ≤ v ≤ S the moment equations (2.15d) relate γ_{Y,v}(0) and γ_{Y,v}(1) to (φ_v, θ_v) and the variance of ξ. Hence, if σ²_u ≠ 0, eliminating the white-noise variance yields, for each 1 ≤ v ≤ S, a quadratic equation in θ_v. This equation admits, for all 1 ≤ v ≤ S, two solutions, one of which has modulus less than one (|θ_v| < 1) and is given by (2.15b). Such a choice clearly ensures that ∏_{v=1}^{S} |θ_v| < 1, but it is not unique. Moreover, when θ_v ≠ 0, the variance of {ξ_t, t ∈ Z} follows from (2.15d), showing (2.15c). If, however, σ²_u = 0, the relationship γ_{Y,v}(h) = φ_v γ_{Y,v−1}(h − 1) also holds for h = 1, and so the process {Y_t, t ∈ Z} is a pure first-order periodic autoregression (PAR(1)) with θ_v = 0 for all v. When ∏_{v=1}^{S} φ_v = 0, the process {Y_t, t ∈ Z} is a strong periodic white noise (an independent and periodically distributed, i.p.d., sequence), and so θ_v = 0 for all v (see also Francq and Zakoïan, 2006, for the particular non-periodic case S = 1). It is worth noting that representation (2.15a) is not unique. Indeed, in contrast with time-invariant ARMA models, for which an ARMA process may be uniquely identified from its autocovariance function (see Brockwell and Davis, 1991), it is not always possible to build a unique PARMA model from an autocovariance function having a PARMA structure. However, we may enumerate all possible representations by solving (2.15d) and choosing the one that best fits the observed series. The resulting representation will
be said, with a slight abuse of terminology, to be the PARMA representation. Such a representation is useful for obtaining predictions of the process {log ε²_t, t ∈ Z}. It may also be used to obtain approximate predictions for the squared process {ε²_t, t ∈ Z}, as this latter does not admit a PARMA representation (see Section 4). If we denote by ε̂²_{t+h|t} = E(ε²_{t+h} | ε²_t, ε²_{t−1}, ...) the mean-square prediction of ε²_{t+h} based on ε²_t, ε²_{t−1}, ..., then ε̂²_{t+h|t} may be approximated by

  C exp( Ŷ_{t+h|t} ),  where Ŷ_{t+h|t} = E( log ε²_{t+h} | log ε²_t, log ε²_{t−1}, ... ),

and C is a normalization factor. The constant C is introduced to reduce the bias due to incorrectly using the relationship exp(Ŷ_{t+h|t}) = E(exp(Y_{t+h}) | Y_t, Y_{t−1}, ...), as we know from Jensen's inequality that the latter equality is in fact not true. Typically, one can take C from the sample variance of log ε²_t, t = 1, ..., T.

3. Parameter estimation of the PAR-SV model

In this section we consider two estimation methods for the PAR-SV model. The first one is a QML method based on the prediction-error decomposition of a corresponding linear periodic state-space model. This method, which uses Kalman filtering to obtain linear predictors and prediction error variances, is used as a benchmark for the second proposed method, which is based on the Bayesian approach. In the latter, starting from given conjugate priors, the conditional posteriors are obtained from the Gibbs sampler, in which the conditional posterior of the augmented volatilities is drawn via the Griddy-Gibbs technique. In the rest of this section we consider a series ε = (ε_1, ..., ε_T)' generated from model (2.1), with sample size T = NS supposed, without loss of generality, to be a multiple of the period S. The vector of model parameters is denoted by ϑ = (ω', σ²')', where ω = (ω'_1, ω'_2, ..., ω'_S)', ω_v = (γ_v, φ_v)' and σ² = (σ²_1, σ²_2, ..., σ²_S)'.

3.1.
QMLE via prediction error decomposition and Kalman filtering

Taking in (2.1) the logarithm of the square of ε_t, we obtain the following linear periodic state-space model:

  Y_{nS+v} = μ_u + X_{nS+v} + ũ_{nS+v},
  X_{nS+v} = γ_v + φ_v X_{nS+v−1} + σ_v e_{nS+v},  n ∈ Z, 1 ≤ v ≤ S,  (3.1)

where, as above, Y_{nS+v} = log ε²_{nS+v}, X_{nS+v} = log(h_{nS+v}), u_{nS+v} = log η²_{nS+v}, μ_u = E(u_{nS+v}), ũ_{nS+v} = u_{nS+v} − μ_u and σ²_u = Var(u_{nS+v}). When {η_t, t ∈ Z} is standard Gaussian, the mean and variance
of log η²_{nS+v} can accurately be approximated by −1.2704 and π²/2, respectively (these are the mean and variance of a log χ²(1) distribution, whose exact expressions involve the digamma function). Note, however, that the linear state-space model (3.1) is not Gaussian unless i) e_t is Gaussian, ii) e_t and η_t are independent, and iii) η_t has the same distribution as exp(X/2) for some X normally distributed with mean zero. In what follows we assume, for simplicity of exposition, that η_t is standard Gaussian, but the QML method we present below is still valid when η_t is not Gaussian, and even when μ_u and σ²_u are unknown. Let Y = (Y_1, ..., Y_T)' be the series of log-squares corresponding to ε = (ε_1, ..., ε_T)' (i.e. Y_t = log ε²_t, 1 ≤ t ≤ T), generated from (3.1) with true parameter ϑ_0. The quasi-likelihood function L_Q(ϑ; Y), evaluated at a generic parameter ϑ, may be written via the prediction error decomposition as follows:

  log L_Q(ϑ; Y) = −(T/2) log(2π) − (1/2) ∑_{t=1}^{T} ( log F_t + (Y_t − Ŷ_{t|t−1})² / F_t ),  (3.2)

where Ŷ_{t|t−1} = X̂_{t|t−1} + μ_u, X̂_{t|t−1} is the best linear predictor of the state X_t based on the observations Y_1, ..., Y_{t−1}, with mean square errors P_{t|t−1} = E(X_t − X̂_{t|t−1})² and F_t = E(Y_t − Ŷ_{t|t−1})². A QML estimate ϑ̂_QML of the true ϑ_0 is the maximizer of log L_Q(ϑ; Y) over some compact parameter space, where L_Q(ϑ; Y) is evaluated as if the linear state-space model (3.1) were Gaussian. Thus the best state predictor X̂_{t|t−1} and the state prediction error variance P_{t|t−1} may be recursively computed using the Kalman filter, which in the context of model (3.1) is described by the following recursions:

  X̂_{t|t−1} = γ_t + φ_t X̂_{t−1|t−2} + φ_t P_{t−1|t−2} F_{t−1}^{−1} (Y_{t−1} − X̂_{t−1|t−2} − μ_u),
  P_{t|t−1} = φ²_t P_{t−1|t−2} (1 − P_{t−1|t−2} F_{t−1}^{−1}) + σ²_t,
  F_t = P_{t|t−1} + σ²_u,  1 ≤ t ≤ T,  (3.3a)

while remembering that γ_t, φ_t and σ_t are S-periodic over t. The start-up values of (3.3a) are calculated on the basis of X̂_{1|0} = E(X_1) and P_{1|0} = Var(X_1). Using the results of Section 2, we then get

  X̂_{1|0} = (1 − ∏_{v=1}^{S} φ_v)^{−1} ∑_{j=0}^{S−1} (∏_{i=0}^{j−1} φ_{1−i}) γ_{1−j}  and  P_{1|0} = (1 − ∏_{v=1}^{S} φ²_v)^{−1} ∑_{j=0}^{S−1} (∏_{i=0}^{j−1} φ²_{1−i}) σ²_{1−j}.  (3.3b)

Recursions (3.3) may also be used in a reverse form for smoothing purposes, i.e.
to obtain the best linear predictor X̃_t of X_t based on Y_1, ..., Y_T, from which we get estimates of the unobserved volatilities h_t (1 ≤ t ≤ T). Consistency and asymptotic normality of the QML estimate may be established using the standard theory of linear (non-Gaussian) signal-plus-noise models with time-invariant parameters (Dunsmuir, 1979). For this, we invoke the corresponding multivariate time-invariant model (2.2), which we transform to a linear form as follows:

  Y_n = log H_n + u_n,
  log H_n = B log H_{n−1} + ξ_n,  n ∈ Z,  (3.4)
where Y_n and u_n are S-vectors such that Y_n(v) = Y_{nS+v} and u_n(v) = u_{nS+v} (1 ≤ v ≤ S), and where log H_n, B and ξ_n are given by (2.2). Using (3.4), we can call on the theory in Dunsmuir (1979) to obtain the asymptotic variance of the QMLE under finiteness of the moment E(Y⁴_{nS+v}) (see also Ruiz, 1994, and Harvey et al., 1994). Of course, the QMLE would be asymptotically efficient if we assumed that e_t is Gaussian, that e_t and η_t are independent, and that η_t has the same distribution as exp(X/2), where X is Gaussian. In that case, log η²_t would be Gaussian and the linear state space (3.1) would also be Gaussian; therefore, the QMLE would reduce to the exact maximum likelihood estimate (MLE). However, this distributional assumption on η_t seems to have little interest in practice.

3.2. Bayesian inference via Gibbs sampling

Adopting the Bayesian approach, the parameter vector ϑ of the model and the unobserved volatilities h = (h_1, h_2, ..., h_T)', which are also considered as augmented parameters, are viewed as random with a certain prior distribution f(ϑ, h). Given a series ε = (ε_1, ..., ε_T)' generated from the PAR-SV_S model (2.1) with Gaussian innovations, the goal is to make inference about the joint posterior distribution f(ϑ, h|ε) of (ϑ, h) given ε. Because of the periodic structure of the PAR-SV model, it is natural to assume that the parameters ω, σ²_1, σ²_2, ..., σ²_S and h are independent of each other. Thus, the joint posterior distribution f(ϑ, h|ε) = f(ω, σ², h|ε) can be estimated using Gibbs sampling, provided we can draw samples from any of the S + 2 conditional posterior distributions f(ω|ε, σ², h), f(σ²_v|ε, ω, σ²_{−v}, h) (1 ≤ v ≤ S) and f(h|ε, ω, σ²), where x_{−t} denotes the vector obtained from x after removing its t-th component x_t. Since the posterior distribution of the volatility, f(h|ε, ω, σ²), has a rather complicated expression, we sample it element by element, as done by Jacquier et al. (1994).
Thus, the Gibbs sampler for sampling from the joint posterior distribution f(ω, σ², h|ε) reduces to drawing samples from any of the T + S + 1 conditional posterior distributions f(ω|ε, σ², h), f(σ²_v|ε, ω, σ²_{−v}, h) (1 ≤ v ≤ S) and f(h_t|ε, ω, σ², h_{−t}) (1 ≤ t ≤ T). Under normality of the volatility proxies, and using standard linear regression theory with an appropriate adaptation to the PAR form of the log-volatility equation (2.1), the conditional posteriors f(ω|ε, σ², h) and f(σ²_v|ε, ω, σ²_{−v}, h) (1 ≤ v ≤ S) may be determined directly from given conjugate priors f(ω) and f(σ²_v) (1 ≤ v ≤ S). However, as in the non-periodic SV case (Jacquier et al., 1994), direct draws from the distribution f(h_t|ε, ω, σ², h_{−t}) are not possible because it has an unusual form. Nevertheless, unlike Jacquier et al. (1994), who used a Metropolis-Hastings chain after determining the form of f(h_t|ε, ω, σ², h_{−t}) up to a scaling factor, we use the Griddy-Gibbs procedure as in Tsay, because in our periodic context its implementation seems much simpler.
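For comparison with the Bayesian sampler, the QML objective of Section 3.1 is cheap to evaluate: one pass of the periodic Kalman recursions gives the prediction errors and their variances. Below is a minimal sketch under my own function and parameter names (gamma, phi, sigma per season), assuming standard Gaussian η_t so that E(log η²_t) ≈ −1.2704 and Var(log η²_t) = π²/2; it initializes the filter at the periodically stationary mean and variance of the state.

```python
import numpy as np

def qml_loglik(eps, gamma, phi, sigma, S):
    """Gaussian quasi-log-likelihood of Y_t = log eps_t^2 for the periodic
    state space Y_t = mu_u + X_t + noise, X_t = gamma_v + phi_v X_{t-1} + sigma_v e_t."""
    mu_u, var_u = -1.2704, np.pi ** 2 / 2.0
    Y = np.log(eps ** 2)
    prod_phi, prod_phi2 = np.prod(phi), np.prod(np.square(phi))
    # periodically stationary mean/variance of X_t per season
    mean_x, var_x = np.empty(S), np.empty(S)
    for v in range(S):
        c = [np.prod([phi[(v - i) % S] for i in range(j)]) for j in range(S)]
        mean_x[v] = sum(cj * gamma[(v - j) % S] for j, cj in enumerate(c)) / (1 - prod_phi)
        var_x[v] = sum(cj ** 2 * sigma[(v - j) % S] ** 2 for j, cj in enumerate(c)) / (1 - prod_phi2)
    x, P = mean_x[S - 1], var_x[S - 1]  # so the first time update lands on season 0
    ll = 0.0
    for t, y in enumerate(Y):
        v = t % S
        x = gamma[v] + phi[v] * x            # state prediction
        P = phi[v] ** 2 * P + sigma[v] ** 2
        F = P + var_u                        # prediction error variance
        r = y - (x + mu_u)                   # prediction error
        ll -= 0.5 * (np.log(2 * np.pi) + np.log(F) + r * r / F)
        K = P / F                            # Kalman gain; measurement update
        x += K * r
        P *= 1.0 - K
    return ll

rng = np.random.default_rng(1)
eps = np.exp(0.2 * rng.standard_normal(300)) * rng.standard_normal(300)
ll = qml_loglik(eps, gamma=[0.05, -0.1], phi=[0.6, 0.7], sigma=[0.3, 0.2], S=2)
```

Maximizing this function over the parameter space (e.g. with a generic numerical optimizer) gives the QML estimate.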
3.2.1. Prior and posterior sampling analysis

a) Sampling the log-volatility periodic autoregressive parameter ω

Before giving the conditional posterior distribution f(ω|ε, σ², h) through some conjugate prior distributions and linear regression theory, we first write the PAR log-volatility equation as a standard linear regression. Setting H_{nS+v} = (0, ..., 0, 1, log(h_{nS+v−1}), 0, ..., 0)', where the nonzero pair occupies the block corresponding to season v, model (2.1b) for t = 1, ..., NS may be rewritten as the following periodically homoskedastic linear regression:

  log(h_{nS+v}) = H'_{nS+v} ω + σ_v e_{nS+v},  1 ≤ v ≤ S, 0 ≤ n ≤ N − 1,  (3.5a)

or also, after scaling by σ_v, as a standard regression

  log(h_{nS+v})/σ_v = (H_{nS+v}/σ_v)' ω + e_{nS+v},  1 ≤ v ≤ S, 0 ≤ n ≤ N − 1,  (3.5b)

with i.i.d. Gaussian errors. Assuming the variances σ²_v (1 ≤ v ≤ S) and the initial observation h_0 known, the least squares estimate ω̂_WLS of ω based on (3.5b) (which is just the weighted least squares estimate of ω based on (3.5a)) has the following form:

  ω̂_WLS = ( ∑_{n=0}^{N−1} ∑_{v=1}^{S} σ_v^{−2} H_{nS+v} H'_{nS+v} )^{−1} ∑_{n=0}^{N−1} ∑_{v=1}^{S} σ_v^{−2} H_{nS+v} log(h_{nS+v}),

and is normally distributed with mean ω and covariance matrix

  Σ_WLS = ( ∑_{n=0}^{N−1} ∑_{v=1}^{S} σ_v^{−2} H_{nS+v} H'_{nS+v} )^{−1}.  (3.6)

Under assumption (3.5b), the information of the data about ω is contained in the weighted least squares estimate ω̂_WLS. To get a closed-form expression for the conditional posterior f(ω|ε, σ², h), we use a conjugate prior for ω. This prior distribution is Gaussian, i.e. ω ~ N(ω_0, Σ_0), where the hyperparameters ω_0 and Σ_0 are known and are fixed so as to have a reasonably diffuse yet informative prior. Thus, using standard regression theory (Box and Tiao, 1973; Tsay), the conditional posterior distribution of ω given ε, σ², h is

  ω | ε, σ², h ~ N(ω_*, Σ_*),  (3.7a)

where

  Σ_*^{−1} = ∑_{n=0}^{N−1} ∑_{v=1}^{S} σ_v^{−2} H_{nS+v} H'_{nS+v} + Σ_0^{−1},  (3.7b)
  ω_* = Σ_* ( ∑_{n=0}^{N−1} ∑_{v=1}^{S} σ_v^{−2} H_{nS+v} log(h_{nS+v}) + Σ_0^{−1} ω_0 ).  (3.7c)

Some remarks are in order:
i) The matrix $\Sigma$ given by (3.6) is block diagonal. So if we assume that $\Sigma_0$ is also block diagonal, then we obtain the same result as if we assume that the seasonal parameters $\omega_1,\omega_2,\dots,\omega_S$ are independent of each other, each with a conjugate Gaussian prior whose hyperparameters, say $\omega_{0s}$ and $\Sigma_{0s}$ ($1\le s\le S$), are the appropriate components of $\omega_0$ and $\Sigma_0$.

ii) A faster and more stable computation of $\omega_*$ and $\Sigma_*$ in (3.7), which does not involve any matrix inversion (in contrast with (3.7b)), may be obtained by setting $\omega_*=\omega_{NS}^*$ and $\Sigma_*=\Sigma_{NS}^*$ and then computing the latter quantities with the well-known recursive least squares (RLS) algorithm (see Ljung and Söderström, 1983), which is given by

$\omega_{nS+s}^* = \omega_{nS+s-1}^* + \dfrac{\Sigma_{nS+s-1}^*H_{nS+s}\big(\log(h_{nS+s}) - H_{nS+s}'\omega_{nS+s-1}^*\big)}{\sigma_{vs}^2 + H_{nS+s}'\Sigma_{nS+s-1}^*H_{nS+s}}, \qquad (3.8a)$

$\Sigma_{nS+s}^* = \Sigma_{nS+s-1}^* - \dfrac{\Sigma_{nS+s-1}^*H_{nS+s}H_{nS+s}'\Sigma_{nS+s-1}^*}{\sigma_{vs}^2 + H_{nS+s}'\Sigma_{nS+s-1}^*H_{nS+s}}, \quad 1\le s\le S,\ 0\le n\le N-1, \qquad (3.8b)$

with starting values $\omega_0^*=\omega_0$ and $\Sigma_0^*=\Sigma_0$. This may improve the numerical stability and the computation time of the whole estimation method, especially for a large period $S$.

b) Sampling the log-volatility periodic variance parameters $\sigma_{vs}^2$, $1\le s\le S$

We also use conjugate priors for $\sigma_{vs}^2$ ($1\le s\le S$) to get a closed-form expression for the conditional posterior of $\sigma_{vs}^2$ given the data and the other parameters $\sigma_{v\{-s\}}^2$. Such priors are provided by the inverted chi-squared distribution:

$\dfrac{\nu_s\lambda_s}{\sigma_{vs}^2}\sim\chi_{\nu_s}^2, \quad 1\le s\le S, \qquad (3.9a)$

where $\nu_s$ and $\lambda_s$ ($1\le s\le S$) are known hyperparameters. Given the parameters $\omega$ and $h_0$, if we define

$e_{nS+s} = \log(h_{nS+s}) - \alpha_s - \beta_s\log(h_{nS+s-1}), \quad 1\le s\le S,\ 0\le n\le N-1, \qquad (3.9b)$

$\alpha_s$ and $\beta_s$ being the season-$s$ intercept and autoregressive coefficient, then $e_s,e_{s+S},\dots,e_{s+(N-1)S}$ are i.i.d. $N(0,\sigma_{vs}^2)$, $1\le s\le S$. From standard Bayesian linear regression theory (see e.g. Tsay, 2010) the conditional posterior distribution of $\sigma_{vs}^2$ ($1\le s\le S$), given the data and the remaining parameters, is an inverted chi-squared distribution with $\nu_s+N$ degrees of freedom, that is,

$\dfrac{\nu_s\lambda_s + \sum_{n=0}^{N-1}e_{nS+s}^2}{\sigma_{vs}^2}\,\Big|\,\varepsilon,\omega,\sigma_{v\{-s\}}^2,h \sim \chi_{\nu_s+N}^2, \quad 1\le s\le S. \qquad (3.9c)$

c) Sampling the augmented volatility parameters $h=(h_1,h_2,\dots,h_T)'$

Now, it remains to sample from the conditional posterior distribution $f(h_t\mid\varepsilon,\theta,h_{\{-t\}})$, $t=1,2,\dots,T$, where $\theta$ collects $(\omega,\sigma_v^2)$.
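Steps a) and b) above can be sketched numerically as follows. This is a minimal sketch, not the paper's implementation: the period S = 2, the parameter values alpha, beta, sig2, the prior hyperparameters nu0 and lam0, and the simulated log-volatility path are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a PAR(1) log-volatility path with period S = 2 (toy values) ---
S, N = 2, 400                      # period and number of full cycles
T = N * S
alpha = np.array([0.2, -0.1])      # hypothetical periodic intercepts
beta  = np.array([0.7, 0.9])       # hypothetical periodic AR coefficients
sig2  = np.array([0.05, 0.10])     # hypothetical periodic innovation variances
logh = np.zeros(T + 1)
for t in range(1, T + 1):
    s = (t - 1) % S
    logh[t] = alpha[s] + beta[s] * logh[t - 1] + np.sqrt(sig2[s]) * rng.standard_normal()

# --- a) Gaussian conjugate posterior for omega = (alpha_1, beta_1, ..., alpha_S, beta_S)' ---
omega0 = np.zeros(2 * S)           # diffuse but proper prior N(omega0, Sigma0)
Sigma0 = 10.0 * np.eye(2 * S)
XtX = np.zeros((2 * S, 2 * S))     # weighted sum of H_t H_t' / sig2_s
Xty = np.zeros(2 * S)              # weighted sum of H_t log(h_t) / sig2_s
for t in range(1, T + 1):
    s = (t - 1) % S
    H = np.zeros(2 * S)
    H[2 * s], H[2 * s + 1] = 1.0, logh[t - 1]
    XtX += np.outer(H, H) / sig2[s]
    Xty += H * logh[t] / sig2[s]
Sigma_star = np.linalg.inv(XtX + np.linalg.inv(Sigma0))
omega_star = Sigma_star @ (Xty + np.linalg.solve(Sigma0, omega0))
omega_draw = rng.multivariate_normal(omega_star, Sigma_star)   # one Gibbs draw of omega

# --- b) inverted chi-squared posterior draw for each sig2_s ---
nu0, lam0 = 5.0, 0.05              # hypothetical prior hyperparameters
sig2_draw = np.zeros(S)
for s in range(S):
    idx = np.arange(s + 1, T + 1, S)                 # times belonging to season s+1
    # residuals computed here at the posterior mean of omega, for simplicity
    e = logh[idx] - omega_star[2 * s] - omega_star[2 * s + 1] * logh[idx - 1]
    sig2_draw[s] = (nu0 * lam0 + np.sum(e ** 2)) / rng.chisquare(nu0 + len(idx))
```

In a full sampler these two draws would use the current volatility draw in place of the simulated path, and the residuals would be computed at the current draw of omega rather than at its posterior mean.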
Let us first give the expression of this distribution (up to a multiplicative constant) and then show how to (indirectly) draw samples from it using the Griddy-Gibbs technique. Because of the Markovian (but non-homogeneous) structure of
the volatility process $\{h_t,\ t\in\mathbb{Z}\}$ and the conditional independence of $\varepsilon_t$ and $h_{t-k}$ ($k\neq 0$) given $h_t$, it follows that for any $0<t<T$:

$f(h_t\mid\varepsilon,\theta,h_{\{-t\}}) = \dfrac{f(h_t\mid h_{t-1},\theta)\,f(h_{t+1}\mid h_t,\theta)\,f(\varepsilon_t\mid\theta,h_t)}{f(h_{t+1}\mid h_{t-1},\theta)\,f(\varepsilon_t\mid\theta,h_{t-1},h_{t+1})} \propto f(h_t\mid h_{t-1},\theta)\,f(h_{t+1}\mid h_t,\theta)\,f(\varepsilon_t\mid\theta,h_t). \qquad (3.10)$

Using the fact that $\varepsilon_t\mid\theta,h_t \equiv \varepsilon_t\mid h_t \sim N(0,h_t)$, that $\log(h_t)\mid\log(h_{t-1}),\theta \sim N(\alpha_t+\beta_t\log(h_{t-1}),\sigma_{vt}^2)$ (with periodic parameters $\alpha_t=\alpha_s$, $\beta_t=\beta_s$ and $\sigma_{vt}^2=\sigma_{vs}^2$ whenever $t=nS+s$), and that $d\log(h_t)=h_t^{-1}dh_t$, formula (3.10) becomes

$f(h_t\mid\varepsilon,\theta,h_{\{-t\}}) \propto \dfrac{1}{\sqrt{h_t^3}}\exp\Big(-\dfrac{\varepsilon_t^2}{2h_t}-\dfrac{(\log(h_t)-\mu_t)^2}{2\sigma_t^2}\Big), \quad 0<t<T, \qquad (3.11a)$

where

$\mu_t = \dfrac{\sigma_{v,t+1}^2\big(\alpha_t+\beta_t\log(h_{t-1})\big) + \beta_{t+1}\sigma_{vt}^2\big(\log(h_{t+1})-\alpha_{t+1}\big)}{\sigma_{v,t+1}^2+\beta_{t+1}^2\sigma_{vt}^2}, \qquad (3.11b)$

$\sigma_t^2 = \dfrac{\sigma_{vt}^2\,\sigma_{v,t+1}^2}{\sigma_{v,t+1}^2+\beta_{t+1}^2\sigma_{vt}^2}. \qquad (3.11c)$

Note that in (3.11a) we have used the well-known formula (see Box and Tiao, 1973, p. 48)

$A(x-a)^2 + B(x-b)^2 = (A+B)(x-c)^2 + \dfrac{AB}{A+B}(a-b)^2, \quad \text{where } c=\dfrac{Aa+Bb}{A+B}, \text{ provided that } A+B\neq 0.$

For the two end-points $h_0$ and $h_T$ we may simply use a naive approach which consists of taking $h_0$ fixed, so that the sampling starts at $t=1$, and using the fact that $\log(h_T)\mid\theta,\log(h_{T-1}) \sim N(\alpha_T+\beta_T\log(h_{T-1}),\sigma_{vT}^2)$. Alternatively, we may use a forecast of $h_{T+1}$ and a backward prediction of $h_0$ and employ formula (3.10) again for $0<t<T+1$. In that case, we forecast $h_{T+1}$ on the basis of the log-volatility equation, using the 1-step-ahead forecast $\widehat{\log(h_T)}(1)$ at origin $T$, which is given by $\widehat{\log(h_T)}(1) = \alpha_{T+1}+\beta_{T+1}\log(h_T)$. The backward forecast of $h_0$ is obtained using a 1-step-ahead backward forecast on the basis of the backward periodic autoregression (Sakai and Ohno, 1997) associated with the PAR log-volatility. Once the conditional posterior $f(h_t\mid\varepsilon,\theta,h_{\{-t\}})$ is determined up to a scale factor, we may use an indirect sampling algorithm to draw the volatility $h_t$. Jacquier et al. (1994) used a rejection Metropolis-Hastings algorithm.
Alternatively, following Tsay (2010) we call on the Griddy-Gibbs technique (Ritter and Tanner, 1992), which consists in:

i) Choosing a grid of $m$ points from a given interval $[h_{t1},h_{tm}]$ of $h_t$: $h_{t1}\le h_{t2}\le\dots\le h_{tm}$; then evaluating the conditional posterior $f(h_t\mid\varepsilon,\theta,h_{\{-t\}})$ via (3.11) (ignoring the normalization constant) at each of these points, giving $f_{ti}=f(h_{ti}\mid\varepsilon,\theta,h_{\{-t\}})$, $i=1,\dots,m$.

ii) Building from the values $f_{t1},f_{t2},\dots,f_{tm}$ the discrete distribution $p(\cdot)$ defined at $h_{ti}$ ($1\le i\le m$) by $p(h_{ti})=f_{ti}\big/\sum_{j=1}^{m}f_{tj}$. This may be seen as an approximation to the inverse cumulative distribution of $f(h_t\mid\varepsilon,\theta,h_{\{-t\}})$.
iii) Generating a number from the uniform distribution on $(0,1)$ and transforming it using the discrete distribution $p(\cdot)$ obtained in ii) to get a random draw for $h_t$.

It is worth noting that the choice of the grid $[h_{t1},h_{tm}]$ is crucial for the efficiency of the Griddy-Gibbs algorithm. We follow here a device similar to Tsay (2010), which consists of taking the range of $h_t$ at the $l$-th Gibbs iteration to be $[h_t^m,h_t^M]$, where

$h_t^m = 0.6\min\big(h_t^{(0)},h_t^{(l-1)}\big), \quad h_t^M = 1.4\max\big(h_t^{(0)},h_t^{(l-1)}\big), \qquad (3.12)$

$h_t^{(l-1)}$ and $h_t^{(0)}$ being, respectively, the estimate of $h_t$ at the $(l-1)$-th iteration and its initial value.

3.2.2. Bayes Griddy-Gibbs sampler for PAR-SV

The following algorithm summarizes the Gibbs sampler for drawing from the posterior distribution $f(\theta,h\mid\varepsilon)$ given $\varepsilon$. For $l=0,1,\dots,M$, consider the notation $h^{(l)}=\big(h_1^{(l)},\dots,h_T^{(l)}\big)$, $\omega^{(l)}=\big(\alpha_1^{(l)},\beta_1^{(l)},\dots,\alpha_S^{(l)},\beta_S^{(l)}\big)$ and $\sigma_v^{2(l)}=\big(\sigma_{v1}^{2(l)},\dots,\sigma_{vS}^{2(l)}\big)$.

Algorithm 3.1

Step 0. Specify starting values $h^{(0)}$, $\omega^{(0)}$ and $\sigma_v^{2(0)}$.

Step 1. Repeat for $l=0,1,\dots,M-1$:
- Draw $\omega^{(l+1)}$ from $f(\omega\mid\varepsilon,\sigma_v^{2(l)},h^{(l)})$ using (3.7a) and (3.8).
- Draw $\sigma_v^{2(l+1)}$ from $f(\sigma_v^2\mid\varepsilon,\omega^{(l+1)},h^{(l)})$ using (3.9b) and (3.9c).
- Repeat for $t=1,2,\dots,T=NS$ (Griddy-Gibbs):
  - Select a grid of $m$ points $h_{ti}^{(l+1)}$: $h_{t1}^{(l+1)}\le h_{t2}^{(l+1)}\le\dots\le h_{tm}^{(l+1)}$.
  - For $1\le i\le m$ calculate $f_{ti}^{(l+1)}=f\big(h_{ti}^{(l+1)}\mid\varepsilon,\theta^{(l+1)},h_{\{-t\}}^{(l)}\big)$ from (3.11).
  - Define the inverse distribution $p\big(h_{ti}^{(l+1)}\big)=f_{ti}^{(l+1)}\big/\sum_{j=1}^{m}f_{tj}^{(l+1)}$, $1\le i\le m$.
  - Generate a number $u$ from the uniform $(0,1)$ distribution.
  - Transform $u$ using the inverse distribution $p(\cdot)$ to get $h_t^{(l+1)}$, which is considered as a draw from $f\big(h_t\mid\varepsilon,\theta^{(l+1)},h_{\{-t\}}^{(l)}\big)$.

Step 2. Return the values $h^{(l)}$, $\omega^{(l)}$ and $\sigma_v^{2(l)}$, $l=1,\dots,M$.

3.2.3. Inference and prediction using the Gibbs sampler for PAR-SV

Once samples from the posterior distribution $f(\theta,h\mid\varepsilon)$ are available, statistical inference for the PAR-SV model may be easily made.
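Steps i)-iii) of the Griddy-Gibbs draw can be sketched as follows. This is a minimal sketch assuming the full-conditional kernel $h_t^{-3/2}\exp\big(-\varepsilon_t^2/(2h_t)-(\log h_t-\mu_t)^2/(2\sigma_t^2)\big)$; the numerical values of eps_t, mu_t, s2_t and the grid bounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_kernel(h_t, eps_t, mu_t, s2_t):
    """Unnormalized log full conditional of h_t:
    h_t^{-3/2} * exp(-eps_t^2/(2 h_t) - (log h_t - mu_t)^2 / (2 s2_t))."""
    return (-1.5 * np.log(h_t)
            - eps_t ** 2 / (2.0 * h_t)
            - (np.log(h_t) - mu_t) ** 2 / (2.0 * s2_t))

def griddy_draw(eps_t, mu_t, s2_t, h_lo, h_hi, m=500):
    """One Griddy-Gibbs draw of h_t from a grid of m points in [h_lo, h_hi]."""
    grid = np.linspace(h_lo, h_hi, m)               # step i): the grid
    w = np.exp(log_kernel(grid, eps_t, mu_t, s2_t)) # kernel values f_ti
    p = w / w.sum()                                 # step ii): discrete distribution
    cdf = np.cumsum(p)
    u = rng.uniform()                               # step iii): inverse transform
    return grid[min(np.searchsorted(cdf, u), m - 1)]

# draw repeatedly from one toy full conditional
draws = np.array([griddy_draw(eps_t=0.3, mu_t=np.log(0.5), s2_t=0.2,
                              h_lo=0.05, h_hi=3.0) for _ in range(2000)])
```

Within Algorithm 3.1, one such draw would be made for each $t$ at each Gibbs iteration, with $\mu_t$, $\sigma_t^2$ and the grid bounds recomputed from the current draws.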
The Bayes Griddy-Gibbs parameter estimate $\hat\theta_{BGG}$ of $\theta$ is taken to be the posterior mean $\bar\theta=E(\theta\mid\varepsilon)$, which, by the Markov chain ergodic theorem, is approximated to any desired degree of accuracy by

$\hat\theta_{BGG} = \frac{1}{M}\sum_{l=l_0+1}^{l_0+M}\theta^{(l)},$

where $\theta^{(l)}$ is the $l$-th draw of $\theta$ from $f(\theta,h\mid\varepsilon)$ given by Algorithm 3.1, $l_0$ is the burn-in size, i.e. the number of initial draws discarded, and $M$ is the number of retained draws.

Smoothed and forecast volatilities are obtained as a by-product of the Bayes Griddy-Gibbs method. The smoothed value $\bar h_t=E(h_t\mid\varepsilon)$ of $h_t$ ($1\le t\le T$) is obtained while sampling from the distribution $f(h_t\mid\varepsilon)$, which in turn is a marginal of the posterior distribution $f(\theta,h\mid\varepsilon)$. So $E(h_t\mid\varepsilon)$ may be accurately approximated by $\frac{1}{M}\sum_{l=l_0+1}^{l_0+M}h_t^{(l)}$, where $h_t^{(l)}$ is the $l$-th draw of $h_t$. Forecasts of future values $h_{T+1},h_{T+2},\dots,h_{T+k}$ are obtained either, as above, from the log-volatility equation evaluated at the Bayes parameter estimates, or directly while sampling from the predictive distribution $f(h_{T+1},h_{T+2},\dots,h_{T+k}\mid\varepsilon)$ (see also Jacquier et al., 1994).

3.2.4. MCMC diagnostics

It is important to discuss the numerical properties of the proposed BGG method, in which the volatilities are sampled element by element. Despite its ease of implementation, it is well documented that the main drawback of the single-move approach (e.g. Kim et al., 1998) is that the posterior draws are often highly correlated, resulting in slow mixing and hence slow convergence. Among several MCMC diagnostic measures, we consider here the Relative Numerical Inefficiency (RNI) (e.g. Geweke, 1989; Geyer, 1992), which is given by

$RNI = 1 + 2\sum_{k=1}^{B}K\Big(\frac{k}{B}\Big)\hat\rho_k,$

where $B$ is the bandwidth, $K(\cdot)$ is the Parzen kernel (e.g. Kim et al., 1998) and $\hat\rho_k$ is the sample autocorrelation at lag $k$ of the BGG parameter draws. The RNI indicates the inefficiency due to the serial correlation of the BGG draws (see also Geweke, 1989; Tsiakas, 2006).
Another MCMC diagnostic measure (Geweke, 1989) we use here is the Numerical Standard Error (NSE), which is the square root of the estimated asymptotic variance of the MCMC estimator. The NSE is given by

$NSE = \sqrt{\frac{1}{M}\Big(\hat\gamma_0 + 2\sum_{k=1}^{B}K\Big(\frac{k}{B}\Big)\hat\gamma_k\Big)},$

where $\hat\gamma_k$ is the sample autocovariance at lag $k$ of the BGG parameter draws and $M$ is the number of draws.
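The two diagnostics can be computed from a vector of MCMC draws as follows. This is a minimal sketch: the closed-form Parzen kernel and the bandwidth B = 500 are illustrative choices (the bandwidth used in the paper is not reproduced here), and the i.i.d. and AR(1) series stand in for actual BGG parameter draws.

```python
import numpy as np

def parzen(z):
    """Parzen lag window: 1 - 6z^2 + 6|z|^3 for |z| <= 1/2, 2(1-|z|)^3 for 1/2 < |z| <= 1."""
    z = abs(z)
    if z <= 0.5:
        return 1.0 - 6.0 * z ** 2 + 6.0 * z ** 3
    if z <= 1.0:
        return 2.0 * (1.0 - z) ** 3
    return 0.0

def rni_nse(draws, B=500):
    """Relative numerical inefficiency and numerical standard error of MCMC draws."""
    x = np.asarray(draws, dtype=float)
    M = len(x)
    xc = x - x.mean()
    # sample autocovariances gamma_k (and autocorrelations rho_k) up to lag B
    gamma = np.array([np.dot(xc[: M - k], xc[k:]) / M for k in range(B + 1)])
    rho = gamma / gamma[0]
    weights = np.array([parzen(k / B) for k in range(1, B + 1)])
    rni = 1.0 + 2.0 * np.dot(weights, rho[1:])
    nse = np.sqrt((gamma[0] + 2.0 * np.dot(weights, gamma[1:])) / M)
    return rni, nse

rng = np.random.default_rng(2)
iid = rng.standard_normal(20000)            # well-mixing chain: RNI should be near 1
rni_iid, nse_iid = rni_nse(iid)

ar = np.empty(20000)                        # sticky chain: AR(1) with coefficient 0.9
ar[0] = 0.0
for t in range(1, len(ar)):
    ar[t] = 0.9 * ar[t - 1] + rng.standard_normal()
rni_ar, nse_ar = rni_nse(ar)
```

For the AR(1) chain the RNI estimate should be far above 1, reflecting the inefficiency that the single-move volatility sampling can induce.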
3.3. Period selection via the Deviance Information Criterion

An important issue in PAR-SV modeling is the selection of the period $S$. This problem is especially pronounced when modeling daily returns, whose periodicity is not as obvious as that of quarterly or monthly data. Although many authors (e.g. Franses and Paap, 2000; Tsiakas, 2006) have emphasized the day-of-the-week effect in daily stock returns, which often entails a period of $S=5$, the period selection problem in periodic volatility models remains challenging. Standard order selection measures such as the AIC and BIC, which require specifying the number of free parameters in each model, are not applicable for comparing complex Bayesian hierarchical models like the PAR-SV model. This is because in the PAR-SV model the number of free parameters, which is augmented by the latent volatilities (themselves not independent but Markovian), is not well defined (cf. Berg et al., 2004). For a long time, the Bayes factor has been viewed as the best way to carry out Bayesian model comparison. However, its calculation rests on evaluating the marginal likelihood, which requires extremely high-dimensional integration; this would be all the more computationally demanding for the PAR-SV model, which involves a large number of parameters, augmented by the volatilities, exceeding the sample size. In this paper we instead carry out period selection using the Deviance Information Criterion (DIC), which may be viewed as a trade-off between model adequacy and model complexity (Spiegelhalter et al., 2002). This criterion, a Bayesian generalization of the AIC, is easily obtained from MCMC draws, requiring no extra calculations.
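For a model whose conditional likelihood given the volatilities is Gaussian, as here, a conditional DIC can be estimated from posterior volatility draws roughly as follows. This is a sketch, not the paper's code; eps and h_draws are hypothetical inputs, with h_draws holding M posterior draws of the T volatilities.

```python
import numpy as np

def conditional_dic(eps, h_draws):
    """Conditional DIC: -4 * E[log f(eps|h)] + 2 * log f(eps|h_bar), with
    log f(eps|h) = -0.5 * sum_t (log(2*pi) + log h_t + eps_t^2 / h_t)."""
    eps = np.asarray(eps, dtype=float)
    h_draws = np.asarray(h_draws, dtype=float)

    def loglik(h):
        return -0.5 * np.sum(np.log(2.0 * np.pi) + np.log(h) + eps ** 2 / h)

    mean_ll = np.mean([loglik(h) for h in h_draws])  # average over posterior draws
    h_bar = h_draws.mean(axis=0)                     # posterior mean volatility
    return -4.0 * mean_ll + 2.0 * loglik(h_bar)

# toy check: identical draws imply zero effective complexity,
# so the DIC reduces to -2 * log f(eps | h_bar)
dic0 = conditional_dic([0.1], [[1.0], [1.0]])

# toy usage with dispersed draws
rng = np.random.default_rng(3)
h_draws = rng.uniform(0.5, 1.5, size=(200, 50))
eps = rng.standard_normal(50)
dic1 = conditional_dic(eps, h_draws)
```

A model (here, a candidate period $S$) is preferred when it attains the smallest such DIC value.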
The (conditional) DIC as introduced by Spiegelhalter et al. (2002) is defined in the context of the PAR-SV$_S$ model to be

$DIC(S) = -4E_{\theta,h\mid\varepsilon}\big(\log f(\varepsilon\mid\theta,h)\big) + 2\log f(\varepsilon\mid\bar\theta,\bar h),$

where $f(\varepsilon\mid\theta,h)$ is the (conditional) likelihood of the PAR-SV model for a given period $S$ and $(\bar\theta,\bar h)=E\big((\theta,h)\mid\varepsilon\big)$ is the posterior mean of $(\theta,h)$. From the Griddy-Gibbs draws, the expectation $E_{\theta,h\mid\varepsilon}(\log f(\varepsilon\mid\theta,h))$ can be estimated by averaging the conditional log-likelihood $\log f(\varepsilon\mid\theta,h)$ over the posterior draws of $(\theta,h)$. Further, the joint posterior mean of $(\theta,h)$ can be approximated by the mean of the posterior draws $(\theta^{(l)},h^{(l)})$. Using the fact that

$\log f(\varepsilon\mid\theta,h) := \log f(\varepsilon\mid h) = -\frac{1}{2}\sum_{t=1}^{T}\Big(\log(2\pi)+\log(h_t)+\frac{\varepsilon_t^2}{h_t}\Big),$

$DIC(S)$ is estimated (up to an additive constant not depending on $S$) by

$\widehat{DIC}(S) = \frac{2}{M}\sum_{l=l_0+1}^{l_0+M}\sum_{t=1}^{T}\Big(\log\big(h_t^{(l)}\big)+\frac{\varepsilon_t^2}{h_t^{(l)}}\Big) - \sum_{t=1}^{T}\Big(\log(\bar h_t)+\frac{\varepsilon_t^2}{\bar h_t}\Big),$

where $h_t^{(l)}$ denotes the $l$-th BGG draw of $h_t$, $M$ is the number of draws, $l_0$ is the burn-in size and $\bar h_t := E(h_t\mid\varepsilon)$ is estimated by $\frac{1}{M}\sum_{l=l_0+1}^{l_0+M}h_t^{(l)}$ ($1\le t\le T$). Of course, a model is preferred if it has the smallest DIC value.

Since the DIC is random and, for the same fitted series, may change value from one MCMC run to another, it is useful to obtain its numerical standard error. However, as pointed out by Berg et al.
(2004), no efficient method has been developed for calculating reasonably accurate Monte Carlo standard errors of the DIC. Nevertheless, following the recommendation of Zhu and Carlin (2000), we simply replicate the calculation of the DIC some $G$ times and estimate $Var(DIC)$ by its sample variance, giving a broad indication of the implied variability of the DIC. Note finally that, for the class of latent variable models to which the PAR-SV belongs, there are in fact several alternative definitions of the DIC, depending on the concept of likelihood used (complete, observed, conditional); the one we work with here is the conditional DIC as categorized by Celeux et al. (2006). We have avoided using the observed DIC because, like the Bayes factor, it is based on evaluating the marginal likelihood, whose computation is typically very time-consuming.

4. Simulation study: Finite-sample performance of the QML and BGG estimates

In this section, a simulation study is undertaken to assess the finite-sample performance of the QML and BGG Bayes estimates. Three instances of the Gaussian PAR-SV model with period $S=2$ are considered; the results are reported in Tables 4.1, 4.2 and 4.3, respectively. For each instance, the parameter $\theta=(\alpha_1,\beta_1,\alpha_2,\beta_2,\sigma_{v1}^2,\sigma_{v2}^2)$ is chosen so as to accord with empirical evidence. In particular, the persistence parameter $\beta_1\beta_2$ equals 0.9, 0.9 and 0.99 for the three instances, respectively. We have also set small values for $\sigma_{v1}^2$ and $\sigma_{v2}^2$, because this is a critical case for the performance of the QMLE, as pointed out by Ruiz (1994) and Harvey et al. (1994) in the standard SV case. The choice of $S=2$ is only motivated by computational considerations. For each instance, we considered replications of PAR-SV series, for which we calculated the QML and Bayes estimates.
The means of the estimates ($\hat\theta_{QML}$ and $\hat\theta_{BGG}$) and their standard deviations (Std) over the replications are reported in Tables 4.1-4.3. The QML method requires a nonlinear optimization routine; we applied a Gauss-Newton-type algorithm started from different values of the parameter estimate. For the Bayes Griddy-Gibbs estimate, we have taken the same prior distributions for $\omega=(\alpha_1,\beta_1,\alpha_2,\beta_2)$ across instances, namely a Gaussian prior $\omega\sim N(\omega_0,\Sigma_0)$ with diagonal $\Sigma_0$, together with inverted chi-squared priors for the variances; these priors are quite diffuse, but proper. Concerning initial parameter values, the initial volatility $h^{(0)}$ in the
Gibbs sampler is taken to be the volatility generated by the fitted GARCH(1,1) model, that is, $h^{(0)}=h^G$ where

$\varepsilon_t = \sqrt{h_t^G}\,\eta_t, \qquad h_t^G = \varphi_0 + \varphi_1\varepsilon_{t-1}^2 + \varphi_2 h_{t-1}^G, \quad t\in\mathbb{Z},$

while the initial log-volatility parameter estimate $\theta^{(0)}$ is taken to be the ordinary least squares estimate of $\theta$ based on the series $\log h^{(0)}$. Furthermore, in the Griddy-Gibbs iteration, $h_t$ is generated using a fixed number of grid points, and the range of $h_t$ at the $l$-th Gibbs iteration is taken as in (3.12). Finally, the Gibbs sampler is run for a fixed number of iterations, from which the initial burn-in draws are discarded.

Table 4.1: Instance 1 - Simulation results for QML and BGG on a Gaussian PAR-SV$_2$.

Table 4.2: Instance 2 - Simulation results for QML and BGG on a Gaussian PAR-SV$_2$.
Table 4.3: Instance 3 - Simulation results for QML and BGG on a Gaussian PAR-SV$_2$.

It can be observed that the parameters are quite well estimated by the two methods, with an obvious superiority of the Bayes estimate over the QMLE. Indeed, in all instances the BGG estimate (BGGE) greatly dominates the QMLE in the sense that it has both smaller bias and smaller standard deviations. We also observe that the QMLE provides poor estimates when the variance parameters $\sigma_{v1}^2$ and $\sigma_{v2}^2$ are small. From a theoretical point of view, it would be interesting to compare the QMLE and the BGGE when $\log(\eta_t^2)\sim N(0,1)$, i.e. when $|\eta_t| \equiv \exp(X/2)$ with $X\sim N(0,1)$. In that case, as emphasized in Section 3, the QMLE reduces to the MLE and would be (asymptotically) more efficient than the BGGE. So, through simulations, the QMLE would in principle perform better than the BGGE for PAR-SV series with a quite large sample size. However, the BGG method would have to be adapted to that innovation distribution, which may entail considerable effort for a distribution ($|\eta_t| = \exp(N(0,1)/2)$) that seems to have little interest in practice.

5. Application to the S&P returns

For the sake of illustration, we propose to fit Gaussian PAR-SV models with various periods to the returns on the S&P (closing value) index. In order to consider several possible values of the PAR-SV period, three types of datasets are used, namely daily, quarterly and monthly S&P returns. For the three series considered, we use the Bayes Griddy-Gibbs estimate, given its good finite-sample properties, with a fixed number of iterations $M$ and a burn-in period. As in Section 4, we take the initial volatility $h^{(0)}$ to be the volatility generated by the fitted GARCH(1,1), while the initial log-volatility parameter estimate $\theta^{(0)}$ is taken to be the ordinary least squares estimate of $\theta$ based on the series $\log h^{(0)}$.
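The GARCH(1,1) initialization used above can be sketched as follows. This is a minimal sketch: the coefficient values phi0, phi1, phi2 and the simulated return series are illustrative assumptions; in practice the coefficients would come from fitting a GARCH(1,1) model to the actual returns.

```python
import numpy as np

def garch11_vol(eps, phi0, phi1, phi2):
    """Fitted GARCH(1,1) volatility path h_t^G = phi0 + phi1*eps_{t-1}^2 + phi2*h_{t-1}^G,
    usable as the initial volatility h^(0) of the Gibbs sampler."""
    eps = np.asarray(eps, dtype=float)
    h = np.empty(len(eps))
    h[0] = phi0 / (1.0 - phi1 - phi2)   # start at the unconditional variance
    for t in range(1, len(eps)):
        h[t] = phi0 + phi1 * eps[t - 1] ** 2 + phi2 * h[t - 1]
    return h

rng = np.random.default_rng(4)
eps = rng.standard_normal(500)                  # hypothetical return series
h0 = garch11_vol(eps, phi0=0.05, phi1=0.10, phi2=0.85)
logh0 = np.log(h0)                              # series used for the OLS initial theta^(0)
```

The log of this initial volatility path is then regressed, season by season, to obtain the initial log-volatility parameter estimate.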
We have in fact avoided using the volatility fitted by the periodic GARCH (PGARCH(1,1)) model as the initial value $h^{(0)}$ because of numerical difficulties in the corresponding QML estimation when $S$ becomes large. In the Gibbs step, the volatility $h^{(l)}$ is drawn across PAR-SV models using the Griddy-Gibbs technique
More informationMarkov Chain Monte Carlo Methods for Stochastic Optimization
Markov Chain Monte Carlo Methods for Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U of Toronto, MIE,
More informationChapter 2. GMM: Estimating Rational Expectations Models
Chapter 2. GMM: Estimating Rational Expectations Models Contents 1 Introduction 1 2 Step 1: Solve the model and obtain Euler equations 2 3 Step 2: Formulate moment restrictions 3 4 Step 3: Estimation and
More informationChapter 2. Dynamic panel data models
Chapter 2. Dynamic panel data models School of Economics and Management - University of Geneva Christophe Hurlin, Université of Orléans University of Orléans April 2018 C. Hurlin (University of Orléans)
More informationParticle Filtering Approaches for Dynamic Stochastic Optimization
Particle Filtering Approaches for Dynamic Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge I-Sim Workshop,
More informationECONOMETRICS II (ECO 2401S) University of Toronto. Department of Economics. Spring 2013 Instructor: Victor Aguirregabiria
ECONOMETRICS II (ECO 2401S) University of Toronto. Department of Economics. Spring 2013 Instructor: Victor Aguirregabiria SOLUTION TO FINAL EXAM Friday, April 12, 2013. From 9:00-12:00 (3 hours) INSTRUCTIONS:
More informationLECTURE 12 UNIT ROOT, WEAK CONVERGENCE, FUNCTIONAL CLT
MARCH 29, 26 LECTURE 2 UNIT ROOT, WEAK CONVERGENCE, FUNCTIONAL CLT (Davidson (2), Chapter 4; Phillips Lectures on Unit Roots, Cointegration and Nonstationarity; White (999), Chapter 7) Unit root processes
More informationEcon 423 Lecture Notes: Additional Topics in Time Series 1
Econ 423 Lecture Notes: Additional Topics in Time Series 1 John C. Chao April 25, 2017 1 These notes are based in large part on Chapter 16 of Stock and Watson (2011). They are for instructional purposes
More informationIntroduction to Machine Learning CMU-10701
Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos & Aarti Singh Contents Markov Chain Monte Carlo Methods Goal & Motivation Sampling Rejection Importance Markov
More informationOn computing Gaussian curvature of some well known distribution
Theoretical Mathematics & Applications, ol.3, no.4, 03, 85-04 ISSN: 79-9687 (print), 79-9709 (online) Scienpress Ltd, 03 On computing Gaussian curature of some well known distribution William W.S. Chen
More informationBootstrapping Long Memory Tests: Some Monte Carlo Results
Bootstrapping Long Memory Tests: Some Monte Carlo Results Anthony Murphy and Marwan Izzeldin Nu eld College, Oxford and Lancaster University. December 2005 - Preliminary Abstract We investigate the bootstrapped
More informationMarkov Chain Monte Carlo methods
Markov Chain Monte Carlo methods By Oleg Makhnin 1 Introduction a b c M = d e f g h i 0 f(x)dx 1.1 Motivation 1.1.1 Just here Supresses numbering 1.1.2 After this 1.2 Literature 2 Method 2.1 New math As
More informationDynamic models. Dependent data The AR(p) model The MA(q) model Hidden Markov models. 6 Dynamic models
6 Dependent data The AR(p) model The MA(q) model Hidden Markov models Dependent data Dependent data Huge portion of real-life data involving dependent datapoints Example (Capture-recapture) capture histories
More informationMining Big Data Using Parsimonious Factor and Shrinkage Methods
Mining Big Data Using Parsimonious Factor and Shrinkage Methods Hyun Hak Kim 1 and Norman Swanson 2 1 Bank of Korea and 2 Rutgers University ECB Workshop on using Big Data for Forecasting and Statistics
More informationNoise constrained least mean absolute third algorithm
Noise constrained least mean absolute third algorithm Sihai GUAN 1 Zhi LI 1 Abstract: he learning speed of an adaptie algorithm can be improed by properly constraining the cost function of the adaptie
More informationStock index returns density prediction using GARCH models: Frequentist or Bayesian estimation?
MPRA Munich Personal RePEc Archive Stock index returns density prediction using GARCH models: Frequentist or Bayesian estimation? Ardia, David; Lennart, Hoogerheide and Nienke, Corré aeris CAPITAL AG,
More informationxtunbalmd: Dynamic Binary Random E ects Models Estimation with Unbalanced Panels
: Dynamic Binary Random E ects Models Estimation with Unbalanced Panels Pedro Albarran* Raquel Carrasco** Jesus M. Carro** *Universidad de Alicante **Universidad Carlos III de Madrid 2017 Spanish Stata
More information1 The Multiple Regression Model: Freeing Up the Classical Assumptions
1 The Multiple Regression Model: Freeing Up the Classical Assumptions Some or all of classical assumptions were crucial for many of the derivations of the previous chapters. Derivation of the OLS estimator
More information17 : Markov Chain Monte Carlo
10-708: Probabilistic Graphical Models, Spring 2015 17 : Markov Chain Monte Carlo Lecturer: Eric P. Xing Scribes: Heran Lin, Bin Deng, Yun Huang 1 Review of Monte Carlo Methods 1.1 Overview Monte Carlo
More informationStochastic Processes
Stochastic Processes Stochastic Process Non Formal Definition: Non formal: A stochastic process (random process) is the opposite of a deterministic process such as one defined by a differential equation.
More informationReversal in time order of interactive events: Collision of inclined rods
Reersal in time order of interactie eents: Collision of inclined rods Published in The European Journal of Physics Eur. J. Phys. 27 819-824 http://www.iop.org/ej/abstract/0143-0807/27/4/013 Chandru Iyer
More informationChapter 1. GMM: Basic Concepts
Chapter 1. GMM: Basic Concepts Contents 1 Motivating Examples 1 1.1 Instrumental variable estimator....................... 1 1.2 Estimating parameters in monetary policy rules.............. 2 1.3 Estimating
More informationARIMA Modelling and Forecasting
ARIMA Modelling and Forecasting Economic time series often appear nonstationary, because of trends, seasonal patterns, cycles, etc. However, the differences may appear stationary. Δx t x t x t 1 (first
More informationTESTS FOR STOCHASTIC SEASONALITY APPLIED TO DAILY FINANCIAL TIME SERIES*
The Manchester School Vol 67 No. 1 January 1999 1463^6786 39^59 TESTS FOR STOCHASTIC SEASONALITY APPLIED TO DAILY FINANCIAL TIME SERIES* by I. C. ANDRADE University of Southampton A. D. CLARE ISMA Centre,
More informationForecasting with a Real-Time Data Set for Macroeconomists
Uniersity of Richmond UR Scholarship Repository Economics Faculty Publications Economics 12-2002 Forecasting with a Real-Time Data Set for Macroeconomists Tom Stark Dean D. Croushore Uniersity of Richmond,
More informationGaussian processes. Basic Properties VAG002-
Gaussian processes The class of Gaussian processes is one of the most widely used families of stochastic processes for modeling dependent data observed over time, or space, or time and space. The popularity
More informationMeasuring robustness
Measuring robustness 1 Introduction While in the classical approach to statistics one aims at estimates which have desirable properties at an exactly speci ed model, the aim of robust methods is loosely
More informationThe Kuhn-Tucker Problem
Natalia Lazzati Mathematics for Economics (Part I) Note 8: Nonlinear Programming - The Kuhn-Tucker Problem Note 8 is based on de la Fuente (2000, Ch. 7) and Simon and Blume (1994, Ch. 18 and 19). The Kuhn-Tucker
More informationSpatial panels: random components vs. xed e ects
Spatial panels: random components vs. xed e ects Lung-fei Lee Department of Economics Ohio State University l eeecon.ohio-state.edu Jihai Yu Department of Economics University of Kentucky jihai.yuuky.edu
More information4. A Physical Model for an Electron with Angular Momentum. An Electron in a Bohr Orbit. The Quantum Magnet Resulting from Orbital Motion.
4. A Physical Model for an Electron with Angular Momentum. An Electron in a Bohr Orbit. The Quantum Magnet Resulting from Orbital Motion. We now hae deeloped a ector model that allows the ready isualization
More informationMay 2, Why do nonlinear models provide poor. macroeconomic forecasts? Graham Elliott (UCSD) Gray Calhoun (Iowa State) Motivating Problem
(UCSD) Gray with May 2, 2012 The with (a) Typical comments about future of forecasting imply a large role for. (b) Many studies show a limited role in providing forecasts from. (c) Reviews of forecasting
More informationTime-Varying Quantiles
Time-Varying Quantiles Giuliano De Rossi and Andrew Harvey Faculty of Economics, Cambridge University July 19, 2006 Abstract A time-varying quantile can be tted to a sequence of observations by formulating
More informationOn the Power of Tests for Regime Switching
On the Power of Tests for Regime Switching joint work with Drew Carter and Ben Hansen Douglas G. Steigerwald UC Santa Barbara May 2015 D. Steigerwald (UCSB) Regime Switching May 2015 1 / 42 Motivating
More informationLECTURE 13: TIME SERIES I
1 LECTURE 13: TIME SERIES I AUTOCORRELATION: Consider y = X + u where y is T 1, X is T K, is K 1 and u is T 1. We are using T and not N for sample size to emphasize that this is a time series. The natural
More informationNonlinear and/or Non-normal Filtering. Jesús Fernández-Villaverde University of Pennsylvania
Nonlinear and/or Non-normal Filtering Jesús Fernández-Villaverde University of Pennsylvania 1 Motivation Nonlinear and/or non-gaussian filtering, smoothing, and forecasting (NLGF) problems are pervasive
More informationMultivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8]
1 Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8] Insights: Price movements in one market can spread easily and instantly to another market [economic globalization and internet
More informationTime Series Analysis -- An Introduction -- AMS 586
Time Series Analysis -- An Introduction -- AMS 586 1 Objectives of time series analysis Data description Data interpretation Modeling Control Prediction & Forecasting 2 Time-Series Data Numerical data
More informationTIME SERIES ANALYSIS AND FORECASTING USING THE STATISTICAL MODEL ARIMA
CHAPTER 6 TIME SERIES ANALYSIS AND FORECASTING USING THE STATISTICAL MODEL ARIMA 6.1. Introduction A time series is a sequence of observations ordered in time. A basic assumption in the time series analysis
More informationStat 516, Homework 1
Stat 516, Homework 1 Due date: October 7 1. Consider an urn with n distinct balls numbered 1,..., n. We sample balls from the urn with replacement. Let N be the number of draws until we encounter a ball
More informationPosition in the xy plane y position x position
Robust Control of an Underactuated Surface Vessel with Thruster Dynamics K. Y. Pettersen and O. Egeland Department of Engineering Cybernetics Norwegian Uniersity of Science and Technology N- Trondheim,
More informationTAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω
ECO 513 Spring 2015 TAKEHOME FINAL EXAM (1) Suppose the univariate stochastic process y is ARMA(2,2) of the following form: y t = 1.6974y t 1.9604y t 2 + ε t 1.6628ε t 1 +.9216ε t 2, (1) where ε is i.i.d.
More informationMCMC algorithms for fitting Bayesian models
MCMC algorithms for fitting Bayesian models p. 1/1 MCMC algorithms for fitting Bayesian models Sudipto Banerjee sudiptob@biostat.umn.edu University of Minnesota MCMC algorithms for fitting Bayesian models
More information