Internet Appendix to Are Stocks Really Less Volatile in the Long Run?
Internet Appendix to "Are Stocks Really Less Volatile in the Long Run?"

by Ľuboš Pástor and Robert F. Stambaugh

January 1, 2011
B1. Roadmap

This Appendix is organized as follows. A simple illustration demonstrating the effects of parameter uncertainty on long-horizon predictive variance is provided in Section B2. The Bayesian empirical methodology for System 2 is presented in Section B3. Additional empirical evidence that complements the evidence presented in the paper is reported in Section B4. That section first presents the estimates of predictive regressions for both annual and quarterly data, followed by various robustness results regarding the long-horizon predictive variance of stock returns.

B2. Parameter uncertainty: A simple illustration

In the paper, we compute $\mathrm{Var}(r_{T,T+k} \mid D_T)$ and its components empirically, incorporating parameter uncertainty via Bayesian posterior distributions. Here we use a simpler setting to illustrate the effects of parameter uncertainty on multiperiod return variance. All the notation that is not defined here is defined in Section 2 of the paper.

The elements of the parameter vector $\phi$ are viewed as random, given that they are unknown to an investor. For the purpose of this illustration, we make several assumptions related to $\phi$ with the objective of simplifying the variance ratio calculations in the presence of parameter uncertainty. Our first assumption is that, given data $D_T$, the elements of $\phi$ are distributed independently of each other. We also make two assumptions about the deviations of the conditional mean from the unconditional mean: given $D_T$, $\mu_T - E_r$ is uncorrelated with $E_r$, and the quantity $z_T \equiv b_T - E_r$ is fixed across $\phi$. Next, let $\rho_{\mu b}$ denote the true unconditional correlation between $\mu_T$ and $b_T$, $\rho_{\mu b} \equiv \mathrm{Corr}(\mu_T, b_T \mid \phi)$ (this correlation is "true" in that it depends on the true parameter values $\phi$, and it is unconditional in that it does not condition on $D_T$). If the observed predictors capture $\mu_T$ perfectly, then $\rho_{\mu b} = 1$; otherwise $\rho_{\mu b} < 1$.
We make the homoskedasticity assumption that $q_T$ is constant across $D_T$, which implies that

$$q_T = (1-\rho_{\mu b}^2)\,\sigma_\mu^2 = (1-\rho_{\mu b}^2)\,R^2\sigma_r^2, \qquad (B1)$$

where $\sigma_\mu^2$ and $\sigma_r^2$ are the true unconditional variances of $\mu_t$ and $r_{t+1}$, respectively (i.e., $\sigma_\mu^2 \equiv \mathrm{Var}(\mu_t \mid \phi)$ and $\sigma_r^2 \equiv \mathrm{Var}(r_{t+1} \mid \phi)$). The parameter vector is $\phi = [\beta, R^2, \rho_{uw}, E_r, \sigma_r, \rho_{\mu b}]$. Finally, we specify $\tau$ such that

$$\mathrm{Var}(E_r \mid D_T) = \tau\, E(\sigma_r^2 \mid D_T), \qquad (B2)$$

so that the uncertainty about the unconditional mean return $E_r$ is as large as the imprecision in a sample mean computed over a sample of length $1/\tau$.
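Equation (B1) is the conditional-variance identity for jointly normal variables. A quick numerical sanity check, using hypothetical parameter values chosen only for illustration:

```python
import numpy as np

# Numerical check of (B1): if (mu_T, b_T) are jointly normal with correlation
# rho_mub and Var(mu_T) = sigma_mu^2, the residual variance of mu_T after
# projecting on b_T is q_T = (1 - rho_mub^2) * sigma_mu^2.
# The values of rho_mub and sigma_mu below are hypothetical.
rng = np.random.default_rng(0)
rho_mub, sigma_mu = 0.7, 0.05
n = 1_000_000

mu = sigma_mu * rng.standard_normal(n)
noise = sigma_mu * rng.standard_normal(n)
# construct b_T with the desired correlation with mu_T
b = rho_mub * mu + np.sqrt(1 - rho_mub**2) * noise

# residual variance of mu_T given (a linear projection on) b_T
beta_hat = np.cov(mu, b)[0, 1] / np.var(b)
q_T = np.var(mu - beta_hat * b)
print(q_T, (1 - rho_mub**2) * sigma_mu**2)  # the two should be close
```

The identity depends only on the correlation between $\mu_T$ and $b_T$, which is why (B1) involves $\rho_{\mu b}$ and $\sigma_\mu^2$ alone.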
None of the above simplifying assumptions holds in the Bayesian empirical framework in the paper; they are made for the purpose of this simple illustration only. Given these assumptions, we are able to characterize the variance ratio in a parsimonious fashion that does not depend on the unconditional variance of single-period returns.

The variance of the $k$-period return can be decomposed as follows:

$$\begin{aligned}
\mathrm{Var}(r_{T,T+k} \mid D_T) &= E\{\mathrm{Var}(r_{T,T+k} \mid \phi, \mu_T, D_T) \mid D_T\} + \mathrm{Var}\{E(r_{T,T+k} \mid \phi, \mu_T, D_T) \mid D_T\} \\
&= E\{\mathrm{Var}(r_{T,T+k} \mid \phi, \mu_T) \mid D_T\} + \mathrm{Var}\{E(r_{T,T+k} \mid \phi, \mu_T) \mid D_T\} \\
&= E\left\{ k\sigma_r^2(1-R^2)\left[1 + 2\bar d\,\rho_{uw}A(k) + \bar d^2 B(k)\right] \mid D_T \right\} + \mathrm{Var}\left\{ kE_r + \frac{1-\beta^k}{1-\beta}(\mu_T - E_r) \mid D_T \right\} \\
&= kE(\sigma_r^2 \mid D_T)\, E\left\{ (1-R^2)\left[1 + 2\bar d\,\rho_{uw}A(k) + \bar d^2 B(k)\right] \mid D_T \right\} + k^2\,\mathrm{Var}(E_r \mid D_T) + \mathrm{Var}\left\{ \frac{1-\beta^k}{1-\beta}(\mu_T - E_r) \mid D_T \right\} \\
&= kE(\sigma_r^2 \mid D_T)\, E\left\{ (1-R^2)\left[1 + 2\bar d\,\rho_{uw}A(k) + \bar d^2 B(k)\right] \mid D_T \right\} + k^2\tau\,E(\sigma_r^2 \mid D_T) + \mathrm{Var}\left\{ \frac{1-\beta^k}{1-\beta}(\mu_T - E_r) \mid D_T \right\}. \qquad (B3)
\end{aligned}$$

The next-to-last equality uses the property that $E_r$ is uncorrelated with $\mu_T - E_r$, and the last equality uses (B2). Next observe that

$$\begin{aligned}
\mathrm{Var}\left\{ \frac{1-\beta^k}{1-\beta}(\mu_T - E_r) \mid D_T \right\}
&= E\left\{ \mathrm{Var}\left[ \frac{1-\beta^k}{1-\beta}(\mu_T - E_r) \mid \phi, D_T \right] \mid D_T \right\} + \mathrm{Var}\left\{ E\left[ \frac{1-\beta^k}{1-\beta}(\mu_T - E_r) \mid \phi, D_T \right] \mid D_T \right\} \\
&= E\left\{ \left(\frac{1-\beta^k}{1-\beta}\right)^2 \mathrm{Var}(\mu_T \mid \phi, D_T) \mid D_T \right\} + \mathrm{Var}\left\{ \frac{1-\beta^k}{1-\beta}\, E(\mu_T - E_r \mid \phi, D_T) \mid D_T \right\} \\
&= E\left\{ \left(\frac{1-\beta^k}{1-\beta}\right)^2 q_T \mid D_T \right\} + \mathrm{Var}\left\{ \frac{1-\beta^k}{1-\beta}(b_T - E_r) \mid D_T \right\} \\
&= E\left\{ \left(\frac{1-\beta^k}{1-\beta}\right)^2 \sigma_r^2 R^2 (1-\rho_{\mu b}^2) \mid D_T \right\} + \mathrm{Var}\left\{ \frac{1-\beta^k}{1-\beta}\, z_T \mid D_T \right\} \\
&= E\left\{ \left(\frac{1-\beta^k}{1-\beta}\right)^2 \mid D_T \right\} E(\sigma_r^2 \mid D_T)\, E\left\{ R^2(1-\rho_{\mu b}^2) \mid D_T \right\} + z_T^2\,\mathrm{Var}\left\{ \frac{1-\beta^k}{1-\beta} \mid D_T \right\}. \qquad (B4)
\end{aligned}$$

Substituting the right-hand side of (B4) for the last term in (B3) then gives

$$\begin{aligned}
\mathrm{Var}(r_{T,T+k} \mid D_T) &= kE(\sigma_r^2 \mid D_T)\, E\left\{ (1-R^2)\left[1 + 2\bar d\,\rho_{uw}A(k) + \bar d^2 B(k)\right] \mid D_T \right\} + k^2\tau\,E(\sigma_r^2 \mid D_T) \\
&\quad + E(\sigma_r^2 \mid D_T)\, E\left\{ \left(\frac{1-\beta^k}{1-\beta}\right)^2 \mid D_T \right\} E\left\{ R^2(1-\rho_{\mu b}^2) \mid D_T \right\} + z_T^2\,\mathrm{Var}\left\{ \frac{1-\beta^k}{1-\beta} \mid D_T \right\}. \qquad (B5)
\end{aligned}$$

When $k = 1$, equation (B5) simplifies to

$$\mathrm{Var}(r_{T,T+1} \mid D_T) = E(\sigma_r^2 \mid D_T)\left[ 1 + \tau - E(R^2 \mid D_T)\,E(\rho_{\mu b}^2 \mid D_T) \right]. \qquad (B6)$$

Observe that the $k$-period variance in (B5) depends on the value of $z_T^2$, which enters the last term multiplied by the variance of $(1-\beta^k)/(1-\beta)$. This dependence makes sense: when $\mu_T$ is estimated to be farther from the unconditional mean $E_r$, so that the absolute value of $z_T$ is large, uncertainty about the speed with which $\mu_T$ reverts to $E_r$ is more important. To achieve a further algebraic simplification, we evaluate the variance in (B5) by setting $z_T^2$ equal to the posterior mean of the true unconditional mean of $z_T^2$:

$$z_T^2 = E[E(z_T^2 \mid \phi) \mid D_T]. \qquad (B7)$$

To evaluate the right-hand side of (B7), first note that $\mathrm{Var}(z_T \mid \phi) = \mathrm{Var}(b_T \mid \phi)$, since $z_T = b_T - E_r$ and $E_r$ is in $\phi$. Also observe that $E(z_T \mid \phi) = 0$, since across samples ($D_T$'s), the expected conditional mean $b_T$ is the unconditional mean $E_r$. Therefore, $E(z_T^2 \mid \phi) = \mathrm{Var}(z_T \mid \phi) = \mathrm{Var}(b_T \mid \phi)$. We thus have

$$E[E(z_T^2 \mid \phi) \mid D_T] = E[\mathrm{Var}(b_T \mid \phi) \mid D_T] = E[\rho_{\mu b}^2\,\mathrm{Var}(\mu_T \mid \phi) \mid D_T] = E[\rho_{\mu b}^2\,\sigma_r^2 R^2 \mid D_T] = E(\rho_{\mu b}^2 \mid D_T)\,E(\sigma_r^2 \mid D_T)\,E(R^2 \mid D_T). \qquad (B8)$$

We compute the $k$-period variance ratio,

$$V(k) = \frac{\mathrm{Var}(r_{T,T+k} \mid D_T)}{k\,\mathrm{Var}(r_{T,T+1} \mid D_T)}, \qquad (B9)$$
where $\mathrm{Var}(r_{T,T+1} \mid D_T)$ is given in equation (B6) and $\mathrm{Var}(r_{T,T+k} \mid D_T)$ denotes the value of (B5) obtained by substituting the right-hand side of (B8) for $z_T^2$. Note that the variance ratio in equation (B9) does not depend on $E(\sigma_r^2 \mid D_T)$.

We set $\tau = 1/200$ in equation (B2), so that the uncertainty about the unconditional mean return $E_r$ corresponds to the imprecision in a 200-year sample mean. To specify the uncertainty for the remaining parameters, we choose the probability densities displayed in Figure A1, whose medians are 0.86 for $\beta$, 0.12 for $R^2$, $-0.66$ for $\rho_{uw}$, and 0.70 for $\rho_{\mu b}$.

Table A1 displays the 20-year variance ratio, $V(20)$, under different specifications of uncertainty about the parameters. In the first row, $\beta$, $R^2$, $\rho_{uw}$, and $E_r$ are held fixed, by setting the first three parameters equal to their medians and by setting $\tau = 0$ in (B2). Successive rows then specify one or more of those parameters as uncertain, by drawing from the densities in Figure A1 (for $\beta$, $R^2$, and $\rho_{uw}$) or setting $\tau = 1/200$ (for $E_r$). For each row, $\rho_{\mu b}$ is either fixed at one of the values 0, 0.70 (its median), and 1, or it is drawn from its density in Figure A1. Note that the return variances are unconditional when $\rho_{\mu b} = 0$ and conditional on full knowledge of $\mu_T$ when $\rho_{\mu b} = 1$.

Table A1 shows that when all parameters are fixed, $V(20) < 1$ at all levels of conditioning (all values of $\rho_{\mu b}$). That is, in the absence of parameter uncertainty, the values in the first row range from 0.95 at the unconditional level to 0.77 when $\mu_T$ is fully known. Thus, this fixed-parameter specification is consistent with mean reversion playing a dominant role, causing the return variance (per period) to be lower at the long horizon. Rows 2 through 5 specify one of the parameters $\beta$, $R^2$, $\rho_{uw}$, and $E_r$ as uncertain. Uncertainty about $\beta$ exerts the strongest effect, raising $V(20)$ by 17% to 26% (depending on $\rho_{\mu b}$), but uncertainty about any one of these parameters raises $V(20)$.
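The mechanism can be illustrated numerically. The sketch below evaluates the unconditional-level case ($\rho_{\mu b} = 0$, so the $z_T$ terms drop out) for a 20-period horizon, computing the within-$\phi$ multiperiod variance in closed form from the AR(1) dynamics of $\mu_t$ rather than through $A(k)$ and $B(k)$. The parameter densities are hypothetical stand-ins, matching only the medians quoted above; $\tau = 1/200$ is this sketch's choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def var_k(k, beta, r2, rho_uw, sigma2_r=1.0):
    """Unconditional variance of the k-period return when r_{t+1} = mu_t + u_{t+1}
    and mu_t is AR(1) with autocorrelation beta, all true parameters known."""
    s2_mu = r2 * sigma2_r                      # Var(mu_t)
    s2_u = (1.0 - r2) * sigma2_r               # Var(u_t)
    s2_w = (1.0 - beta**2) * s2_mu             # Var of the AR(1) innovation w_t
    s_uw = rho_uw * np.sqrt(s2_u * s2_w)       # Cov(u, w)
    j = np.arange(1, k)
    mu_part = s2_mu * (k + 2.0 * np.sum((k - j) * beta**j))
    cross = 2.0 * s_uw * np.sum((1.0 - beta**(k - j)) / (1.0 - beta))
    return mu_part + k * s2_u + cross

k, tau = 20, 1.0 / 200.0
# all parameters fixed at their medians, no uncertainty about E_r (tau = 0)
v_fixed = var_k(k, 0.86, 0.12, -0.66) / (k * var_k(1, 0.86, 0.12, -0.66))

# parameter uncertainty: hypothetical densities centered near the medians
n = 20_000
beta = 0.75 + 0.24 * rng.beta(5, 6, n)         # support (0.75, 0.99), median ~0.86
r2 = rng.beta(3, 22, n)                        # mean 0.12
rho = -rng.beta(10, 5, n)                      # mass near -0.66, all below 0
num = np.mean([var_k(k, b, q, p) for b, q, p in zip(beta, r2, rho)]) + k**2 * tau
den = k * (np.mean([var_k(1, b, q, p) for b, q, p in zip(beta, r2, rho)]) + tau)
v_uncertain = num / den

print(v_fixed, v_uncertain)  # parameter uncertainty pushes the ratio up
```

The fixed-parameter ratio falls below 1 (mean reversion dominates through the negative cross term), while averaging the multiperiod variance over parameter draws, plus the $k^2\tau$ term from uncertainty about $E_r$, pushes the ratio above 1.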
In the last row of Table A1, all parameters are uncertain, and the values of $V(20)$ substantially exceed 1, ranging from 1.17 (when $\rho_{\mu b} = 1$) to 1.45 (when $\rho_{\mu b} = 0$). Even though the density for $\rho_{uw}$ in Figure A1 has almost all of its mass below 0, so that mean reversion is almost certainly present, parameter uncertainty causes the long-run variance to exceed the short-run variance.

As discussed in the paper, uncertainty about $E_r$ implies $V(k) \to \infty$ as $k \to \infty$. We can see from Table A1 that uncertainty about $E_r$ contributes nontrivially to $V(20)$, but somewhat less than uncertainty about $\beta$ or $R^2$ and only slightly more than uncertainty about $\rho_{uw}$. With uncertainty about only the latter three parameters, $V(20)$ is still well above 1, especially when $\rho_{\mu b} < 1$. Thus, although uncertainty about $E_r$ must eventually dominate variance at sufficiently long horizons, it does not do so here at the 20-year horizon.

The variance ratios in Table A1 increase as $\rho_{\mu b}$ decreases. In other words, less knowledge about $\mu_T$ makes long-run variance greater relative to short-run variance. We also see that drawing $\rho_{\mu b}$
from its density in Figure A1 produces the same values of $V(20)$ as fixing $\rho_{\mu b}$ at its median.

B3. Predictive System 2

System 2 is implemented in the paper for one asset, but it can be specified for $n$ assets, in which case $r_{t+1}$ and $\pi_{t+1}$ are $n \times 1$ vectors. We begin the analysis with that more general form, in which not only $x_t$ but also $r_t$ and $\pi_t$ are vectors, and later restrict it to the single-asset setting. With multiple assets, System 2 is given by

$$\begin{bmatrix} r_{t+1} \\ x_{t+1} \\ \pi_{t+1} \end{bmatrix} = \begin{bmatrix} a \\ \theta \\ 0 \end{bmatrix} + \begin{bmatrix} 0 & A_{12} & I \\ 0 & A_{22} & 0 \\ 0 & 0 & A_{33} \end{bmatrix} \begin{bmatrix} r_t \\ x_t \\ \pi_t \end{bmatrix} + \begin{bmatrix} u_{t+1} \\ v_{t+1} \\ \eta_{t+1} \end{bmatrix}, \qquad (B10)$$

which can also be written as

$$\begin{bmatrix} r_{t+1} - E_r \\ x_{t+1} - E_x \\ \pi_{t+1} - E_\pi \end{bmatrix} = \begin{bmatrix} 0 & A_{12} & I \\ 0 & A_{22} & 0 \\ 0 & 0 & A_{33} \end{bmatrix} \begin{bmatrix} r_t - E_r \\ x_t - E_x \\ \pi_t - E_\pi \end{bmatrix} + \begin{bmatrix} u_{t+1} \\ v_{t+1} \\ \eta_{t+1} \end{bmatrix}, \qquad (B11)$$

where $E_x = (I - A_{22})^{-1}\theta$, $E_r = a + A_{12}E_x$, and without loss of generality $E_\pi = 0$. We assume the errors in (B11) are i.i.d. across $t = 1, \ldots, T$:

$$\begin{bmatrix} u_t \\ v_t \\ \eta_t \end{bmatrix} \sim N\left( 0, \begin{bmatrix} \Sigma_{uu} & \Sigma_{uv} & \Sigma_{u\eta} \\ \Sigma_{vu} & \Sigma_{vv} & \Sigma_{v\eta} \\ \Sigma_{\eta u} & \Sigma_{\eta v} & \Sigma_{\eta\eta} \end{bmatrix} \right). \qquad (B12)$$

We assume the eigenvalues of both $A_{22}$ and $A_{33}$ lie inside the unit circle. As the elements of $\Sigma_{\eta\eta}$ approach zero, the above model approaches the perfect-predictor specification,

$$\begin{bmatrix} r_{t+1} - E_r \\ x_{t+1} - E_x \end{bmatrix} = \begin{bmatrix} 0 & A_{12} \\ 0 & A_{22} \end{bmatrix} \begin{bmatrix} r_t - E_r \\ x_t - E_x \end{bmatrix} + \begin{bmatrix} u_{t+1} \\ v_{t+1} \end{bmatrix}. \qquad (B13)$$

Let $\bar A$ denote the entire coefficient matrix in (B11), and let $\Sigma$ denote the entire covariance matrix in (B12). Define the vector $\zeta_t = [r_t'\;\; x_t'\;\; \pi_t']'$, and let $V_{\zeta\zeta}$ denote its unconditional covariance matrix. Then

$$V_{\zeta\zeta} = \begin{bmatrix} V_{rr} & V_{rx} & V_{r\pi} \\ V_{xr} & V_{xx} & V_{x\pi} \\ V_{\pi r} & V_{\pi x} & V_{\pi\pi} \end{bmatrix} \qquad (B14)$$

$$= \bar A V_{\zeta\zeta} \bar A' + \Sigma, \qquad (B15)$$
which can be solved as

$$\mathrm{vec}(V_{\zeta\zeta}) = \left[ I - (\bar A \otimes \bar A) \right]^{-1} \mathrm{vec}(\Sigma), \qquad (B16)$$

using the well-known identity $\mathrm{vec}(DFG) = (G' \otimes D)\,\mathrm{vec}(F)$. Let $z_t$ denote the vector of the observed data at time $t$,

$$z_t = \begin{bmatrix} r_t \\ x_t \end{bmatrix}.$$

Denote the data we observe through time $t$ as $D_t = (z_1, \ldots, z_t)$, and note that our complete data consist of $D_T$. Also define

$$E_z = \begin{bmatrix} E_r \\ E_x \end{bmatrix}, \qquad V_{zz} = \begin{bmatrix} V_{rr} & V_{rx} \\ V_{xr} & V_{xx} \end{bmatrix}, \qquad V_{z\pi} = \begin{bmatrix} V_{r\pi} \\ V_{x\pi} \end{bmatrix}. \qquad (B17)$$

Let $\phi$ denote the full set of parameters in the model, $\phi \equiv (\bar A, \Sigma, E_z)$, and let $\pi$ denote the full time series of $\pi_t$, $t = 1, \ldots, T$. To obtain the joint posterior distribution of $\phi$ and $\pi$, denoted by $p(\phi, \pi \mid D_T)$, we use an MCMC procedure in which we alternate between drawing $\pi$ from the conditional posterior $p(\pi \mid \phi, D_T)$ and drawing $\phi$ from the conditional posterior $p(\phi \mid \pi, D_T)$. The procedure for drawing $\pi$ from $p(\pi \mid \phi, D_T)$ is described in Section B3.1. The procedure for drawing $\phi$ from $p(\phi \mid \pi, D_T) \propto p(\phi)\,p(D_T, \pi \mid \phi)$ is described in Section B3.2.

B3.1. Drawing π given the parameters

To draw the time series of the unobservable values of $\pi_t$ conditional on the current parameter draws, we apply the forward filtering, backward sampling (FFBS) approach developed by Carter and Kohn (1994) and Frühwirth-Schnatter (1994). See also West and Harrison (1997, chapter 15).

Filtering

The first stage follows the standard methodology of Kalman filtering. Define

$$a_t = E(\pi_t \mid D_{t-1}), \qquad b_t = E(\pi_t \mid D_t), \qquad e_t = E(z_t \mid \pi_t, D_{t-1}), \qquad f_t = E(z_t \mid D_{t-1}), \qquad (B18)$$
$$P_t = \mathrm{Var}(\pi_t \mid D_{t-1}), \qquad Q_t = \mathrm{Var}(\pi_t \mid D_t), \qquad (B19)$$
$$R_t = \mathrm{Var}(z_t \mid \pi_t, D_{t-1}), \qquad S_t = \mathrm{Var}(z_t \mid D_{t-1}), \qquad G_t = \mathrm{Cov}(z_t, \pi_t \mid D_{t-1}). \qquad (B20)$$

Conditioning on $\phi$ is assumed throughout this section but suppressed in the notation for convenience. First observe that

$$\pi_1 \mid D_0 \sim N(a_1, P_1), \qquad (B21)$$
where $D_0$ denotes the null information set, so that the unconditional moments of $\pi$ are given by $a_1 = E_\pi = 0$ and $P_1 = V_{\pi\pi}$. Also,

$$z_1 \mid D_0 \sim N(f_1, S_1), \qquad (B22)$$

where $f_1 = E_z$ and $S_1 = V_{zz}$. Note that

$$z_1 \mid \pi_1, D_0 \sim N(e_1, R_1), \qquad (B23)$$

and that

$$G_1 = V_{z\pi}, \qquad (B24)$$

where

$$e_1 = f_1 + G_1 P_1^{-1}(\pi_1 - a_1), \qquad (B25)$$
$$R_1 = S_1 - G_1 P_1^{-1} G_1'. \qquad (B26)$$

Combining this density with equation (B21) using Bayes' rule gives

$$\pi_1 \mid D_1 \sim N(b_1, Q_1), \qquad (B27)$$

where

$$b_1 = a_1 + P_1\left( P_1 + G_1' R_1^{-1} G_1 \right)^{-1} G_1' R_1^{-1}(z_1 - f_1), \qquad (B28)$$
$$Q_1 = P_1\left( P_1 + G_1' R_1^{-1} G_1 \right)^{-1} P_1. \qquad (B29)$$

Continuing in this fashion, we find that all conditional densities are normally distributed, and we obtain all the required moments for $t = 2, \ldots, T$:

$$a_t = A_{33} b_{t-1}, \qquad (B30)$$
$$f_t = \begin{bmatrix} E_r + A_{12}(x_{t-1} - E_x) + b_{t-1} \\ E_x + A_{22}(x_{t-1} - E_x) \end{bmatrix}, \qquad (B31)$$
$$S_t = \begin{bmatrix} Q_{t-1} & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} \Sigma_{uu} & \Sigma_{uv} \\ \Sigma_{vu} & \Sigma_{vv} \end{bmatrix}, \qquad (B32)$$
$$G_t = \begin{bmatrix} Q_{t-1} A_{33}' \\ 0 \end{bmatrix} + \begin{bmatrix} \Sigma_{u\eta} \\ \Sigma_{v\eta} \end{bmatrix}, \qquad (B33)$$
$$P_t = A_{33} Q_{t-1} A_{33}' + \Sigma_{\eta\eta}, \qquad (B34)$$
$$e_t = f_t + G_t P_t^{-1}(\pi_t - a_t), \qquad (B35)$$
$$R_t = S_t - G_t P_t^{-1} G_t', \qquad (B36)$$
$$b_t = a_t + P_t\left( P_t + G_t' R_t^{-1} G_t \right)^{-1} G_t' R_t^{-1}(z_t - f_t) \qquad (B37)$$
$$\;\;\; = a_t + G_t' S_t^{-1}(z_t - f_t), \qquad (B38)$$
$$Q_t = P_t\left( P_t + G_t' R_t^{-1} G_t \right)^{-1} P_t. \qquad (B39)$$
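For intuition, the filtering recursions can be exercised in a stripped-down scalar case with no predictors $x_t$, so that $z_t = r_t$, $A_{33} = \delta$, and the update (B38)-(B39) collapses to scalars. The model and parameter values below are hypothetical simplifications, not the specification estimated in the paper.

```python
import numpy as np

# Scalar Kalman filter for:  r_{t+1} = E_r + pi_t + u_{t+1},
#                            pi_{t+1} = delta * pi_t + eta_{t+1},
# with Corr(u, eta) = rho.  Hypothetical parameter values throughout.
rng = np.random.default_rng(2)
E_r, delta, sig_u, sig_eta, rho = 0.06, 0.9, 0.15, 0.05, -0.5
T = 500

cov = np.array([[sig_u**2, rho * sig_u * sig_eta],
                [rho * sig_u * sig_eta, sig_eta**2]])
shocks = rng.multivariate_normal([0.0, 0.0], cov, size=T)   # (u_t, eta_t)
V_pp = sig_eta**2 / (1 - delta**2)                          # unconditional Var(pi)
pi = np.zeros(T)
r = np.zeros(T)
prev = np.sqrt(V_pp) * rng.standard_normal()                # pi_0 from steady state
for t in range(T):
    r[t] = E_r + prev + shocks[t, 0]                        # observation
    pi[t] = delta * prev + shocks[t, 1]                     # latent state
    prev = pi[t]

# forward filtering: b_t = E(pi_t | D_t), Q_t = Var(pi_t | D_t)
b = np.zeros(T)
Q = np.zeros(T)
a_t, P_t = 0.0, V_pp                                        # (B21): a_1 = 0, P_1 = V_pp
f_t, S_t = E_r, V_pp + sig_u**2                             # (B22)
G_t = delta * V_pp + rho * sig_u * sig_eta                  # Cov(r_1, pi_1 | D_0)
for t in range(T):
    b[t] = a_t + (G_t / S_t) * (r[t] - f_t)                 # (B38)
    Q[t] = P_t - G_t**2 / S_t                               # scalar equivalent of (B39)
    a_t = delta * b[t]                                      # (B30)
    f_t = E_r + b[t]                                        # (B31), no x_t block
    S_t = Q[t] + sig_u**2                                   # (B32)
    G_t = delta * Q[t] + rho * sig_u * sig_eta              # (B33)
    P_t = delta**2 * Q[t] + sig_eta**2                      # (B34)

print(np.corrcoef(b, pi)[0, 1])  # filtered mean should track the true state
```

The retained $\{b_t, Q_t\}$ (and, in the general case, $\{a_t, S_t, G_t, P_t\}$) are exactly the quantities the backward-sampling stage needs.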
The values of $\{a_t, b_t, Q_t, S_t, G_t, P_t\}$ for $t = 1, \ldots, T$ are retained for the next stage. Equations (B32) through (B34) are derived as

$$\begin{bmatrix} S_t & G_t \\ G_t' & P_t \end{bmatrix} = \mathrm{Var}(\zeta_t \mid D_{t-1}) = \bar A\,\mathrm{Var}(\zeta_{t-1} \mid D_{t-1})\,\bar A' + \Sigma = \bar A \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & Q_{t-1} \end{bmatrix} \bar A' + \Sigma = \begin{bmatrix} Q_{t-1} & 0 & Q_{t-1}A_{33}' \\ 0 & 0 & 0 \\ A_{33}Q_{t-1} & 0 & A_{33}Q_{t-1}A_{33}' \end{bmatrix} + \begin{bmatrix} \Sigma_{uu} & \Sigma_{uv} & \Sigma_{u\eta} \\ \Sigma_{vu} & \Sigma_{vv} & \Sigma_{v\eta} \\ \Sigma_{\eta u} & \Sigma_{\eta v} & \Sigma_{\eta\eta} \end{bmatrix}.$$

Sampling: drawing π

We wish to draw $(\pi_1, \ldots, \pi_T)$ conditional on $D_T$ (and the parameters, $\phi$). The backward-sampling approach relies on the Markov property of the evolution of $\zeta_t$ and the resulting identity,

$$p(\zeta_1, \ldots, \zeta_T \mid D_T) = p(\zeta_T \mid D_T)\,p(\zeta_{T-1} \mid \zeta_T, D_{T-1}) \cdots p(\zeta_1 \mid \zeta_2, D_1). \qquad (B40)$$

We first sample $\pi_T$ from $p(\pi_T \mid D_T)$, the normal density obtained in the last step of the filtering. Then, for $t = T-1, T-2, \ldots, 1$, we sample $\pi_t$ from the conditional density $p(\zeta_t \mid \zeta_{t+1}, D_t)$. (Note that the first two subvectors of $\zeta_t$ are already observed and thus need not be sampled.) To obtain that conditional density, first note that

$$\zeta_{t+1} \mid D_t \sim N\left( \begin{bmatrix} f_{t+1} \\ a_{t+1} \end{bmatrix}, \begin{bmatrix} S_{t+1} & G_{t+1} \\ G_{t+1}' & P_{t+1} \end{bmatrix} \right), \qquad (B41)$$

and

$$\zeta_t \mid D_t \sim N\left( \begin{bmatrix} r_t \\ x_t \\ b_t \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & Q_t \end{bmatrix} \right), \qquad (B42)$$

$$\mathrm{Cov}(\zeta_t, \zeta_{t+1} \mid D_t) = \mathrm{Var}(\zeta_t \mid D_t)\,\bar A' = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ Q_t & 0 & Q_t A_{33}' \end{bmatrix}. \qquad (B43)$$
Therefore,

$$\zeta_t \mid \zeta_{t+1}, D_t \sim N(h_t, H_t),$$

where

$$h_t = E(\zeta_t \mid D_t) + \left[ \mathrm{Cov}(\zeta_t, \zeta_{t+1} \mid D_t) \right]\left[ \mathrm{Var}(\zeta_{t+1} \mid D_t) \right]^{-1}\left[ \zeta_{t+1} - E(\zeta_{t+1} \mid D_t) \right] = \begin{bmatrix} r_t \\ x_t \\ b_t \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ Q_t & 0 & Q_t A_{33}' \end{bmatrix} \begin{bmatrix} S_{t+1} & G_{t+1} \\ G_{t+1}' & P_{t+1} \end{bmatrix}^{-1} \begin{bmatrix} z_{t+1} - f_{t+1} \\ \pi_{t+1} - a_{t+1} \end{bmatrix} \qquad (B44)$$

and

$$H_t = \mathrm{Var}(\zeta_t \mid D_t) - \left[ \mathrm{Cov}(\zeta_t, \zeta_{t+1} \mid D_t) \right]\left[ \mathrm{Var}(\zeta_{t+1} \mid D_t) \right]^{-1}\left[ \mathrm{Cov}(\zeta_t, \zeta_{t+1} \mid D_t) \right]' = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & Q_t \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ Q_t & 0 & Q_t A_{33}' \end{bmatrix} \begin{bmatrix} S_{t+1} & G_{t+1} \\ G_{t+1}' & P_{t+1} \end{bmatrix}^{-1} \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ Q_t & 0 & Q_t A_{33}' \end{bmatrix}'.$$

The mean and covariance matrix of $\pi_t$ are taken as the relevant elements of $h_t$ and $H_t$.

In the rest of the Appendix, we discuss the special case (implemented in the paper) in which $r_t$ and $\pi_t$ are scalars. The dimensions of $\bar A$ and $\Sigma$ are then $(K+2) \times (K+2)$, where $K$ is the number of predictors in the vector $x_t$. In this case, $A_{12}$ is a $1 \times K$ vector, whose transpose we denote as $b$ (distinct from $b_t$, defined previously), and $A_{33}$ is a scalar that we denote as $\delta$. We also denote $A_{22}$ as simply $A$ (distinct from $\bar A$, defined previously).

B3.2. Drawing the parameters given π

Prior distributions

With $r_t$ and $\pi_t$ being scalars,

$$\Sigma = \begin{bmatrix} \sigma_u^2 & \sigma_{uv}' & \sigma_{u\eta} \\ \sigma_{vu} & \Sigma_{vv} & \sigma_{v\eta} \\ \sigma_{\eta u} & \sigma_{\eta v}' & \sigma_\eta^2 \end{bmatrix} = \begin{bmatrix} \Sigma_{\xi\xi} & \sigma_{\xi\eta} \\ \sigma_{\xi\eta}' & \sigma_\eta^2 \end{bmatrix},$$

where $\xi_t \equiv [u_t \;\; v_t']'$. We wish to be informative about $\sigma_\eta^2$, to varying degrees, while being noninformative about $\Sigma_{\xi\xi}$ and $\sigma_{\xi\eta}$. This objective is accomplished by specifying a prior for $\Sigma$ equal to the posterior that obtains when a diffuse prior for $\Sigma$ is combined with a hypothetical sample of $T_0$ observations of $\eta_t$ and $S_0$ observations of $\xi_t$, where $S_0 \le T_0$ and the shorter period is a subset of the longer. This posterior is given by Stambaugh (1997), except that we impose the additional restriction that the population means of $\eta_t$ and $\xi_t$ are zero. The posterior in Stambaugh (1997)
relies on a change of variables, which we adopt here as well. Specifically, $c = (1/\sigma_\eta^2)\,\sigma_{\xi\eta}$, and $\Omega = \Sigma_{\xi\xi} - \sigma_\eta^2 cc'$. The hypothetical $T_0$ observations of $\eta_t$ produce the sample variance $\hat\sigma_{\eta,0}^2$. The $S_0$ observations of $\xi_t$ and $\eta_t$ produce the sample covariance matrix $\hat\Sigma_0$, giving $\hat\sigma_{\eta,0}^2$, $\hat c$, and $\hat\Omega$. (All hypothetical second moments are non-central; equivalently, the hypothetical sample means are zero.) With the change in variables from $\Sigma$ to $(\sigma_\eta^2, c, \Omega)$, the latter quantities have the posterior, taken here as the prior, given by $p(\sigma_\eta^2, c, \Omega) = p(\sigma_\eta^2)\,p(c \mid \Omega)\,p(\Omega)$, where¹

$$\frac{T_0\,\hat\sigma_{\eta,0}^2}{\sigma_\eta^2} \sim \chi^2_{T_0 - K - 1}, \qquad (B45)$$
$$c \mid \Omega \sim N\left( \hat c,\; \frac{1}{S_0\,\hat\sigma_{\eta,0}^2}\,\Omega \right), \qquad (B46)$$
$$\Omega \sim IW(S_0\hat\Omega,\; S_0 - K). \qquad (B47)$$

¹The densities are given as follows:
$$p(\sigma_\eta^2) \propto \sigma_\eta^{-(T_0-K+1)} \exp\left\{ -\frac{T_0\,\hat\sigma_{\eta,0}^2}{2\sigma_\eta^2} \right\} \quad \left( \text{equivalently, } p(\sigma_\eta) \propto \sigma_\eta^{-(T_0-K)} \exp\left\{ -\frac{T_0\,\hat\sigma_{\eta,0}^2}{2\sigma_\eta^2} \right\} \right),$$
$$p(c \mid \Omega) \propto |\Omega|^{-1/2} \exp\left\{ -\frac{1}{2}(c - \hat c)' \left[ \frac{1}{S_0\,\hat\sigma_{\eta,0}^2}\,\Omega \right]^{-1} (c - \hat c) \right\}, \qquad p(\Omega) \propto |\Omega|^{-(S_0+2)/2} \exp\left\{ -\frac{1}{2}\,\mathrm{tr}\left[ (S_0\hat\Omega)\,\Omega^{-1} \right] \right\}.$$

By specifying different values of $\hat\sigma_{\eta,0}^2$ and $T_0$, we vary the degree to which the prior imposes a belief that $\sigma_\eta^2$ is close to zero, where the limiting case of $\sigma_\eta^2 = 0$ corresponds to perfect predictors. The larger we make $T_0$, the greater is the prior information we supply about $\sigma_\eta^2$. We set $S_0 = K+1$, where $K$ is the number of predictors, which makes the prior on $c$ and $\Omega$ essentially noninformative (as informative as a sample of only $K+1$ observations, where $K = 1$ or $3$). The specification of $\hat\Sigma_0$ is thus inconsequential, and we simply set it to a scalar times the identity matrix. For the cases with imperfect predictors, we set $T_0 = K+2$ and then set $\hat\sigma_{\eta,0}$ equal to either 0.5 or 0.1 when using annual data and either 0.1 or 0.3 when using quarterly data.

We also wish to be informative about $\delta$, the autocorrelation of $\pi_t$. Here we specify priors for $\delta$ identical to those specified for the autocorrelation ($\beta$) of the conditional mean $\mu_t$ in the predictive system analyzed previously. Specifically, the prior for $\delta$ is a truncated normal distribution, with the mean and standard deviation of the corresponding non-truncated distribution denoted as $\bar\delta$ and $\sigma_\delta$. The distribution is then truncated to satisfy the stationarity requirement $|\delta| < 1$. We set $\sigma_\delta$ to 0.25 with annual data and to 0.15 with quarterly data, and we set $\bar\delta$ to 0.99 in both cases.

The priors on the remaining parameters are noninformative, except for the condition required for stationarity of $x_t$ that $\rho(A)$, the spectral radius of $A$, be less than 1. Specifically, define

$$\psi = \begin{bmatrix} \mathrm{vec}\begin{pmatrix} a & \theta' \\ b & A' \end{pmatrix} \\ \delta \end{bmatrix}. \qquad (B48)$$

Then the prior for $\psi$ is given by

$$p(\psi) \propto \exp\left\{ -\frac{1}{2}(\psi - \bar\psi)' V_\psi^{-1} (\psi - \bar\psi) \right\} 1_S, \qquad (B49)$$

where

$$\bar\psi = \begin{bmatrix} 0 \\ \bar\delta \end{bmatrix}, \qquad V_\psi = \begin{bmatrix} \sigma_\psi^2 I_{(K+1)^2} & 0 \\ 0 & \sigma_\delta^2 \end{bmatrix}, \qquad (B50)$$

$\sigma_\psi^2$ is set to a large number, $1_S$ equals 1 under the conditions that $\rho(A) < 1$ and $|\delta| < 1$, and $1_S$ equals 0 otherwise. We also assume that the priors for $\Sigma$ and $\psi$ are independent.

Posterior distributions

Define

$$y_{t+1} = \begin{bmatrix} r_{t+1} - \pi_t \\ x_{t+1} \\ \pi_{t+1} \end{bmatrix}, \qquad Y_t = \begin{bmatrix} I_{K+1} \otimes [1 \;\; x_t'] & 0 \\ 0 & \pi_t \end{bmatrix}, \qquad \upsilon_{t+1} = \begin{bmatrix} u_{t+1} \\ v_{t+1} \\ \eta_{t+1} \end{bmatrix},$$

$$y = \begin{bmatrix} y_2 \\ \vdots \\ y_T \end{bmatrix}, \qquad Y = \begin{bmatrix} Y_1 \\ \vdots \\ Y_{T-1} \end{bmatrix}, \qquad \upsilon = \begin{bmatrix} \upsilon_2 \\ \vdots \\ \upsilon_T \end{bmatrix}.$$

The sample representation of the predictive system in (B10) is then $y = Y\psi + \upsilon$, for $\psi$ defined in (B48). For tractability we employ the conditional likelihood, which treats values in period $t = 1$ as non-stochastic. We can then apply Gibbs sampling to draw $\psi$ and $\Sigma$. This simplification seems reasonable, given that $T = 206$ for our main results.²

²An alternative would be to employ the exact likelihood that includes the density of the values at $t = 1$ and then use the Metropolis-Hastings algorithm, with the conditional posteriors given here used as proposal densities.
Drawing ψ

Given the prior for $\psi$ in (B49), we can apply standard results from the multivariate regression model (e.g., Zellner, 1971) to obtain the full conditional posterior for $\psi$ as

$$\psi \sim N(\tilde\psi, \tilde V_\psi)\,1_S, \qquad (B51)$$

where

$$\tilde\psi = \tilde V_\psi \left[ V_\psi^{-1}\bar\psi + Y'\left( I_{T-1} \otimes \Sigma^{-1} \right) y \right]$$

and

$$\tilde V_\psi = \left[ V_\psi^{-1} + Y'\left( I_{T-1} \otimes \Sigma^{-1} \right) Y \right]^{-1}.$$

Drawing Σ

Define

$$\hat\Sigma = \frac{1}{T-1} \sum_{t=1}^{T-1} (y_{t+1} - Y_t\psi)(y_{t+1} - Y_t\psi)' = \begin{bmatrix} \hat\Sigma_{\xi\xi} & \hat\sigma_{\xi\eta} \\ \hat\sigma_{\xi\eta}' & \hat\sigma_\eta^2 \end{bmatrix},$$

$$\tilde\Sigma = \begin{bmatrix} \tilde\Sigma_{\xi\xi} & \tilde\sigma_{\xi\eta} \\ \tilde\sigma_{\xi\eta}' & \tilde\sigma_\eta^2 \end{bmatrix} = \frac{1}{\tilde S}\left( S_0\hat\Sigma_0 + (T-1)\hat\Sigma \right), \qquad \tilde S = S_0 + (T-1),$$

$$\tilde c = (1/\tilde\sigma_\eta^2)\,\tilde\sigma_{\xi\eta}, \qquad \tilde\Omega = \tilde\Sigma_{\xi\xi} - \tilde\sigma_\eta^2\tilde c\tilde c', \qquad \tilde\sigma_\eta^2 = \frac{1}{\tilde T}\left( T_0\,\hat\sigma_{\eta,0}^2 + (T-1)\hat\sigma_\eta^2 \right), \qquad \tilde T = T_0 + (T-1).$$

A draw of $\Sigma$ is constructed by drawing $(\sigma_\eta^2, c, \Omega)$. The joint full conditional of the latter is given by $p(\sigma_\eta^2, c, \Omega \mid \cdot) = p(\sigma_\eta^2 \mid \cdot)\,p(c \mid \Omega, \cdot)\,p(\Omega \mid \cdot)$, where the posteriors on the right-hand side are of the same form as (B45) through (B47):

$$\frac{\tilde T\tilde\sigma_\eta^2}{\sigma_\eta^2} \sim \chi^2_{\tilde T - K - 1}, \qquad (B52)$$
$$c \mid \Omega, \cdot \sim N\left( \tilde c,\; \frac{1}{\tilde S\tilde\sigma_\eta^2}\,\Omega \right), \qquad (B53)$$
$$\Omega \mid \cdot \sim IW(\tilde S\tilde\Omega,\; \tilde S - K). \qquad (B54)$$

The draw of $\Sigma$ is then obtained by computing $\Sigma_{\xi\xi} = \Omega + \sigma_\eta^2 cc'$ and $\sigma_{\xi\eta} = \sigma_\eta^2 c$.

Our results are based on 25,000 draws from the posterior distribution. First, we generate a sequence of 76,000 draws. We discard the first 1,000 draws as a burn-in and take every third draw from the rest to obtain a series of 25,000 draws that exhibit little serial correlation.
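The truncated draw in (B51) can be sketched as rejection sampling: draw from the untruncated normal and keep the draw only if the indicator $1_S$ equals 1. The posterior moments below are hypothetical placeholders; in the actual sampler they are built from $Y$, $y$, $\Sigma$, and the prior.

```python
import numpy as np

# Sketch of the truncated normal draw in (B51):  psi ~ N(psi_tilde, V_tilde) * 1_S,
# where 1_S = 1 iff the spectral radius of A is below 1 and |delta| < 1.
# psi_tilde and V_tilde below are hypothetical placeholders.
rng = np.random.default_rng(3)
K = 3                                       # number of predictors (illustrative)

def draw_psi(psi_tilde, V_tilde, max_tries=1000):
    """Rejection sampling of psi from N(psi_tilde, V_tilde) truncated by 1_S."""
    L = np.linalg.cholesky(V_tilde)
    for _ in range(max_tries):
        psi = psi_tilde + L @ rng.standard_normal(len(psi_tilde))
        # first (K+1)^2 elements are vec([a theta'; b A']), last element is delta
        M = psi[:(K + 1) ** 2].reshape(K + 1, K + 1, order="F")
        A = M[1:, 1:].T                     # K x K predictor VAR coefficient matrix
        delta = psi[-1]
        if np.max(np.abs(np.linalg.eigvals(A))) < 1 and abs(delta) < 1:
            return psi                      # 1_S = 1: keep the draw
    raise RuntimeError("no stationary draw found")

dim = (K + 1) ** 2 + 1
psi_tilde = np.zeros(dim)
psi_tilde[-1] = 0.97                        # delta's conditional mean near its prior mean
V_tilde = 0.05 * np.eye(dim)                # placeholder posterior covariance
psi = draw_psi(psi_tilde, V_tilde)
```

Rejection is cheap here because stationary draws are not rare; if the conditional posterior placed most of its mass outside the stationary region, a more careful truncated sampler would be needed.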
B3.3. Predictive variance

In addition to the notation from equations (B11), (B12), and (B14), define also

$$E_\zeta = \begin{bmatrix} E_r \\ E_x \\ 0 \end{bmatrix}, \qquad \epsilon_t = \begin{bmatrix} u_t \\ v_t \\ \eta_t \end{bmatrix}, \qquad (B55)$$

and

$$\zeta_t^e = \zeta_t - E_\zeta. \qquad (B56)$$

Equation (B11) can then be written as

$$\zeta_{t+1}^e = \bar A\zeta_t^e + \epsilon_{t+1}. \qquad (B57)$$

For $i > 1$, successive substitution using (B57) gives

$$\zeta_{t+i}^e = \bar A^i\zeta_t^e + \bar A^{i-1}\epsilon_{t+1} + \bar A^{i-2}\epsilon_{t+2} + \cdots + \epsilon_{t+i}. \qquad (B58)$$

Define

$$\zeta_{T,T+k} = \sum_{i=1}^k \zeta_{T+i}, \qquad \zeta_{T,T+k}^e = \sum_{i=1}^k \zeta_{T+i}^e = \zeta_{T,T+k} - kE_\zeta.$$

Summing (B57) over $k$ periods then gives

$$\begin{aligned}
\zeta_{T,T+k}^e &= \left( \sum_{i=1}^k \bar A^i \right)\zeta_T^e + \left( I + \bar A + \cdots + \bar A^{k-1} \right)\epsilon_{T+1} + \left( I + \bar A + \cdots + \bar A^{k-2} \right)\epsilon_{T+2} + \cdots + \epsilon_{T+k} \\
&= (\Lambda_{k+1} - I)\zeta_T^e + \Lambda_k\epsilon_{T+1} + \Lambda_{k-1}\epsilon_{T+2} + \cdots + \epsilon_{T+k}, \qquad (B59)
\end{aligned}$$

where

$$\Lambda_i = I + \bar A + \cdots + \bar A^{i-1} = (I - \bar A)^{-1}(I - \bar A^i). \qquad (B60)$$

It then follows that

$$E\left( \zeta_{T,T+k}^e \mid D_T, \phi, \pi_T \right) = (\Lambda_{k+1} - I)\zeta_T^e, \qquad E\left( \zeta_{T,T+k} \mid D_T, \phi, \pi_T \right) = (\Lambda_{k+1} - I)\zeta_T^e + kE_\zeta, \qquad (B61)$$
and

$$\mathrm{Var}\left( \zeta_{T,T+k} \mid D_T, \phi, \pi_T \right) = \mathrm{Var}\left( \zeta_{T,T+k}^e \mid D_T, \phi, \pi_T \right) = \sum_{i=1}^k \Lambda_i\Sigma\Lambda_i'. \qquad (B62)$$

The first and second moments of (B61) given $D_T$ and $\phi$ are given by

$$E(\zeta_{T,T+k} \mid D_T, \phi) = (\Lambda_{k+1} - I)\begin{bmatrix} r_T - E_r \\ x_T - E_x \\ b_T \end{bmatrix} + kE_\zeta \qquad (B63)$$

and

$$\mathrm{Var}\left[ E\left( \zeta_{T,T+k} \mid D_T, \phi, \pi_T \right) \mid D_T, \phi \right] = (\Lambda_{k+1} - I)\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & Q_T \end{bmatrix}(\Lambda_{k+1} - I)'. \qquad (B64)$$

Combining (B62) and (B64) gives

$$\begin{aligned}
\mathrm{Var}\left( \zeta_{T,T+k} \mid D_T, \phi \right) &= E\left[ \mathrm{Var}\left( \zeta_{T,T+k} \mid D_T, \phi, \pi_T \right) \mid D_T, \phi \right] + \mathrm{Var}\left[ E\left( \zeta_{T,T+k} \mid D_T, \phi, \pi_T \right) \mid D_T, \phi \right] \\
&= \sum_{i=1}^k \Lambda_i\Sigma\Lambda_i' + (\Lambda_{k+1} - I)\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & Q_T \end{bmatrix}(\Lambda_{k+1} - I)'. \qquad (B65)
\end{aligned}$$

By evaluating (B63) and (B65) for repeated draws of $\phi$ from its posterior, the predictive variance of $\zeta_{T,T+k}$ can be computed using the decomposition

$$\mathrm{Var}(\zeta_{T,T+k} \mid D_T) = E\{\mathrm{Var}(\zeta_{T,T+k} \mid \phi, D_T) \mid D_T\} + \mathrm{Var}\{E(\zeta_{T,T+k} \mid \phi, D_T) \mid D_T\}. \qquad (B66)$$

Finally, the predictive variance of $r_{T,T+k}$ is the (1,1) element of $\mathrm{Var}(\zeta_{T,T+k} \mid D_T)$.

B3.4. Perfect predictors

If the predictors are perfect, then $\pi_t$ is absent from the first equation in (B10) and the third equation simply drops out. In that case the model consists of the two equations,

$$r_{t+1} = a + b'x_t + u_{t+1}, \qquad (B67)$$
$$x_{t+1} = \theta + Ax_t + v_{t+1}, \qquad (B68)$$

combined with the distributional assumption on the residuals,

$$\begin{bmatrix} u_t \\ v_t \end{bmatrix} \sim N\left( 0, \begin{bmatrix} \sigma_u^2 & \sigma_{uv}' \\ \sigma_{vu} & \Sigma_{vv} \end{bmatrix} \right). \qquad (B69)$$
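The two-equation system (B67)-(B68) is easy to simulate, which provides a check on the OLS quantities ($\hat B = (X'X)^{-1}X'Y$) that the posterior simulation is built from. The parameter values below are hypothetical, for a single predictor ($K = 1$).

```python
import numpy as np

# Simulation sketch of the perfect-predictor model (B67)-(B68) with one
# predictor; all parameter values are hypothetical.
rng = np.random.default_rng(4)
a, b, theta, A = 0.04, 0.1, 0.01, 0.8
sig_u, sig_v, rho_uv = 0.15, 0.02, -0.7
cov = np.array([[sig_u**2, rho_uv * sig_u * sig_v],
                [rho_uv * sig_u * sig_v, sig_v**2]])
T = 5000

x = np.zeros(T)
r = np.zeros(T)
x[0] = theta / (1 - A)                           # start at the unconditional mean
shocks = rng.multivariate_normal([0, 0], cov, size=T)
for t in range(T - 1):
    r[t + 1] = a + b * x[t] + shocks[t + 1, 0]      # (B67)
    x[t + 1] = theta + A * x[t] + shocks[t + 1, 1]  # (B68)

# OLS of [r_{t+1}, x_{t+1}] on [1, x_t]:  B_hat = (X'X)^{-1} X'Y
X = np.column_stack([np.ones(T - 1), x[:-1]])
Y = np.column_stack([r[1:], x[1:]])
B_hat = np.linalg.solve(X.T @ X, X.T @ Y)        # column 0: (a, b)'; column 1: (theta, A)'
print(B_hat)
```

With a long simulated sample, the OLS estimates land close to the true coefficients, while the residual cross-product matrix (not computed above) would estimate the covariance matrix in (B69).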
This perfect-predictor model obtains as the limiting case of the above imperfect-predictor setting as $\sigma_\eta^2 \to 0$. Our perfect-predictor results can be computed in that manner, by setting $T_0$ to a large number and $\hat\sigma_{\eta,0}$ to a small number. For completeness, however, we also provide here the simplified calculations that arise in that special case. In this case $\phi$ denotes the full set of parameters in equations (B67), (B68), and (B69), and $\Sigma$ denotes the covariance matrix in (B69). Let $B$ denote the matrix of coefficients in (B67) and (B68),

$$B = \begin{bmatrix} a & \theta' \\ b & A' \end{bmatrix},$$

and let $\psi = \mathrm{vec}(B)$.

Posterior distributions under perfect predictors

We specify the prior distribution on $\phi$ as $p(\phi) = p(\psi)\,p(\Sigma)$. The prior on $\psi$ is $p(\psi) \propto 1_S$, where $1_S$ equals 1 if $\rho(A) < 1$ and equals 0 otherwise. The prior on $\Sigma$ is $p(\Sigma) \propto |\Sigma|^{-(K+2)/2}$. Define the following notation: $r = [r_2\; r_3\; \cdots\; r_T]'$, $Q_+ = [x_2\; x_3\; \cdots\; x_T]'$, $Q_- = [x_1\; x_2\; \cdots\; x_{T-1}]'$, $X = [\iota_{T-1}\;\; Q_-]$, where $\iota_{T-1}$ denotes a $(T-1) \times 1$ vector of ones, $Y = [r\;\; Q_+]$, $\hat B = (X'X)^{-1}X'Y$, and $S = (Y - X\hat B)'(Y - X\hat B)$. We first draw $\Sigma^{-1}$ from a Wishart distribution with $T - K - 2$ degrees of freedom and parameter matrix $S^{-1}$. Given that draw of $\Sigma^{-1}$, we then draw $\psi$ from a normal distribution with mean $\hat\psi = \mathrm{vec}(\hat B)$ and covariance matrix $\Sigma \otimes (X'X)^{-1}$. That draw of $\phi$ is retained as a draw from $p(\phi \mid D_T)$ if $\rho(A) < 1$.

Predictive variance under perfect predictors

The conditional moments of the $k$-period return $r_{T,T+k}$ are given by

$$E(r_{T,T+k} \mid D_T, \phi) = ka + b'\Psi_{k-1}\theta + b'\Lambda_k x_T, \qquad (B70)$$

$$\mathrm{Var}(r_{T,T+k} \mid D_T, \phi) = k\sigma_u^2 + 2b'\Psi_{k-1}\sigma_{vu} + b'\left( \sum_{i=1}^{k-1} \Lambda_i\Sigma_{vv}\Lambda_i' \right)b, \qquad (B71)$$

where

$$\Lambda_i = I + A + \cdots + A^{i-1} = (I - A)^{-1}(I - A^i), \qquad (B72)$$
$$\Psi_{k-1} = \Lambda_1 + \Lambda_2 + \cdots + \Lambda_{k-1} = (I - A)^{-1}\left[ kI - (I - A)^{-1}(I - A^k) \right]. \qquad (B73)$$

The first term in (B71) reflects i.i.d. uncertainty. The second term reflects correlation between unexpected returns and innovations in future $x_{T+i}$'s, which deliver innovations in future $\mu_{T+i}$'s.
That term can be positive or negative and captures any mean reversion. The third term, always positive, reflects uncertainty about future $x_{T+i}$'s, and thus uncertainty about future $\mu_{T+i}$'s. This third term, which contains a summation, can also be written without the summation as

$$b'\left( \sum_{i=1}^{k-1} \Lambda_i\Sigma_{vv}\Lambda_i' \right)b = (b' \otimes b')\left[ (I - A)^{-1} \otimes (I - A)^{-1} \right]\left[ kI - \Lambda_k \otimes I - I \otimes \Lambda_k + (I - A \otimes A)^{-1}\left( I - (A \otimes A)^k \right) \right]\mathrm{vec}(\Sigma_{vv}).$$

Applying the standard variance decomposition

$$\mathrm{Var}(r_{T,T+k} \mid D_T) = E\{\mathrm{Var}(r_{T,T+k} \mid D_T, \phi) \mid D_T\} + \mathrm{Var}\{E(r_{T,T+k} \mid D_T, \phi) \mid D_T\}, \qquad (B74)$$

the predictive variance $\mathrm{Var}(r_{T,T+k} \mid D_T)$ can be computed as the sum of the posterior mean of the right-hand side of equation (B71) and the posterior variance of the right-hand side of equation (B70). These posterior moments are computed from the posterior draws of $\phi$, which are described above.

B4. Additional empirical results

B4.1. Predictive regressions

This section reports the results from standard predictive regressions,

$$r_{t+1} = a + b'x_t + e_{t+1}, \qquad (B75)$$

for various combinations of the three predictors used in the paper. The results, obtained by OLS, are reported in Table A2. Panel A reports the results based on annual data; the results based on quarterly data are reported in Panel B. In both panels, the first three regressions contain just one predictor, while the fourth regression contains all three.
The table reports the estimated coefficients as well as the t-statistics, along with the bootstrapped p-values associated with these t-statistics as well as with the $R^2$.³

³In the bootstrap, we repeat the following procedure 20,000 times: (i) Resample $T$ pairs of $(\hat v_t, \hat e_t)$, with replacement, from the set of OLS residuals from regressions (B68) and (B75); (ii) Build up the time series of $x_t$, starting from the unconditional mean and iterating forward on equation (B68), using the OLS estimates $(\hat\theta, \hat A)$ and the resampled values of $\hat v_t$; (iii) Construct the time series of returns, $r_t$, by adding the resampled values of $\hat e_t$ to the sample mean (i.e., under the null that returns are not predictable); (iv) Use the resulting series of $x_t$ and $r_t$ to estimate regressions (B68) and (B75) by OLS. The bootstrapped p-value associated with the reported t-statistic (or $R^2$) is the relative frequency with which the reported quantity is smaller than its 20,000 counterparts bootstrapped under the null of no predictability.
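The procedure in footnote 3 can be sketched as follows for a single predictor, with the number of replications reduced to 2,000 to keep the run time modest and with simulated data (having no true predictability) in place of the actual series.

```python
import numpy as np

# Sketch of the bootstrap p-value in footnote 3, single predictor.
# The "observed" sample below is simulated, not the actual data.
rng = np.random.default_rng(5)

def ols(y, z):
    X = np.column_stack([np.ones(len(z)), z])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X, coef, y - X @ coef

def tstat(y, z):
    X, coef, e = ols(y, z)
    s2 = e @ e / (len(y) - X.shape[1])
    return coef[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

T, A_true = 200, 0.9
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = A_true * x[t] + rng.standard_normal()    # persistent predictor
r = 0.05 + 0.15 * rng.standard_normal(T)                # unpredictable returns

t_obs = tstat(r[1:], x[:-1])
_, (theta_hat, A_hat), v_hat = ols(x[1:], x[:-1])       # predictor AR(1), eq. (B68)
_, _, e_hat = ols(r[1:], x[:-1])                        # return regression, eq. (B75)

n_boot, count = 2_000, 0
for _ in range(n_boot):
    idx = rng.integers(0, T - 1, size=T - 1)   # (i) resample residual pairs
    xb = np.empty(T)                           # (ii) rebuild x_t from (B68)
    xb[0] = theta_hat / (1 - A_hat)
    for t in range(T - 1):
        xb[t + 1] = theta_hat + A_hat * xb[t] + v_hat[idx[t]]
    rb = r.mean() + e_hat[idx]                 # (iii) returns under the null
    count += t_obs < tstat(rb, xb[:-1])        # (iv) re-estimate by OLS
p_value = count / n_boot
print(p_value)
```

Resampling $(\hat v_t, \hat e_t)$ as pairs preserves the contemporaneous correlation between return and predictor innovations, which matters for the small-sample bias of predictive-regression t-statistics.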
Table A2 shows that all the predictors exhibit significant ability to predict returns, especially in the multivariate regressions that involve all three predictors. In those regressions, the estimated correlation between $e_{t+1}$ and the estimated innovation in expected return, $b'v_{t+1}$, is negative. Pástor and Stambaugh (2009) suggest this correlation as a diagnostic in predictive regressions, with a negative value being what one would hope to see for predictors able to deliver a reasonable proxy for expected return.

B4.2. Long-horizon predictive variance

This section provides detailed robustness results on the long-horizon predictive variance, expanding the evidence reported in the paper for both predictive systems. We examine the following cases:

Annual data: Baseline case reported in the paper
  System 1: Table A3; Figures A2, A3
  System 2: Figure A18, left column
Annual data: First subperiod
  System 1: Table A4; Figures A4, A5
  System 2: Figure A19, left column
Annual data: Second subperiod
  System 1: Table A5; Figures A6, A7
  System 2: Figure A19, right column
Annual data: One instead of three predictors
  System 1: Table A6; Figures A8, A9
  System 2: Figure A20, left column
Annual data: Excess instead of real returns
  System 1: Table A7; Figures A10, A11
  System 2: Figure A21, left column
Quarterly data: Baseline case reported in the paper
  System 1: Table A8; Figures A12, A13
  System 2: Figure A18, right column
Quarterly data: One instead of three predictors
  System 1: Table A9; Figures A14, A15
  System 2: Figure A20, right column
Quarterly data: Excess instead of real returns
  System 1: Table A10; Figures A16, A17
  System 2: Figure A21, right column

We do not report subperiod results for quarterly data because the (post-war) quarterly sample is already rather short compared to the 206-year annual sample. Parameter uncertainty generally plays a larger role in shorter samples.
Table A1
Effects of Parameter Uncertainty on the 20-Year Variance Ratio

The table displays the ratio $(1/20)\mathrm{Var}(r_{T,T+20} \mid D_T)/\mathrm{Var}(r_{T+1} \mid D_T)$, where $D_T$ is information used by an investor at time $T$. The value of the ratio is computed under various parametric scenarios for $\beta$ (autocorrelation of the conditional expected return $\mu_t$), $R^2$ (fraction of variance in $r_{t+1}$ explained by $\mu_t$), $\rho_{uw}$ (correlation between unexpected returns and innovations in expected returns), $\rho_{\mu b}$ (correlation between $\mu_T$ and its best available estimate given $D_T$), and $E_r$ (the unconditional mean return). For $\beta$, $R^2$, $\rho_{uw}$, and $\rho_{\mu b}$, each parameter is either drawn from its density in Figure A1 when uncertain or set to a fixed value. The parameters $\beta$, $R^2$, and $\rho_{uw}$ are set to their medians when held fixed, while $\rho_{\mu b}$ is fixed at its median as well as 0 and 1. The medians are 0.86 for $\beta$, 0.12 for $R^2$, $-0.66$ for $\rho_{uw}$, and 0.70 for $\rho_{\mu b}$. The variance of $E_r$ given $D_T$ is either 0 (when fixed) or 1/200 times the expected variance of one-year returns (when uncertain).

  fixed (F) or uncertain (U)        V(20) with ρ_µb fixed at           ρ_µb
  β     R²    ρ_uw   E_r            0        0.70       1              uncertain
  F     F     F      F
  U     F     F      F
  F     U     F      F
  F     F     U      F
  F     F     F      U
  U     U     U      F
  U     U     U      U
Table A2
Predictive Regressions

This table summarizes the results from predictive regressions $r_t = a + b'x_{t-1} + e_t$, where $r_t$ denotes the real log stock market return and $x_{t-1}$ contains the predictors (listed in the column headings) lagged by one year. Innovations in expected returns are constructed as $b'v_t$, where $v_t$ contains the disturbances estimated in a vector autoregression for the predictors, $x_t = \theta + Ax_{t-1} + v_t$. The table reports the estimated slope coefficients $\hat b$, the correlation $\mathrm{Corr}(e_t, b'v_t)$ between unexpected returns and innovations in expected returns, and the (unadjusted) $R^2$ from the predictive regression. The independent variables are rescaled to have unit variance. The correlations and $R^2$'s are reported in percent (i.e., multiplied by 100). The OLS t-statistics are given in parentheses ( ). The t-statistic of $\mathrm{Corr}(e_t, b'v_t)$ is computed as the t-statistic of the slope from the regression of the sample residuals $\hat e_t$ on $\hat b'\hat v_t$. The p-values associated with all t-statistics and $R^2$'s are computed by bootstrapping and reported in brackets [ ]. Each p-value is the relative frequency with which the reported quantity is smaller than its 20,000 counterparts bootstrapped under the null of no predictability. See footnote 3 of this document for more details on the bootstrapping procedure.

Panel A. Annual data (1802-2007)

  (1) Dividend Yield: (1.891) [.7];  Corr(e_t, b'v_t): (-9.88) [.57];  R²: [1.]
  (2) Term Spread: (.69) [.498];  Corr(e_t, b'v_t): (3.298) [.236];  R²: [.]
  (3) Bond Yield: (2.129) [.34];  Corr(e_t, b'v_t): (-2.86) [.18];  R²: [.997]
  (4) Dividend Yield (2.383) [.13], Term Spread (2.137) [.21], Bond Yield (2.373) [.17];  Corr(e_t, b'v_t): (-1.988) [.1];  R²: [.973]

Panel B. Quarterly data (1952Q1-2006Q4)

  (1) Bond Yield: (3.299) [.2];  Corr(e_t, b'v_t): (3.466) [.1];  R²: [.]
  (2) Dividend Yield: (1.951) [.95];  Corr(e_t, b'v_t): ( ) [.91];  R²: [1.]
  (3) CAY: (3.866) [.];  Corr(e_t, b'v_t): (-8.885) [.];  R²: [1.]
  (4) Bond Yield (3.145) [.], Dividend Yield (1.876) [.2], CAY (3.11) [.67];  Corr(e_t, b'v_t): (-4.236) [.4];  R²: [1.]
Table A3 (same as Table 1 in the paper)
Variance Ratios and Components of Long-Horizon Variance

The first row of each panel reports the ratio $(1/k)\mathrm{Var}(r_{T,T+k} \mid D_T)/\mathrm{Var}(r_{T+1} \mid D_T)$, where $\mathrm{Var}(r_{T,T+k} \mid D_T)$ is the predictive variance of the $k$-year return based on 206 years of annual data for real equity returns and the three predictors over the 1802-2007 period. The second row reports $\mathrm{Var}(r_{T,T+k} \mid D_T)$, multiplied by 100. The remaining rows report the five components of $\mathrm{Var}(r_{T,T+k} \mid D_T)$, also multiplied by 100 (they add up to total variance). Panel A contains results for $k = 25$ years, and Panel B contains results for $k = 50$ years. Results are reported under each of three priors for $\rho_{uw}$, $R^2$, and $\beta$. As the prior for one of the parameters departs from the benchmark, the priors on the other two parameters are held at the benchmark priors. The tight priors, as compared to the benchmarks, are more concentrated towards $-1$ for $\rho_{uw}$, 0 for $R^2$, and 1 for $\beta$; the loose priors are less concentrated in those directions.

  Prior (for each of ρ_uw, R², β): Tight, Bench, Loose

Panel A. Investment Horizon k = 25 years
  Variance Ratio
  Predictive Variance
  IID Component
  Mean Reversion
  Uncertain Future µ
  Uncertain Current µ
  Estimation Risk

Panel B. Investment Horizon k = 50 years
  Variance Ratio
  Predictive Variance
  IID Component
  Mean Reversion
  Uncertain Future µ
  Uncertain Current µ
  Estimation Risk
Table A4
Variance Ratios and Components of Long-Horizon Variance
First subperiod (1802–1949)
Counterpart of Table A3
[Numerical entries omitted.]
Table A5
Variance Ratios and Components of Long-Horizon Variance
Second subperiod (1950–2007)
Counterpart of Table A3
[Numerical entries omitted.]
Table A6
Variance Ratios and Components of Long-Horizon Variance
One instead of three predictors (dividend yield)
Counterpart of Table A3
[Numerical entries omitted.]
Table A7
Variance Ratios and Components of Long-Horizon Variance
Excess instead of real returns
Counterpart of Table A3
[Numerical entries omitted.]
Table A8
Variance Ratios and Components of Long-Horizon Variance
Quarterly data (1952Q1–2006Q4)
Counterpart of Table A3
[Numerical entries omitted.]
Table A9
Variance Ratios and Components of Long-Horizon Variance
Quarterly data (1952Q1–2006Q4)
One instead of three predictors (dividend yield)
Counterpart of Table A3
[Numerical entries omitted.]
Table A10
Variance Ratios and Components of Long-Horizon Variance
Quarterly data (1952Q1–2006Q4)
Excess instead of real returns
Counterpart of Table A3
[Numerical entries omitted.]
Figure A1. Distributions for uncertain parameters. The plots display the probability densities used to illustrate the effects of parameter uncertainty on long-run variance, with one panel each for β, R², ρ_uw, and ρ_µb. In the R² panel, the solid line plots the density of the true R² (predictability given µ_T), and the dashed line plots the implied density of the R² in a regression of returns on b_T. The dashed line incorporates the uncertainty about ρ_µb.
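The gap between the solid and dashed lines can be illustrated directly: under the simplifying assumption that returns equal µ_T plus noise uncorrelated with the proxy b_T, and that b_T has unconditional correlation ρ_µb with µ_T, the R² from regressing returns on the proxy is ρ²_µb times the true R². A hedged Monte Carlo check of that relation (illustrative only, not the paper's calculation):

```python
import numpy as np

def implied_r2(rho_mub, true_r2):
    """If returns are mu_T plus noise and the proxy b_T has unconditional
    correlation rho_mub with mu_T, the R^2 from regressing returns on b_T
    is rho_mub^2 times the true R^2 (an assumption-laden illustration of
    the dashed line in the R^2 panel)."""
    return rho_mub ** 2 * true_r2
```

With ρ_µb = 0.7 and a true R² of 10%, the observed R² drops to about 4.9%, which is why imperfect predictors understate one-period predictability.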
Figure A2 (same as Figure 6 in the paper). Predictive variance of multiperiod return and its components. Panel A plots the variance of the predictive distribution of long-horizon returns, Var(r_{T,T+k} | D_T), against the predictive horizon in years. Panel B plots the five components of the predictive variance: the i.i.d. component, mean reversion, uncertain future µ, uncertain current µ, and estimation risk. All quantities are divided by k, the number of periods in the return horizon. The results are obtained by estimating the predictive system on annual real U.S. stock market returns in 1802–2007. Three predictors are used: the dividend yield, the bond yield, and the term spread.
Figure A3. Predictive variance for seven different priors. The figure plots the variance of the predictive distribution of long-horizon returns, Var(r_{T,T+k} | D_T), as a function of the investment horizon k. The variance is divided by k, the number of periods in the return horizon. The results are obtained by estimating the predictive system on annual real U.S. stock market returns in 1802–2007. Three predictors are used: the dividend yield, the bond yield, and the term spread.
Figure A4. Predictive variance of multiperiod return and its components. First subperiod (1802–1949). Counterpart of Figure A2.
Figure A5. Predictive variance for seven different priors. First subperiod (1802–1949). Counterpart of Figure A3.
Figure A6. Predictive variance of multiperiod return and its components. Second subperiod (1950–2007). Counterpart of Figure A2.
Figure A7. Predictive variance for seven different priors. Second subperiod (1950–2007). Counterpart of Figure A3.
Figure A8. Predictive variance of multiperiod return and its components. One instead of three predictors (dividend yield). Counterpart of Figure A2.
Figure A9. Predictive variance for seven different priors. One instead of three predictors (dividend yield). Counterpart of Figure A3.
Figure A10. Predictive variance of multiperiod return and its components. Excess instead of real returns. Counterpart of Figure A2.
Figure A11. Predictive variance for seven different priors. Excess instead of real returns. Counterpart of Figure A3.
Figure A12. Predictive variance of multiperiod return and its components. Quarterly data (1952Q1–2006Q4). Counterpart of Figure A2, with horizons and variances expressed per quarter.
Figure A13. Predictive variance for seven different priors. Quarterly data (1952Q1–2006Q4). Counterpart of Figure A3.
Figure A14. Predictive variance of multiperiod return and its components. Quarterly data (1952Q1–2006Q4). One instead of three predictors (dividend yield). Counterpart of Figure A2.
Figure A15. Predictive variance for seven different priors. Quarterly data (1952Q1–2006Q4). One instead of three predictors (dividend yield). Counterpart of Figure A3.
Figure A16. Predictive variance of multiperiod return and its components. Quarterly data (1952Q1–2006Q4). Excess instead of real returns. Counterpart of Figure A2.
Figure A17. Predictive variance for seven different priors. Quarterly data (1952Q1–2006Q4). Excess instead of real returns. Counterpart of Figure A3.
Figure A18 (same as Figure 7 in the paper). Predictive variance and predictor imperfection. The plots display results under the predictive system (System 2) in which the expected return depends on a vector of observable predictors, x_t, as well as a missing predictor, π_t, that obeys an AR(1) process. The top panels display prior distributions for σ_π, the standard deviation of π_t, under different degrees of predictor imperfection. The middle panels display the corresponding posteriors of ∆R², the true R² for one-period returns minus the observed R² when conditioning only on x_t. The bottom panels display the predictive variances for the two imperfect-predictor cases (less imperfect and more imperfect) as well as for the case of perfect predictors (σ_π = ∆R² = 0). The left-hand panels are based on annual data from 1802–2007 for real U.S. stock returns and three predictors: the dividend yield, the bond yield, and the term spread. The right-hand panels are based on quarterly data from 1952Q1–2006Q4 for real returns and three predictors: the dividend yield, CAY, and the bond yield.
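For intuition on σ_π: if the missing predictor follows an AR(1) of the generic form π_t = φπ_{t−1} + w_t, its stationary standard deviation, the quantity whose priors appear in the top panels, is σ_w/√(1−φ²). A small sketch (φ and w_t are generic stand-ins, not the paper's exact parameterization):

```python
import numpy as np

def stationary_sd(phi, shock_sd):
    """Stationary standard deviation of an AR(1) missing predictor
    pi_t = phi * pi_{t-1} + w_t; this is sigma_pi, the quantity whose
    priors are shown in the top panels of Figure A18 (phi and w_t are
    hypothetical stand-ins for the paper's AR(1) parameterization)."""
    return shock_sd / np.sqrt(1.0 - phi ** 2)
```

A larger σ_π means a larger gap ∆R² between the true and observed one-period R², which is why looser priors on σ_π raise long-horizon predictive variance in the bottom panels.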
More informationVAR Models and Applications
VAR Models and Applications Laurent Ferrara 1 1 University of Paris West M2 EIPMC Oct. 2016 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions
More informationThe Return of the Gibson Paradox
The Return of the Gibson Paradox Timothy Cogley, Thomas J. Sargent, and Paolo Surico Conference to honor Warren Weber February 212 Introduction Previous work suggested a decline in inflation persistence
More informationModeling conditional distributions with mixture models: Theory and Inference
Modeling conditional distributions with mixture models: Theory and Inference John Geweke University of Iowa, USA Journal of Applied Econometrics Invited Lecture Università di Venezia Italia June 2, 2005
More informationDividend Variability and Stock Market Swings
Dividend Variability and Stock Market Swings MartinD.D.Evans Department of Economics Georgetown University Washington DC 20057 The Review of Economic Studies Abstract This paper examines the extent to
More informationSmall Open Economy RBC Model Uribe, Chapter 4
Small Open Economy RBC Model Uribe, Chapter 4 1 Basic Model 1.1 Uzawa Utility E 0 t=0 θ t U (c t, h t ) θ 0 = 1 θ t+1 = β (c t, h t ) θ t ; β c < 0; β h > 0. Time-varying discount factor With a constant
More informationSix anomalies looking for a model: A consumption based explanation of international finance puzzles
Six anomalies looking for a model: A consumption based explanation of international finance puzzles Riccardo Colacito Shaojun Zhang (NYU Stern) Colacito (2009) 1 / 17 Motivation Six Puzzles Backus-Smith
More informationGARCH Models Estimation and Inference
Università di Pavia GARCH Models Estimation and Inference Eduardo Rossi Likelihood function The procedure most often used in estimating θ 0 in ARCH models involves the maximization of a likelihood function
More informationCross-sectional space-time modeling using ARNN(p, n) processes
Cross-sectional space-time modeling using ARNN(p, n) processes W. Polasek K. Kakamu September, 006 Abstract We suggest a new class of cross-sectional space-time models based on local AR models and nearest
More informationESTIMATING THE MEAN LEVEL OF FINE PARTICULATE MATTER: AN APPLICATION OF SPATIAL STATISTICS
ESTIMATING THE MEAN LEVEL OF FINE PARTICULATE MATTER: AN APPLICATION OF SPATIAL STATISTICS Richard L. Smith Department of Statistics and Operations Research University of North Carolina Chapel Hill, N.C.,
More information... Econometric Methods for the Analysis of Dynamic General Equilibrium Models
... Econometric Methods for the Analysis of Dynamic General Equilibrium Models 1 Overview Multiple Equation Methods State space-observer form Three Examples of Versatility of state space-observer form:
More informationSUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, )
Econometrica Supplementary Material SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, 653 710) BY SANGHAMITRA DAS, MARK ROBERTS, AND
More informationECON3327: Financial Econometrics, Spring 2016
ECON3327: Financial Econometrics, Spring 2016 Wooldridge, Introductory Econometrics (5th ed, 2012) Chapter 11: OLS with time series data Stationary and weakly dependent time series The notion of a stationary
More informationIntroduction. Chapter 1
Chapter 1 Introduction In this book we will be concerned with supervised learning, which is the problem of learning input-output mappings from empirical data (the training dataset). Depending on the characteristics
More informationEcon 423 Lecture Notes: Additional Topics in Time Series 1
Econ 423 Lecture Notes: Additional Topics in Time Series 1 John C. Chao April 25, 2017 1 These notes are based in large part on Chapter 16 of Stock and Watson (2011). They are for instructional purposes
More informationWarwick Business School Forecasting System. Summary. Ana Galvao, Anthony Garratt and James Mitchell November, 2014
Warwick Business School Forecasting System Summary Ana Galvao, Anthony Garratt and James Mitchell November, 21 The main objective of the Warwick Business School Forecasting System is to provide competitive
More informationVector Auto-Regressive Models
Vector Auto-Regressive Models Laurent Ferrara 1 1 University of Paris Nanterre M2 Oct. 2018 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions
More informationVector Autoregression
Vector Autoregression Jamie Monogan University of Georgia February 27, 2018 Jamie Monogan (UGA) Vector Autoregression February 27, 2018 1 / 17 Objectives By the end of these meetings, participants should
More informationModeling conditional distributions with mixture models: Applications in finance and financial decision-making
Modeling conditional distributions with mixture models: Applications in finance and financial decision-making John Geweke University of Iowa, USA Journal of Applied Econometrics Invited Lecture Università
More informationMultivariate Time Series
Multivariate Time Series Notation: I do not use boldface (or anything else) to distinguish vectors from scalars. Tsay (and many other writers) do. I denote a multivariate stochastic process in the form
More informationR = µ + Bf Arbitrage Pricing Model, APM
4.2 Arbitrage Pricing Model, APM Empirical evidence indicates that the CAPM beta does not completely explain the cross section of expected asset returns. This suggests that additional factors may be required.
More informationEconomics 536 Lecture 7. Introduction to Specification Testing in Dynamic Econometric Models
University of Illinois Fall 2016 Department of Economics Roger Koenker Economics 536 Lecture 7 Introduction to Specification Testing in Dynamic Econometric Models In this lecture I want to briefly describe
More information