Quantiles, Expectiles and Splines
1 Quantiles, Expectiles and Splines

Andrew Harvey, University of Cambridge, December 2007

Harvey (University of Cambridge) QES December 2007 / 40
2 Introduction

The movements in a time series may be described by time-varying quantiles. These may be estimated non-parametrically by fitting a simple moving average or a more elaborate kernel. An alternative approach is to formulate a partial model, the role of which is to focus attention on some particular feature - here a quantile - so as to provide a (usually nonlinear) weighting of the observations that will extract that feature by taking account of the dynamic properties of the series.

Time-varying quantiles can be fitted to a sequence of observations by setting up a state space model and iteratively applying a suitably modified signal extraction algorithm. The fitted quantiles satisfy the defining property of fixed quantiles in having the appropriate number of observations above and below.
3 Figure: Quantiles (5%, 25%, 75% and 95%) fitted to GM returns
4 Expectiles are similar to quantiles except that they are defined by tail expectations; see Newey and Powell (1987). Time-varying expectiles can be estimated by a state space algorithm. This is similar to the algorithm used for quantiles, but estimation is more straightforward and much quicker.
5 Quantiles and expectiles

Let $\xi(\tau)$ - or, when there is no risk of confusion, $\xi$ - denote the $\tau$-th quantile. The probability that an observation is less than $\xi(\tau)$ is $\tau$, where $0 < \tau < 1$. Given a set of $T$ observations, $y_t$, $t = 1, \ldots, T$, the sample quantile, $\widetilde{\xi}(\tau)$, can be obtained as the solution to minimizing

$$S_\tau = \sum_{t=1}^{T} \rho_\tau(y_t - \xi) = \sum_{y_t < \xi} (\tau - 1)(y_t - \xi) + \sum_{y_t \geq \xi} \tau (y_t - \xi) \qquad (1)$$

with respect to $\xi$, where $\rho_\tau(\cdot)$ is the check function, defined for quantiles as

$$\rho_\tau(y_t - \xi) = (\tau - I(y_t - \xi < 0))(y_t - \xi) \qquad (2)$$

and $I(\cdot)$ is the indicator function.
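To make (1) concrete, here is a small numerical sketch (ours, not from the slides): since the minimiser of $S_\tau$ is always attained at one of the observations, the sample quantile can be found by evaluating the check-function sum at each data point.

```python
import numpy as np

def check_loss(u, tau):
    # rho_tau(u) = (tau - I(u < 0)) * u, as in (2)
    return (tau - (u < 0)) * u

def sample_quantile(y, tau):
    # the minimiser of S_tau is attained at an observation,
    # so a search over the data points suffices
    losses = [check_loss(y - xi, tau).sum() for xi in y]
    return y[int(np.argmin(losses))]

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
q25 = sample_quantile(y, 0.25)   # lower quartile
```

With $\tau = 0.5$ the check function reduces to (half) the absolute error, so the minimiser is a sample median.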
6 Expectiles, denoted $\mu(\omega)$, $0 < \omega < 1$, are similar to quantiles but they are determined by tail expectations rather than tail probabilities. For a given value of $\omega$, the sample expectile, $\widetilde{\mu}(\omega)$, is obtained by minimizing the asymmetric least squares function

$$S_\omega = \sum_{t=1}^{T} \rho_\omega(y_t - \mu(\omega)) = \sum_{t=1}^{T} |\omega - I(y_t - \mu(\omega) < 0)| \, (y_t - \mu(\omega))^2 \qquad (3)$$

with respect to $\mu(\omega)$. Differentiating $S_\omega$ and dividing by minus two gives

$$\sum_{t=1}^{T} |\omega - I(y_t - \mu(\omega) < 0)| \, (y_t - \mu(\omega)). \qquad (4)$$

The sample expectile, $\widetilde{\mu}(\omega)$, is the value of $\mu(\omega)$ that makes (4) equal to zero. Setting $\omega = 0.5$ gives the mean, that is $\widetilde{\mu}(0.5) = \bar{y}$. For other $\omega$ it is necessary to iterate.
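Setting (4) to zero gives $\mu = \sum_t w_t y_t / \sum_t w_t$ with $w_t = |\omega - I(y_t - \mu < 0)|$, which suggests a simple fixed-point iteration. A sketch (function name and stopping rule are ours):

```python
import numpy as np

def sample_expectile(y, omega, tol=1e-12, max_iter=1000):
    # iterate mu <- sum(w*y)/sum(w), with w_t = |omega - I(y_t < mu)|
    mu = y.mean()                 # the omega = 0.5 solution, used as a start
    for _ in range(max_iter):
        w = np.abs(omega - (y < mu).astype(float))
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```

For $\omega > 0.5$ the iteration weights observations above the current estimate more heavily, pushing the expectile into the upper tail.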
7 Signal extraction

A framework for estimating time-varying quantiles, $\xi_t(\tau)$, can be set up by assuming that they are generated by stochastic processes and are connected to the observations through a measurement equation

$$y_t = \xi_t(\tau) + \varepsilon_t(\tau), \quad t = 1, \ldots, T, \qquad (5)$$

where $\Pr(\varepsilon_t(\tau) < 0) = \tau$ with $0 < \tau < 1$. The disturbances, $\varepsilon_t(\tau)$, are assumed to be serially independent and independent of $\xi_t(\tau)$. The problem is then one of signal extraction. The assumption that the quantile or expectile follows a stochastic process can be regarded as a device for inducing local weighting of the observations. One possibility is a random walk,

$$\xi_t(\tau) = \xi_{t-1}(\tau) + \eta_t(\tau), \quad \eta_t(\tau) \sim IID(0, \sigma^2_{\eta(\tau)}). \qquad (6)$$
8 A smoother quantile can be extracted by a local linear trend

$$\xi_t = \xi_{t-1} + \beta_{t-1} + \eta_t \qquad (7)$$
$$\beta_t = \beta_{t-1} + \zeta_t$$

where $\beta_t$ is the slope and $\zeta_t$ is $IID(0, \sigma^2_\zeta)$. It is well known that in a Gaussian model, setting $Var(\eta_t) = \sigma^2_\zeta / 3$ and $Cov(\eta_t, \zeta_t) = \sigma^2_\zeta / 2$ results in the smoothed estimates being a cubic spline. The model for expectiles is set up in a similar way, with (5) replaced by $y_t = \mu_t(\omega) + \varepsilon_t(\omega)$.
9 Derivation and properties

The proposed solution minimizes $\sum_t \rho_\tau(y_t - \xi_t)$ subject to a set of constraints imposed by the time series model for the quantile. Suppose this model is a first-order autoregression, $\xi_t - \xi^{\dagger} = \phi(\xi_{t-1} - \xi^{\dagger}) + \eta_t$. The criterion function is then

$$J = -\frac{1}{\omega} \sum_{t=1}^{T} \rho_\tau(y_t - \xi_t) - \frac{(1 - \phi^2)(\xi_1 - \xi^{\dagger})^2}{2 \sigma^2_\eta} - \frac{1}{2} \sum_{t=2}^{T} \frac{\eta_t^2}{\sigma^2_\eta}, \qquad (8)$$

where $\omega$ is a scaling constant and $\xi^{\dagger}$ is the unconditional mean of the quantile. Given the observations, the estimated time-varying quantiles, $\widetilde{\xi}_1, \ldots, \widetilde{\xi}_T$, are the values that maximise $J$.
10 In a classical signal extraction framework,

$$y_t = \mu_t + \varepsilon_t, \quad t = 1, \ldots, T, \qquad (9)$$

where $\mu_t$ is a Gaussian stochastic process and $\varepsilon_t$ is $NID(0, \sigma^2_\varepsilon)$. If $J$ is redefined with $\mu_t$ in place of $\xi_t$ and $\rho_\tau(y_t - \xi_t)/\omega$ replaced by $(y_t - \mu_t)^2 / 2\sigma^2_\varepsilon$, it can be interpreted as the logarithm of the joint density of the observations and the $\mu_t$'s. Differentiating with respect to $\mu_t$, $t = 1, \ldots, T$, setting to zero and solving gives the modes, $\widetilde{\mu}_t$, $t = 1, \ldots, T$, of the conditional distributions of the $\mu_t$'s. For a multivariate Gaussian distribution these are the conditional expectations, which by definition are the smoothed (minimum mean square error) estimators.
11 Returning to the quantiles and differentiating $J$ with respect to $\xi_t$ gives

$$\frac{\partial J}{\partial \xi_t} = \frac{1}{\omega} IQ(y_t - \xi_t) + \frac{\phi \xi_{t-1} - (1 + \phi^2)\xi_t + \phi \xi_{t+1} + (1 - \phi)^2 \xi^{\dagger}}{\sigma^2_\eta}, \qquad (10)$$

for $t = 2, \ldots, T - 1$ (modified at the endpoints), where $IQ(y_t - \xi_t)$ is defined as in (24). For $t = 2, \ldots, T - 1$, setting $\partial J / \partial \xi_t$ to zero gives an equation that is satisfied by the estimated quantiles, $\widetilde{\xi}_t$, $\widetilde{\xi}_{t-1}$ and $\widetilde{\xi}_{t+1}$, and similarly for $t = 1$ and $T$. If a solution is located on an observation, that is $\widetilde{\xi}_t = y_t$, then $IQ(y_t - \xi_t)$ is not defined, as the check function is not differentiable at zero.
12 In a Gaussian model, (9), a little algebra leads to the classic Wiener-Kolmogorov (WK) formula for a doubly infinite sample. For the AR(1) model

$$\widetilde{\mu}_t = \mu + \frac{g}{g_y}(y_t - \mu)$$

where $\mu = E(\mu_t)$, $g = \sigma^2_\eta / ((1 - \phi L)(1 - \phi L^{-1}))$ is the autocovariance generating function (ACGF) of $\mu_t$, $L$ is the lag operator, and $g_y = g + \sigma^2_\varepsilon$. The WK formula has the attraction that, for simple models, $g_y$ can be written in terms of the reduced form and $g / g_y$ can be expanded to give an explicit expression for the weights. Here the reduced form of $y_t$ is an ARMA(1,1) model and

$$\widetilde{\mu}_t = \mu + \frac{q_\mu \theta}{\phi(1 - \theta^2)} \sum_{j=-\infty}^{\infty} \theta^{|j|} (y_{t+j} - \mu) \qquad (11)$$

where $q_\mu = \sigma^2_\eta / \sigma^2_\varepsilon$ and

$$\theta = \left[ (q_\mu + 1 + \phi^2) - \left\{ (q_\mu + 1 + \phi^2)^2 - 4\phi^2 \right\}^{1/2} \right] / 2\phi.$$

This expression is still valid for the random walk, except that $\mu$ disappears because the weights sum to one.
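The MA parameter can be checked numerically. The sketch below (ours) computes $\theta$ from the formula above and verifies that for the random walk ($\phi = 1$) it satisfies $(1-\theta)^2 = q_\mu \theta$, which is exactly the condition for the weights in (11) to sum to one.

```python
import numpy as np

def theta(q, phi):
    # the root inside the unit circle of phi*th^2 - (q + 1 + phi^2)*th + phi = 0
    x = q + 1.0 + phi ** 2
    return (x - np.sqrt(x ** 2 - 4.0 * phi ** 2)) / (2.0 * phi)

# with no signal noise (q = 0) the decay rate collapses to phi itself
th0 = theta(0.0, 0.8)
```

For the random walk the weight on $y_{t+j}$ is $q\theta^{|j|+1}/(1-\theta)^2$, so the identity $(1-\theta)^2 = q\theta$ makes the weights a proper (unit-sum) exponential kernel.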
13 In order to proceed in a similar way with quantiles, we need to take account of corner solutions by defining the corresponding $IQ$'s as the values that give equality of the associated derivative of $J$. Then we can set $\partial J / \partial \xi_t$ in (10) equal to zero to give

$$\frac{\widetilde{\xi}_t - \xi^{\dagger}}{g} = \frac{1}{\omega} IQ(y_t - \widetilde{\xi}_t) \qquad (12)$$

for a doubly infinite sample with $\xi^{\dagger}$ known. Using the lag operator yields

$$\widetilde{\xi}_t = \xi^{\dagger} + \frac{\sigma^2_\eta}{\omega} \sum_{j=-\infty}^{\infty} \frac{\phi^{|j|}}{1 - \phi^2} IQ(y_{t+j} - \widetilde{\xi}_{t+j}). \qquad (13)$$

Note that if the observations are multiplied by a constant, then the quantile is multiplied by the same constant, as are $\omega$ and $\sigma_\eta$. By defining the quasi signal-noise ratio as $q = \sigma^2_\eta / \omega^2$, it remains scale invariant.
14 An expression for extracting quantiles that has a similar form to (11) can be obtained by adding $(\widetilde{\xi}_t - \xi^{\dagger}) / \omega^2$ to both sides of (12) and re-arranging to give

$$\widetilde{\xi}_t = \xi^{\dagger} + \frac{g}{g^{\omega}_y} \left[ \widetilde{\xi}_t - \xi^{\dagger} + \omega IQ(y_t - \widetilde{\xi}_t) \right] \qquad (14)$$

where $g^{\omega}_y = g + \omega^2$. This is not an ACGF, but it can be treated as though it were. Thus we obtain

$$\widetilde{\xi}_t = \xi^{\dagger} + \frac{q_\tau \theta}{\phi(1 - \theta^2)} \sum_{j=-\infty}^{\infty} \theta^{|j|} \left[ \widetilde{\xi}_{t+j} - \xi^{\dagger} + \omega IQ(y_{t+j} - \widetilde{\xi}_{t+j}) \right] \qquad (15)$$

where $\theta$ is as defined for (11) but with $q_\mu$ replaced by $q_\tau = \sigma^2_\eta / \omega^2$. For the random walk, the weights sum to one, so $\xi^{\dagger}$ drops out, giving

$$\widetilde{\xi}_t = \frac{1 - \theta}{1 + \theta} \sum_{j=-\infty}^{\infty} \theta^{|j|} \left[ \widetilde{\xi}_{t+j} + \omega IQ(y_{t+j} - \widetilde{\xi}_{t+j}) \right]. \qquad (16)$$
15 Non-parametric smoothing

Changing quantiles may be estimated non-parametrically, as in Yu and Jones (1998), by minimising a local check function to give an estimator, $\widehat{\xi}_t$, that satisfies

$$\sum_{j=-h}^{h} K(j/h) \, IQ(y_{t+j} - \widehat{\xi}_t) = 0 \qquad (17)$$

where $K(\cdot)$ is a weighting kernel and $h$ is a bandwidth, with $IQ(y_{t+j} - \widehat{\xi}_t)$ defined appropriately if $y_{t+j} = \widehat{\xi}_t$.

Proposition: If the same kernel and bandwidth are used for different quantiles, they cannot cross (though they may touch).
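A minimal sketch of the kernel estimator (17) (ours; the Gaussian kernel is assumed purely for illustration): the solution of the weighted check-function condition is a weighted sample quantile of the windowed observations.

```python
import numpy as np

def kernel_quantile(y, t, tau, h):
    # solve sum_j K(j/h) IQ(y_{t+j} - xi) = 0 over the window t-h..t+h:
    # this is the tau-th weighted quantile of the windowed observations
    j = np.arange(-h, h + 1)
    w = np.exp(-0.5 * (j / h) ** 2)          # illustrative Gaussian kernel
    window = y[t - h : t + h + 1]
    order = np.argsort(window, kind="stable")
    cw = np.cumsum(w[order]) / w.sum()       # weighted empirical cdf
    return window[order][np.searchsorted(cw, tau)]
```

Because every $\tau$ uses the same weights $w$ and the same weighted empirical cdf, estimates for different $\tau$ are monotone in $\tau$, which is the non-crossing property stated in the Proposition.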
16 It is interesting to compare the above weighting scheme with the one implied by the random walk model. If $\widetilde{\xi}_{t+j}$ in (16) is constant, it satisfies (17) with $K(j/h)$ replaced by $\theta^{|j|}$, so giving an (infinite) exponential decay. The time series model determines the shape of the kernel, while the $q$ parameter plays the same role as the bandwidth. An important advantage of the more model-based approach for forecasting is that it automatically determines a weighting pattern at the end of the sample that is consistent with the one in the middle.
17 State Space Form

The state space form (SSF) for a univariate time series is:

$$y_t = z_t' \alpha_t + \varepsilon_t, \quad Var(\varepsilon_t) = \sigma^2_t, \quad t = 1, \ldots, T \qquad (18)$$
$$\alpha_t = T_t \alpha_{t-1} + \eta_t, \quad Var(\eta_t) = Q_t$$

where $\alpha_t$ is an $m \times 1$ state vector, $z_t$ is a non-stochastic $m \times 1$ vector, $\sigma^2_t$ is a non-negative scalar, $T_t$ is an $m \times m$ non-stochastic transition matrix and $Q_t$ is an $m \times m$ covariance matrix. The specification is completed by assuming that $\alpha_1$ has mean $a_{1|0}$ and covariance matrix $P_{1|0}$ and that the serially independent disturbances $\varepsilon_t$ and $\eta_t$ are independent of each other and of the initial state.
18 Consider the criterion function

$$J = -\sum_{t=1}^{T} h_t^{-1} \rho(y_t - z_t' \alpha_t) - \frac{1}{2} \sum_{t=2}^{T} \eta_t' Q_t^{-1} \eta_t - \frac{1}{2} (\alpha_1 - a_{1|0})' P_{1|0}^{-1} (\alpha_1 - a_{1|0}), \qquad (19)$$

where $\rho(y_t - z_t' \alpha_t)$ is as in (2) or (3), with $z_t' \alpha_t$ equal to $\xi_t(\tau)$ or $\mu_t(\omega)$, $Q_t$ and $P_{1|0}$ are assumed positive definite matrices as in (18), and $h_t$ is a non-stochastic sequence of positive scalars. Suppose that the initial state and the $\eta_t$'s are normally distributed. For a Gaussian model of the form (18), the logarithm of the joint density of the observations and the states is, ignoring irrelevant terms, given by $J$ with $\rho(y_t - z_t' \alpha_t) = (y_t - \mu_t(0.5))^2$ and $h_t = 2\sigma^2_t$. Differentiating $J$ with respect to each element of $\alpha_t$ gives a set of equations which, when set to zero and solved, gives the minimum mean square error estimates of $\alpha_t$. These may be computed efficiently by the Kalman filter and associated smoother (KFS) as described in Durbin and Koopman (2001). If all the elements in the state are nonstationary and given a diffuse prior, the last term in $J$ disappears. An algorithm is available as a subroutine in the SsfPack set of programs within Ox; see Koopman et al (1999).
19 We can think of (19) as a criterion function that provides the basis for computing a quantile or expectile subject to a set of constraints imposed by the time series model for the quantile or expectile. For expectiles, differentiating $J$ gives

$$\frac{\partial J}{\partial \alpha_1} = z_1 (2/h_1) IE(y_1 - z_1' \alpha_1) - P_{1|0}^{-1} (\alpha_1 - a_{1|0}) + T_2' Q_2^{-1} (\alpha_2 - T_2 \alpha_1)$$
$$\frac{\partial J}{\partial \alpha_t} = z_t (2/h_t) IE(y_t - z_t' \alpha_t) - Q_t^{-1} (\alpha_t - T_t \alpha_{t-1}) + T_{t+1}' Q_{t+1}^{-1} (\alpha_{t+1} - T_{t+1} \alpha_t), \quad t = 2, \ldots, T-1 \qquad (20)$$
$$\frac{\partial J}{\partial \alpha_T} = z_T (2/h_T) IE(y_T - z_T' \alpha_T) - Q_T^{-1} (\alpha_T - T_T \alpha_{T-1})$$

where

$$IE(y_t - \mu_t(\omega)) = |\omega - I(y_t - \mu_t(\omega) < 0)| \, (y_t - \mu_t(\omega)), \quad t = 1, \ldots, T. \qquad (21)$$

The smoothed estimates, $\widetilde{\alpha}_t$, satisfy the equations obtained by setting these derivatives equal to zero.
20 Let $h_t = g_t / \kappa$, where $\kappa$ is a constant, the interpretation of which will become apparent. For any expectile, adding and subtracting $z_t g_t^{-1} z_t' \alpha_t$ in the equations in (20) allows the first term to be written as

$$z_t g_t^{-1} \left[ z_t' \alpha_t + 2\kappa IE(y_t - z_t' \alpha_t) \right] - z_t g_t^{-1} z_t' \alpha_t, \quad t = 1, \ldots, T. \qquad (22)$$

This suggests that we set up an iterative procedure in which the estimate of the state at the $i$-th iteration, $\widehat{\alpha}_t^{(i)}$, is computed from the KFS applied to a set of synthetic observations constructed as

$$\widehat{y}_t^{(i-1)} = z_t' \widehat{\alpha}_t^{(i-1)} + 2\kappa IE(y_t - z_t' \widehat{\alpha}_t^{(i-1)}). \qquad (23)$$

The iterations are carried out until the $\widehat{\alpha}_t^{(i)}$'s converge, whereupon $\widetilde{\mu}_t(\omega) = z_t' \widetilde{\alpha}_t$.
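As a sketch of this iteration (ours, not the slides' implementation), assume a random-walk level so that the KFS can be replaced by the equivalent penalized least-squares solve for a Gaussian random walk plus noise; with $g_t = 1$ and $\kappa = 0.5$ the synthetic observations (23) become $\widehat{y}_t = \widehat{\mu}_t + IE(y_t - \widehat{\mu}_t)$. Note that with this crude normalisation the effective signal-noise ratio at $\omega = 0.5$ works out to $q/2$, so the `q` argument below is not on exactly the same scale as the model's signal-noise ratio.

```python
import numpy as np

def rw_smoother(y, q):
    # Gaussian smoother for a random walk plus noise: minimises
    # sum (y_t - mu_t)^2 + (1/q) * sum (mu_t - mu_{t-1})^2
    T = len(y)
    D = np.diff(np.eye(T), axis=0)            # first-difference matrix
    A = np.eye(T) + (1.0 / q) * (D.T @ D)
    return np.linalg.solve(A, y)

def tv_expectile(y, omega, q, n_iter=300):
    # iterate the smoother on the synthetic observations (23), kappa = 0.5
    mu = rw_smoother(y, q)
    for _ in range(n_iter):
        ie = np.abs(omega - (y - mu < 0).astype(float)) * (y - mu)
        mu = rw_smoother(mu + ie, q)          # 2*kappa*IE = IE for kappa = 0.5
    return mu
```

Because $IE$ is Lipschitz with constant $\max(\omega, 1-\omega) < 1$, the iteration is a contraction and converges geometrically; that is why no corner-solution bookkeeping is needed for expectiles.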
21 For quantiles, the first term in each of the three equations of (20) is given by $z_t h_t^{-1} IQ(y_t - z_t' \alpha_t)$, where

$$IQ(y_t - \xi_t(\tau)) = \begin{cases} \tau - 1, & \text{if } y_t < \xi_t(\tau) \\ \tau, & \text{if } y_t > \xi_t(\tau) \end{cases} \quad t = 1, \ldots, T, \qquad (24)$$

and the synthetic observations in the KFS are

$$\widehat{y}_t^{(j-1)} = z_t' \widehat{\alpha}_t^{(j-1)} + \kappa IQ(y_t - z_t' \widehat{\alpha}_t^{(j-1)}), \quad t = 1, \ldots, T. \qquad (25)$$

However, the possibility of a solution where the estimated quantile passes through an observation means that the algorithm has to be modified somewhat; see De Rossi and Harvey (2006).
22 Estimates of time-varying quantiles and expectiles obtained from the smoothing equations of the previous sub-section can be shown to satisfy properties that generalize the defining characteristics of fixed quantiles and expectiles. It is assumed that the state has been arranged so that the first element represents the level and that (without loss of generality) the first element in $z_t$ has been set to unity. Let the first derivative, with respect to $\alpha_t$, of the second term of $J$ be written $j_2 = \sum_{t=1}^{T} A_t \alpha_t$, where the $A_t$'s are $m \times m$ matrices.

Lemma: For a model in SSF with a diffuse prior on the initial state, a sufficient condition for the first element in the vector $j_2$ to be zero is that the first column of $T_t - I$ consists of zeroes for all $t = 2, \ldots, T$.
23 If $h_t$ is time-invariant and the conditions of the Lemma hold, the estimated quantiles satisfy the fundamental property of sample time-varying quantiles, namely that the number of observations that are less than the corresponding quantile, that is $y_t < \widetilde{\xi}_t(\tau)$, is no more than $[T\tau]$, while the number greater is no more than $[T(1 - \tau)]$.
24 Proposition: If the distribution of $y$ is time invariant when adjusted for changes in location and scale, and is continuous with finite mean, the population $\tau$ quantiles and $\omega$ expectiles coincide for $\omega$ satisfying

$$\omega = \frac{\int_{-\infty}^{\xi(\tau)} (y - \xi(\tau)) \, dF(y)}{2 \int_{-\infty}^{\xi(\tau)} (y - \xi(\tau)) \, dF(y) - \int_{-\infty}^{\infty} (y - \xi(\tau)) \, dF(y)}$$

where $F(y)$ is the cdf of $y$. Assuming this to be the case, $\widetilde{\mu}_t(\omega)$ is an estimator of the $\widetilde{\tau}$ quantile, $\xi_t(\widetilde{\tau})$, where $\widetilde{\tau}$ is defined as the proportion of observations for which $y_t < \widetilde{\mu}_t(\omega)$, $t = 1, \ldots, T$. When used in this way we will denote the estimator $\widetilde{\mu}_t(\omega)$ as $\widetilde{\mu}_t(\widetilde{\tau})$. However, it will not, in general, coincide with the time-varying $\widetilde{\tau}$ quantile estimated directly, since it weights the observations differently. In particular, it is unlikely to pass through any observations.
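The mapping from $\tau$ to $\omega$ can be evaluated numerically. The sketch below (ours) does so for a standard normal by brute-force integration on a grid; for $\tau = 0.5$ the formula gives $\omega = 0.5$, and in the tails $\omega(\tau)$ is more extreme than $\tau$.

```python
import numpy as np

def omega_for_quantile(tau):
    # evaluate omega(tau) for a standard normal by grid integration
    y = np.linspace(-8.0, 8.0, 400_001)
    dx = y[1] - y[0]
    f = np.exp(-0.5 * y ** 2) / np.sqrt(2.0 * np.pi)
    cdf = np.cumsum(f) * dx
    xi = y[np.searchsorted(cdf, tau)]               # the tau-quantile on the grid
    lower = np.sum((y - xi) * f * (y <= xi)) * dx   # int_{-inf}^{xi} (y - xi) dF
    total = np.sum((y - xi) * f) * dx               # int over the whole line = E[y] - xi
    return lower / (2.0 * lower - total)
```

This kind of numerical check is a useful sanity test of the proposition for any distribution whose density can be evaluated on a grid.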
25 Parameter estimation

The smoothing algorithms depend on parameters that can be estimated by cross validation. For time-varying quantiles, the function to be minimized is

$$CV(\tau) = \sum_{t=1}^{T} \rho_\tau(y_t - \widetilde{\xi}_t^{(-t)}) \qquad (26)$$

where $\widetilde{\xi}_t^{(-t)}$ is the smoothed value at time $t$ when $y_t$ is dropped; see De Rossi and Harvey (2006). A similar criterion, $CV(\omega)$, may be used for expectiles.

In a time invariant model with quantiles or expectiles following a random walk, $Q_t$ is a scalar equal to $\sigma^2_{\eta(\tau)}$ or $\sigma^2_{\eta(\omega)}$. We would like a suitable parameterization in terms of a quasi signal-noise ratio that is scale invariant. For the mean, it will be recalled that $h_t = 2\sigma^2_t$, and so in a time invariant model the usual signal-noise ratio, $\sigma^2_\eta / \sigma^2$, implies that $g_t = \sigma^2$ and $\kappa = 0.5$ in (22). A similar normalization can be applied for other expectiles, so the quasi signal-noise ratios are $q_\omega = \sigma^2_{\eta(\omega)} / \sigma^2$. Hence the iterations are based on (22) with $g_t$ set to one and $\kappa = 0.5$. For quantiles, $\sigma^2_\eta / h$ is not scale invariant; we therefore consider the quasi signal-noise ratio $q = \sigma^2_{\eta(\tau)} / \omega^2$, as defined earlier.
26 Nonparametric regression with cubic splines

A slowly changing quantile can be estimated by minimizing the criterion function $\sum \rho_\tau \{ y_t - \xi_t \}$ subject to smoothness constraints. The cubic spline solution seeks to do this by finding a solution to

$$\min \sum_{t=1}^{T} \rho_\tau \{ y_t - \xi(x_t) \} + \lambda^2 \int \{ \xi''(x) \}^2 \, dx \qquad (27)$$

where $\xi(x)$ is a continuous function with square integrable second derivative, $0 \leq x \leq T$ and $x_t = t$. The parameter $\lambda^2$ controls the smoothness of the spline. We show that the same cubic spline is obtained by quantile signal extraction.

The SSF allows irregularly spaced observations to be handled, since it can deal with systems that are not time invariant. The form of such systems is the implied discrete time formulation of a continuous time model. This generalisation allows the handling of nonparametric quantile and expectile regression by cubic splines when there is only one explanatory variable. The observations, which may be from a cross-section, are arranged so that the values of the explanatory variable are in ascending order.
27 Bosch, Ye and Woodworth (1995) propose a solution to cubic spline quantile regression that uses quadratic programming. Unfortunately this necessitates the repeated inversion of large matrices of dimension up to $4T \times 4T$, which is very time consuming. Our signal extraction approach appears to be much faster (and more general) and makes estimation of the smoothing parameter (quasi signal-noise ratio) a feasible proposition.

The fundamental property of quantiles continues to hold with irregularly spaced observations; all that happens is that the SSF becomes time-varying. If there are multiple observations at some points, then $n$, the total number of observations, replaces $T$, the number of distinct points, in the summation.
28 An example of cubic spline regression is provided by the motorcycle data, which records measurements of the acceleration of the head of a dummy in motorcycle crash tests, taken at times measured in milliseconds. The observations are irregularly spaced and at some time points there are multiple observations. Harvey and Koopman (2000) highlight the stochastic trend connection. Figure 2 shows the cubic spline expectiles obtained using the value of $\sigma^2_\zeta / \sigma^2 = 0.07$ computed by CV for the mean. The graph gives a clear visual impression of the movements in level and dispersion. If we count the number of observations below each expectile, they can be interpreted as quantiles, if we are prepared to assume that the shape of the distribution is time invariant.
29 Figure: Cubic spline expectiles ($\omega = 0.05, 0.25, 0.5, 0.75, 0.95$; $q = 0.07$) fitted to the motorcycle data. The parameter $q$ is estimated by cross validation.
30 Conclusions (so far)

Time-varying quantiles and expectiles are GOOD THINGS and provide information on various aspects of a time series, such as dispersion and asymmetry.

Time-varying quantiles can be fitted by iteratively applying a suitably modified state space signal extraction algorithm. The algorithm for time-varying expectiles is much faster, as there is no need to take account of corner solutions. Time-varying quantiles satisfy the defining property of fixed quantiles in having the appropriate number of observations above and below, while expectiles satisfy properties that generalize the moment conditions associated with fixed expectiles.

Our model-based approach means that time-varying quantiles and expectiles can be used for forecasting. As such they offer an alternative to methods such as those in Engle and Manganelli (2004) and Granger and Sin (2000) that are based on conditional autoregressive models.

Smoothing with a local linear trend is equivalent to fitting a cubic spline. Because the state space form can handle irregularly spaced observations, the proposed algorithms are easily adapted to provide a viable means of computing spline-based nonparametric quantile and expectile regressions.
31 But estimating the signal-noise ratio for quantiles takes a long time.
32 The dual: tracking the distribution
33 A different, but complementary, approach is to pre-assign a value $\xi$ and then construct a binary series, $I(y_t \leq \xi)$. At any point in time,

$$E[I(y_t \leq \xi)] = \tau_t, \quad t = 1, \ldots, T. \qquad (28)$$

If the quantiles are fixed and known, setting $\xi = \xi(\tau)$ means that $E(I_t) = \tau$. If the distribution changes, a discount parameter, $\omega$, leads to

$$\widetilde{\tau}_{t+1|t} = a_{t+1|t} / \left( a_{t+1|t} + b_{t+1|t} \right), \quad t = 1, \ldots, T,$$

where

$$a_{t+1|t} = \omega a_{t|t-1} + \omega I_t, \quad t = 1, \ldots, T \qquad (29)$$
$$b_{t+1|t} = \omega b_{t|t-1} + \omega (1 - I_t) \qquad (30)$$

with $a_{1|0} = b_{1|0} = 1$. Note that $\widetilde{\tau}_{t+1|t} = \Pr(I_{t+1}(\tau) = 1 \mid I_j(\tau),\ j = 1, \ldots, t) = \widetilde{\tau}_{t|t}$.
34 The log-likelihood function is

$$\log L(\omega) = \sum_{t=1}^{T} \{ I_t \ln \widetilde{\tau}_{t|t-1} + (1 - I_t(\tau)) \ln(1 - \widetilde{\tau}_{t|t-1}) \}, \qquad (31)$$

where $\widetilde{\tau}_{1|0} = 1/2$. The filtered estimates are an EWMA in the indicators, that is

$$\widetilde{\tau}_{t+1|t} = \widetilde{\tau}_{t|t} = \frac{\sum_{j=0}^{t-1} \omega^j I_{t-j} + \omega^t}{\sum_{j=0}^{t-1} \omega^j + 2\omega^t} \simeq (1 - \omega) \sum_{j=0}^{t-1} \omega^j I_{t-j}$$

with the terms in $\omega^t$ not present if $a_{1|0} = b_{1|0} = 0$. It is reasonable to construct a two-sided smoothed estimator of the same form. This may be done using the smoother for a Gaussian random walk plus noise with $q = (1 - \omega)^2 / \omega$. In the middle of a large sample,

$$\widetilde{\tau}_{t|T} = \frac{1 - \omega}{1 + \omega} \sum_{j=-\infty}^{\infty} \omega^{|j|} I_{t-j}. \qquad (32)$$
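The recursions (29)-(30) and the filtered estimates are easy to sketch in code (the function name is ours). With $\omega = 1$ there is no discounting and the final filtered estimate reduces to the Beta-binomial posterior mean $(1 + \sum_t I_t)/(2 + T)$.

```python
import numpy as np

def filtered_tau(indicators, omega):
    # recursions (29)-(30) with a_{1|0} = b_{1|0} = 1;
    # returns tau_{t|t-1} for t = 1, ..., T+1
    a, b = 1.0, 1.0
    taus = [a / (a + b)]                  # tau_{1|0} = 1/2
    for i in indicators:
        a = omega * (a + i)
        b = omega * (b + (1 - i))
        taus.append(a / (a + b))
    return np.array(taus)
```

Note that the common factor $\omega$ cancels in the ratio $a/(a+b)$, so the discounting only matters through the relative weight it puts on old versus recent indicators.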
35 This is also the same problem as forecasting the sign of a variable or, more generally, of $y_t - \xi$, where $\xi$ is some pre-assigned value. Indeed it can be used for any binary series, e.g. $I_t = 1$ if Cambridge win the boat race. Here the estimated discount parameter gives $q = 0.017$.

Figure: Boat race indicator and fitted level.
36 By forming a whole grid, a changing distribution function can be estimated and tracked. There are $N$ categories defined by the boundaries $\xi_1, \ldots, \xi_{N-1}$. Let $I_{j,t} = 1$ if $y_t \leq \xi_j$, $j = 1, \ldots, N-1$, and zero otherwise, i.e. $I_{j,t} = I(y_t \leq \xi_j)$, and define $I_{N,t} = 1$ and $I_{0,t} = 0$. Then

$$a_{j,t+1|t} = \omega_j a_{j,t|t-1} + \omega_j (I_{j,t} - I_{j-1,t}), \quad j = 1, \ldots, N, \quad t = 1, \ldots, T \qquad (33)$$

with $a_{j,1|0} = 1$, $j = 1, \ldots, N$, and

$$\widetilde{\tau}_{j,t+1|t} = \widetilde{\tau}_{j,t|t} = \sum_{i=1}^{j} a_{i,t+1|t} \Big/ \sum_{i=1}^{N} a_{i,t+1|t}.$$

The $\widetilde{\tau}_{j,t|t}$'s lie in the range [0,1] by construction and, of course, $\widetilde{\tau}_{N,t|t} = 1$. When the $\omega$'s are the same, the estimated series of probabilities, $\widetilde{\tau}_{j,t|t}$, cannot cross.
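A sketch of the grid recursion (ours), with one common discount factor: the category indicator $I_{j,t} - I_{j-1,t}$ picks out which cell of the grid $y_t$ falls in, and the cumulated $a$'s give the non-crossing probability estimates.

```python
import numpy as np

def filtered_cdf(y, boundaries, omega):
    # track tau_{j,t|t} = P(y <= xi_j) on a grid, common discount omega
    N = len(boundaries) + 1
    a = np.ones(N)                        # a_{j,1|0} = 1
    taus = np.empty((len(y), N))
    for t, yt in enumerate(y):
        d = np.zeros(N)
        d[np.searchsorted(boundaries, yt)] = 1.0   # category of y_t
        a = omega * (a + d)               # recursion (33)
        taus[t] = np.cumsum(a) / a.sum()
    return taus
```

With $\omega = 1$ the $a$'s are simply 1 plus the category counts, so the $\widetilde{\tau}_{j,t|t}$'s reduce to a smoothed empirical cdf on the grid.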
37 The Dirichlet-multinomial or Polya predictive distribution leads to the log-likelihood function

$$\ln L(\omega_1, \ldots, \omega_N) = \sum_{t=1}^{T} \ln \ell_t$$

where

$$\ln \ell_t = \ln \Gamma\!\left( \sum_{j=1}^{N} a_{j,t|t-1} \right) - \ln \Gamma\!\left( 1 + \sum_{j=1}^{N} a_{j,t|t-1} \right) + \sum_{j=1}^{N} \ln \left\{ \Gamma(I_{j,t} - I_{j-1,t} + a_{j,t|t-1}) / \Gamma(a_{j,t|t-1}) \right\}. \qquad (34)$$

The binary likelihood, (31), is obtained when $N = 2$. Figure 4 shows a graph of estimates of the discount factors for GM, with $\xi_1, \ldots, \xi_{N-1}$ set to the 5%, 10%, ..., 95% quantiles from the raw (empirical) distribution. As expected, there is very little discounting at and around the centre of the distribution. A quadratic was fitted by OLS. This suggests that a general approach for estimating the $\omega$'s as a slowly changing function of $\tau$ is to maximize the likelihood, (34), with respect to $a$, $b$ and $c$ in the quadratic $\omega(\tau) = a + b\tau + c\tau^2$. One may also set $\omega = 1$ and/or impose symmetry.
38 Figure: GM - quadratic fitted to estimated discount factors
39 Time-varying quantiles may be extracted from the $\widetilde{\tau}_{j,t|t}$'s as follows. Suppose an estimate of $\xi_t(\tau)$ is required, and that $\widetilde{\tau}_{k-1,t|t} \leq \tau \leq \widetilde{\tau}_{k,t|t}$ for some $k = 1, \ldots, N$. Linear interpolation then yields

$$\widehat{\xi}_t(\tau) = \frac{\tau - \widetilde{\tau}_{k-1,t|t}}{\widetilde{\tau}_{k,t|t} - \widetilde{\tau}_{k-1,t|t}} \left\{ \xi_k - \xi_{k-1} \right\} + \xi_{k-1}, \quad t = 1, \ldots, T.$$

What is the relationship between these estimates and those obtained directly? If $\widetilde{\xi}_{t+j}(\tau) = \xi(\tau)$ is constant, then $IQ(y_{t+j} - \xi(\tau)) = \tau - I_{t+j}$, where $I_{t+j} = I(y_{t+j} \leq \xi(\tau))$, and so the condition

$$\sum_{j=-\infty}^{\infty} \theta^{|j|} IQ(y_{t+j} - \xi(\tau)) = 0$$

implied by fitting the time-varying quantile implies in turn that

$$\frac{1 - \theta}{1 + \theta} \sum_{j=-\infty}^{\infty} \theta^{|j|} I_{t+j} = \tau.$$

A comparison with (32) shows that $\omega = \theta$.
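The interpolation step itself is straightforward (sketch ours):

```python
def interp_quantile(tau, tau_grid, xi_grid):
    # find k with tau_grid[k-1] <= tau <= tau_grid[k], then interpolate
    # linearly between the grid boundaries xi_grid[k-1] and xi_grid[k]
    for k in range(1, len(tau_grid)):
        lo, hi = tau_grid[k - 1], tau_grid[k]
        if lo <= tau <= hi:
            frac = (tau - lo) / (hi - lo)
            return xi_grid[k - 1] + frac * (xi_grid[k] - xi_grid[k - 1])
    raise ValueError("tau outside the estimated range")
```

Because the $\widetilde{\tau}_{j,t|t}$'s are monotone in $j$ by construction, the bracketing interval is always well defined whenever $\tau$ lies within the estimated range.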
40 So tracking the distribution may be better - but it is really only viable when there is no trend in mean or variance.

THE END
More information(Y jz) t (XjZ) 0 t = S yx S yz S 1. S yx:z = T 1. etc. 2. Next solve the eigenvalue problem. js xx:z S xy:z S 1
Abstract Reduced Rank Regression The reduced rank regression model is a multivariate regression model with a coe cient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure,
More informationEcon 423 Lecture Notes: Additional Topics in Time Series 1
Econ 423 Lecture Notes: Additional Topics in Time Series 1 John C. Chao April 25, 2017 1 These notes are based in large part on Chapter 16 of Stock and Watson (2011). They are for instructional purposes
More information8 Periodic Linear Di erential Equations - Floquet Theory
8 Periodic Linear Di erential Equations - Floquet Theory The general theory of time varying linear di erential equations _x(t) = A(t)x(t) is still amazingly incomplete. Only for certain classes of functions
More informationChapter 2. Dynamic panel data models
Chapter 2. Dynamic panel data models School of Economics and Management - University of Geneva Christophe Hurlin, Université of Orléans University of Orléans April 2018 C. Hurlin (University of Orléans)
More informationState-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Financial Econometrics / 49
State-space Model Eduardo Rossi University of Pavia November 2013 Rossi State-space Model Financial Econometrics - 2013 1 / 49 Outline 1 Introduction 2 The Kalman filter 3 Forecast errors 4 State smoothing
More information1 Linear Difference Equations
ARMA Handout Jialin Yu 1 Linear Difference Equations First order systems Let {ε t } t=1 denote an input sequence and {y t} t=1 sequence generated by denote an output y t = φy t 1 + ε t t = 1, 2,... with
More informationState-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Fin. Econometrics / 53
State-space Model Eduardo Rossi University of Pavia November 2014 Rossi State-space Model Fin. Econometrics - 2014 1 / 53 Outline 1 Motivation 2 Introduction 3 The Kalman filter 4 Forecast errors 5 State
More informationECONOMICS 7200 MODERN TIME SERIES ANALYSIS Econometric Theory and Applications
ECONOMICS 7200 MODERN TIME SERIES ANALYSIS Econometric Theory and Applications Yongmiao Hong Department of Economics & Department of Statistical Sciences Cornell University Spring 2019 Time and uncertainty
More informationTime Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY
Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference
More informationComments on New Approaches in Period Analysis of Astronomical Time Series by Pavlos Protopapas (Or: A Pavlosian Response ) Don Percival
Comments on New Approaches in Period Analysis of Astronomical Time Series by Pavlos Protopapas (Or: A Pavlosian Response ) Don Percival Applied Physics Laboratory Department of Statistics University of
More informationLearning in Real Time: Theory and Empirical Evidence from the Term Structure of Survey Forecasts
Learning in Real Time: Theory and Empirical Evidence from the Term Structure of Survey Forecasts Andrew Patton and Allan Timmermann Oxford and UC-San Diego November 2007 Motivation Uncertainty about macroeconomic
More information2. Multivariate ARMA
2. Multivariate ARMA JEM 140: Quantitative Multivariate Finance IES, Charles University, Prague Summer 2018 JEM 140 () 2. Multivariate ARMA Summer 2018 1 / 19 Multivariate AR I Let r t = (r 1t,..., r kt
More informationECONOMETRICS FIELD EXAM Michigan State University May 9, 2008
ECONOMETRICS FIELD EXAM Michigan State University May 9, 2008 Instructions: Answer all four (4) questions. Point totals for each question are given in parenthesis; there are 00 points possible. Within
More informationStochastic Processes
Introduction and Techniques Lecture 4 in Financial Mathematics UiO-STK4510 Autumn 2015 Teacher: S. Ortiz-Latorre Stochastic Processes 1 Stochastic Processes De nition 1 Let (E; E) be a measurable space
More information7 Semiparametric Methods and Partially Linear Regression
7 Semiparametric Metods and Partially Linear Regression 7. Overview A model is called semiparametric if it is described by and were is nite-dimensional (e.g. parametric) and is in nite-dimensional (nonparametric).
More informationLinear Regression and Its Applications
Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start
More informationEconomics 620, Lecture 13: Time Series I
Economics 620, Lecture 13: Time Series I Nicholas M. Kiefer Cornell University Professor N. M. Kiefer (Cornell University) Lecture 13: Time Series I 1 / 19 AUTOCORRELATION Consider y = X + u where y is
More informationNotes on Mathematical Expectations and Classes of Distributions Introduction to Econometric Theory Econ. 770
Notes on Mathematical Expectations and Classes of Distributions Introduction to Econometric Theory Econ. 77 Jonathan B. Hill Dept. of Economics University of North Carolina - Chapel Hill October 4, 2 MATHEMATICAL
More informationModel Specification Testing in Nonparametric and Semiparametric Time Series Econometrics. Jiti Gao
Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics Jiti Gao Department of Statistics School of Mathematics and Statistics The University of Western Australia Crawley
More informationTime Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY
Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY PREFACE xiii 1 Difference Equations 1.1. First-Order Difference Equations 1 1.2. pth-order Difference Equations 7
More informationGLS-based unit root tests with multiple structural breaks both under the null and the alternative hypotheses
GLS-based unit root tests with multiple structural breaks both under the null and the alternative hypotheses Josep Lluís Carrion-i-Silvestre University of Barcelona Dukpa Kim Boston University Pierre Perron
More informationIntroduction to Nonparametric and Semiparametric Estimation. Good when there are lots of data and very little prior information on functional form.
1 Introduction to Nonparametric and Semiparametric Estimation Good when there are lots of data and very little prior information on functional form. Examples: y = f(x) + " (nonparametric) y = z 0 + f(x)
More informationYou must continuously work on this project over the course of four weeks.
The project Five project topics are described below. You should choose one the projects. Maximum of two people per project is allowed. If two people are working on a topic they are expected to do double
More informationLECTURE 13: TIME SERIES I
1 LECTURE 13: TIME SERIES I AUTOCORRELATION: Consider y = X + u where y is T 1, X is T K, is K 1 and u is T 1. We are using T and not N for sample size to emphasize that this is a time series. The natural
More informationLecture 2: Univariate Time Series
Lecture 2: Univariate Time Series Analysis: Conditional and Unconditional Densities, Stationarity, ARMA Processes Prof. Massimo Guidolin 20192 Financial Econometrics Spring/Winter 2017 Overview Motivation:
More informationThe Predictive Space or If x predicts y; what does y tell us about x?
The Predictive Space or If x predicts y; what does y tell us about x? Donald Robertson and Stephen Wright y October 28, 202 Abstract A predictive regression for y t and a time series representation of
More informationPANEL DATA RANDOM AND FIXED EFFECTS MODEL. Professor Menelaos Karanasos. December Panel Data (Institute) PANEL DATA December / 1
PANEL DATA RANDOM AND FIXED EFFECTS MODEL Professor Menelaos Karanasos December 2011 PANEL DATA Notation y it is the value of the dependent variable for cross-section unit i at time t where i = 1,...,
More informationCointegration and the joint con rmation hypothesis
Cointegration and the joint con rmation hypothesis VASCO J. GABRIEL Department of Economics, Birkbeck College, UK University of Minho, Portugal October 2001 Abstract Recent papers by Charemza and Syczewska
More informationSTRUCTURAL TIME-SERIES MODELLING
1: Structural Time-Series Modelling STRUCTURAL TIME-SERIES MODELLING Prajneshu Indian Agricultural Statistics Research Institute, New Delhi-11001 1. Introduction. ARIMA time-series methodology is widely
More informationChapter 9: Forecasting
Chapter 9: Forecasting One of the critical goals of time series analysis is to forecast (predict) the values of the time series at times in the future. When forecasting, we ideally should evaluate the
More informationNonparametric Trending Regression with Cross-Sectional Dependence
Nonparametric Trending Regression with Cross-Sectional Dependence Peter M. Robinson London School of Economics January 5, 00 Abstract Panel data, whose series length T is large but whose cross-section
More informationProf. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis
Introduction to Time Series Analysis 1 Contents: I. Basics of Time Series Analysis... 4 I.1 Stationarity... 5 I.2 Autocorrelation Function... 9 I.3 Partial Autocorrelation Function (PACF)... 14 I.4 Transformation
More informationEquivalence of several methods for decomposing time series into permananent and transitory components
Equivalence of several methods for decomposing time series into permananent and transitory components Don Harding Department of Economics and Finance LaTrobe University, Bundoora Victoria 3086 and Centre
More informationAn estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic
Chapter 6 ESTIMATION OF THE LONG-RUN COVARIANCE MATRIX An estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic standard errors for the OLS and linear IV estimators presented
More informationTime Series and Forecasting Lecture 4 NonLinear Time Series
Time Series and Forecasting Lecture 4 NonLinear Time Series Bruce E. Hansen Summer School in Economics and Econometrics University of Crete July 23-27, 2012 Bruce Hansen (University of Wisconsin) Foundations
More information11.1 Gujarati(2003): Chapter 12
11.1 Gujarati(2003): Chapter 12 Time Series Data 11.2 Time series process of economic variables e.g., GDP, M1, interest rate, echange rate, imports, eports, inflation rate, etc. Realization An observed
More informationLECTURE 12 UNIT ROOT, WEAK CONVERGENCE, FUNCTIONAL CLT
MARCH 29, 26 LECTURE 2 UNIT ROOT, WEAK CONVERGENCE, FUNCTIONAL CLT (Davidson (2), Chapter 4; Phillips Lectures on Unit Roots, Cointegration and Nonstationarity; White (999), Chapter 7) Unit root processes
More informationThreshold Autoregressions and NonLinear Autoregressions
Threshold Autoregressions and NonLinear Autoregressions Original Presentation: Central Bank of Chile October 29-31, 2013 Bruce Hansen (University of Wisconsin) Threshold Regression 1 / 47 Threshold Models
More informationGARCH Models. Eduardo Rossi University of Pavia. December Rossi GARCH Financial Econometrics / 50
GARCH Models Eduardo Rossi University of Pavia December 013 Rossi GARCH Financial Econometrics - 013 1 / 50 Outline 1 Stylized Facts ARCH model: definition 3 GARCH model 4 EGARCH 5 Asymmetric Models 6
More informationEconomics 620, Lecture 19: Introduction to Nonparametric and Semiparametric Estimation
Economics 620, Lecture 19: Introduction to Nonparametric and Semiparametric Estimation Nicholas M. Kiefer Cornell University Professor N. M. Kiefer (Cornell University) Lecture 19: Nonparametric Analysis
More informationGARCH Models Estimation and Inference. Eduardo Rossi University of Pavia
GARCH Models Estimation and Inference Eduardo Rossi University of Pavia Likelihood function The procedure most often used in estimating θ 0 in ARCH models involves the maximization of a likelihood function
More informationOutline. Overview of Issues. Spatial Regression. Luc Anselin
Spatial Regression Luc Anselin University of Illinois, Urbana-Champaign http://www.spacestat.com Outline Overview of Issues Spatial Regression Specifications Space-Time Models Spatial Latent Variable Models
More informationHeteroskedasticity in Time Series
Heteroskedasticity in Time Series Figure: Time Series of Daily NYSE Returns. 206 / 285 Key Fact 1: Stock Returns are Approximately Serially Uncorrelated Figure: Correlogram of Daily Stock Market Returns.
More informationGeneralized Autoregressive Score Models
Generalized Autoregressive Score Models by: Drew Creal, Siem Jan Koopman, André Lucas To capture the dynamic behavior of univariate and multivariate time series processes, we can allow parameters to be
More informationFederal Reserve Bank of New York Staff Reports
Federal Reserve Bank of New York Staff Reports A Flexible Approach to Parametric Inference in Nonlinear Time Series Models Gary Koop Simon Potter Staff Report no. 285 May 2007 This paper presents preliminary
More informationIntro VEC and BEKK Example Factor Models Cond Var and Cor Application Ref 4. MGARCH
ntro VEC and BEKK Example Factor Models Cond Var and Cor Application Ref 4. MGARCH JEM 140: Quantitative Multivariate Finance ES, Charles University, Prague Summer 2018 JEM 140 () 4. MGARCH Summer 2018
More information1 Class Organization. 2 Introduction
Time Series Analysis, Lecture 1, 2018 1 1 Class Organization Course Description Prerequisite Homework and Grading Readings and Lecture Notes Course Website: http://www.nanlifinance.org/teaching.html wechat
More informationSupplemental Material 1 for On Optimal Inference in the Linear IV Model
Supplemental Material 1 for On Optimal Inference in the Linear IV Model Donald W. K. Andrews Cowles Foundation for Research in Economics Yale University Vadim Marmer Vancouver School of Economics University
More informationc 2007 Je rey A. Miron
Review of Calculus Tools. c 007 Je rey A. Miron Outline 1. Derivatives. Optimization 3. Partial Derivatives. Optimization again 5. Optimization subject to constraints 1 Derivatives The basic tool we need
More informationDepartment of Econometrics and Business Statistics
ISSN 1440-771X Australia Department of Econometrics and Business Statistics http://www.buseco.monash.edu.au/depts/ebs/pubs/wpapers/ Exponential Smoothing: A Prediction Error Decomposition Principle Ralph
More informationParametric Inference on Strong Dependence
Parametric Inference on Strong Dependence Peter M. Robinson London School of Economics Based on joint work with Javier Hualde: Javier Hualde and Peter M. Robinson: Gaussian Pseudo-Maximum Likelihood Estimation
More informationMarkov-Switching Models with Endogenous Explanatory Variables. Chang-Jin Kim 1
Markov-Switching Models with Endogenous Explanatory Variables by Chang-Jin Kim 1 Dept. of Economics, Korea University and Dept. of Economics, University of Washington First draft: August, 2002 This version:
More informationCovariance Stationary Time Series. Example: Independent White Noise (IWN(0,σ 2 )) Y t = ε t, ε t iid N(0,σ 2 )
Covariance Stationary Time Series Stochastic Process: sequence of rv s ordered by time {Y t } {...,Y 1,Y 0,Y 1,...} Defn: {Y t } is covariance stationary if E[Y t ]μ for all t cov(y t,y t j )E[(Y t μ)(y
More informationARIMA Modelling and Forecasting
ARIMA Modelling and Forecasting Economic time series often appear nonstationary, because of trends, seasonal patterns, cycles, etc. However, the differences may appear stationary. Δx t x t x t 1 (first
More informationFöreläsning /31
1/31 Föreläsning 10 090420 Chapter 13 Econometric Modeling: Model Speci cation and Diagnostic testing 2/31 Types of speci cation errors Consider the following models: Y i = β 1 + β 2 X i + β 3 X 2 i +
More informationGMM estimation of spatial panels
MRA Munich ersonal ReEc Archive GMM estimation of spatial panels Francesco Moscone and Elisa Tosetti Brunel University 7. April 009 Online at http://mpra.ub.uni-muenchen.de/637/ MRA aper No. 637, posted
More informationCheng Soon Ong & Christian Walder. Canberra February June 2018
Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2018 (Many figures from C. M. Bishop, "Pattern Recognition and ") 1of 254 Part V
More information13 Endogeneity and Nonparametric IV
13 Endogeneity and Nonparametric IV 13.1 Nonparametric Endogeneity A nonparametric IV equation is Y i = g (X i ) + e i (1) E (e i j i ) = 0 In this model, some elements of X i are potentially endogenous,
More informationUniversity of Toronto
A Limit Result for the Prior Predictive by Michael Evans Department of Statistics University of Toronto and Gun Ho Jang Department of Statistics University of Toronto Technical Report No. 1004 April 15,
More informationConvergence of prices and rates of in ation
Convergence of prices and rates of in ation Fabio Busetti, Silvia Fabiani y, Andrew Harvey z May 18, 2006 Abstract We consider how unit root and stationarity tests can be used to study the convergence
More information9. AUTOCORRELATION. [1] Definition of Autocorrelation (AUTO) 1) Model: y t = x t β + ε t. We say that AUTO exists if cov(ε t,ε s ) 0, t s.
9. AUTOCORRELATION [1] Definition of Autocorrelation (AUTO) 1) Model: y t = x t β + ε t. We say that AUTO exists if cov(ε t,ε s ) 0, t s. ) Assumptions: All of SIC except SIC.3 (the random sample assumption).
More informationBeta-t-(E)GARCH. Andrew Harvey and Tirthankar Chakravarty, Faculty of Economics, Cambridge University. September 18, 2008
Beta-t-(E)GARCH Andrew Harvey and Tirthankar Chakravarty, Faculty of Economics, Cambridge University. ACH34@ECON.CAM.AC.UK September 18, 008 Abstract The GARCH-t model is widely used to predict volatilty.
More informationECO 513 Fall 2008 C.Sims KALMAN FILTER. s t = As t 1 + ε t Measurement equation : y t = Hs t + ν t. u t = r t. u 0 0 t 1 + y t = [ H I ] u t.
ECO 513 Fall 2008 C.Sims KALMAN FILTER Model in the form 1. THE KALMAN FILTER Plant equation : s t = As t 1 + ε t Measurement equation : y t = Hs t + ν t. Var(ε t ) = Ω, Var(ν t ) = Ξ. ε t ν t and (ε t,
More informationEconomics 536 Lecture 7. Introduction to Specification Testing in Dynamic Econometric Models
University of Illinois Fall 2016 Department of Economics Roger Koenker Economics 536 Lecture 7 Introduction to Specification Testing in Dynamic Econometric Models In this lecture I want to briefly describe
More information