Interval Estimation for AR(1) and GARCH(1,1) Models


1 Interval Estimation for AR(1) and GARCH(1,1) Models. School of Mathematics, Georgia Institute of Technology, 2010.

2 This talk is based on the following papers: Ngai Hang Chan, Deyuan Li and Liang Peng (2010). Toward a Unified Interval Estimation of Autoregressions. Ngai Hang Chan, Liang Peng and Rongmao Zhang (2010). Interval Estimation of the Tail Index of a GARCH(1,1) Model. Yun Gong, Zhouping Li and Liang Peng (2010). Empirical likelihood intervals for conditional Value-at-Risk in ARCH/GARCH models. Journal of Time Series Analysis 31.

3 Outline

4 EL Review: parametric likelihood ratio test; empirical likelihood method; estimating equations; profile empirical likelihood method.

5 Parametric likelihood ratio test. Observations: X_1, ..., X_n i.i.d. with pdf f(x; g(µ)), where g is a known function but µ = E(X_1) is unknown. Question: test H_0: µ = µ_0 against H_a: µ ≠ µ_0. PLRT: let µ̂ denote the maximum likelihood estimate of µ. The likelihood ratio is defined as λ = Π_{i=1}^n f(X_i; g(µ_0)) / Π_{i=1}^n f(X_i; g(µ̂)). The likelihood ratio test is based on the following Wilks theorem: under H_0, −2 log λ →_d χ²(1) as n → ∞.

6 Empirical likelihood method. When no parametric family is fitted to the X_i, but we still wish to test H_0: µ = µ_0 against H_a: µ ≠ µ_0, an analogue of the parametric likelihood ratio test was introduced by Owen (1988, 1990). It is a nonparametric likelihood ratio test, called the empirical likelihood method.

7 Define the empirical likelihood ratio function for µ as R(µ) = sup{ Π_{i=1}^n (n p_i) : p_i ≥ 0, Σ_{i=1}^n p_i = 1, Σ_{i=1}^n p_i X_i = µ }. By the Lagrange multiplier technique, we have p_i = n^{−1}{1 + λ^T(X_i − µ)}^{−1} and −2 log R(µ) = 2 Σ_{i=1}^n log{1 + λ^T(X_i − µ)}, where λ = λ(µ) satisfies n^{−1} Σ_{i=1}^n (X_i − µ) / {1 + λ^T(X_i − µ)} = 0.

8 Wilks theorem: under H_0, W(µ_0) := −2 log R(µ_0) →_d χ²(d) as n → ∞, where µ ∈ R^d. Confidence interval/region: the above theorem can be employed to construct a confidence interval or region for µ as I_α = {µ : W(µ) ≤ χ²_{d,α}}. Advantages: i) no need to estimate any additional quantities such as the asymptotic variance; ii) the shape of the confidence interval/region is determined by the sample automatically; iii) the method is Bartlett correctable.
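For concreteness, the one-dimensional case can be sketched in a few lines of code. This is an illustrative sketch, not part of the talk: the hypothetical helper `el_log_ratio` computes W(µ) = −2 log R(µ) by solving the Lagrange multiplier equation with bisection (the score is strictly decreasing in λ) and returns +∞ when µ lies outside the convex hull of the sample, where R(µ) = 0.

```python
import math
import random

def el_log_ratio(x, mu):
    """W(mu) = -2 log R(mu) for the mean of a univariate sample.

    Solves n^{-1} sum (x_i - mu) / (1 + lam (x_i - mu)) = 0 for the
    Lagrange multiplier lam by bisection, then returns
    2 sum log(1 + lam (x_i - mu)).
    """
    n = len(x)
    d = [xi - mu for xi in x]
    if min(d) >= 0 or max(d) <= 0:
        return float("inf")            # mu outside the convex hull: R(mu) = 0
    # a feasible lam must keep every 1 + lam * d_i strictly positive
    eps = 1e-10
    lo, hi = -1.0 / max(d) + eps, -1.0 / min(d) - eps
    for _ in range(200):               # the score below is decreasing in lam
        lam = 0.5 * (lo + hi)
        if sum(di / (1.0 + lam * di) for di in d) > 0.0:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

random.seed(42)
x = [random.gauss(0.0, 1.0) for _ in range(100)]
xbar = sum(x) / len(x)
w_at_mean = el_log_ratio(x, xbar)      # essentially 0: lam = 0 solves the equation
w_away = el_log_ratio(x, xbar + 0.5)   # grows as mu moves away from the sample mean
```

By the Wilks theorem above, at the true mean the statistic behaves like a χ²(1) draw for large n, which is what calibrates the confidence region I_α.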

9 Estimating equations. A popular way to formulate the empirical likelihood function is via estimating equations. Observations: X_1, ..., X_n i.i.d. with common distribution function F, and a q-dimensional parameter θ associated with F. Conditions: let y^T denote the transpose of the vector y and let G(x; θ) = (g_1(x; θ), ..., g_s(x; θ))^T denote s (≥ q) functionally independent functions, which connect F and θ through the equations E G(X_1; θ) = 0.

10 Empirical likelihood function: R(θ) = sup{ Π_{i=1}^n (n p_i) : p_i ≥ 0, Σ_{i=1}^n p_i = 1, Σ_{i=1}^n p_i G(X_i; θ) = 0 }. Wilks theorem: −2 log R(θ_0) →_d χ²(q) as n → ∞.

11 Profile empirical likelihood method. Suppose we are only interested in part of θ. Then, like the parametric profile likelihood ratio test, we have the profile empirical likelihood method. Observations: X_1, ..., X_n i.i.d. with common distribution function F, and a q-dimensional parameter θ associated with F. Write θ = (α^T, β^T)^T, where α and β are q_1-dimensional and q_2-dimensional parameters, respectively, with q_1 + q_2 = q; we are interested in α. Conditions: let y^T denote the transpose of the vector y and let G(x; θ) = (g_1(x; θ), ..., g_s(x; θ))^T denote s (≥ q) functionally independent functions, which connect F and θ through the equations E G(X_1; θ) = 0.

12 Profile empirical likelihood ratio: l(α) = 2 l_E((α^T, β̂^T(α))^T) − 2 l_E(θ̃), where l_E(θ) = Σ_{i=1}^n log{1 + λ^T G(X_i; θ)}, λ = λ(θ) is the solution of 0 = n^{−1} Σ_{i=1}^n G(X_i; θ) / {1 + λ^T G(X_i; θ)}, θ̃ = (α̃^T, β̃^T)^T minimizes l_E(θ) with respect to θ, and β̂(α) minimizes l_E((α^T, β^T)^T) with respect to β for fixed α. Wilks theorem: l(α_0) →_d χ²(q_1) as n → ∞, where α_0 denotes the true value of α.

13 Consider the first-order autoregressive process Y_i = βY_{i−1} + ε_i, i = 1, ..., n, Y_0 = 0, (1) where ε_1, ..., ε_n are independent and identically distributed random variables with mean zero and finite variance σ². The process is called stationary, unit root, near-integrated and explosive for the cases |β| < 1, β = 1, β = β(n) = 1 − γ/n for some constant γ ∈ R, and |β| > 1, respectively.

14 Although the behaviour of the process depends on β, the parameter β can be estimated by the usual least squares procedure: β̂ = argmin_β Σ_{i=1}^n (Y_i − βY_{i−1})² = Σ_{i=1}^n Y_i Y_{i−1} / Σ_{i=1}^n Y²_{i−1}. The asymptotic behaviour of β̂ has been studied extensively in the literature. Specifically,
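As a quick numerical illustration (not part of the talk; the helper names are hypothetical), the least squares estimate is straightforward to compute from a simulated path of model (1):

```python
import random

def simulate_ar1(beta, n, seed=0):
    """Simulate Y_i = beta * Y_{i-1} + eps_i with Y_0 = 0 and eps_i ~ N(0, 1)."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n):
        y.append(beta * y[-1] + rng.gauss(0.0, 1.0))
    return y

def ls_estimate(y):
    """Least squares: beta_hat = sum Y_i Y_{i-1} / sum Y_{i-1}^2."""
    num = sum(y[i] * y[i - 1] for i in range(1, len(y)))
    den = sum(y[i - 1] ** 2 for i in range(1, len(y)))
    return num / den

beta_hat_st = ls_estimate(simulate_ar1(0.5, 5000, seed=1))  # stationary case
beta_hat_ur = ls_estimate(simulate_ar1(1.0, 5000, seed=2))  # unit-root case
```

In the stationary case β̂ is root-n consistent; in the unit-root case it is super-consistent, which is why both estimates land very close to the truth here.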

15 (Σ_{i=1}^n Y²_{i−1})^{1/2} (β̂ − β) →_d
N(0, σ²) if |β| < 1,
σ ∫_0^1 X(t) dW(t) / (∫_0^1 X²(t) dt)^{1/2} if β = β(n) = 1 − γ/n,
(β² − 1)^{1/2} sgn(U) V if |β| > 1,   (2)
where γ ∈ R, {X(t) : 0 ≤ t ≤ 1} is an Ornstein-Uhlenbeck process satisfying the stochastic differential equation dX(t) = −γX(t) dt + σ dW(t) with X(0) = 0, {W(t) : 0 ≤ t ≤ 1} is a standard Brownian motion, and U and V are independent random variables having the same distribution as Σ_{j=1}^∞ β^{−(j−1)} ε_j.

16 To construct a confidence interval for the autoregressive coefficient β, or to test H_0: β = β_0, one usually employs (2), either by simulating the limit distribution or by using a bootstrap procedure. Since the limit distribution depends on the location of β, one needs to distinguish the different cases of β before simulating from the limit distribution. However, distinguishing the near-integrated case (β = 1 − γ/n) from the stationary or integrated case (|β| ≤ 1) by testing β remains a challenging task, as the limit distribution depends on the nuisance parameter γ according to (2).

17 On the other hand, it is known that the full sample bootstrap method (i.e., with resample size equal to the original sample size) works for both β̂ − β and (Σ_{i=1}^n Y²_{i−1})^{1/2}(β̂ − β) in the stationary case |β| < 1; see Bose (1988). For the explosive case |β| > 1, Basawa et al. (1989) showed that the full sample bootstrap method also works for (β² − 1)^{−1} β^n (β̂ − β). But it follows from the proofs in Basawa et al. (1989) that the full sample bootstrap method is inconsistent for either β̂ − β or (Σ_{i=1}^n Y²_{i−1})^{1/2}(β̂ − β) when |β| > 1. Although Ferretti and Romo (1996) showed that the full sample bootstrap method is valid for (Σ_{i=1}^n Y²_{i−1})^{1/2}(β̂ − 1) when β = 1, Datta (1996) proved that it is inconsistent for (Σ_{i=1}^n Y²_{i−1})^{1/2}(β̂ − β) when β = 1.

18 In summary, neither the limit distribution (2) nor the full sample bootstrap method is applicable for constructing a confidence interval for β without knowing the true location of β a priori. Question: can one construct an interval for β without knowing whether the process is stationary, near unit root or explosive? Solution: first find an estimator that always has a normal limit, even though its asymptotic variance may differ across the cases; then apply the empirical likelihood method.

19 Weighted estimation: first consider the near-integrated case. Write β̂ − β = Σ_{i=1}^n ε_i Y_{i−1} / Σ_{i=1}^n Y²_{i−1}. When β = 1 − γ/n, instead of converging to a constant in probability as in the usual case, the quantity n^{−2} Σ_{i=1}^n E(ε_i² Y²_{i−1} | F_{i−1}) (with F_i = σ(ε_1, ..., ε_i)) converges in distribution to a random variable. As a result, the limit of n^{−1} Σ_{i=1}^n ε_i Y_{i−1} can no longer be normal. However, as the proofs of Chan and Wei (1987) indicate, the following equality still holds: n^{−1} Σ_{i=1}^n E((Y_i²/Y²_{i−1}) ε_i² | F_{i−1}) = n^{−1} Σ_{i=1}^n E(ε_i² | F_{i−1}) + o_p(1) = σ² + o_p(1).

20 Motivated by the above equality, a weighted least squares estimate is proposed as follows: β̂_w(δ) = argmin_β Σ_{i=1}^n (Y_i − βY_{i−1})²/(1 + δY²_{i−1})^{1/2} = {Σ_{i=1}^n Y_{i−1}Y_i/(1 + δY²_{i−1})^{1/2}} / {Σ_{i=1}^n Y²_{i−1}/(1 + δY²_{i−1})^{1/2}}, where δ is a positive constant. For such a weighted least squares estimate, asymptotic normality is achieved.
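A direct implementation of β̂_w(δ) is short (an illustrative sketch with hypothetical names; the estimator itself is the one displayed above):

```python
import random

def weighted_ls(y, delta=1.0):
    """beta_w(delta) = [sum Y_{i-1} Y_i / (1 + delta Y_{i-1}^2)^{1/2}]
                     / [sum Y_{i-1}^2 / (1 + delta Y_{i-1}^2)^{1/2}]."""
    num = den = 0.0
    for i in range(1, len(y)):
        w = (1.0 + delta * y[i - 1] ** 2) ** 0.5
        num += y[i - 1] * y[i] / w
        den += y[i - 1] ** 2 / w
    return num / den

# the same estimator works across regimes: a stationary and a unit-root path
rng = random.Random(7)
y_st, y_ur = [0.0], [0.0]
for _ in range(4000):
    y_st.append(0.9 * y_st[-1] + rng.gauss(0.0, 1.0))
    y_ur.append(y_ur[-1] + rng.gauss(0.0, 1.0))
bw_st = weighted_ls(y_st, delta=1.0)   # close to 0.9
bw_ur = weighted_ls(y_ur, delta=1.0)   # close to 1.0
```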

21 Theorem. Assume model (1) holds with E|ε_1|^{2+d} < ∞ for some d > 0. Further assume that β is either a constant or β = β(n) = 1 − γ/n for some constant γ ∈ R. Then for any δ > 0, (Σ_{i=1}^n Y²_{i−1}/(1 + δY²_{i−1}))^{−1/2} {Σ_{i=1}^n Y²_{i−1}/(1 + δY²_{i−1})^{1/2}} (β̂_w(δ) − β) →_d N(0, σ²).

22 Based on this theorem, one can construct a confidence interval for β or test H_0: β = β_0 by estimating σ² with σ̂² = n^{−1} Σ_{i=1}^n (Y_i − β̃Y_{i−1})², where β̃ is either the least squares estimate or the weighted least squares estimate β̂_w(δ). Here, instead, the empirical likelihood method is applied to the weighted score equation of the weighted least squares estimate. Specifically,

23 define Z_i(β; δ) = Y_{i−1}/(1 + δY²_{i−1})^{1/2} (Y_i − βY_{i−1}) for i = 2, ..., n. Since the weighted least squares estimate β̂_w(δ) is the solution of the score equation Σ_{i=2}^n Z_i(β; δ) = 0, define the empirical likelihood function as L(β; δ) = sup{ Π_{i=2}^n (n p_i) : p_2 ≥ 0, ..., p_n ≥ 0, Σ_{i=2}^n p_i = 1, Σ_{i=2}^n p_i Z_i(β; δ) = 0 }.

24 After applying the Lagrange multiplier technique, one obtains p_i = n^{−1}(1 + λZ_i(β; δ))^{−1}, and the log empirical likelihood ratio becomes l(β; δ) = −2 log L(β; δ) = 2 Σ_{i=2}^n log(1 + λZ_i(β; δ)), where λ = λ(β, δ) satisfies n^{−1} Σ_{i=2}^n Z_i(β; δ)/{1 + λZ_i(β; δ)} = 0. (3) The following theorem shows that the Wilks theorem holds for the proposed empirical likelihood method.

25 Theorem. Under the conditions of Theorem 1, l(β_0; δ) →_d χ²(1) as n → ∞ for any given δ > 0, where β_0 denotes the true value of β. In particular, using this theorem, one can construct a confidence interval for β_0 with level α as I_α = {β : l(β; δ) ≤ χ²_{1,α}}, where χ²_{1,α} denotes the α-quantile of χ²(1).
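A numerical sketch of I_α (illustrative, with hypothetical helper names): the same bisection trick used for the mean solves (3) for λ, and a grid scan over β traces out where l(β; δ) falls below the χ² cutoff. The path below is a unit-root case, where the classical interval based on (2) would require knowing the regime.

```python
import math
import random

def el_ratio_ar1(y, beta, delta=1.0):
    """l(beta; delta) = 2 sum log(1 + lam Z_i), with lam solving (3) by bisection."""
    # Z_i for i = 1..n; the i = 1 term is 0 because Y_0 = 0 and is harmless
    z = [y[i - 1] / (1.0 + delta * y[i - 1] ** 2) ** 0.5 * (y[i] - beta * y[i - 1])
         for i in range(1, len(y))]
    if min(z) >= 0 or max(z) <= 0:
        return float("inf")            # zero is not inside the convex hull of the Z_i
    eps = 1e-10
    lo, hi = -1.0 / max(z) + eps, -1.0 / min(z) - eps
    for _ in range(80):                # the score in (3) is decreasing in lam
        lam = 0.5 * (lo + hi)
        if sum(zi / (1.0 + lam * zi) for zi in z) > 0.0:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * zi) for zi in z)

rng = random.Random(11)
y = [0.0]
for _ in range(300):                   # unit-root path: beta_0 = 1
    y.append(y[-1] + rng.gauss(0.0, 1.0))
# 95% interval {beta : l(beta; 1) <= 3.841}, 3.841 = 0.95-quantile of chi^2(1)
grid = [0.9 + 0.001 * k for k in range(201)]
inside = [b for b in grid if el_ratio_ar1(y, b) <= 3.841]
ci = (min(inside), max(inside))        # approximate I_0.95 on the grid
```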

26 Remark 1. When the sample size n is not large, the rate at which Z_i(β; δ) diverges is not very fast in the near-integrated case or the explosive case. Hence a small δ is preferred. In the simulation study, δ = 1 is used.

27 Remark 2. Theorem 2 still holds when {ε_i} is a martingale difference sequence with respect to an increasing sequence of σ-fields {F_t} such that n^{−1} Σ_{t=1}^n E(ε_t² | F_{t−1}) →_p σ² and n^{−1} Σ_{t=1}^n E(|ε_t|^{2+d} | F_{t−1}) →_p c < ∞ for some d > 0 as n → ∞.

28 Simulation: a total of 10,000 random samples, each of size n = 200, are drawn from model (1), where the distribution of ε_i is N(0, 1), t_3 or t_6. Consider the near-integrated settings β = 1 − γ/n with γ = ±50, ±10, ±5, ±3, ±1 and 0. Since n = 200, the cases γ = −50 and γ = 50 may be perceived as the explosive case and the stationary case, respectively.

29 For comparison, a bootstrap confidence interval for β at level α is constructed as I*_α = [β̂ − c_2 (Σ_{i=1}^n Y²_{i−1})^{−1/2}, β̂ − c_1 (Σ_{i=1}^n Y²_{i−1})^{−1/2}], where c_1 and c_2 are the [B(1 − α)/2]-th and [B(1 + α)/2]-th largest values of the B bootstrapped versions of the statistic (Σ_{i=1}^n Y²_{i−1})^{1/2}(β̂ − β), respectively. Here we took B = 5,000. Theoretically, when β is in the near-integrated case or the explosive case, this bootstrap confidence interval becomes inconsistent. In calculating the coverage probability of the proposed empirical likelihood method, δ = 1 is used with the emplik package in R.

30 Tables 1–3 report the empirical coverage probabilities of these two confidence intervals. Note that the cases γ = −1, −3, −50 clearly indicate the inconsistency of the bootstrap method. In most cases, the proposed empirical likelihood method provides more accurate coverage probabilities than the bootstrap method, and the empirical likelihood method performs reasonably well under all circumstances of β.

31 Table 1: empirical coverage probabilities for the empirical likelihood confidence intervals I_α and the bootstrap confidence intervals I*_α, with α = 0.55, 0.9, 0.95 and ε_i standard normal.

32 (Table 1 columns: γ; I_0.55; I_0.9; I_0.95; I*_0.55; I*_0.9; I*_0.95. Numerical entries not preserved in the transcription.)

33 Table 2: empirical coverage probabilities for the empirical likelihood confidence intervals I_α and the bootstrap confidence intervals I*_α, with α = 0.55, 0.9, 0.95 and ε_i distributed as t_6.

34 (Table 2 columns: γ; I_0.55; I_0.9; I_0.95; I*_0.55; I*_0.9; I*_0.95. Numerical entries not preserved in the transcription.)

35 Table 3: empirical coverage probabilities for the empirical likelihood confidence intervals I_α and the bootstrap confidence intervals I*_α, with α = 0.55, 0.9, 0.95 and ε_i distributed as t_3.

36 (Table 3 columns: γ; I_0.55; I_0.9; I_0.95; I*_0.55; I*_0.9; I*_0.95. Numerical entries not preserved in the transcription.)

37 Proof of Theorem 1. Define X_i = n^{−1/2} Y_{i−1}/(1 + δY²_{i−1})^{1/2} ε_i and F_i = σ(ε_1, ..., ε_i) for i = 1, ..., n. Then n^{−1/2} {Σ_{i=1}^n Y²_{i−1}/(1 + δY²_{i−1})^{1/2}} (β̂_w(δ) − β) = Σ_{i=1}^n X_i. (4) When |β| < 1, it is easy to check that n^{−1} Σ_{i=1}^n Y²_{i−1}/(1 + δY²_{i−1}) →_p E{Y_1²/(1 + δY_1²)}. (5)

38 When β = 1 − γ/n for some constant γ, Lemma 2.1 in Chan and Wei (1987) shows that there is a standard Brownian motion W_1(t) such that n^{−1/2} β^{[nt]−n} Y_{[nt]} →_D W_1(e^{−2γ}(e^{2tγ} − 1)/(2γ)) in D[0, 1] as n → ∞, which implies that n^{−1} Σ_{i=1}^n Y²_{i−1}/(1 + δY²_{i−1}) →_p δ^{−1} (6) as n → ∞.

39 When |β| > 1, it follows that β^{−T} Y_T →_d Σ_{i=1}^∞ β^{−i} ε_i as T → ∞, which implies that n^{−1} Σ_{i=1}^n Y²_{i−1}/(1 + δY²_{i−1}) →_p δ^{−1} (7) as n → ∞.

40 Hence, by (5)–(7), Σ_{i=1}^n E(X_i² | F_{i−1}) = σ² n^{−1} Σ_{i=1}^n Y²_{i−1}/(1 + δY²_{i−1}) →_p σ² × { E{Y_1²/(1 + δY_1²)} if |β| < 1; δ^{−1} if |β| > 1; δ^{−1} if β = 1 − γ/n }. (8) Similar to the proofs of (5)–(7), it can be shown that

41 Σ_{i=1}^n E(X_i² I(|X_i| > c) | F_{i−1}) ≤ c^{−d} Σ_{i=1}^n E(|X_i|^{2+d} | F_{i−1}) = c^{−d} E|ε_1|^{2+d} n^{−(2+d)/2} Σ_{i=1}^n |Y_{i−1}|^{2+d}/{1 + δY²_{i−1}}^{(2+d)/2} →_p 0 (9) for any c > 0 as n → ∞. Hence, Theorem 1 follows from (4)–(9) and Corollary 3.1 of Hall and Heyde (1980).

42 Proof of Theorem 2. It follows from (5)–(7) that n^{−1} Σ_{i=1}^n Z_i²(β_0; δ) →_p σ² × { E{Y_1²/(1 + δY_1²)} if |β| < 1; δ^{−1} if β = 1 − γ/n; δ^{−1} if |β| > 1 }. (10) By Corollary 3.1 of Hall and Heyde (1980), n^{−1/2} Σ_{i=1}^n Z_i(β_0; δ) →_d { N(0, σ² E{Y_1²/(1 + δY_1²)}) if |β| < 1; N(0, σ² δ^{−1}) if β = 1 − γ/n; N(0, σ² δ^{−1}) if |β| > 1 }. (11)

43 It is easy to check that max_{1≤i≤n} |Z_i(β_0; δ)| = o(n^{1/2}) a.s. (12) Hence, Theorem 2 can be proved using (10)–(12) and the standard arguments in Chapter 11 of Owen (2001).

44 Consider a generalized autoregressive conditional heteroskedastic (GARCH) model given by Y_t = σ_t ε_t, σ_t² = ω + aσ²_{t−1} + bY²_{t−1}, (13) for t = 1, ..., n, where {ε_t} are independent and identically distributed random variables with mean zero and variance one, ω > 0, a ≥ 0 and b ≥ 0. It follows from Kesten (1973) or Goldie (1991) that, under some regularity conditions, there exists a constant α such that E(a + bε_t²)^α = 1, (14) where α is the tail index of the time series {Y_t}.
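Equation (14) pins down α once (a, b) and the law of ε_t are fixed; a small Monte Carlo root-finder makes this concrete (an illustrative sketch, not the paper's estimator). For a = 0, b = 1 and standard normal errors, E(ε²)^α = 1 holds exactly at α = 1, which gives a check.

```python
import random

def tail_index(a, b, eps, lo=0.05, hi=20.0):
    """Solve (1/m) sum (a + b e_i^2)^alpha = 1 for alpha by bisection.

    eps is a sample from the innovation law. The moment function is convex
    in alpha and equals 1 at alpha = 0, so it crosses 1 from below exactly
    once on (0, infinity) when E log(a + b eps^2) < 0.
    """
    vals = [a + b * e * e for e in eps]
    def moment(alpha):
        return sum(v ** alpha for v in vals) / len(vals)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if moment(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(13)
eps = [rng.gauss(0.0, 1.0) for _ in range(50000)]
alpha = tail_index(0.0, 1.0, eps)   # ~ 1 for a = 0, b = 1, N(0, 1) errors
```

The simulation section below uses exactly this sample-average version of (14) to compute the "true" α_0 for each design.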

45 Specifically, Goldie (1991) showed that there exists a positive constant c such that P(|Y_1| > x) = cx^{−α}{1 + o(1)} as x → ∞. (15) Consequently, knowing the tail index α offers important information for understanding many properties of {Y_t}.

46 For example, knowing whether α exceeds 1/2 or 1 is instrumental in testing for structural changes in stock prices; see Quintos, Fan and Phillips (2001). Similarly, the limit distribution of the sample covariances of {Y_t} depends on whether α ∈ (0, 4], α ∈ (4, 8] or α > 8; see Mikosch and Stărică (2000). Also, the tail empirical process of {Y_t} is determined by the value of α (see Drees (2000)), and for a fixed integer m, the tail index of the joint distribution of (Y_1, ..., Y_m) is simply α (see Basrak, Davis and Mikosch (2002)).

47 In view of (14), Mikosch and Stărică (2000) and Berkes, Horváth and Kokoszka (2003) proposed estimating α by solving the equation n^{−1} Σ_{t=1}^n (â + b̂ε̂_t²)^α = 1, (16) where â and b̂ are the quasi maximum likelihood estimates (QMLE) of a and b, and ε̂_t = Y_t/σ̂_t is an estimator of ε_t with (ω, a, b)^T replaced by the QMLE. Denote this estimate by α̂. For a detailed study of the asymptotic limit of quasi maximum likelihood estimation in a GARCH(p, q) model, see Hall and Yao (2003).

48 Based on the asymptotic distribution of α̂ given in Berkes, Horváth and Kokoszka (2003), one can construct an interval estimate of α by estimating the asymptotic variance of α̂. Since the plug-in estimators â and b̂ are employed, the estimator α̂ has a non-trivial asymptotic variance that depends on the asymptotic limit of these plug-in estimators. To circumvent the difficulty of estimating the asymptotic variance, we propose to apply the empirical likelihood method to construct a confidence interval for α.

49 Methodology: let θ = (ω, a, b)^T and assume that in model (13) E log(a + bε_t²) < 0. This implies that σ_t² := σ_t²(θ) = ω(1 − a^t)/(1 − a) + b Σ_{k=0}^{t−1} a^k Y²_{t−1−k} + a^t σ_0²(θ). (17) For t = 1, ..., n, define

50 σ̃_t²(θ) = ω(1 − a^t)/(1 − a) + b Σ_{0≤k≤t−1} a^k Y²_{t−k−1},
Z_{t,1}(θ, α) = {a + bY_t²/σ_t²(θ)}^α − 1, Z̃_{t,1}(θ, α) = {a + bY_t²/σ̃_t²(θ)}^α − 1,
Z_{t,2}(θ) = ∂/∂ω {Y_t²/σ_t²(θ) + log σ_t²(θ)}, Z̃_{t,2}(θ) = ∂/∂ω {Y_t²/σ̃_t²(θ) + log σ̃_t²(θ)},
Z_{t,3}(θ) = ∂/∂a {Y_t²/σ_t²(θ) + log σ_t²(θ)}, Z̃_{t,3}(θ) = ∂/∂a {Y_t²/σ̃_t²(θ) + log σ̃_t²(θ)},
Z_{t,4}(θ) = ∂/∂b {Y_t²/σ_t²(θ) + log σ_t²(θ)}, Z̃_{t,4}(θ) = ∂/∂b {Y_t²/σ̃_t²(θ) + log σ̃_t²(θ)}.
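The closed form (17) is just the variance recursion in (13) unrolled; this can be checked numerically (an illustrative sketch with arbitrary placeholder data, not part of the talk):

```python
import random

omega, a, b = 1.0, 0.1, 0.8
sigma0_sq = 2.0                                  # arbitrary starting value sigma_0^2
rng = random.Random(3)
y = [rng.gauss(0.0, 1.0) for _ in range(50)]     # placeholder data Y_0, ..., Y_49

# recursion from (13): sigma_t^2 = omega + a * sigma_{t-1}^2 + b * Y_{t-1}^2
sig = [sigma0_sq]
for t in range(1, 50):
    sig.append(omega + a * sig[-1] + b * y[t - 1] ** 2)

# closed form (17)
def closed_form(t):
    return (omega * (1 - a ** t) / (1 - a)
            + b * sum(a ** k * y[t - 1 - k] ** 2 for k in range(t))
            + a ** t * sigma0_sq)

max_gap = max(abs(sig[t] - closed_form(t)) for t in range(1, 50))  # ~ 0
```

The truncated version σ̃_t²(θ) simply drops the unobservable a^t σ_0²(θ) term, which is exponentially small when a < 1.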

51 Then the quasi maximum likelihood estimate of θ is the solution of the score equations Σ_{t=1}^n Z̃_{t,i}(θ) = 0 for i = 2, 3, 4. We propose to apply the profile empirical likelihood method based on estimating equations in Qin and Lawless (1994) to the equations EZ_t(θ, α) = 0, where Z_t(θ, α) = (Z_{t,1}(θ, α), Z_{t,2}(θ), Z_{t,3}(θ), Z_{t,4}(θ))^T for t = 1, ..., n.

52 In other words, define the empirical likelihood function of (θ, α) as L(θ, α) = sup{ Π_{t=1}^n (np_t) : p_1 ≥ 0, ..., p_n ≥ 0, Σ_{t=1}^n p_t = 1, Σ_{t=1}^n p_t Z̃_t(θ, α) = 0 }, where Z̃_t(θ, α) = (Z̃_{t,1}(θ, α), Z̃_{t,2}(θ), Z̃_{t,3}(θ), Z̃_{t,4}(θ))^T for t = 1, ..., n.

53 By virtue of the Lagrange multipliers, it is clear that p_t = n^{−1}{1 + λ^T Z̃_t(θ, α)}^{−1} for t = 1, ..., n, and l(θ, α) := −2 log L(θ, α) = 2 Σ_{t=1}^n log{1 + λ^T Z̃_t(θ, α)}, where λ = λ(θ, α) satisfies Σ_{t=1}^n Z̃_t(θ, α)/{1 + λ^T Z̃_t(θ, α)} = 0. (18) Since we are only interested in the tail index α, consider the profile empirical likelihood function l_p(α) = l(θ̃(α), α), where θ̃ = θ̃(α) := argmin_θ l(θ, α).

54 Throughout, we use θ_0 and α_0 to denote the true values of θ and α, respectively. First we show the existence and consistency of θ̃(α_0).

55 Proposition. Suppose (13) holds with E log(a + bε_t²) < 0, a < 1 and E|ε_1|^{4δ} < ∞ for some δ > max(1, α_0). Then, with probability tending to 1, l(θ, α_0) attains its minimum value at some point θ̃ in the interior of the ball Θ_n = {θ : ‖θ − θ_0‖ ≤ Cn^{−1/(2γ)}} for any given γ ∈ (1, min{δ/α_0, δ}) and C > 0. Moreover, θ̃ and λ̃ = λ(θ̃, α_0) satisfy Q_{1n}(θ̃, λ̃) = 0 and Q_{2n}(θ̃, λ̃) = 0, (19) where Q_{1n}(θ, λ) = n^{−1} Σ_{t=1}^n Z̃_t(θ, α_0)/{1 + λ^T Z̃_t(θ, α_0)} and Q_{2n}(θ, λ) = n^{−1} Σ_{t=1}^n {1 + λ^T Z̃_t(θ, α_0)}^{−1} {∂Z̃_t(θ, α_0)/∂θ}^T λ.

56 Theorem. Under the conditions of Proposition 2.1, the random variable l_p(α_0), with θ̃ given in Proposition 2.1, converges in distribution to χ²(1) as n → ∞. Corollary. For any 0 < ς < 1, let µ_ς denote the ς-th quantile of a χ²(1) random variable and define the empirical likelihood confidence interval with level ς as I_ς = {α : l_p(α) ≤ µ_ς}. Then P(α_0 ∈ I_ς) → ς as n → ∞.

57 Remark: note that instead of assuming δ > α_0/2 as stipulated in Berkes, Horváth and Kokoszka (2003), we assume δ > α_0. The reason is that, to establish the asymptotic normality of n^{−1/2} Σ_{t=1}^n {(a_0 + b_0ε_t²)^{α_0} − 1}, the condition E{(a_0 + b_0ε_t²)^{α_0} − 1}² < ∞ is required.

58 Simulation: consider model (13) with (ω, a, b) = (1, 0.1, 0.8) or (1, 0.8, 0.1), and ε_t ~ N(0, 1) or t(ν)/√{ν/(ν − 2)}. First we draw 100,000 random samples of size n = 10,000 for the errors in (13). For each random sample, we solve equation (14), with the expectation replaced by the sample average, to obtain a value of α, say α̃. The true value α_0 is then calculated as the average of these 100,000 values of α̃.

59 Next we draw 1,000 random samples, each of size n = 500 or 2,000, from (13). Since the asymptotic limit of α̂ derived in Berkes, Horváth and Kokoszka (2003) is quite complicated, we consider the percentile bootstrap confidence interval for α based on α̂. Specifically, draw B = 200 bootstrap samples, each of size n, from ε̂_1, ..., ε̂_n with replacement, say ε*_1, ..., ε*_n, and then generate observations from model (13) with (ω, a, b) and the ε_t's replaced by the QMLE and the ε*_t's. Based on these generated observations, the bootstrap version of α̂ is calculated. The percentile bootstrap confidence interval with level ς is then obtained from the B bootstrapped estimators of α. Denote it by I_ς^b.

60 The empirical coverage probabilities of the percentile bootstrap confidence interval I_ς^b and the proposed empirical likelihood confidence interval I_ς at levels ς = 0.9 and 0.95 are reported in Tables 4 and 5. Since computing the coverage probability of the percentile bootstrap interval I_ς^b is very time-consuming (almost B times the time required by the empirical likelihood method), we only calculate I_ς^b for sample size n = 500.

61 We observe from Tables 4 and 5 that: (i) the proposed empirical likelihood method performs much better than the percentile bootstrap method when (ω, a, b) = (1, 0.1, 0.8); (ii) the performance of α̂ is very poor when (ω, a, b) = (1, 0.8, 0.1) and n = 500; and (iii) both the accuracy of the proposed empirical likelihood method and that of the tail index estimator improve as the sample size increases. In conclusion, the simulation results show that the proposed empirical likelihood method offers more accurate coverage probabilities than the percentile bootstrap method, and its computational burden is much lighter.

62 Table 4: empirical coverage probabilities of the proposed empirical likelihood interval I_ς and the percentile bootstrap interval I_ς^b at levels ς = 0.9 and 0.95. The true value of the tail index α, the sample mean and the sample standard deviation of α̂ are given in the last three columns of the table. Here n = 500.

63 (Table 4 columns: (ω, a, b, ε_t); I_0.9; I_0.9^b; I_0.95; I_0.95^b; α_0; E(α̂); SD(α̂). Rows: (1, 0.1, 0.8, N), (1, 0.8, 0.1, N), (1, 0.1, 0.8, t(5)), (1, 0.8, 0.1, t(5)), (1, 0.1, 0.8, t(10)), (1, 0.8, 0.1, t(10)). Numerical entries not preserved in the transcription.)

64 Table 5: empirical coverage probabilities of the proposed empirical likelihood interval I_ς and the percentile bootstrap interval I_ς^b at levels ς = 0.9 and 0.95. The true value of the tail index α, the sample mean and the sample standard deviation of α̂ are given in the last three columns of the table. Here n = 2000.

65 (Table 5 columns: (ω, a, b, ε_t); I_0.9; I_0.9^b; I_0.95; I_0.95^b; α_0; E(α̂); SD(α̂). Rows: (1, 0.1, 0.8, N), (1, 0.8, 0.1, N), (1, 0.1, 0.8, t(5)), (1, 0.8, 0.1, t(5)), (1, 0.1, 0.8, t(10)), (1, 0.8, 0.1, t(10)). Numerical entries not preserved in the transcription.)

66 GARCH(p, q) model: X_t = ε_t √h_t and h_t = ω + Σ_{i=1}^p α_i X²_{t−i} + Σ_{j=1}^q β_j h_{t−j}, (20) where ω > 0 and α_i, β_j ≥ 0 are constants, and the errors ε_t are i.i.d. random variables with mean 0 and variance 1. Denote λ = (ω, α_1, ..., α_p, β_1, ..., β_q)^T ∈ Θ and write h_t as h_t(λ). Here ω is the baseline variance, α_i quantifies the impact of shocks, and β_j quantifies the persistence of shocks.

67 Remark. Setting β_j = 0 in model (20) yields the ARCH(p) model. GARCH models are widely applied in finance and economics, for example for option pricing and for estimating volatility for risk management purposes.

68 For r ∈ (0, 1), the one-step-ahead 100r% conditional Value-at-Risk (CVaR), given X_1, ..., X_n, is defined as q_r = inf{x : P(X_{n+1} ≤ x | X_{n+1−k}, k ≥ 1) ≥ r}. An obvious estimator of the CVaR q_r is q̂_r = √{h_{n+1}(λ̂)} θ̂_{ε,r}, where λ̂ is an estimator of λ and θ̂_{ε,r} is an estimator of the 100r% quantile of ε_t.
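The plug-in estimator q̂_r can be sketched for an ARCH(1) special case, with the parameters treated as known to sidestep the QMLE step (everything below is illustrative; in practice λ̂ replaces the true λ):

```python
import random

omega, a1, n, r = 0.5, 0.7, 4000, 0.95
rng = random.Random(17)

# simulate ARCH(1): X_t = eps_t * sqrt(h_t), h_t = omega + a1 * X_{t-1}^2
x, h = [0.0], [0.0]                    # X_0 = 0; h[0] is an unused placeholder
for _ in range(n):
    h.append(omega + a1 * x[-1] ** 2)
    x.append(rng.gauss(0.0, 1.0) * h[-1] ** 0.5)

# residuals eps_t = X_t / sqrt(h_t), here with the true (omega, a1)
eps = sorted(x[t] / h[t] ** 0.5 for t in range(1, n + 1))
theta_hat = eps[int(r * n)]            # sample 100r% quantile of the residuals
h_next = omega + a1 * x[-1] ** 2       # one-step-ahead conditional variance
q_hat = h_next ** 0.5 * theta_hat      # plug-in CVaR estimate of q_r
```

With N(0, 1) errors, theta_hat estimates the normal 0.95-quantile (about 1.645), and q_hat scales it by the conditional volatility of the next observation.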

69 Here λ̂ can be the quasi-maximum likelihood estimator, and θ̂_{ε,r} can be taken as the r-th sample quantile of the estimated errors ε̂_1, ..., ε̂_n. However, the asymptotic variance of q̂_r is complicated, since the variability of the parameter estimation is involved. We therefore propose the estimating equation approach of EL, which takes the variability of the parameter estimation into account.

70 Define the quasi maximum likelihood as L(λ) = Σ_{t=1}^n l_t(λ) with l_t(λ) = −(1/2) log h_t(λ) − X_t²/(2h_t(λ)). Score equation: Σ_{t=1}^n D_t(λ) = 0, where D_t(λ) = ∂l_t(λ)/∂λ = {1/(2h_t(λ))} {∂h_t(λ)/∂λ} {X_t²/h_t(λ) − 1}. Let K be a symmetric density function with support in [−1, 1], and put G(x) = ∫_{−∞}^x K(y) dy.
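The smoothed indicator G turns the quantile constraint into a differentiable estimating equation. A standalone sketch (illustrative; the kernel is the one used later in the simulations, here normalized by 15/16 so that it integrates to one, which is my choice of normalization):

```python
import random

def G(x):
    """G(x) = integral of K from -1 to x, with K(u) = (15/16)(1 - u^2)^2 on [-1, 1]."""
    if x <= -1.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    # antiderivative of (1 - u^2)^2 is u - 2u^3/3 + u^5/5
    return 0.5 + (15.0 / 16.0) * (x - 2.0 * x ** 3 / 3.0 + x ** 5 / 5.0)

def smoothed_quantile(xs, r, h):
    """Solve n^{-1} sum G((theta - x_i) / h) = r for theta by bisection."""
    lo, hi = min(xs) - h, max(xs) + h
    for _ in range(100):               # the left-hand side is increasing in theta
        mid = 0.5 * (lo + hi)
        if sum(G((mid - xi) / h) for xi in xs) / len(xs) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(19)
xs = [rng.gauss(0.0, 1.0) for _ in range(2000)]
theta = smoothed_quantile(xs, 0.95, h=len(xs) ** (-1.0 / 3.0))   # ~ 1.645
```

As the bandwidth h shrinks, the smoothed equation reduces to the usual empirical quantile condition; keeping h > 0 is what makes the EL constraint below well behaved.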

71 Then we can construct the EL ratio for the r-th CVaR θ as L_n(r; θ, λ) = sup{ Π_{t=1}^n (np_t) : Σ_{t=1}^n p_t = 1, Σ_{t=1}^n p_t D_t(λ) = 0, Σ_{t=1}^n p_t G({θ/√{h_{n+1}(λ)} − ε_t(λ)}/h) = r, p_i > 0, i = 1, ..., n }, where ε_t(λ) = X_t/√{h_t(λ)}, r ∈ (0, 1) and h = h(n) > 0 is a bandwidth.

72 Denote g_t(θ, λ) = (G({θ/√{h_{n+1}(λ)} − ε_t(λ)}/h) − r, D_t^T(λ))^T = (ω_t(λ), D_t^T(λ))^T. Then the EL ratio can be rewritten as L_n(r; θ, λ) = sup{ Π_{t=1}^n (np_t) : Σ_{t=1}^n p_t = 1, Σ_{t=1}^n p_t g_t(θ, λ) = 0, p_i > 0, i = 1, ..., n }.

73 By the argument of Lagrange multipliers, we get the log EL ratio l_n(r; θ, λ) = 2 Σ_{t=1}^n log{1 + b^T(θ, λ) g_t(θ, λ)}, where b(θ, λ) satisfies n^{−1} Σ_{t=1}^n g_t(θ, λ)/{1 + b^T(θ, λ) g_t(θ, λ)} = 0.

74 Since our interest is in θ, we consider the profiled likelihood ratio l_n(r; θ, λ̂(θ)), where λ̂(θ) = argmin_λ l_n(r; θ, λ). Proposition. Suppose E|ε_t|^{4+δ} < ∞ for some δ > 0, and n^{1−σ} h² → ∞ and nh⁴ → 0 for some σ ∈ (0, 1/2) as n → ∞. Then, with probability tending to one, l_n(r; θ_0, λ) attains its minimum value at some point λ̂_n(θ_0) in the interior of V_n = {λ : ‖λ − λ_0‖ ≤ n^{−0.5+σ/4}}.

75 Theorem. Under the conditions of the above proposition, we have l_n(r; θ_0, λ̂_n(θ_0)) →_d χ²(1) as n → ∞, where λ̂_n(θ_0) is given in Proposition 1. Based on this theorem, a confidence interval for θ_0 with level γ can be obtained as I_γ(r) = {θ : l_n(r; θ, λ̂_n(θ)) ≤ χ²_{1,γ}}, where χ²_{1,γ} is the γ-th quantile of χ²(1).

76 Simulation. Consider ARCH(1) models and the conditional VaR at X_n = 0 and 1 (i.e., z_1 = 0, 1) with level r = 0.90, 0.95 and 0.99. In each case, data are generated from an ARCH(1) model with parameters (ω, α_1) = (0.5, 0.7) or (0.5, 0.9), and the error ε_t has a N(0, 1) distribution or a standardized t-distribution with 5 degrees of freedom, i.e., √(3/5) t(5). From each model, we draw random samples of sizes n = 1000 and 3000. The number of replications is set to 5000. We employ the kernel K(x) = (1 − x²)² I(|x| ≤ 1) and choose the bandwidth h = n^{−1/3}. Based on the 5000 replications, we report the empirical coverage probabilities with confidence levels γ = 0.90 and 0.95.

77 Table 1: coverage probabilities for ε_t ~ N(0, 1), γ = 0.90. Columns: (n, α_1, z_1, γ); r = 0.90; r = 0.95; r = 0.99. Rows: (1000, 0.7, 0, 0.90), (3000, 0.7, 0, 0.90), (1000, 0.9, 0, 0.90), (3000, 0.9, 0, 0.90), (1000, 0.7, 1, 0.90), (3000, 0.7, 1, 0.90), (1000, 0.9, 1, 0.90), (3000, 0.9, 1, 0.90). (Numerical entries not preserved in the transcription.)

78 Table 1 (continued), for γ = 0.95. Rows: (1000, 0.7, 0, 0.95), (3000, 0.7, 0, 0.95), (1000, 0.9, 0, 0.95), (3000, 0.9, 0, 0.95), (1000, 0.7, 1, 0.95), (3000, 0.7, 1, 0.95), (1000, 0.9, 1, 0.95), (3000, 0.9, 1, 0.95). (Numerical entries not preserved in the transcription.)

79 Table 2: coverage probabilities for ε_t ~ √(3/5) t(5), γ = 0.90. Columns: (n, α_1, z_1, γ); r = 0.90; r = 0.95; r = 0.99. Rows: (1000, 0.7, 0, 0.90), (3000, 0.7, 0, 0.90), (1000, 0.9, 0, 0.90), (3000, 0.9, 0, 0.90), (1000, 0.7, 1, 0.90), (3000, 0.7, 1, 0.90), (1000, 0.9, 1, 0.90), (3000, 0.9, 1, 0.90). (Numerical entries not preserved in the transcription.)

80 Table 2 (continued), for γ = 0.95. Rows: (1000, 0.7, 0, 0.95), (3000, 0.7, 0, 0.95), (1000, 0.9, 0, 0.95), (3000, 0.9, 0, 0.95), (1000, 0.7, 1, 0.95), (3000, 0.7, 1, 0.95), (1000, 0.9, 1, 0.95), (3000, 0.9, 1, 0.95). (Numerical entries not preserved in the transcription.)

81 THANKS!


More information

Econometrics of Panel Data

Econometrics of Panel Data Econometrics of Panel Data Jakub Mućk Meeting # 6 Jakub Mućk Econometrics of Panel Data Meeting # 6 1 / 36 Outline 1 The First-Difference (FD) estimator 2 Dynamic panel data models 3 The Anderson and Hsiao

More information

CHANGE DETECTION IN TIME SERIES

CHANGE DETECTION IN TIME SERIES CHANGE DETECTION IN TIME SERIES Edit Gombay TIES - 2008 University of British Columbia, Kelowna June 8-13, 2008 Outline Introduction Results Examples References Introduction sunspot.year 0 50 100 150 1700

More information

High Dimensional Empirical Likelihood for Generalized Estimating Equations with Dependent Data

High Dimensional Empirical Likelihood for Generalized Estimating Equations with Dependent Data High Dimensional Empirical Likelihood for Generalized Estimating Equations with Dependent Data Song Xi CHEN Guanghua School of Management and Center for Statistical Science, Peking University Department

More information

STAT 992 Paper Review: Sure Independence Screening in Generalized Linear Models with NP-Dimensionality J.Fan and R.Song

STAT 992 Paper Review: Sure Independence Screening in Generalized Linear Models with NP-Dimensionality J.Fan and R.Song STAT 992 Paper Review: Sure Independence Screening in Generalized Linear Models with NP-Dimensionality J.Fan and R.Song Presenter: Jiwei Zhao Department of Statistics University of Wisconsin Madison April

More information

Tail bound inequalities and empirical likelihood for the mean

Tail bound inequalities and empirical likelihood for the mean Tail bound inequalities and empirical likelihood for the mean Sandra Vucane 1 1 University of Latvia, Riga 29 th of September, 2011 Sandra Vucane (LU) Tail bound inequalities and EL for the mean 29.09.2011

More information

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Sílvia Gonçalves and Benoit Perron Département de sciences économiques,

More information

STAT Financial Time Series

STAT Financial Time Series STAT 6104 - Financial Time Series Chapter 9 - Heteroskedasticity Chun Yip Yau (CUHK) STAT 6104:Financial Time Series 1 / 43 Agenda 1 Introduction 2 AutoRegressive Conditional Heteroskedastic Model (ARCH)

More information

Studies in Nonlinear Dynamics and Econometrics

Studies in Nonlinear Dynamics and Econometrics Studies in Nonlinear Dynamics and Econometrics Quarterly Journal April 1997, Volume, Number 1 The MIT Press Studies in Nonlinear Dynamics and Econometrics (ISSN 1081-186) is a quarterly journal published

More information

The Restricted Likelihood Ratio Test at the Boundary in Autoregressive Series

The Restricted Likelihood Ratio Test at the Boundary in Autoregressive Series The Restricted Likelihood Ratio Test at the Boundary in Autoregressive Series Willa W. Chen Rohit S. Deo July 6, 009 Abstract. The restricted likelihood ratio test, RLRT, for the autoregressive coefficient

More information

Empirical likelihood-based methods for the difference of two trimmed means

Empirical likelihood-based methods for the difference of two trimmed means Empirical likelihood-based methods for the difference of two trimmed means 24.09.2012. Latvijas Universitate Contents 1 Introduction 2 Trimmed mean 3 Empirical likelihood 4 Empirical likelihood for the

More information

Volatility. Gerald P. Dwyer. February Clemson University

Volatility. Gerald P. Dwyer. February Clemson University Volatility Gerald P. Dwyer Clemson University February 2016 Outline 1 Volatility Characteristics of Time Series Heteroskedasticity Simpler Estimation Strategies Exponentially Weighted Moving Average Use

More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation Maximum Likelihood Estimation Assume X P θ, θ Θ, with joint pdf (or pmf) f(x θ). Suppose we observe X = x. The Likelihood function is L(θ x) = f(x θ) as a function of θ (with the data x held fixed). The

More information

Extremogram and ex-periodogram for heavy-tailed time series

Extremogram and ex-periodogram for heavy-tailed time series Extremogram and ex-periodogram for heavy-tailed time series 1 Thomas Mikosch University of Copenhagen Joint work with Richard A. Davis (Columbia) and Yuwei Zhao (Ulm) 1 Zagreb, June 6, 2014 1 2 Extremal

More information

Understanding Regressions with Observations Collected at High Frequency over Long Span

Understanding Regressions with Observations Collected at High Frequency over Long Span Understanding Regressions with Observations Collected at High Frequency over Long Span Yoosoon Chang Department of Economics, Indiana University Joon Y. Park Department of Economics, Indiana University

More information

Estimation and Inference on Dynamic Panel Data Models with Stochastic Volatility

Estimation and Inference on Dynamic Panel Data Models with Stochastic Volatility Estimation and Inference on Dynamic Panel Data Models with Stochastic Volatility Wen Xu Department of Economics & Oxford-Man Institute University of Oxford (Preliminary, Comments Welcome) Theme y it =

More information

Spatially Smoothed Kernel Density Estimation via Generalized Empirical Likelihood

Spatially Smoothed Kernel Density Estimation via Generalized Empirical Likelihood Spatially Smoothed Kernel Density Estimation via Generalized Empirical Likelihood Kuangyu Wen & Ximing Wu Texas A&M University Info-Metrics Institute Conference: Recent Innovations in Info-Metrics October

More information

Bootstrap. Director of Center for Astrostatistics. G. Jogesh Babu. Penn State University babu.

Bootstrap. Director of Center for Astrostatistics. G. Jogesh Babu. Penn State University  babu. Bootstrap G. Jogesh Babu Penn State University http://www.stat.psu.edu/ babu Director of Center for Astrostatistics http://astrostatistics.psu.edu Outline 1 Motivation 2 Simple statistical problem 3 Resampling

More information

Confidence Intervals for the Autocorrelations of the Squares of GARCH Sequences

Confidence Intervals for the Autocorrelations of the Squares of GARCH Sequences Confidence Intervals for the Autocorrelations of the Squares of GARCH Sequences Piotr Kokoszka 1, Gilles Teyssière 2, and Aonan Zhang 3 1 Mathematics and Statistics, Utah State University, 3900 Old Main

More information

Master s Written Examination

Master s Written Examination Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth

More information

Unified Discrete-Time and Continuous-Time Models. and High-Frequency Financial Data

Unified Discrete-Time and Continuous-Time Models. and High-Frequency Financial Data Unified Discrete-Time and Continuous-Time Models and Statistical Inferences for Merged Low-Frequency and High-Frequency Financial Data Donggyu Kim and Yazhen Wang University of Wisconsin-Madison December

More information

Nonparametric Inference In Functional Data

Nonparametric Inference In Functional Data Nonparametric Inference In Functional Data Zuofeng Shang Purdue University Joint work with Guang Cheng from Purdue Univ. An Example Consider the functional linear model: Y = α + where 1 0 X(t)β(t)dt +

More information

Web-based Supplementary Material for. Dependence Calibration in Conditional Copulas: A Nonparametric Approach

Web-based Supplementary Material for. Dependence Calibration in Conditional Copulas: A Nonparametric Approach 1 Web-based Supplementary Material for Dependence Calibration in Conditional Copulas: A Nonparametric Approach Elif F. Acar, Radu V. Craiu, and Fang Yao Web Appendix A: Technical Details The score and

More information

Empirical Likelihood Inference for Two-Sample Problems

Empirical Likelihood Inference for Two-Sample Problems Empirical Likelihood Inference for Two-Sample Problems by Ying Yan A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of Mathematics in Statistics

More information

Extremogram and Ex-Periodogram for heavy-tailed time series

Extremogram and Ex-Periodogram for heavy-tailed time series Extremogram and Ex-Periodogram for heavy-tailed time series 1 Thomas Mikosch University of Copenhagen Joint work with Richard A. Davis (Columbia) and Yuwei Zhao (Ulm) 1 Jussieu, April 9, 2014 1 2 Extremal

More information

This paper is not to be removed from the Examination Halls

This paper is not to be removed from the Examination Halls ~~ST104B ZA d0 This paper is not to be removed from the Examination Halls UNIVERSITY OF LONDON ST104B ZB BSc degrees and Diplomas for Graduates in Economics, Management, Finance and the Social Sciences,

More information

For iid Y i the stronger conclusion holds; for our heuristics ignore differences between these notions.

For iid Y i the stronger conclusion holds; for our heuristics ignore differences between these notions. Large Sample Theory Study approximate behaviour of ˆθ by studying the function U. Notice U is sum of independent random variables. Theorem: If Y 1, Y 2,... are iid with mean µ then Yi n µ Called law of

More information

Adjusted Empirical Likelihood for Long-memory Time Series Models

Adjusted Empirical Likelihood for Long-memory Time Series Models Adjusted Empirical Likelihood for Long-memory Time Series Models arxiv:1604.06170v1 [stat.me] 21 Apr 2016 Ramadha D. Piyadi Gamage, Wei Ning and Arjun K. Gupta Department of Mathematics and Statistics

More information

Regular Variation and Extreme Events for Stochastic Processes

Regular Variation and Extreme Events for Stochastic Processes 1 Regular Variation and Extreme Events for Stochastic Processes FILIP LINDSKOG Royal Institute of Technology, Stockholm 2005 based on joint work with Henrik Hult www.math.kth.se/ lindskog 2 Extremes for

More information

Empirical Likelihood

Empirical Likelihood Empirical Likelihood Patrick Breheny September 20 Patrick Breheny STA 621: Nonparametric Statistics 1/15 Introduction Empirical likelihood We will discuss one final approach to constructing confidence

More information

1 Empirical Likelihood

1 Empirical Likelihood Empirical Likelihood February 3, 2016 Debdeep Pati 1 Empirical Likelihood Empirical likelihood a nonparametric method without having to assume the form of the underlying distribution. It retains some of

More information

Time Series Models for Measuring Market Risk

Time Series Models for Measuring Market Risk Time Series Models for Measuring Market Risk José Miguel Hernández Lobato Universidad Autónoma de Madrid, Computer Science Department June 28, 2007 1/ 32 Outline 1 Introduction 2 Competitive and collaborative

More information

Extending the two-sample empirical likelihood method

Extending the two-sample empirical likelihood method Extending the two-sample empirical likelihood method Jānis Valeinis University of Latvia Edmunds Cers University of Latvia August 28, 2011 Abstract In this paper we establish the empirical likelihood method

More information

Quasi-likelihood Scan Statistics for Detection of

Quasi-likelihood Scan Statistics for Detection of for Quasi-likelihood for Division of Biostatistics and Bioinformatics, National Health Research Institutes & Department of Mathematics, National Chung Cheng University 17 December 2011 1 / 25 Outline for

More information

Bootstrapping high dimensional vector: interplay between dependence and dimensionality

Bootstrapping high dimensional vector: interplay between dependence and dimensionality Bootstrapping high dimensional vector: interplay between dependence and dimensionality Xianyang Zhang Joint work with Guang Cheng University of Missouri-Columbia LDHD: Transition Workshop, 2014 Xianyang

More information

Massachusetts Institute of Technology Department of Economics Time Series Lecture 6: Additional Results for VAR s

Massachusetts Institute of Technology Department of Economics Time Series Lecture 6: Additional Results for VAR s Massachusetts Institute of Technology Department of Economics Time Series 14.384 Guido Kuersteiner Lecture 6: Additional Results for VAR s 6.1. Confidence Intervals for Impulse Response Functions There

More information

Outline of GLMs. Definitions

Outline of GLMs. Definitions Outline of GLMs Definitions This is a short outline of GLM details, adapted from the book Nonparametric Regression and Generalized Linear Models, by Green and Silverman. The responses Y i have density

More information

The outline for Unit 3

The outline for Unit 3 The outline for Unit 3 Unit 1. Introduction: The regression model. Unit 2. Estimation principles. Unit 3: Hypothesis testing principles. 3.1 Wald test. 3.2 Lagrange Multiplier. 3.3 Likelihood Ratio Test.

More information

Joint Parameter Estimation of the Ornstein-Uhlenbeck SDE driven by Fractional Brownian Motion

Joint Parameter Estimation of the Ornstein-Uhlenbeck SDE driven by Fractional Brownian Motion Joint Parameter Estimation of the Ornstein-Uhlenbeck SDE driven by Fractional Brownian Motion Luis Barboza October 23, 2012 Department of Statistics, Purdue University () Probability Seminar 1 / 59 Introduction

More information

GLS and FGLS. Econ 671. Purdue University. Justin L. Tobias (Purdue) GLS and FGLS 1 / 22

GLS and FGLS. Econ 671. Purdue University. Justin L. Tobias (Purdue) GLS and FGLS 1 / 22 GLS and FGLS Econ 671 Purdue University Justin L. Tobias (Purdue) GLS and FGLS 1 / 22 In this lecture we continue to discuss properties associated with the GLS estimator. In addition we discuss the practical

More information

Diagnostic Test for GARCH Models Based on Absolute Residual Autocorrelations

Diagnostic Test for GARCH Models Based on Absolute Residual Autocorrelations Diagnostic Test for GARCH Models Based on Absolute Residual Autocorrelations Farhat Iqbal Department of Statistics, University of Balochistan Quetta-Pakistan farhatiqb@gmail.com Abstract In this paper

More information

Robust Backtesting Tests for Value-at-Risk Models

Robust Backtesting Tests for Value-at-Risk Models Robust Backtesting Tests for Value-at-Risk Models Jose Olmo City University London (joint work with Juan Carlos Escanciano, Indiana University) Far East and South Asia Meeting of the Econometric Society

More information

Introduction to Estimation Methods for Time Series models Lecture 2

Introduction to Estimation Methods for Time Series models Lecture 2 Introduction to Estimation Methods for Time Series models Lecture 2 Fulvio Corsi SNS Pisa Fulvio Corsi Introduction to Estimation () Methods for Time Series models Lecture 2 SNS Pisa 1 / 21 Estimators:

More information

Bootstrap Confidence Intervals

Bootstrap Confidence Intervals Bootstrap Confidence Intervals Patrick Breheny September 18 Patrick Breheny STA 621: Nonparametric Statistics 1/22 Introduction Bootstrap confidence intervals So far, we have discussed the idea behind

More information

Week 9 The Central Limit Theorem and Estimation Concepts

Week 9 The Central Limit Theorem and Estimation Concepts Week 9 and Estimation Concepts Week 9 and Estimation Concepts Week 9 Objectives 1 The Law of Large Numbers and the concept of consistency of averages are introduced. The condition of existence of the population

More information

GMM-based inference in the AR(1) panel data model for parameter values where local identi cation fails

GMM-based inference in the AR(1) panel data model for parameter values where local identi cation fails GMM-based inference in the AR() panel data model for parameter values where local identi cation fails Edith Madsen entre for Applied Microeconometrics (AM) Department of Economics, University of openhagen,

More information

GARCH Models Estimation and Inference. Eduardo Rossi University of Pavia

GARCH Models Estimation and Inference. Eduardo Rossi University of Pavia GARCH Models Estimation and Inference Eduardo Rossi University of Pavia Likelihood function The procedure most often used in estimating θ 0 in ARCH models involves the maximization of a likelihood function

More information

Let us first identify some classes of hypotheses. simple versus simple. H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided

Let us first identify some classes of hypotheses. simple versus simple. H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided Let us first identify some classes of hypotheses. simple versus simple H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided H 0 : θ θ 0 versus H 1 : θ > θ 0. (2) two-sided; null on extremes H 0 : θ θ 1 or

More information

Long-Run Covariability

Long-Run Covariability Long-Run Covariability Ulrich K. Müller and Mark W. Watson Princeton University October 2016 Motivation Study the long-run covariability/relationship between economic variables great ratios, long-run Phillips

More information

GARCH Models. Eduardo Rossi University of Pavia. December Rossi GARCH Financial Econometrics / 50

GARCH Models. Eduardo Rossi University of Pavia. December Rossi GARCH Financial Econometrics / 50 GARCH Models Eduardo Rossi University of Pavia December 013 Rossi GARCH Financial Econometrics - 013 1 / 50 Outline 1 Stylized Facts ARCH model: definition 3 GARCH model 4 EGARCH 5 Asymmetric Models 6

More information

Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods

Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods Robert V. Breunig Centre for Economic Policy Research, Research School of Social Sciences and School of

More information

Chapter 2. Some basic tools. 2.1 Time series: Theory Stochastic processes

Chapter 2. Some basic tools. 2.1 Time series: Theory Stochastic processes Chapter 2 Some basic tools 2.1 Time series: Theory 2.1.1 Stochastic processes A stochastic process is a sequence of random variables..., x 0, x 1, x 2,.... In this class, the subscript always means time.

More information

Math 494: Mathematical Statistics

Math 494: Mathematical Statistics Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/

More information

Empirical Likelihood Methods for Pretest-Posttest Studies

Empirical Likelihood Methods for Pretest-Posttest Studies Empirical Likelihood Methods for Pretest-Posttest Studies by Min Chen A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Doctor of Philosophy in

More information

Testing Restrictions and Comparing Models

Testing Restrictions and Comparing Models Econ. 513, Time Series Econometrics Fall 00 Chris Sims Testing Restrictions and Comparing Models 1. THE PROBLEM We consider here the problem of comparing two parametric models for the data X, defined by

More information

Lecture 32: Asymptotic confidence sets and likelihoods

Lecture 32: Asymptotic confidence sets and likelihoods Lecture 32: Asymptotic confidence sets and likelihoods Asymptotic criterion In some problems, especially in nonparametric problems, it is difficult to find a reasonable confidence set with a given confidence

More information

Bootstrap prediction intervals for factor models

Bootstrap prediction intervals for factor models Bootstrap prediction intervals for factor models Sílvia Gonçalves and Benoit Perron Département de sciences économiques, CIREQ and CIRAO, Université de Montréal April, 3 Abstract We propose bootstrap prediction

More information

Lecture 10: Panel Data

Lecture 10: Panel Data Lecture 10: Instructor: Department of Economics Stanford University 2011 Random Effect Estimator: β R y it = x itβ + u it u it = α i + ɛ it i = 1,..., N, t = 1,..., T E (α i x i ) = E (ɛ it x i ) = 0.

More information

July 31, 2009 / Ben Kedem Symposium

July 31, 2009 / Ben Kedem Symposium ing The s ing The Department of Statistics North Carolina State University July 31, 2009 / Ben Kedem Symposium Outline ing The s 1 2 s 3 4 5 Ben Kedem ing The s Ben has made many contributions to time

More information

Testing Hypothesis. Maura Mezzetti. Department of Economics and Finance Università Tor Vergata

Testing Hypothesis. Maura Mezzetti. Department of Economics and Finance Università Tor Vergata Maura Department of Economics and Finance Università Tor Vergata Hypothesis Testing Outline It is a mistake to confound strangeness with mystery Sherlock Holmes A Study in Scarlet Outline 1 The Power Function

More information

Estimation of Threshold Cointegration

Estimation of Threshold Cointegration Estimation of Myung Hwan London School of Economics December 2006 Outline Model Asymptotics Inference Conclusion 1 Model Estimation Methods Literature 2 Asymptotics Consistency Convergence Rates Asymptotic

More information

Jackknife Empirical Likelihood for the Variance in the Linear Regression Model

Jackknife Empirical Likelihood for the Variance in the Linear Regression Model Georgia State University ScholarWorks @ Georgia State University Mathematics Theses Department of Mathematics and Statistics Summer 7-25-2013 Jackknife Empirical Likelihood for the Variance in the Linear

More information

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University The Bootstrap: Theory and Applications Biing-Shen Kuo National Chengchi University Motivation: Poor Asymptotic Approximation Most of statistical inference relies on asymptotic theory. Motivation: Poor

More information

GARCH Models Estimation and Inference

GARCH Models Estimation and Inference Università di Pavia GARCH Models Estimation and Inference Eduardo Rossi Likelihood function The procedure most often used in estimating θ 0 in ARCH models involves the maximization of a likelihood function

More information

Empirical likelihood based modal regression

Empirical likelihood based modal regression Stat Papers (2015) 56:411 430 DOI 10.1007/s00362-014-0588-4 REGULAR ARTICLE Empirical likelihood based modal regression Weihua Zhao Riquan Zhang Yukun Liu Jicai Liu Received: 24 December 2012 / Revised:

More information

13. Parameter Estimation. ECE 830, Spring 2014

13. Parameter Estimation. ECE 830, Spring 2014 13. Parameter Estimation ECE 830, Spring 2014 1 / 18 Primary Goal General problem statement: We observe X p(x θ), θ Θ and the goal is to determine the θ that produced X. Given a collection of observations

More information

ECE 275A Homework 7 Solutions

ECE 275A Homework 7 Solutions ECE 275A Homework 7 Solutions Solutions 1. For the same specification as in Homework Problem 6.11 we want to determine an estimator for θ using the Method of Moments (MOM). In general, the MOM estimator

More information

Empirical Likelihood Tests for High-dimensional Data

Empirical Likelihood Tests for High-dimensional Data Empirical Likelihood Tests for High-dimensional Data Department of Statistics and Actuarial Science University of Waterloo, Canada ICSA - Canada Chapter 2013 Symposium Toronto, August 2-3, 2013 Based on

More information

GARCH processes continuous counterparts (Part 2)

GARCH processes continuous counterparts (Part 2) GARCH processes continuous counterparts (Part 2) Alexander Lindner Centre of Mathematical Sciences Technical University of Munich D 85747 Garching Germany lindner@ma.tum.de http://www-m1.ma.tum.de/m4/pers/lindner/

More information

Some General Types of Tests

Some General Types of Tests Some General Types of Tests We may not be able to find a UMP or UMPU test in a given situation. In that case, we may use test of some general class of tests that often have good asymptotic properties.

More information

Bayesian Semiparametric GARCH Models

Bayesian Semiparametric GARCH Models Bayesian Semiparametric GARCH Models Xibin (Bill) Zhang and Maxwell L. King Department of Econometrics and Business Statistics Faculty of Business and Economics xibin.zhang@monash.edu Quantitative Methods

More information

Bootstrap & Confidence/Prediction intervals

Bootstrap & Confidence/Prediction intervals Bootstrap & Confidence/Prediction intervals Olivier Roustant Mines Saint-Étienne 2017/11 Olivier Roustant (EMSE) Bootstrap & Confidence/Prediction intervals 2017/11 1 / 9 Framework Consider a model with

More information

Generalized Method of Moments Estimation

Generalized Method of Moments Estimation Generalized Method of Moments Estimation Lars Peter Hansen March 0, 2007 Introduction Generalized methods of moments (GMM) refers to a class of estimators which are constructed from exploiting the sample

More information

Single Equation Linear GMM with Serially Correlated Moment Conditions

Single Equation Linear GMM with Serially Correlated Moment Conditions Single Equation Linear GMM with Serially Correlated Moment Conditions Eric Zivot November 2, 2011 Univariate Time Series Let {y t } be an ergodic-stationary time series with E[y t ]=μ and var(y t )

More information

Linear models and their mathematical foundations: Simple linear regression

Linear models and their mathematical foundations: Simple linear regression Linear models and their mathematical foundations: Simple linear regression Steffen Unkel Department of Medical Statistics University Medical Center Göttingen, Germany Winter term 2018/19 1/21 Introduction

More information

Regression and Statistical Inference

Regression and Statistical Inference Regression and Statistical Inference Walid Mnif wmnif@uwo.ca Department of Applied Mathematics The University of Western Ontario, London, Canada 1 Elements of Probability 2 Elements of Probability CDF&PDF

More information

Conditional Maximum Likelihood Estimation of the First-Order Spatial Non-Negative Integer-Valued Autoregressive (SINAR(1,1)) Model

Conditional Maximum Likelihood Estimation of the First-Order Spatial Non-Negative Integer-Valued Autoregressive (SINAR(1,1)) Model JIRSS (205) Vol. 4, No. 2, pp 5-36 DOI: 0.7508/jirss.205.02.002 Conditional Maximum Likelihood Estimation of the First-Order Spatial Non-Negative Integer-Valued Autoregressive (SINAR(,)) Model Alireza

More information

Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017

Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017 Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017 Put your solution to each problem on a separate sheet of paper. Problem 1. (5106) Let X 1, X 2,, X n be a sequence of i.i.d. observations from a

More information

Optimal Jackknife for Unit Root Models

Optimal Jackknife for Unit Root Models Optimal Jackknife for Unit Root Models Ye Chen and Jun Yu Singapore Management University October 19, 2014 Abstract A new jackknife method is introduced to remove the first order bias in the discrete time

More information

Chapter 13: Functional Autoregressive Models

Chapter 13: Functional Autoregressive Models Chapter 13: Functional Autoregressive Models Jakub Černý Department of Probability and Mathematical Statistics Stochastic Modelling in Economics and Finance December 9, 2013 1 / 25 Contents 1 Introduction

More information

Nonparametric estimation of extreme risks from heavy-tailed distributions

Nonparametric estimation of extreme risks from heavy-tailed distributions Nonparametric estimation of extreme risks from heavy-tailed distributions Laurent GARDES joint work with Jonathan EL METHNI & Stéphane GIRARD December 2013 1 Introduction to risk measures 2 Introduction

More information