Cross Validation, Monte Carlo Cross Validation, and Accumulated Prediction Errors: Asymptotic Properties


1 Cross Validation, Monte Carlo Cross Validation, and Accumulated Prediction Errors: Asymptotic Properties
Ching-Kang Ing
Institute of Statistics, National Tsing Hua University, Hsinchu, Taiwan

2 Outline
1. Cross Validation (CV)
2. Monte Carlo Cross Validation (MCCV)
3. Accumulated Prediction Errors (APE)
4. References

3 Cross Validation (CV)

Consider
$y_i = x_i' \beta + e_i = \sum_{j=1}^{p} x_{ij} \beta_j + e_i$,
where $\beta = (\beta_1, \ldots, \beta_p)'$ and the $e_i$ are i.i.d. r.v.s with $E(e_i) = 0$ and $E(e_i^2) = \sigma^2 > 0$. Suppose $\beta_i = 0$ for some $i \in P = \{1, \ldots, p\}$. We are interested in choosing the smallest true model
$y_i = x_{i,\alpha^*}' \beta_{\alpha^*} + e_i$,
where $\alpha^* = \{i : \beta_i \neq 0\}$, $\beta_{\alpha^*} = (\beta_i, i \in \alpha^*)'$ and $x_{i,\alpha^*} = (x_{ij}, j \in \alpha^*)'$, among the candidate models $E(y) \in C(X_\alpha)$, where $y = (y_1, \ldots, y_n)'$, $X_\alpha = (x_{1,\alpha}, \ldots, x_{n,\alpha})'$ with $x_{i,\alpha} = (x_{ij}, j \in \alpha)'$, and $\alpha \in 2^P$.

4 Cross Validation (CV)

Delete-$n_v$-out CV (CV($n_v$))

Let $(y_i, x_i)$, $i = 1, \ldots, n$, be observations. CV($n_v$) splits the data into two parts: $\{(y_i, x_i), i \in s\}$ and $\{(y_i, x_i), i \in s^c\}$, where $s$ and $s^c$ are subsets of $N = \{1, \ldots, n\}$ with $s \cap s^c = \emptyset$ and $s \cup s^c = N$. The first part is referred to as the validation (testing) sample, whereas the second part is referred to as the training sample. Denote $\#(s)$ and $\#(s^c)$ by $n_v$ and $n_c$, respectively, noting that $n_v + n_c = n$. CV evaluates the performance of $\alpha$ using
$\hat{\Gamma}_{\alpha,n}(n_v) = \frac{1}{n_v \binom{n}{n_v}} \sum_{\text{all } (s,s^c) \text{ combinations}} \sum_{i \in s} (y_i - x_{i,\alpha}' \hat{\beta}_{\alpha,s^c})^2 = \frac{1}{n_v \binom{n}{n_v}} \sum_{\text{all } (s,s^c) \text{ combinations}} \| y_s - \hat{y}_{\alpha,s^c} \|^2$, $\quad (\hat{y}_{\alpha,s^c} = X_{\alpha,s} \hat{\beta}_{\alpha,s^c})$
where $y_A = (y_i, i \in A)'$ for $A \subseteq N$ and $\hat{\beta}_{\alpha,s^c} = (X_{\alpha,s^c}' X_{\alpha,s^c})^{-1} X_{\alpha,s^c}' y_{s^c}$ with $X_{\alpha,A} = (x_{i,\alpha}, i \in A)'$.
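For small $n$, the criterion above can be evaluated by brute force over all $\binom{n}{n_v}$ splits. A minimal sketch in Python (the function name `cv_nv` and the toy data are ours, not from the notes, and numpy is assumed):

```python
import numpy as np
from itertools import combinations

def cv_nv(X_alpha, y, n_v):
    """Delete-n_v-out CV score: average squared validation error over
    all (n choose n_v) splits (brute force, feasible only for small n)."""
    n = len(y)
    total, count = 0.0, 0
    for s in combinations(range(n), n_v):        # s: validation index set
        sc = np.setdiff1d(np.arange(n), s)       # s^c: training index set
        beta = np.linalg.lstsq(X_alpha[sc], y[sc], rcond=None)[0]
        resid = y[list(s)] - X_alpha[list(s)] @ beta
        total += resid @ resid
        count += 1
    return total / (n_v * count)

rng = np.random.default_rng(1)
n = 12
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, 1.0]) + 0.1 * rng.normal(size=n)
score = cv_nv(X, y, n_v=3)
print(score)
```

For realistic $n$ the number of splits explodes, which is one motivation for the Monte Carlo variant treated later in these notes.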

5 Cross Validation (CV)

Some statistical properties of CV

Define $P_\alpha = X_\alpha (X_\alpha' X_\alpha)^{-1} X_\alpha'$ and $Q_{\alpha,A} = X_{\alpha,A} (X_\alpha' X_\alpha)^{-1} X_{\alpha,A}'$.

Fact 1
$n_v^{-1} \| y_s - \hat{y}_{\alpha,s^c} \|^2 = n_v^{-1} \| (I_{n_v} - Q_{\alpha,s})^{-1} (y_s - X_{\alpha,s} \hat{\beta}_\alpha) \|^2$,
where $I_{n_v}$ denotes the $n_v$-dimensional identity matrix and $\hat{\beta}_\alpha = (X_\alpha' X_\alpha)^{-1} X_\alpha' y$, which is the least squares estimate of $\beta_\alpha$ based on the full data.

6 Cross Validation (CV)

Proof
Since $(X_{\alpha,s^c}' X_{\alpha,s^c})^{-1} = (X_\alpha' X_\alpha - X_{\alpha,s}' X_{\alpha,s})^{-1}$, by making use of
$(A - B'B)^{-1} = A^{-1} + A^{-1} B' (I - B A^{-1} B')^{-1} B A^{-1}$,  (1)
($I$: identity matrix of suitable dimension) we obtain
$\hat{y}_{\alpha,s^c} = X_{\alpha,s} \hat{\beta}_{\alpha,s^c} = X_{\alpha,s} \hat{\beta}_\alpha + Q_{\alpha,s} (I - Q_{\alpha,s})^{-1} X_{\alpha,s} \hat{\beta}_\alpha - Q_{\alpha,s} y_s - Q_{\alpha,s} (I - Q_{\alpha,s})^{-1} Q_{\alpha,s} y_s$,
and hence
$y_s - X_{\alpha,s} \hat{\beta}_{\alpha,s^c} = (I - Q_{\alpha,s})^{-1} (y_s - X_{\alpha,s} \hat{\beta}_\alpha)$.
This completes the proof.

Remark (delete-1-out CV, i.e. conventional CV)
By Fact 1, we have
$\hat{\Gamma}_{\alpha,n}(1) = n^{-1} \sum_{i=1}^{n} \{ (1 - w_{i\alpha})^{-1} (y_i - x_{i,\alpha}' \hat{\beta}_\alpha) \}^2$,
where $w_{i\alpha} = (P_\alpha)_{ii}$, the $i$th diagonal element of $P_\alpha$.
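The Remark replaces $n$ refits by a single full-data fit plus the hat-matrix diagonal. A quick numerical check of the identity (variable names and toy data are ours; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

# Full-data least squares fit and hat-matrix diagonal w_ii.
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
w = np.diag(X @ np.linalg.solve(X.T @ X, X.T))

# Closed form from the Remark: leave-one-out residual = full residual / (1 - w_ii).
loo_shortcut = (y - X @ beta_hat) / (1.0 - w)

# Brute force: refit with observation i deleted, then predict y_i.
loo_direct = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    b_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    loo_direct[i] = y[i] - X[i] @ b_i

assert np.allclose(loo_shortcut, loo_direct)
cv1 = np.mean(loo_shortcut ** 2)        # delete-1 CV score for this model
print(cv1)
```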

7 Cross Validation (CV)

Theorem 1
Assume
$\frac{X'X}{n} \to R$, a positive definite matrix, where $X = (x_1, \ldots, x_n)'$,  (2)
and
$\max_{1 \le i \le n} w_{i\alpha} \to 0$ for any $\alpha \in 2^P$.  (3)
Then
(i) if $\alpha$ is an incorrect model (namely $\alpha \not\supseteq \alpha^*$),
$\hat{\Gamma}_{\alpha,n}(1) = \sigma^2 + n^{-1} \beta' X' (I - P_\alpha) X \beta + o_p(1)$,  ($I$: $n$-dimensional identity matrix)
(ii) if $\alpha$ is a correct model ($\alpha \supseteq \alpha^*$),
$\hat{\Gamma}_{\alpha,n}(1) = n^{-1} e'e + \frac{2 d_\alpha \sigma^2}{n} - n^{-1} e' P_\alpha e + o_p(n^{-1})$,
where $d_\alpha = \#(\alpha)$ and $e = (e_1, \ldots, e_n)'$.

8 Cross Validation (CV)

Proof
Since $(1 - w_{i\alpha})^{-2} = 1 + 2 w_{i\alpha} + O(w_{i\alpha}^2)$, where $O(\cdot)$ denotes a bound uniform over $1 \le i \le n$, it holds that
$\hat{\Gamma}_{\alpha,n}(1) = \frac{1}{n} \sum_{i=1}^n \gamma_{i,\alpha}^2 + \frac{1}{n} \sum_{i=1}^n (2 w_{i\alpha} + O(w_{i\alpha}^2)) \gamma_{i,\alpha}^2 \equiv (\mathrm{I}) + (\mathrm{II})$,
where $\gamma_{i,\alpha} = y_i - x_{i,\alpha}' \hat{\beta}_\alpha$. If $\alpha$ is incorrect, then
$(\mathrm{I}) = n^{-1} y'(I - P_\alpha) y = n^{-1} (X\beta + e)'(I - P_\alpha)(X\beta + e) = n^{-1} e'(I - P_\alpha) e + 2 n^{-1} e'(I - P_\alpha) X \beta + n^{-1} \beta' X'(I - P_\alpha) X \beta \overset{\text{why?}}{=} \sigma^2 + n^{-1} \beta' X'(I - P_\alpha) X \beta + o_p(1)$,
and $(\mathrm{II}) = o_p(1)$. Then (i) follows.

9 Cross Validation (CV)

If $\alpha$ is correct, then
$(\mathrm{I}) = n^{-1} e'(I - P_\alpha) e$,
and
$(\mathrm{II}) = n^{-1} \sum_{i=1}^n (2 w_{i\alpha} + O(w_{i\alpha}^2)) (e_i - x_{i,\alpha}'(\hat{\beta}_\alpha - \beta_\alpha))^2 \overset{\text{why?}}{=} \frac{2}{n} \sum_{i=1}^n w_{i\alpha} e_i^2 + o_p(n^{-1}) = \frac{2\sigma^2}{n} \sum_{i=1}^n w_{i\alpha} + \frac{2}{n} \sum_{i=1}^n w_{i\alpha} (e_i^2 - \sigma^2) + o_p(n^{-1}) \overset{\text{why?}}{=} \frac{2 d_\alpha \sigma^2}{n} + o_p(n^{-1})$.
This completes the proof.

Corollary 1
Let
$\hat{\alpha} = \arg\min_{\alpha \in 2^P} \hat{\Gamma}_{\alpha,n}(1)$.
Then $\lim_{n\to\infty} P(\alpha^* \subseteq \hat{\alpha}) = 1$, but $\lim_{n\to\infty} P(\hat{\alpha} = \alpha^*)$ is in general less than 1.
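Corollary 1 can be seen in simulation: delete-1 CV retains the true variables but picks up a spurious regressor with non-vanishing frequency. A small illustrative experiment (all names, seeds, and settings are ours, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(6)

def loocv(Xa, y):
    """Delete-1 CV score via the closed form from the Remark after Fact 1."""
    beta = np.linalg.lstsq(Xa, y, rcond=None)[0]
    w = np.diag(Xa @ np.linalg.solve(Xa.T @ Xa, Xa.T))   # diagonal of P_alpha
    r = (y - Xa @ beta) / (1.0 - w)
    return np.mean(r ** 2)

n, reps = 50, 200
overfit = 0
for _ in range(reps):
    z = rng.normal(size=n)            # spurious regressor: true coefficient is 0
    y = 1.0 + rng.normal(size=n)      # true model contains only the intercept
    if loocv(np.column_stack([np.ones(n), z]), y) < loocv(np.ones((n, 1)), y):
        overfit += 1

rate = overfit / reps
print(rate)   # a stable positive fraction of runs prefers the overfitted model
```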

10 Cross Validation (CV)

Notation
$b = \binom{n}{n_v}$
$B_{n_v} = \{ s : s \subseteq N \text{ and } \#(s) = n_v \}$
$P_{\alpha,s} = X_{\alpha,s} (X_{\alpha,s}' X_{\alpha,s})^{-1} X_{\alpha,s}'$
$\hat{R}_{\alpha,s} = \frac{1}{n_v} \sum_{t \in s} x_{t,\alpha} x_{t,\alpha}'$
$\hat{R}_{\alpha,s^c} = \frac{1}{n_c} \sum_{t \in s^c} x_{t,\alpha} x_{t,\alpha}'$
$\gamma_{\alpha,s} = y_s - X_{\alpha,s} \hat{\beta}_\alpha$

11 Cross Validation (CV)

Then, Fact 1 yields
$\hat{\Gamma}_{\alpha,n}(n_v) = \frac{1}{n_v b} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' (I_{n_v} - Q_{\alpha,s})^{-1} (I_{n_v} - Q_{\alpha,s})^{-1} \gamma_{\alpha,s} = (\mathrm{I}) + (\mathrm{II}) + \cdots + (\mathrm{VI})$,  (4)
where
$(\mathrm{I}) = \frac{1}{n_v b} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' \gamma_{\alpha,s}$,
$(\mathrm{II}) = \frac{n_v^2}{n_c^2} \frac{1}{n_v b} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' P_{\alpha,s} \gamma_{\alpha,s}$,
$(\mathrm{III}) = \frac{1}{n_c^2} \frac{1}{n_v b} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' X_{\alpha,s} (\hat{R}_{\alpha,s^c}^{-1} - \hat{R}_{\alpha,s}^{-1}) X_{\alpha,s}' X_{\alpha,s} (\hat{R}_{\alpha,s^c}^{-1} - \hat{R}_{\alpha,s}^{-1}) X_{\alpha,s}' \gamma_{\alpha,s}$,
$(\mathrm{IV}) = \frac{2 n_v}{n_c} \frac{1}{n_v b} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' P_{\alpha,s} \gamma_{\alpha,s}$,
$(\mathrm{V}) = \frac{2}{n_c} \frac{1}{n_v b} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' X_{\alpha,s} (\hat{R}_{\alpha,s^c}^{-1} - \hat{R}_{\alpha,s}^{-1}) X_{\alpha,s}' \gamma_{\alpha,s}$,
$(\mathrm{VI}) = \frac{2 n_v}{n_c^2} \frac{1}{n_v b} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' P_{\alpha,s} X_{\alpha,s} (\hat{R}_{\alpha,s^c}^{-1} - \hat{R}_{\alpha,s}^{-1}) X_{\alpha,s}' \gamma_{\alpha,s}$.  ($P_{\alpha,s} X_{\alpha,s} = X_{\alpha,s}$)

Hint: using (1), one gets
$(I_{n_v} - Q_{\alpha,s})^{-1} = I_{n_v} + \frac{n_v}{n_c} P_{\alpha,s} + \frac{1}{n_c} X_{\alpha,s} (\hat{R}_{\alpha,s^c}^{-1} - \hat{R}_{\alpha,s}^{-1}) X_{\alpha,s}'$.

12 Cross Validation (CV)

Theorem 2
Assume the same assumptions as in Theorem 1. Assume also that
$\lim_{n\to\infty} \max_{s \in B_{n_v}} \| \hat{R}_s - \hat{R}_{s^c} \| = 0$,  (5)
where $\hat{R}_A = \frac{1}{\#(A)} \sum_{t \in A} x_t x_t'$ with $A \subseteq N$. Suppose
$\frac{n_v}{n} \to 1$ and $n_c = n - n_v \to \infty$.  (6)
Then
(i) For $\alpha \not\supseteq \alpha^*$,
$\hat{\Gamma}_{\alpha,n}(n_v) = n^{-1} e'e + n^{-1} \beta' X'(I - P_\alpha) X \beta + o_p(1) + R_n$,  (7)
where $R_n$ is a non-negative random variable.
(ii) For $\alpha \supseteq \alpha^*$,
$\hat{\Gamma}_{\alpha,n}(n_v) = n^{-1} e'e + \frac{d_\alpha \sigma^2}{n_c} + o_p(n_c^{-1})$.  (8)
(iii) $\lim_{n\to\infty} P(\hat{\alpha}_{n_v} = \alpha^*) = 1$, where $\hat{\alpha}_{n_v} = \arg\min_{\alpha \in 2^P} \hat{\Gamma}_{\alpha,n}(n_v)$.

13 Cross Validation (CV)

Proof
We begin by considering the case $\alpha \supseteq \alpha^*$. We will first show that
$\hat{\Gamma}_{\alpha,n}(n_v) = (\mathrm{I}) + (\mathrm{II})(1 + o(1))$,  (9)
via proving
$(\mathrm{K}) = (\mathrm{II})\, o(1)$,  (10)
for $\mathrm{K} = \mathrm{III}, \mathrm{IV}, \mathrm{V}$, and $\mathrm{VI}$.

14 Cross Validation (CV)

To show that (10) holds with $\mathrm{K} = \mathrm{VI}$, note that
$|(\mathrm{VI})| \le 2 \{ (\mathrm{A}) + (\mathrm{B}) \}$,  (11)
where
$(\mathrm{A}) = \frac{n_v}{n_c^2} \frac{1}{n_v b} \sum_{s \in B_{n_v}} | \gamma_{\alpha,s}' X_{\alpha,s} (\hat{R}_{\alpha,s^c}^{-1} - \hat{R}_{\alpha,s}^{-1}) (\hat{R}_{\alpha,s} - \hat{R}_{\alpha,s^c}) \hat{R}_{\alpha,s}^{-1} X_{\alpha,s}' \gamma_{\alpha,s} |$,
$(\mathrm{B}) = \frac{n_v}{n_c^2} \frac{1}{n_v b} \sum_{s \in B_{n_v}} | \gamma_{\alpha,s}' X_{\alpha,s} \hat{R}_{\alpha,s}^{-1} (\hat{R}_{\alpha,s} - \hat{R}_{\alpha,s^c}) \hat{R}_{\alpha,s}^{-1} X_{\alpha,s}' \gamma_{\alpha,s} |$.
In addition,
$(\mathrm{A}) \overset{\text{why?}}{\le} \frac{n_v^2}{n_c^2} \frac{1}{n_v b} \max_{s \in B_{n_v}} \| \hat{R}_{\alpha,s} - \hat{R}_{\alpha,s^c} \|^2 \max_{s \in B_{n_v}} \| \hat{R}_{\alpha,s^c}^{-1} \| \max_{s \in B_{n_v}} \| \hat{R}_{\alpha,s}^{-1} \| \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' P_{\alpha,s} \gamma_{\alpha,s}$,  (12)
and
$(\mathrm{B}) \overset{\text{why?}}{\le} \frac{n_v^2}{n_c^2} \frac{1}{n_v b} \max_{s \in B_{n_v}} \| \hat{R}_{\alpha,s} - \hat{R}_{\alpha,s^c} \| \max_{s \in B_{n_v}} \| \hat{R}_{\alpha,s}^{-1} \| \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' P_{\alpha,s} \gamma_{\alpha,s}$.  (13)

15 Cross Validation (CV)

By (2), we have
$\max_{s \in B_{n_v}} \| \hat{R}_{\alpha,s}^{-1} \| = O(1)$,
which, together with (5) and (11)–(13), yields
$(\mathrm{VI}) = (\mathrm{II})\, o(1)$.  (14)
Similarly, it can be shown that
$(\mathrm{V}) = (\mathrm{II})\, o(1)$.  (15)
Since $n_v / n_c \to \infty$, it is easy to see that
$(\mathrm{IV}) = |(\mathrm{IV})| = (\mathrm{II})\, o(1)$.  ((IV) is non-negative)  (16)

16 Cross Validation (CV)

Now, for (III), noting that $X_{\alpha,s}' X_{\alpha,s} = n_v \hat{R}_{\alpha,s}$ and $\hat{R}_{\alpha,s^c}^{-1} - \hat{R}_{\alpha,s}^{-1} = \hat{R}_{\alpha,s^c}^{-1} (\hat{R}_{\alpha,s} - \hat{R}_{\alpha,s^c}) \hat{R}_{\alpha,s}^{-1}$, we have
$(\mathrm{III}) = \frac{n_v}{n_c^2} \frac{1}{n_v b} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' X_{\alpha,s} \hat{R}_{\alpha,s^c}^{-1} (\hat{R}_{\alpha,s} - \hat{R}_{\alpha,s^c}) \hat{R}_{\alpha,s}^{-1} (\hat{R}_{\alpha,s} - \hat{R}_{\alpha,s^c}) \hat{R}_{\alpha,s^c}^{-1} X_{\alpha,s}' \gamma_{\alpha,s}$.
This and an argument similar to that used to prove (14) give
$(\mathrm{III}) = (\mathrm{II})\, o(1)$.  (17)
Hence, (10) follows from (14)–(17). Now, by an argument similar to that used to prove (14) again, we have
$(\mathrm{II}) = \frac{n_v^2}{n_c^2} \frac{1}{n_v b} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' Q_{\alpha,s} \gamma_{\alpha,s} + (\mathrm{II})\, o(1)$.  (18)

17 Cross Validation (CV)

Moreover,
$\frac{n_v^2}{n_c^2} \frac{1}{n_v b} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' Q_{\alpha,s} \gamma_{\alpha,s} = \frac{n_v^2}{n_c^2} \frac{1}{n_v b} \sum_{s \in B_{n_v}} \sum_{(i,j) \in s \times s} P_{\alpha,ij} \gamma_{i,\alpha} \gamma_{j,\alpha}$  ($[P_{\alpha,ij}]_{i,j} = P_\alpha$)
$= \frac{n_v^2}{n_c^2} \frac{1}{n_v b} \sum_{s \in B_{n_v}} \Big( \sum_{i \in s} P_{\alpha,ii} \gamma_{i,\alpha}^2 + \sum_{(i,j) \in s \times s,\, i \neq j} P_{\alpha,ij} \gamma_{i,\alpha} \gamma_{j,\alpha} \Big)$  ($P_{\alpha,ii} = w_{i\alpha}$)
$\overset{\text{why?}}{=} \frac{n_v^2}{n_c^2} \frac{1}{n} \sum_{i=1}^{n} w_{i\alpha} \gamma_{i,\alpha}^2 - \frac{n_v^2}{n_c^2} \frac{n_v - 1}{n - 1} \frac{1}{n} \sum_{i=1}^{n} w_{i\alpha} \gamma_{i,\alpha}^2$
$= \frac{n_v^2}{n_c\, n (n-1)} \Big( \sigma^2 \sum_{i=1}^{n} w_{i\alpha} + \sum_{i=1}^{n} w_{i\alpha} (\gamma_{i,\alpha}^2 - \sigma^2) \Big)$
$\overset{\text{why?}}{=} \frac{d_\alpha \sigma^2}{n_c} + o_p(n_c^{-1})$.  (19)
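The counting step marked "why?" uses the facts that each index $i$ lies in $\binom{n-1}{n_v-1}$ of the $b$ subsets, each pair $i \neq j$ in $\binom{n-2}{n_v-2}$ of them, and that $P_\alpha \gamma = 0$ for full-sample residuals, which together give $(n_v b)^{-1} \sum_s \gamma_{\alpha,s}' Q_{\alpha,s} \gamma_{\alpha,s} = \frac{n_c}{n(n-1)} \sum_i w_{i\alpha} \gamma_{i,\alpha}^2$. A brute-force check of this identity on a toy design (variable names and data are ours):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n, p, n_v = 10, 2, 4
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
gamma = y - X @ beta                          # full-sample residuals (P_alpha @ gamma = 0)
G = np.linalg.inv(X.T @ X)
w = np.einsum('ij,jk,ik->i', X, G, X)         # w_i = diagonal of P_alpha

# Left side: average of gamma_s' Q_{alpha,s} gamma_s over all n_v-subsets s.
splits = list(combinations(range(n), n_v))
lhs = sum(gamma[list(s)] @ (X[list(s)] @ G @ X[list(s)].T) @ gamma[list(s)]
          for s in splits) / (n_v * len(splits))

# Right side: the closed form produced by the counting argument in (19).
n_c = n - n_v
rhs = (n_c / (n * (n - 1))) * np.sum(w * gamma ** 2)

assert np.isclose(lhs, rhs)
print(lhs)
```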

18 Cross Validation (CV)

By (18) and (19), we have
$(\mathrm{II}) = \frac{d_\alpha \sigma^2}{n_c} + o_p(n_c^{-1})$.  (20)
Moreover,
$(\mathrm{I}) = \frac{1}{n} y'(I - P_\alpha) y = \frac{1}{n} e'(I - P_\alpha) e = n^{-1} e'e + O_p(n^{-1})$.  (21)
In view of (20), (21) and (9), the desired conclusion (8) follows.

19 Cross Validation (CV)

To show (7), note that (4) implies (why?)
$\hat{\Gamma}_{\alpha,n}(n_v) \ge \frac{1}{n_v b} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' \gamma_{\alpha,s} = \frac{1}{n} y'(I - P_\alpha) y$.  (22)
Since $\alpha \not\supseteq \alpha^*$,
$\frac{1}{n} y'(I - P_\alpha) y = n^{-1} e'(I - P_\alpha) e + n^{-1} \beta' X'(I - P_\alpha) X \beta + 2 n^{-1} \beta' X'(I - P_\alpha) e = n^{-1} e'e + n^{-1} \beta' X'(I - P_\alpha) X \beta + o_p(1)$,  (23)
and hence (7) is ensured by (22) and (23).

Finally, (iii) is an immediate consequence of (i) and (ii).

20 Monte Carlo Cross Validation (MCCV)

Let $s_i \overset{\text{i.i.d.}}{\sim} U(B_{n_v})$, $i = 1, \ldots, b$, where $B_{n_v}$ is defined in the note for CV. Define the MCCV criterion as follows:
$\hat{\Gamma}_{\alpha,n}^{\mathrm{MCCV}} = \frac{1}{n_v b} \sum_{i=1}^{b} \| y_{s_i} - \hat{y}_{\alpha,s_i^c} \|^2$,
where $b$ is the number of Monte Carlo samples used to calculate the CV score.
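A direct implementation replaces full enumeration by $b$ random draws from $B_{n_v}$. A sketch (the function name `mccv` and the toy data are ours), taking $n_v/n$ close to 1 as the theory in Theorems 2 and 3 asks:

```python
import numpy as np

def mccv(X_alpha, y, n_v, b, rng):
    """Monte Carlo CV: average validation error over b random n_v-subsets
    drawn uniformly from B_{n_v}, instead of all (n choose n_v) splits."""
    n = len(y)
    total = 0.0
    for _ in range(b):
        s = rng.choice(n, size=n_v, replace=False)       # validation indices
        sc = np.setdiff1d(np.arange(n), s)               # training indices
        beta = np.linalg.lstsq(X_alpha[sc], y[sc], rcond=None)[0]
        resid = y[s] - X_alpha[s] @ beta
        total += resid @ resid
    return total / (n_v * b)

rng = np.random.default_rng(5)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
score = mccv(X, y, n_v=80, b=200, rng=rng)   # n_v / n close to 1, n_c = 20
print(score)
```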

21 Monte Carlo Cross Validation (MCCV)

Theorem 3
Assume the same assumptions as in Theorem 1. Suppose
$\max_{1 \le j \le b} \Big\| \frac{1}{n_v} \sum_{i \in s_j} x_i x_i' - \frac{1}{n_c} \sum_{i \notin s_j} x_i x_i' \Big\| = o_p(1)$,  (24)
and
$E e_1^4 < \infty, \quad \frac{n^2}{b\, n_c^2} \to 0, \quad \frac{n_v}{n} \to 1, \quad n_c \to \infty$.  (25)
Then,
(i) for $\alpha \not\supseteq \alpha^*$,
$\hat{\Gamma}_{\alpha,n}^{\mathrm{MCCV}} = \frac{1}{n_v b} \sum_{i=1}^{b} e_{s_i}' e_{s_i} + \frac{1}{n} \beta' X'(I - P_\alpha) X \beta + o_p(1) + R_n$,
where $e_{s_i} = (e_j, j \in s_i)'$ and $R_n$ is some positive r.v.,
(ii) for $\alpha \supseteq \alpha^*$,
$\hat{\Gamma}_{\alpha,n}^{\mathrm{MCCV}} = \frac{1}{n_v b} \sum_{i=1}^{b} e_{s_i}' e_{s_i} + n_c^{-1} d_\alpha \sigma^2 + o_p(n_c^{-1})$,
(iii) $\lim_{n\to\infty} P(\hat{\alpha}_{\mathrm{MCCV}} = \alpha^*) = 1$, where $\hat{\alpha}_{\mathrm{MCCV}} = \arg\min_{\alpha \in 2^P} \hat{\Gamma}_{\alpha,n}^{\mathrm{MCCV}}$.

22 Monte Carlo Cross Validation (MCCV)

Proof
By an argument similar to that used to prove (9) and (18) in the note for CV, one has, for $\alpha \supseteq \alpha^*$,
$\hat{\Gamma}_{\alpha,n}^{\mathrm{MCCV}} = \frac{1}{n_v b} \sum_{i=1}^{b} \gamma_{\alpha,s_i}' \gamma_{\alpha,s_i} + \frac{n_v^2}{n_c^2} \frac{1}{n_v b} \sum_{i=1}^{b} \gamma_{\alpha,s_i}' P_{\alpha,s_i} \gamma_{\alpha,s_i} (1 + o_p(1))$,  (26)
and
$\frac{n_v^2}{n_c^2} \frac{1}{n_v b} \sum_{i=1}^{b} \gamma_{\alpha,s_i}' P_{\alpha,s_i} \gamma_{\alpha,s_i} (1 + o_p(1)) = \frac{n_v^2}{n_c^2} \frac{1}{n_v b} \sum_{i=1}^{b} \gamma_{\alpha,s_i}' Q_{\alpha,s_i} \gamma_{\alpha,s_i} (1 + o_p(1))$.  (27)

23 Monte Carlo Cross Validation (MCCV)

In addition, by (19) in the note for CV,
$E\Big[ \frac{n_v^2}{n_c^2} \frac{1}{n_v b} \sum_{i=1}^{b} \gamma_{\alpha,s_i}' Q_{\alpha,s_i} \gamma_{\alpha,s_i} \,\Big|\, (y_1, x_1), \ldots, (y_n, x_n) \Big]$
$= \frac{n_v^2}{n_c^2} \frac{1}{n_v} E[ \gamma_{\alpha,s_1}' Q_{\alpha,s_1} \gamma_{\alpha,s_1} \mid (y_1, x_1), \ldots, (y_n, x_n) ]$
$\overset{\text{why?}}{=} \frac{n_v^2}{n_c^2} \frac{1}{n_v} \frac{1}{\binom{n}{n_v}} \sum_{s \in B_{n_v}} \gamma_{\alpha,s}' Q_{\alpha,s} \gamma_{\alpha,s} = \frac{d_\alpha \sigma^2}{n_c} + o_p(n_c^{-1})$,
implying
$E[ n_c V_n \mid \mathcal{F}_n ] \overset{\text{pr.}}{\to} d_\alpha \sigma^2$,  (28)
where
$V_n = \frac{n_v^2}{n_c^2} \frac{1}{n_v b} \sum_{i=1}^{b} \gamma_{\alpha,s_i}' Q_{\alpha,s_i} \gamma_{\alpha,s_i}$
and $\mathcal{F}_n$ is the $\sigma$-field generated by $(y_1, x_1), \ldots, (y_n, x_n)$.

24 Monte Carlo Cross Validation (MCCV)

We also have
$\mathrm{Var}(n_c V_n \mid \mathcal{F}_n) = \frac{n_v^2}{n_c^2 b^2} \sum_{i=1}^{b} \mathrm{Var}(\gamma_{\alpha,s_i}' Q_{\alpha,s_i} \gamma_{\alpha,s_i} \mid \mathcal{F}_n) \le \frac{n_v^2}{n_c^2 b} E\big( (\gamma_{\alpha,s_1}' Q_{\alpha,s_1} \gamma_{\alpha,s_1})^2 \mid \mathcal{F}_n \big) = \frac{n_v^2}{n_c^2 b} \frac{1}{\binom{n}{n_v}} \sum_{s \in B_{n_v}} (\gamma_{\alpha,s}' Q_{\alpha,s} \gamma_{\alpha,s})^2$.
Since $E e_1^4 < \infty$, we have
$E\big( (\gamma_{\alpha,s}' Q_{\alpha,s} \gamma_{\alpha,s})^2 \big) \le C < \infty$ for all $s \in B_{n_v}$,  ($C$: some positive constant)
which, together with $\frac{n_v^2}{n_c^2 b} \to 0$, yields
$\mathrm{Var}[ n_c V_n \mid \mathcal{F}_n ] \overset{\text{pr.}}{\to} 0$.  (29)

25 Monte Carlo Cross Validation (MCCV)

It is shown in the Appendix that (28) and (29) imply
$n_c V_n \overset{\text{pr.}}{\to} d_\alpha \sigma^2$.  (30)
In view of (26), (27) and (30), (ii) is proved once we show
$\frac{1}{n_v b} \sum_{i=1}^{b} \gamma_{\alpha,s_i}' \gamma_{\alpha,s_i} = \frac{1}{n_v b} \sum_{i=1}^{b} e_{s_i}' e_{s_i} + o_p(n_c^{-1})$.  (31)
To show (31), note first that
$\gamma_{\alpha,s_i} = e_{s_i} - X_{\alpha,s_i} (\hat{\beta}_\alpha - \beta_\alpha)$,
yielding
$\frac{1}{n_v b} \sum_{i=1}^{b} \gamma_{\alpha,s_i}' \gamma_{\alpha,s_i} = \frac{1}{n_v b} \sum_{i=1}^{b} e_{s_i}' e_{s_i} - \frac{2}{n_v b} (\hat{\beta}_\alpha - \beta_\alpha)' \sum_{i=1}^{b} X_{\alpha,s_i}' e_{s_i} + (\hat{\beta}_\alpha - \beta_\alpha)' \Big( \frac{1}{n_v b} \sum_{i=1}^{b} X_{\alpha,s_i}' X_{\alpha,s_i} \Big) (\hat{\beta}_\alpha - \beta_\alpha)$.  (32)

26 Monte Carlo Cross Validation (MCCV)

Since
$\Big| (\hat{\beta}_\alpha - \beta_\alpha)' \frac{1}{n_v b} \sum_{i=1}^{b} X_{\alpha,s_i}' e_{s_i} \Big| \le \| \hat{\beta}_\alpha - \beta_\alpha \| \Big\| \frac{1}{n_v b} \sum_{i=1}^{b} X_{\alpha,s_i}' e_{s_i} \Big\|$,
$E\Big( \Big\| \frac{1}{n_v b} \sum_{i=1}^{b} X_{\alpha,s_i}' e_{s_i} \Big\| \Big) \le \frac{1}{n_v b} \sum_{i=1}^{b} E( \| X_{\alpha,s_i}' e_{s_i} \| ) = \frac{1}{n_v} E[ E( \| X_{\alpha,s_1}' e_{s_1} \| \mid \mathcal{F}_n ) ] = \frac{1}{n_v} E\Big( \frac{1}{\binom{n}{n_v}} \sum_{s \in B_{n_v}} \| X_{\alpha,s}' e_s \| \Big) \overset{\text{why?}}{=} O(n_v^{-1/2})$,
and
$\hat{\beta}_\alpha - \beta_\alpha = O_p(n^{-1/2})$,
one has
$(\hat{\beta}_\alpha - \beta_\alpha)' \frac{1}{n_v b} \sum_{i=1}^{b} X_{\alpha,s_i}' e_{s_i} \overset{\text{why?}}{=} O_p(n^{-1})$.  (33)

27 Monte Carlo Cross Validation (MCCV)

Similarly, it can be shown that
$(\hat{\beta}_\alpha - \beta_\alpha)' \Big( \frac{1}{n_v b} \sum_{i=1}^{b} X_{\alpha,s_i}' X_{\alpha,s_i} \Big) (\hat{\beta}_\alpha - \beta_\alpha) = O_p(n^{-1})$.  (34)
Combining (32)–(34), we obtain the desired conclusion (31). Thus, (ii) is proved. The proof of (i) is left as an exercise. Finally, we note that (iii) is an immediate consequence of (i) and (ii).

28 Monte Carlo Cross Validation (MCCV)

Appendix: the proof of (30)
To prove (30), we need the so-called conditional Chebyshev inequality, which is stated as follows.

Conditional Chebyshev inequality
Let $X$ be a positive r.v., $\epsilon > 0$, and $\mathcal{F}$ a $\sigma$-field. Then
$P(X > \epsilon \mid \mathcal{F}) \le \frac{E(X \mid \mathcal{F})}{\epsilon}$ a.s. (almost surely).  (35)

Proof
Let $D \in \mathcal{F}$. Then, by the definition of conditional expectation,
$\int_D P(X > \epsilon \mid \mathcal{F}) \, dP = \int_D E(I_{X > \epsilon} \mid \mathcal{F}) \, dP = \int_D I_{X > \epsilon} \, dP \le \int_D \frac{X}{\epsilon} I_{X > \epsilon} \, dP \le \int_D \frac{X}{\epsilon} \, dP = \int_D \frac{E(X \mid \mathcal{F})}{\epsilon} \, dP$.
Hence (35) follows.

29 Monte Carlo Cross Validation (MCCV)

Now, (30) is ensured by the following fact.

Fact
Let $\{X_n\}$ and $\{\mathcal{F}_n\}$ be sequences of r.v.s and $\sigma$-fields, respectively. If
$\mathrm{Var}(X_n \mid \mathcal{F}_n) \overset{\text{pr.}}{\to} 0$,  (36)
and
$E(X_n \mid \mathcal{F}_n) \overset{\text{pr.}}{\to} C$ (a constant),  (37)
then
$X_n \overset{\text{pr.}}{\to} C$.  (38)

Proof
By the conditional Chebyshev inequality and (36), it holds that for any $\epsilon > 0$,
$P( |X_n - E(X_n \mid \mathcal{F}_n)| > \epsilon \mid \mathcal{F}_n ) \le \frac{\mathrm{Var}(X_n \mid \mathcal{F}_n)}{\epsilon^2} \overset{\text{pr.}}{\to} 0$,
which, together with the dominated convergence theorem, yields
$X_n - E(X_n \mid \mathcal{F}_n) \overset{\text{pr.}}{\to} 0$. (why?)
This and (37) give (38).

30 Accumulated Prediction Errors (APE)

Consider the following stochastic regression model:
$y_t = \sum_{i=1}^{p} \beta_i x_{ti} + \epsilon_t$,  (39)
where $\epsilon_t \overset{\text{i.i.d.}}{\sim} (0, \sigma^2)$ and $x_t = (x_{t1}, \ldots, x_{tp})'$ is $\mathcal{F}_{t-1}$-measurable, meaning that $x_t$ is completely determined by the information available at time $t-1$.

Example 1: Autoregressive (AR) models
$y_t = a_1 y_{t-1} + \cdots + a_p y_{t-p} + \epsilon_t$,
where $A(z) = 1 - a_1 z - a_2 z^2 - \cdots - a_p z^p \neq 0$ for $|z| \le 1$.

Example 2: Autoregressive exogenous (ARX) models
$y_t = a_1 y_{t-1} + \cdots + a_p y_{t-p} + \eta_1 Z_{t1} + \cdots + \eta_q Z_{tq} + \epsilon_t$,
where $A(z) \neq 0$ for $|z| \le 1$, and $(\epsilon_t, Z_{t1}, \ldots, Z_{tq})$ are i.i.d.
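The condition $A(z) \neq 0$ for $|z| \le 1$ can be checked numerically by locating the zeros of $A$. A small sketch for an AR(2) model with illustrative coefficients (all values below are ours, not from the notes):

```python
import numpy as np

# Example 1 in simulation: AR(2) with illustrative coefficients a_1, a_2.
a1, a2 = 0.5, -0.3

# Stationarity: A(z) = 1 - a_1 z - a_2 z^2 must have no zeros with |z| <= 1.
roots = np.roots([-a2, -a1, 1.0])        # coefficients of A(z), highest power first
stationary = bool(np.all(np.abs(roots) > 1.0))
print(stationary)                        # True for these coefficients

rng = np.random.default_rng(8)
n = 500
y = np.zeros(n)
eps = rng.normal(size=n)
for t in range(2, n):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + eps[t]
# x_t = (y_{t-1}, y_{t-2})' is determined at time t-1, i.e. F_{t-1}-measurable.
```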

31 Accumulated Prediction Errors (APE)

Let $\alpha$ and $\alpha^*$ be defined as in the note for CV. We are interested in choosing $\alpha^*$ based on APE, which assigns each $\alpha$ a positive value:
$\mathrm{APE}_\alpha = \sum_{t=M+1}^{n} (y_t - x_{t,\alpha}' \hat{\beta}_{t-1,\alpha})^2$,
where
$\hat{\beta}_{t,\alpha} = \Big( \sum_{j=1}^{t} x_{j,\alpha} x_{j,\alpha}' \Big)^{-1} \sum_{j=1}^{t} x_{j,\alpha} y_j$
and $M$ is some integer to be specified later.
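A direct (non-recursive) implementation simply refits least squares on each initial segment of the data. A sketch (the function name `ape`, the AR(1) data, and the choice of $M$ are illustrative, not from the notes):

```python
import numpy as np

def ape(X_alpha, y, M):
    """Accumulated prediction error: sum over t > M of the squared one-step
    error when y_t is predicted from the least-squares fit on times 1..t-1."""
    n = len(y)
    err = 0.0
    for t in range(M, n):
        beta = np.linalg.lstsq(X_alpha[:t], y[:t], rcond=None)[0]
        err += (y[t] - X_alpha[t] @ beta) ** 2
    return err

# Example 1 with p = 1: an AR(1) series, so x_t = y_{t-1}.
rng = np.random.default_rng(3)
n = 200
ys = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    ys[t] = 0.6 * ys[t - 1] + eps[t]

X = ys[:-1].reshape(-1, 1)    # regressor: lagged value
score = ape(X, ys[1:], M=10)
print(score)
```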

32 Accumulated Prediction Errors (APE)

Theorem 4
Assume
$\frac{1}{n} \sum_{t=1}^{n} x_t x_t' \overset{\text{pr.}}{\to} R$ (p.d.), $\quad E|\epsilon_1|^{q_1} < \infty$ for some large $q_1$,  (40)
and
$\max_{1 \le t \le n,\, 1 \le i \le p} E|x_{ti}|^{q_2} < \infty$ for some large $q_2$, $\quad E\| t^{1/2} (\hat{\beta}_t - \beta) \|^{q_3} \le K < \infty$,  (41)
for all $t \ge M$ and some large $q_3$.
(i) For $\alpha \supseteq \alpha^*$,
$\mathrm{APE}_\alpha = \sum_{t=M+1}^{n} \epsilon_t^2 + \sigma^2 d_\alpha \log n + o_p(\log n)$.
(ii) For $\alpha \not\supseteq \alpha^*$,
$\mathrm{APE}_\alpha = \sum_{t=M+1}^{n} \epsilon_t^2 + \Delta_n(\alpha) + O_p(1)$,
where $\frac{1}{n} \Delta_n(\alpha) \overset{\text{pr.}}{\to} \gamma > 0$.
(iii) $\lim_{n\to\infty} P(\hat{\alpha}_{\mathrm{APE}} = \alpha^*) = 1$, where $\hat{\alpha}_{\mathrm{APE}} = \arg\min_{\alpha \in 2^P} \mathrm{APE}_\alpha$.

33 Accumulated Prediction Errors (APE)

Proof
We first prove (i). Define
$V_{t,\alpha} = \Big( \sum_{i=1}^{t} x_{i,\alpha} x_{i,\alpha}' \Big)^{-1}, \quad Q_{t,\alpha} = \Big( \sum_{j=1}^{t} x_{j,\alpha} \epsilon_j \Big)' V_{t,\alpha} \Big( \sum_{j=1}^{t} x_{j,\alpha} \epsilon_j \Big), \quad d_{t,\alpha} = x_{t,\alpha}' V_{t,\alpha} x_{t,\alpha}$.
By making use of
$V_{t,\alpha} = V_{t-1,\alpha} - \frac{V_{t-1,\alpha}\, x_{t,\alpha} x_{t,\alpha}'\, V_{t-1,\alpha}}{1 + x_{t,\alpha}' V_{t-1,\alpha} x_{t,\alpha}}$
and
$x_{t,\alpha}' V_{t-1,\alpha} x_{t,\alpha} = \frac{d_{t,\alpha}}{1 - d_{t,\alpha}}$,
one gets
$\sum_{t=M+1}^{n} \Big( x_{t,\alpha}' V_{t-1,\alpha} \sum_{j=1}^{t-1} x_{j,\alpha} \epsilon_j \Big)^2 (1 - d_{t,\alpha}) + Q_{n,\alpha} - Q_{M,\alpha} = \sum_{t=M+1}^{n} d_{t,\alpha} \epsilon_t^2 + 2 \sum_{t=M+1}^{n} x_{t,\alpha}' (\hat{\beta}_{t-1,\alpha} - \beta_\alpha) \epsilon_t (1 - d_{t,\alpha})$.  (42)
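The two matrix identities used here, the rank-one (Sherman-Morrison) update of $V_{t,\alpha}$ and the resulting scalar relation between $d_{t,\alpha}$ and $x_{t,\alpha}' V_{t-1,\alpha} x_{t,\alpha}$, are easy to verify numerically (toy dimensions and names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
p, t = 3, 40
Xs = rng.normal(size=(t, p))                    # rows x_1', ..., x_t'

V_prev = np.linalg.inv(Xs[:-1].T @ Xs[:-1])     # V_{t-1,alpha}
x = Xs[-1]                                      # x_{t,alpha}

# Rank-one (Sherman-Morrison) update of the inverse Gram matrix:
V = V_prev - np.outer(V_prev @ x, x @ V_prev) / (1.0 + x @ V_prev @ x)
assert np.allclose(V, np.linalg.inv(Xs.T @ Xs))

# Scalar identity x' V_{t-1} x = d_t / (1 - d_t), where d_t = x' V_t x:
d_t = x @ V @ x
assert np.isclose(x @ V_prev @ x, d_t / (1.0 - d_t))
print(d_t)
```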

34 Accumulated Prediction Errors (APE)

Moreover, by (40) and (41), it can be shown that
$\sum_{t=M+1}^{n} \Big( x_{t,\alpha}' V_{t-1,\alpha} \sum_{j=1}^{t-1} x_{j,\alpha} \epsilon_j \Big)^2 d_{t,\alpha} = O_p(1)$,  (43)
$Q_{n,\alpha} = O_p(1), \quad Q_{M,\alpha} = O_p(1)$,  (44)
$\sum_{t=M+1}^{n} x_{t,\alpha}' (\hat{\beta}_{t-1,\alpha} - \beta_\alpha) \epsilon_t (1 - d_{t,\alpha}) = o_p(\log n)$.  (45)
Also, we have
$\sum_{t=M+1}^{n} d_{t,\alpha} \epsilon_t^2 = \sigma^2 \sum_{t=M+1}^{n} d_{t,\alpha} + \sum_{t=M+1}^{n} d_{t,\alpha} (\epsilon_t^2 - \sigma^2) = \sigma^2 d_\alpha \log n + o_p(\log n) + O_p(1)$.  (46)

35 Accumulated Prediction Errors (APE)

Consequently, (i) follows from (42)–(46) and the fact that
$\mathrm{APE}_\alpha = \sum_{t=M+1}^{n} \big( \epsilon_t - x_{t,\alpha}' (\hat{\beta}_{t-1,\alpha} - \beta_\alpha) \big)^2 = \sum_{t=M+1}^{n} \epsilon_t^2 - 2 \sum_{t=M+1}^{n} (\hat{\beta}_{t-1,\alpha} - \beta_\alpha)' x_{t,\alpha} \epsilon_t + \sum_{t=M+1}^{n} [ x_{t,\alpha}' (\hat{\beta}_{t-1,\alpha} - \beta_\alpha) ]^2 = \sum_{t=M+1}^{n} \epsilon_t^2 + \sum_{t=M+1}^{n} [ x_{t,\alpha}' (\hat{\beta}_{t-1,\alpha} - \beta_\alpha) ]^2 + o_p(\log n)$.

To show (ii), note that by Theorem 2.1 of Wei (1992), we have
$\sum_{t=M+1}^{n} (y_t - x_{t,\alpha}' \hat{\beta}_{t-1,\alpha})^2 \ge \sum_{t=1}^{n} (y_t - x_{t,\alpha}' \hat{\beta}_{n,\alpha})^2 - \sum_{t=1}^{M} (y_t - x_{t,\alpha}' \hat{\beta}_{M,\alpha})^2 = y'(I - P_\alpha) y + O_p(1)$,
which implies (ii) (why?). Now, (iii) follows directly from (i) and (ii).

36 References

1. Shao, J. (1993). Linear model selection by cross-validation. Journal of the American Statistical Association, 88, 486-494.
2. Wei, C. Z. (1992). On predictive least squares principles. The Annals of Statistics, 20, 1-42.


Econometrics of Panel Data Econometrics of Panel Data Jakub Mućk Meeting # 6 Jakub Mućk Econometrics of Panel Data Meeting # 6 1 / 36 Outline 1 The First-Difference (FD) estimator 2 Dynamic panel data models 3 The Anderson and Hsiao

More information

Comprehensive Examination Quantitative Methods Spring, 2018

Comprehensive Examination Quantitative Methods Spring, 2018 Comprehensive Examination Quantitative Methods Spring, 2018 Instruction: This exam consists of three parts. You are required to answer all the questions in all the parts. 1 Grading policy: 1. Each part

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Bounds for pairs in partitions of graphs

Bounds for pairs in partitions of graphs Bounds for pairs in partitions of graphs Jie Ma Xingxing Yu School of Mathematics Georgia Institute of Technology Atlanta, GA 30332-0160, USA Abstract In this paper we study the following problem of Bollobás

More information

Theoretical Tutorial Session 2

Theoretical Tutorial Session 2 1 / 36 Theoretical Tutorial Session 2 Xiaoming Song Department of Mathematics Drexel University July 27, 216 Outline 2 / 36 Itô s formula Martingale representation theorem Stochastic differential equations

More information

Lecture 5: January 30

Lecture 5: January 30 CS71 Randomness & Computation Spring 018 Instructor: Alistair Sinclair Lecture 5: January 30 Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They

More information

Thus f is continuous at x 0. Matthew Straughn Math 402 Homework 6

Thus f is continuous at x 0. Matthew Straughn Math 402 Homework 6 Matthew Straughn Math 402 Homework 6 Homework 6 (p. 452) 14.3.3, 14.3.4, 14.3.5, 14.3.8 (p. 455) 14.4.3* (p. 458) 14.5.3 (p. 460) 14.6.1 (p. 472) 14.7.2* Lemma 1. If (f (n) ) converges uniformly to some

More information

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH A Test #2 June 11, Solutions

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH A Test #2 June 11, Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 2. A Test #2 June, 2 Solutions. (5 + 5 + 5 pts) The probability of a student in MATH 4 passing a test is.82. Suppose students

More information

Graduate Econometrics I: Maximum Likelihood II

Graduate Econometrics I: Maximum Likelihood II Graduate Econometrics I: Maximum Likelihood II Yves Dominicy Université libre de Bruxelles Solvay Brussels School of Economics and Management ECARES Yves Dominicy Graduate Econometrics I: Maximum Likelihood

More information

PROBABILITY AND STATISTICS IN COMPUTING. III. Discrete Random Variables Expectation and Deviations From: [5][7][6] German Hernandez

PROBABILITY AND STATISTICS IN COMPUTING. III. Discrete Random Variables Expectation and Deviations From: [5][7][6] German Hernandez Conditional PROBABILITY AND STATISTICS IN COMPUTING III. Discrete Random Variables and Deviations From: [5][7][6] Page of 46 German Hernandez Conditional. Random variables.. Measurable function Let (Ω,

More information

Lower Tail Probabilities and Normal Comparison Inequalities. In Memory of Wenbo V. Li s Contributions

Lower Tail Probabilities and Normal Comparison Inequalities. In Memory of Wenbo V. Li s Contributions Lower Tail Probabilities and Normal Comparison Inequalities In Memory of Wenbo V. Li s Contributions Qi-Man Shao The Chinese University of Hong Kong Lower Tail Probabilities and Normal Comparison Inequalities

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 12: Glivenko-Cantelli and Donsker Results

Introduction to Empirical Processes and Semiparametric Inference Lecture 12: Glivenko-Cantelli and Donsker Results Introduction to Empirical Processes and Semiparametric Inference Lecture 12: Glivenko-Cantelli and Donsker Results Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics

More information

REPEATED MEASURES. Copyright c 2012 (Iowa State University) Statistics / 29

REPEATED MEASURES. Copyright c 2012 (Iowa State University) Statistics / 29 REPEATED MEASURES Copyright c 2012 (Iowa State University) Statistics 511 1 / 29 Repeated Measures Example In an exercise therapy study, subjects were assigned to one of three weightlifting programs i=1:

More information

Homework 6: Solutions Sid Banerjee Problem 1: (The Flajolet-Martin Counter) ORIE 4520: Stochastics at Scale Fall 2015

Homework 6: Solutions Sid Banerjee Problem 1: (The Flajolet-Martin Counter) ORIE 4520: Stochastics at Scale Fall 2015 Problem 1: (The Flajolet-Martin Counter) In class (and in the prelim!), we looked at an idealized algorithm for finding the number of distinct elements in a stream, where we sampled uniform random variables

More information

Notes on the second moment method, Erdős multiplication tables

Notes on the second moment method, Erdős multiplication tables Notes on the second moment method, Erdős multiplication tables January 25, 20 Erdős multiplication table theorem Suppose we form the N N multiplication table, containing all the N 2 products ab, where

More information

Chapter 4. Replication Variance Estimation. J. Kim, W. Fuller (ISU) Chapter 4 7/31/11 1 / 28

Chapter 4. Replication Variance Estimation. J. Kim, W. Fuller (ISU) Chapter 4 7/31/11 1 / 28 Chapter 4 Replication Variance Estimation J. Kim, W. Fuller (ISU) Chapter 4 7/31/11 1 / 28 Jackknife Variance Estimation Create a new sample by deleting one observation n 1 n n ( x (k) x) 2 = x (k) = n

More information

Lecture Notes 3 Convergence (Chapter 5)

Lecture Notes 3 Convergence (Chapter 5) Lecture Notes 3 Convergence (Chapter 5) 1 Convergence of Random Variables Let X 1, X 2,... be a sequence of random variables and let X be another random variable. Let F n denote the cdf of X n and let

More information

Economics Department LSE. Econometrics: Timeseries EXERCISE 1: SERIAL CORRELATION (ANALYTICAL)

Economics Department LSE. Econometrics: Timeseries EXERCISE 1: SERIAL CORRELATION (ANALYTICAL) Economics Department LSE EC402 Lent 2015 Danny Quah TW1.10.01A x7535 : Timeseries EXERCISE 1: SERIAL CORRELATION (ANALYTICAL) 1. Suppose ɛ is w.n. (0, σ 2 ), ρ < 1, and W t = ρw t 1 + ɛ t, for t = 1, 2,....

More information

< k 2n. 2 1 (n 2). + (1 p) s) N (n < 1

< k 2n. 2 1 (n 2). + (1 p) s) N (n < 1 List of Problems jacques@ucsd.edu Those question with a star next to them are considered slightly more challenging. Problems 9, 11, and 19 from the book The probabilistic method, by Alon and Spencer. Question

More information

IMPLICIT RENEWAL THEORY AND POWER TAILS ON TREES PREDRAG R. JELENKOVIĆ, Columbia University

IMPLICIT RENEWAL THEORY AND POWER TAILS ON TREES PREDRAG R. JELENKOVIĆ, Columbia University Applied Probability Trust 22 June 212 IMPLICIT RWAL THORY AD POWR TAILS O TRS PRDRAG R. JLKOVIĆ, Columbia University MARIAA OLVRA-CRAVIOTO, Columbia University Abstract We extend Goldie s 1991 Implicit

More information

Single Equation Linear GMM with Serially Correlated Moment Conditions

Single Equation Linear GMM with Serially Correlated Moment Conditions Single Equation Linear GMM with Serially Correlated Moment Conditions Eric Zivot November 2, 2011 Univariate Time Series Let {y t } be an ergodic-stationary time series with E[y t ]=μ and var(y t )

More information

Single Equation Linear GMM with Serially Correlated Moment Conditions

Single Equation Linear GMM with Serially Correlated Moment Conditions Single Equation Linear GMM with Serially Correlated Moment Conditions Eric Zivot October 28, 2009 Univariate Time Series Let {y t } be an ergodic-stationary time series with E[y t ]=μ and var(y t )

More information

20.1. Balanced One-Way Classification Cell means parametrization: ε 1. ε I. + ˆɛ 2 ij =

20.1. Balanced One-Way Classification Cell means parametrization: ε 1. ε I. + ˆɛ 2 ij = 20. ONE-WAY ANALYSIS OF VARIANCE 1 20.1. Balanced One-Way Classification Cell means parametrization: Y ij = µ i + ε ij, i = 1,..., I; j = 1,..., J, ε ij N(0, σ 2 ), In matrix form, Y = Xβ + ε, or 1 Y J

More information

Master s Written Examination

Master s Written Examination Master s Written Examination Option: Statistics and Probability Fall 2013 Full points may be obtained for correct answers to eight questions. Each numbered question (which may have several parts) is worth

More information

Topic 12 Overview of Estimation

Topic 12 Overview of Estimation Topic 12 Overview of Estimation Classical Statistics 1 / 9 Outline Introduction Parameter Estimation Classical Statistics Densities and Likelihoods 2 / 9 Introduction In the simplest possible terms, the

More information

On variable bandwidth kernel density estimation

On variable bandwidth kernel density estimation JSM 04 - Section on Nonparametric Statistics On variable bandwidth kernel density estimation Janet Nakarmi Hailin Sang Abstract In this paper we study the ideal variable bandwidth kernel estimator introduced

More information

CS 125 Section #10 (Un)decidability and Probability November 1, 2016

CS 125 Section #10 (Un)decidability and Probability November 1, 2016 CS 125 Section #10 (Un)decidability and Probability November 1, 2016 1 Countability Recall that a set S is countable (either finite or countably infinite) if and only if there exists a surjective mapping

More information

Lecture Notes 5 Convergence and Limit Theorems. Convergence with Probability 1. Convergence in Mean Square. Convergence in Probability, WLLN

Lecture Notes 5 Convergence and Limit Theorems. Convergence with Probability 1. Convergence in Mean Square. Convergence in Probability, WLLN Lecture Notes 5 Convergence and Limit Theorems Motivation Convergence with Probability Convergence in Mean Square Convergence in Probability, WLLN Convergence in Distribution, CLT EE 278: Convergence and

More information

Week 2. Review of Probability, Random Variables and Univariate Distributions

Week 2. Review of Probability, Random Variables and Univariate Distributions Week 2 Review of Probability, Random Variables and Univariate Distributions Probability Probability Probability Motivation What use is Probability Theory? Probability models Basis for statistical inference

More information

Lecture 4: Completion of a Metric Space

Lecture 4: Completion of a Metric Space 15 Lecture 4: Completion of a Metric Space Closure vs. Completeness. Recall the statement of Lemma??(b): A subspace M of a metric space X is closed if and only if every convergent sequence {x n } X satisfying

More information

3. GRAPHS AND CW-COMPLEXES. Graphs.

3. GRAPHS AND CW-COMPLEXES. Graphs. Graphs. 3. GRAPHS AND CW-COMPLEXES Definition. A directed graph X consists of two disjoint sets, V (X) and E(X) (the set of vertices and edges of X, resp.), together with two mappings o, t : E(X) V (X).

More information

A NICE PROOF OF FARKAS LEMMA

A NICE PROOF OF FARKAS LEMMA A NICE PROOF OF FARKAS LEMMA DANIEL VICTOR TAUSK Abstract. The goal of this short note is to present a nice proof of Farkas Lemma which states that if C is the convex cone spanned by a finite set and if

More information

Stochastic Models (Lecture #4)

Stochastic Models (Lecture #4) Stochastic Models (Lecture #4) Thomas Verdebout Université libre de Bruxelles (ULB) Today Today, our goal will be to discuss limits of sequences of rv, and to study famous limiting results. Convergence

More information

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables Deli Li 1, Yongcheng Qi, and Andrew Rosalsky 3 1 Department of Mathematical Sciences, Lakehead University,

More information

4 Branching Processes

4 Branching Processes 4 Branching Processes Organise by generations: Discrete time. If P(no offspring) 0 there is a probability that the process will die out. Let X = number of offspring of an individual p(x) = P(X = x) = offspring

More information

Properties of Summation Operator

Properties of Summation Operator Econ 325 Section 003/004 Notes on Variance, Covariance, and Summation Operator By Hiro Kasahara Properties of Summation Operator For a sequence of the values {x 1, x 2,..., x n, we write the sum of x 1,

More information

UQ, Semester 1, 2017, Companion to STAT2201/CIVL2530 Exam Formulae and Tables

UQ, Semester 1, 2017, Companion to STAT2201/CIVL2530 Exam Formulae and Tables UQ, Semester 1, 2017, Companion to STAT2201/CIVL2530 Exam Formulae and Tables To be provided to students with STAT2201 or CIVIL-2530 (Probability and Statistics) Exam Main exam date: Tuesday, 20 June 1

More information

Lecture 17: The Exponential and Some Related Distributions

Lecture 17: The Exponential and Some Related Distributions Lecture 7: The Exponential and Some Related Distributions. Definition Definition: A continuous random variable X is said to have the exponential distribution with parameter if the density of X is e x if

More information

SDS : Theoretical Statistics

SDS : Theoretical Statistics SDS 384 11: Theoretical Statistics Lecture 1: Introduction Purnamrita Sarkar Department of Statistics and Data Science The University of Texas at Austin https://psarkar.github.io/teaching Manegerial Stuff

More information

Statement: With my signature I confirm that the solutions are the product of my own work. Name: Signature:.

Statement: With my signature I confirm that the solutions are the product of my own work. Name: Signature:. MATHEMATICAL STATISTICS Homework assignment Instructions Please turn in the homework with this cover page. You do not need to edit the solutions. Just make sure the handwriting is legible. You may discuss

More information

Write your Registration Number, Test Centre, Test Code and the Number of this booklet in the appropriate places on the answersheet.

Write your Registration Number, Test Centre, Test Code and the Number of this booklet in the appropriate places on the answersheet. 2016 Booklet No. Test Code : PSA Forenoon Questions : 30 Time : 2 hours Write your Registration Number, Test Centre, Test Code and the Number of this booklet in the appropriate places on the answersheet.

More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation Maximum Likelihood Estimation Merlise Clyde STA721 Linear Models Duke University August 31, 2017 Outline Topics Likelihood Function Projections Maximum Likelihood Estimates Readings: Christensen Chapter

More information

Chapter 6. Panel Data. Joan Llull. Quantitative Statistical Methods II Barcelona GSE

Chapter 6. Panel Data. Joan Llull. Quantitative Statistical Methods II Barcelona GSE Chapter 6. Panel Data Joan Llull Quantitative Statistical Methods II Barcelona GSE Introduction Chapter 6. Panel Data 2 Panel data The term panel data refers to data sets with repeated observations over

More information