Regression Analysis. Institute of Statistics, National Tsing Hua University, Taiwan


Regression Analysis

Ching-Kang Ing (銀慶剛), Institute of Statistics, National Tsing Hua University, Taiwan

Regression Models: Finite Sample Theory

$$y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_k x_{ik} + \varepsilon_i, \quad i = 1, \dots, n,$$

where the $\varepsilon_i$ are i.i.d. r.v.'s with $E(\varepsilon_1) = 0$ and $E(\varepsilon_1^2) = \operatorname{var}(\varepsilon_1) = \sigma^2 > 0$.

A. Analysis of Variance: $SS_T = SS_{Reg} + SS_{Res}$. Define

$$\hat\beta = (\hat\beta_0, \dots, \hat\beta_k)' = (X'X)^{-1}X'y, \quad \text{where } X = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1k} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \cdots & x_{nk} \end{pmatrix} \text{ and } y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}.$$

Also define

$$SS_T = \sum_{i=1}^n (y_i - \bar y)^2, \quad \text{where } \bar y = \frac{1}{n}\sum_{i=1}^n y_i,$$

$$SS_{Res} = \sum_{i=1}^n (y_i - \hat\beta_0 - \hat\beta_1 x_{i1} - \cdots - \hat\beta_k x_{ik})^2 = \sum_{i=1}^n (y_i - \hat\beta' x_i)^2,$$

where $x_i = (1, x_{i1}, \dots, x_{ik})'$, and

$$SS_{Reg} = \sum_{i=1}^n (\hat\beta_0 + \hat\beta_1 x_{i1} + \cdots + \hat\beta_k x_{ik} - \bar y)^2 = \sum_{i=1}^n (\hat\beta' x_i - \bar y)^2.$$
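As a quick numerical illustration (an addition, not part of the original notes), the following Python sketch computes $\hat\beta = (X'X)^{-1}X'y$ on simulated data and checks the ANOVA identity $SS_T = SS_{Reg} + SS_{Res}$; the simulated setup and all variable names are illustrative assumptions.

```python
# Check the ANOVA identity SS_T = SS_Reg + SS_Res on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # design with intercept
beta = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta + rng.normal(scale=1.5, size=n)                # i.i.d. errors, sigma = 1.5

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)                # (X'X)^{-1} X'y
fitted = X @ beta_hat
SS_T = np.sum((y - y.mean()) ** 2)
SS_Res = np.sum((y - fitted) ** 2)
SS_Reg = np.sum((fitted - y.mean()) ** 2)
print(np.isclose(SS_T, SS_Reg + SS_Res))                    # True
```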

It is not difficult to see (why?) that

$$SS_T = y'(I - M_0)y, \quad \text{where } M_0 = \frac{1}{n}\mathbf{1}\mathbf{1}' \text{ with } \mathbf{1} = (1, \dots, 1)',$$

$$SS_{Res} = y'(I - M_k)y, \quad \text{where } M_k = X(X'X)^{-1}X'.$$

[Note that

$$\begin{pmatrix} y_1 - x_1'\hat\beta \\ \vdots \\ y_n - x_n'\hat\beta \end{pmatrix} = y - X\hat\beta = y - X(X'X)^{-1}X'y = (I - M_k)y,$$

and $M_k = M_k^2$, $(I - M_k)^2 = I - M_k$.]

and

$$SS_{Reg} = y'(M_k - M_0)y.$$

Therefore, ANOVA is nothing but

$$y'(I - M_0)y = y'(M_k - M_0)y + y'(I - M_k)y.$$

Actually, ANOVA is a Pythagorean identity, as illustrated below, in which $C(X) = \{Xa : a \in \mathbb{R}^{k+1}\}$ is called the column space of $X$.

[Figure: the Pythagorean decomposition of $y$ relative to $C(X)$, the column space of $X$.]

Projection Matrices

Let

$$X = \begin{pmatrix} x_{11} & \cdots & x_{1r} \\ \vdots & & \vdots \\ x_{n1} & \cdots & x_{nr} \end{pmatrix} = [X_1, \dots, X_r]$$

be an $n \times r$ matrix. The column space of $X$, $C(X)$, is defined as

$$C(X) = \{Xa : a = (a_1, \dots, a_r)' \in \mathbb{R}^r\},$$

noting that $Xa = a_1 X_1 + \cdots + a_r X_r$.

Projection Matrices

Definition. An $n \times n$ matrix $M$ is called an orthogonal projection matrix onto $C(X)$ if and only if

1. for $v \in C(X)$, $Mv = v$;
2. for $w \in C^\perp(X)$, $Mw = 0$, where $C^\perp(X) = \{s : v's = 0 \text{ for all } v \in C(X)\}$.

Projection Matrices

Fact 1. $C(M) = C(X)$.

proof. Let $v \in C(X)$. Then $v = Xb = MXb \in C(M)$ (why?), for some $b$. Let $v \in C(M)$. Then $v = Ma = M(a_1 + a_2) = a_1 \in C(X)$, for some $a$, and some $a_1 \in C(X)$, $a_2 \in C^\perp(X)$. This completes the proof.

Projection Matrices

Fact 2. $M = M'$ (symmetric) and $M^2 = M$ (idempotent) if and only if $M$ is an orthogonal projection matrix onto $C(M)$.

proof. ($\Rightarrow$) For $v \in C(M)$, $Mv = MMb \overset{\text{idempotent}}{=} Mb = v$, for some $b$. For $w \in C^\perp(M)$, $Mw \overset{\text{symm.}}{=} M'w = 0$. (why?)

($\Leftarrow$) Define $e_i = (0, \dots, 0, 1, 0, \dots, 0)'$, whose $i$-th component is 1 and whose other components are 0. It suffices to show that for any $e_i, e_j$, $e_i'M'(I - M)e_j = 0$. (why?) Since we can decompose $e_i$ and $e_j$ as $e_i = e_i^{(1)} + e_i^{(2)}$ and $e_j = e_j^{(1)} + e_j^{(2)}$, where $e_i^{(1)}, e_j^{(1)} \in C(M)$ and

$e_i^{(2)}, e_j^{(2)} \in C^\perp(M)$,

$$e_i'M'(I - M)e_j = e_i'M'(I - M)(e_j^{(1)} + e_j^{(2)}) \overset{\text{why?}}{=} e_i'M'e_j^{(2)} \overset{\text{why?}}{=} e_i^{(1)\prime}e_j^{(2)} = 0.$$

This completes the proof.

Projection Matrices

Fact 3. Orthogonal projection matrices are unique.

proof. Let $M$ and $P$ be orthogonal projection matrices onto some space $S \subseteq \mathbb{R}^n$. Then, for any $v \in \mathbb{R}^n$, $v = v_1 + v_2$, where $v_1 \in S$ and $v_2 \in S^\perp$. The desired conclusion follows from

$$(M - P)v = (M - P)(v_1 + v_2) = (M - P)v_1 = 0.$$

Projection Matrices

Fact 4. Let $\{o_1, \dots, o_r\}$ be an orthonormal basis of $C(X)$, i.e., $o_i'o_j = 0$ if $i \ne j$ and $1$ if $i = j$, and for any $v \in C(X)$, $v = Ob$ for some $b \in \mathbb{R}^r$, where $O = [o_1, \dots, o_r]$. Then $OO' = \sum_{i=1}^r o_i o_i'$ is the orthogonal projection matrix onto $C(X)$.

proof. Since $OO'$ is symmetric and $OO'OO' = OO'$, where $O'O = I_r$, the $r$-dimensional identity matrix, by Fact 2, $OO'$ is the orthogonal projection matrix onto $C(OO')$. Moreover, for $v \in C(X)$, we have $v = Ob = OO'Ob \in C(OO')$, for some $b \in \mathbb{R}^r$. In addition, $C(OO') \subseteq C(O) = C(X)$. The desired conclusion follows.

Projection Matrices

Remark. One can also prove the result by showing (i) for $v \in C(X)$, $OO'v = OO'Ob = Ob = v$, and (ii) for $w \in C^\perp(X)$, $OO'w = 0$ (the $n$-dimensional vector of zeros). The difference between the two proofs is that the first invokes Fact 2 to conclude that $OO'$ is the orthogonal projection matrix onto $C(OO')$ and then infers from the structure of $C(OO')$ that it coincides with $C(X)$, whereas the second directly guesses that $OO'$ is the orthogonal projection matrix onto $C(X)$. The former proof is more roundabout but involves less guessing; for the latter it is the other way around.

Projection Matrices

Q. Given a matrix $X$, how do we construct the orthogonal projection matrix for $C(X)$?

Gram-Schmidt process: Let $X = [x_1, \dots, x_q]$ for some $q \ge 1$. Define

$$y_1 = x_1/\|x_1\|, \quad \text{where } \|x_1\|^2 = x_1'x_1,$$

$$w_2 = x_2 - (x_2'y_1)y_1.$$

Projection Matrices

$$y_2 = w_2/\|w_2\|, \quad \dots, \quad w_s = x_s - \sum_{i=1}^{s-1}(x_s'y_i)y_i, \quad y_s = w_s/\|w_s\|, \quad 2 \le s \le q.$$

If the rank of $C(X)$ is $1 \le r \le q$, then $Y = [y_1, \dots, y_r]$ is an orthonormal basis of $C(X)$, noting that $y_{r+j} = 0$ for $1 \le j \le q - r$. $YY'$ is the orthogonal projection matrix onto $C(X)$ (by Fact 4).
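The following Python sketch (an addition, not from the slides) implements the Gram-Schmidt process just described and uses Fact 4 to form $YY'$; a linearly dependent column is included to show that dependent columns yield $w_s = 0$ and are dropped. The tolerance and names are illustrative.

```python
# Gram-Schmidt: build an orthonormal basis of C(X), then the projection YY'.
import numpy as np

def gram_schmidt(X, tol=1e-10):
    """Return columns forming an orthonormal basis of C(X)."""
    basis = []
    for x in X.T:
        w = x - sum((x @ y) * y for y in basis)      # w_s = x_s - sum (x_s'y_i) y_i
        norm = np.linalg.norm(w)
        if norm > tol:                               # keep only nonzero w_s
            basis.append(w / norm)
    return np.column_stack(basis)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))
X = np.column_stack([X, X[:, 0] + X[:, 1]])          # 4 columns, rank 3
Y = gram_schmidt(X)
M = Y @ Y.T                                          # orthogonal projection onto C(X)
print(Y.shape)                                       # (6, 3)
print(np.allclose(M @ X, X), np.allclose(M, M @ M))  # True True
```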

Explanation of Rank

Explanation of the rank of $C(X)$: Let $J$ be a subset of $\{1, \dots, q\}$ satisfying (i) $\{x_i, i \in J\}$ is linearly independent (i.e., $\sum_{i \in J} a_i x_i = 0$ if and only if $a_i = 0$ for all $i \in J$), and (ii) for any $J_1 \supseteq J$ with $J_1 \ne J$, $\{x_i, i \in J_1\}$ is not linearly independent. The rank of $C(X)$ is defined as $\#(J)$, the number of elements in $J$.

Projection Matrices

Moreover, if $r(X) = q$ (i.e., the rank of $C(X)$ is $q$), then $X(X'X)^{-1}X'$ is the orthogonal projection matrix onto $C(X)$.

proof. (i) $X(X'X)^{-1}X'$ is symmetric and idempotent. (ii) $C(X(X'X)^{-1}X') \overset{\text{why?}}{=} C(X)$.

If $1 \le r(X) < q$, then $X(X'X)^- X'$ is the orthogonal projection matrix onto $C(X)$, where $A^-$ denotes a generalized inverse (g-inverse) of $A$, defined as any matrix $G$ such that $AGA = A$. (Note that $(X'X)^- = (X'X)^{-1}$ if $r(X) = q$, and there are infinitely many $(X'X)^-$ if $r(X) < q$. But in either case, $X(X'X)^- X'$ is unique, according to Fact 3. More details can be found in ref.pdf.)

Projection Matrices

We now go back to regression problems and summarize the key features of $M_0 = \frac{1}{n}\mathbf{1}\mathbf{1}'$, $M_k = X(X'X)^{-1}X'$,

$$\text{where } X = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1k} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \cdots & x_{nk} \end{pmatrix},$$

as well as $(I - M_0)$, $(I - M_k)$ and $M_k - M_0$:

(i) $M_0$ is the orthogonal projection matrix onto $C(\mathbf{1})$,
(ii) $M_k$ is the orthogonal projection matrix onto $C(X)$,
(iii) $(I - M_0)$ is the orthogonal projection matrix onto $C^\perp(\mathbf{1})$,

(iv) $(I - M_k)$ is the orthogonal projection matrix onto $C^\perp(X)$,
(v) $M_k - M_0$ is the orthogonal projection matrix onto $C((I - M_0)X)$, where

$$C((I - M_0)X) \overset{\text{why?}}{=} C\left(\begin{pmatrix} x_{11} - \bar x_1 \\ \vdots \\ x_{n1} - \bar x_1 \end{pmatrix}, \dots, \begin{pmatrix} x_{1k} - \bar x_k \\ \vdots \\ x_{nk} - \bar x_k \end{pmatrix}\right), \quad \bar x_i = \frac{1}{n}\sum_{j=1}^n x_{ji},$$

(vi) $M_0 M_k = M_0 = M_k M_0$, $(I - M_0)M_0 = 0$, $(I - M_k)M_k = 0$, $(I - M_k)M_0 = 0$, where $0$ is the $n \times n$ matrix of zeros.

Estimation

B. Does $\hat\beta$ possess any optimal properties?

(i) $E(\hat\beta) = \beta$, since

$$E(\hat\beta) = E\{(X'X)^{-1}X'y\} = E\{(X'X)^{-1}X'(X\beta + \varepsilon)\} = \beta + E((X'X)^{-1}X'\varepsilon) = \beta + (X'X)^{-1}X'E(\varepsilon) = \beta + (X'X)^{-1}X' \cdot 0 = \beta.$$

(ii) $\operatorname{Var}(\hat\beta) = (X'X)^{-1}\sigma^2$, because

$$\operatorname{Var}(\hat\beta) = E((\hat\beta - \beta)(\hat\beta - \beta)') = E\{(X'X)^{-1}X'\varepsilon\varepsilon'X(X'X)^{-1}\} = (X'X)^{-1}X'E(\varepsilon\varepsilon')X(X'X)^{-1} = \sigma^2(X'X)^{-1},$$

noting that we have used $E(\varepsilon\varepsilon') = \sigma^2 I$.

(iii) Gauss-Markov Theorem. For any $\check\beta = Ay$ satisfying

$$\beta = E(\check\beta) = E(Ay) = E(A(X\beta + \varepsilon)) = AX\beta \quad \text{for all } \beta,$$

we have $\operatorname{Var}(\hat\beta) \le \operatorname{Var}(\check\beta)$ in the sense that $\operatorname{Var}(\check\beta) - \operatorname{Var}(\hat\beta)$ is non-negative definite, i.e., for any $\|a\| = 1$,

$$a'\{\operatorname{Var}(\check\beta) - \operatorname{Var}(\hat\beta)\}a \ge 0. \tag{*}$$

Estimation

Remark. (i) $Ay$ is called a linear estimator of $\beta$. (ii) $\check\beta$ is unbiased (since we assume $E(\check\beta) = \beta$ for all $\beta$). (iii) This theorem says that $\hat\beta$ is the best linear unbiased estimator (BLUE) of $\beta$. (iv) (*) is equivalent to $\operatorname{Var}(a'\check\beta) \overset{\text{why?}}{\ge} \operatorname{Var}(a'\hat\beta)$, meaning that the variance of $a'\check\beta$ is never smaller than that of $a'\hat\beta$, regardless of the direction vector $a$ onto which $\hat\beta$ and $\check\beta$ are projected.

proof. Let $a \in \mathbb{R}^{k+1}$ be arbitrarily chosen. Then,

$$\operatorname{Var}(a'\check\beta) = E[a'(\check\beta - \beta)]^2 \quad (\text{since } \check\beta \text{ is unbiased})$$
$$= E(a'(\check\beta - \hat\beta) + a'(\hat\beta - \beta))^2$$
$$\ge \operatorname{Var}(a'\hat\beta) + 2E\{a'(\check\beta - \hat\beta)(\hat\beta - \beta)'a\} \quad (\text{since } \hat\beta \text{ is unbiased})$$
$$\overset{\text{why?}}{=} \operatorname{Var}(a'\hat\beta) + 2a'E\left((A - (X'X)^{-1}X')\varepsilon\varepsilon'X(X'X)^{-1}\right)a$$
$$\overset{\text{why?}}{=} \operatorname{Var}(a'\hat\beta) + 2\sigma^2 a'(A - (X'X)^{-1}X')X(X'X)^{-1}a$$
$$\overset{\text{why?}}{=} \operatorname{Var}(a'\hat\beta) + 2\sigma^2 a'[(X'X)^{-1}a - (X'X)^{-1}a] = \operatorname{Var}(a'\hat\beta).$$

Estimation

C. How to estimate $\sigma^2$?

$$\hat\sigma^2 = \frac{1}{n - (k+1)}\sum_{i=1}^n (y_i - \hat\beta_0 - \hat\beta_1 x_{i1} - \cdots - \hat\beta_k x_{ik})^2 = \frac{1}{n - (k+1)}\sum_{i=1}^n (y_i - x_i'\hat\beta)^2 = \frac{1}{n - (k+1)}\,y'(I - M_k)y.$$

Estimation

Why $k + 1$? Because $k + 1$ makes $\hat\sigma^2$ unbiased, namely, $E(\hat\sigma^2) = \sigma^2$. To see this, we have

$$E(\hat\sigma^2) = \frac{1}{n - (k+1)}E(y'(I - M_k)y) \overset{\text{why?}}{=} \frac{1}{n - (k+1)}E(\varepsilon'(I - M_k)\varepsilon), \quad \varepsilon = (\varepsilon_1, \dots, \varepsilon_n)', \ (I - M_k)X = 0,$$

$$\overset{\text{why?}}{=} \frac{\sigma^2}{n - (k+1)}\operatorname{tr}(I - M_k) \overset{\text{why?}}{=} \sigma^2.$$
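A small Monte Carlo sketch (an addition under an illustrative setup) of this unbiasedness claim: averaging $y'(I - M_k)y/(n - (k+1))$ over many error draws should reproduce $\sigma^2$.

```python
# Monte Carlo check that E(sigma_hat^2) = sigma^2.
import numpy as np

rng = np.random.default_rng(2)
n, k, sigma = 30, 2, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
I_minus_Mk = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)   # I - M_k

draws = []
for _ in range(20000):
    eps = rng.normal(scale=sigma, size=n)   # beta drops out of SS_Res, so take y = eps
    draws.append(eps @ I_minus_Mk @ eps / (n - (k + 1)))
print(np.mean(draws))                       # close to sigma^2 = 4.0
```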

Reasons for the Third "Why"

(1) $E(z'Az) = u'Au + \operatorname{tr}(AV)$, where $u = E(z)$ and $V = \operatorname{Cov}(z) = E[(z - u)(z - u)']$.

(2) $E(\varepsilon'(I - M_k)\varepsilon) = E(\operatorname{tr}(\varepsilon'(I - M_k)\varepsilon))$, since $\varepsilon'(I - M_k)\varepsilon$ is a scalar, $= \operatorname{tr}[(I - M_k)E(\varepsilon\varepsilon')] = \operatorname{tr}(I - M_k)\sigma^2$.

Estimation

Some facts about the trace operator.

1. $\operatorname{tr}(A) \overset{\text{Def.}}{=} \sum_{i=1}^n A_{ii}$, where $A = [A_{ij}]_{1 \le i,j \le n}$.

2. $\operatorname{tr}(AB) = \operatorname{tr}(BA)$, $\operatorname{tr}(\sum_{i=1}^k A_i) = \sum_{i=1}^k \operatorname{tr}(A_i)$.

3. $\operatorname{tr}(M_k) = \operatorname{tr}(X(X'X)^{-1}X') = \operatorname{tr}((X'X)^{-1}X'X) = \operatorname{tr}(I_{k+1}) = k + 1$, where $I_{k+1}$ is the $(k+1)$-dimensional identity matrix.

Estimation

4. $\operatorname{tr}(M_k) = \operatorname{tr}\left(\sum_{i=1}^{k+1} o_i o_i'\right) = \sum_{i=1}^{k+1}\operatorname{tr}(o_i o_i') = \sum_{i=1}^{k+1}\operatorname{tr}(o_i'o_i) = k + 1$, where $\{o_1, \dots, o_{k+1}\}$ is an orthonormal basis for $C(X)$.

5. Similarly, we have $\operatorname{tr}(I - M_k) = n - k - 1$ and $\operatorname{tr}(I - M_0) = n - 1$.

Multivariate Normal Distributions

Definition. We say $z$ has an $r$-dimensional multivariate normal distribution with mean $E(z) = u$ and variance $E((z - u)(z - u)') = \Sigma > 0$ (i.e., $a'\Sigma a > 0$ for all $a \in \mathbb{R}^r$ with $\|a\| = 1$), denoted by $N(u, \Sigma)$, if there exist a $k$-dimensional standard normal vector $\varepsilon = (\varepsilon_1, \dots, \varepsilon_k)'$, $k \ge r$ (i.e., $\varepsilon_1, \dots, \varepsilon_k$ are i.i.d. $N(0,1)$ random variables), and an $r \times k$ nonrandom matrix $A$ of full row rank satisfying $AA' = \Sigma$ such that $z \overset{d}{=} A\varepsilon + u$, where $\overset{d}{=}$ means both sides have the same distribution.
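A sketch of this definition in Python (an addition): samples of $N(u, \Sigma)$ are generated as $z = A\varepsilon + u$, taking $A$ to be the Cholesky factor of $\Sigma$, which is one valid choice satisfying $AA' = \Sigma$ (the definition allows many).

```python
# Sample z = A eps + u with A A' = Sigma, via the Cholesky factor.
import numpy as np

rng = np.random.default_rng(3)
u = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
A = np.linalg.cholesky(Sigma)              # lower triangular, A A' = Sigma
eps = rng.standard_normal((2, 100000))     # i.i.d. N(0,1) coordinates
z = (A @ eps).T + u
print(z.mean(axis=0))                      # approx u
print(np.cov(z.T))                         # approx Sigma
```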

Multivariate Normal Distributions

If $a \in \mathbb{R}^r$ is such that $a'\Sigma a = 0$, then $E(a'(z - u))^2 = 0$ (why?). This yields $P(a'(z - u) = 0) = 1$, because $E(a'(z - u))^2 = 0$ implies $E(a'(z - u)) = 0$ and $\operatorname{Var}(a'(z - u)) = 0$. Therefore, with probability 1, one $z_i$ is a linear combination of the other $z_j$'s.

Multivariate Normal Distributions

Remark.

1. $A = \begin{pmatrix} a_1' \\ \vdots \\ a_r' \end{pmatrix}$ is said to have full row rank if $a_1, \dots, a_r$ are linearly independent.

2. $A$ is not unique, since for any $P$ with $PP' = P'P = I_k$, we have $AA' = APP'A' = \Sigma$.

3. If $z \sim N(u, \Sigma)$, then for any $B$ of full row rank, $Bz \sim N(Bu, B\Sigma B')$.

4. If $r = 2$, then $z$ is said to be bivariate normal.

5. Let $z = \begin{pmatrix} z_1 \\ z_2 \end{pmatrix}$ be a two-dimensional random vector fulfilling $z_1 \sim N(0,1)$, $z_2 \sim N(0,1)$ and $E(z_1 z_2) = 0$. It is possible that $z$ is not bivariate normal.

Fact 1. If $z \sim N(u, \Sigma)$, then the joint probability density function (pdf) of $z$, $f(z)$, is given by

$$f(z) = (2\pi)^{-\frac{r}{2}}\det^{-\frac12}(\Sigma)\exp\left\{-\frac12(z - u)'\Sigma^{-1}(z - u)\right\}.$$

Multivariate Normal Distributions

proof. By definition, $z \overset{d}{=} A\varepsilon + u$, where $\varepsilon \sim N(0, I_k)$, $k \ge r$, and $A$ is an $r \times k$ matrix of full row rank. Let $b_1, \dots, b_{k-r}$ satisfy

$$b_i'b_j = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$$

and $b_i'a_j = 0$ for all $1 \le i \le k - r$, $1 \le j \le r$, where $a_1', \dots, a_r'$ are the rows of $A$. Define

$$A^* = \begin{pmatrix} A \\ B \end{pmatrix} = \begin{pmatrix} A \\ b_1' \\ \vdots \\ b_{k-r}' \end{pmatrix} \quad \text{and} \quad z^* = \begin{pmatrix} z \\ w \end{pmatrix} = A^*\varepsilon + u^*,$$

where $u^* = (u', 0, \dots, 0)'$. Then, the joint pdf of $z^*$ is given by

$$f^*(z^*) = (2\pi)^{-\frac{k}{2}}|\det(A^*)|^{-1}\exp\left\{-\frac12(z^* - u^*)'(A^{*\prime})^{-1}(A^*)^{-1}(z^* - u^*)\right\}.$$

Note that here we have used the following facts:

(i) The joint pdf of $\varepsilon$ is

$$(2\pi)^{-\frac{k}{2}}\exp\left\{-\frac12\varepsilon'\varepsilon\right\} = \prod_{i=1}^k (2\pi)^{-\frac12}\exp\left(-\frac12\varepsilon_i^2\right);$$

since the $\varepsilon_i$'s are independent, the joint pdf of $(\varepsilon_1, \dots, \varepsilon_k)'$ is the product of the marginal pdfs.

Multivariate Normal Distributions

(ii) Let the joint pdf of $v = (v_1, \dots, v_k)'$ be denoted by $f(v)$, $v \in D \subseteq \mathbb{R}^k$; let $g(v) = (g_1(v), \dots, g_k(v))'$ be a smooth one-to-one transformation of $D$ onto $E \subseteq \mathbb{R}^k$; and let $g^{-1}(y) = (g_1^{-1}(y), \dots, g_k^{-1}(y))'$ denote the inverse transformation of $g(\cdot)$, which satisfies $g^{-1}(g(v)) = v$.

Define

$$J = \frac{\partial g^{-1}(y)}{\partial y} = \begin{pmatrix} \frac{\partial g_1^{-1}(y)}{\partial y_1} & \cdots & \frac{\partial g_1^{-1}(y)}{\partial y_k} \\ \vdots & & \vdots \\ \frac{\partial g_k^{-1}(y)}{\partial y_1} & \cdots & \frac{\partial g_k^{-1}(y)}{\partial y_k} \end{pmatrix}.$$

Then, the joint pdf of $y = g(v)$ is given by $f(g^{-1}(y))\,|\det(J)|$.

Now, since

$$(A^{*\prime})^{-1}(A^*)^{-1} = (A^*A^{*\prime})^{-1} = \left(\begin{pmatrix} A \\ B \end{pmatrix}\begin{pmatrix} A' & B' \end{pmatrix}\right)^{-1} = \begin{pmatrix} AA' & 0 \\ 0 & I_{k-r} \end{pmatrix}^{-1} = \begin{pmatrix} \Sigma^{-1} & 0 \\ 0 & I_{k-r} \end{pmatrix}$$

38 and det(a ) 1 = det(a ) 1 ( ) = (det(a )det(a )) = det(a )det(a 2 ) ( ) 1 = det(a A 2 ) = det 1 2 (Σ), we have f = (z ) = (2π) r 2 exp { 1 } 2 (z u) Σ 1 (z u) det 1 2 (Σ) { (2π) k r 2 exp 1 } w, 2 w why 37

Multivariate Normal Distributions

and hence

$$f(z) = \int f^*(z^*)\,dw = (2\pi)^{-\frac{r}{2}}\exp\left\{-\frac12(z - u)'\Sigma^{-1}(z - u)\right\}\det^{-\frac12}(\Sigma)\int(2\pi)^{-\frac{k-r}{2}}\exp\left\{-\frac12 w'w\right\}dw$$
$$= (2\pi)^{-\frac{r}{2}}\exp\left\{-\frac12(z - u)'\Sigma^{-1}(z - u)\right\}\det^{-\frac12}(\Sigma),$$

where $\int(2\pi)^{-\frac{k-r}{2}}\exp\{-\frac12 w'w\}\,dw = 1$. (why?)

Multivariate Normal Distributions

Fact 2. If $z \sim N(u, \Sigma)$ and $z = \begin{pmatrix} z_1 \\ z_2 \end{pmatrix}$, then $\operatorname{Cov}(z_1, z_2) = E((z_1 - u_1)(z_2 - u_2)') = 0$, where $0$ is a zero matrix, if and only if $z_1$ and $z_2$ are independent, where $z_1$ and $z_2$ are $r_1$- and $r_2$-dimensional, respectively.

proof. ($\Leftarrow$) It is easy and hence skipped. ($\Rightarrow$) Since $\operatorname{Cov}(z_1, z_2) = 0$, we have, by Fact 1,

$$f(z) = f(z_1, z_2) = (2\pi)^{-\frac{r_1}{2}}\exp\left\{-\frac12(z_1 - u_1)'\Sigma_{11}^{-1}(z_1 - u_1)\right\}\det^{-\frac12}(\Sigma_{11})$$
$$\quad\times(2\pi)^{-\frac{r_2}{2}}\exp\left\{-\frac12(z_2 - u_2)'\Sigma_{22}^{-1}(z_2 - u_2)\right\}\det^{-\frac12}(\Sigma_{22}) = f(z_1)f(z_2),$$

where $u = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}$ and

$$\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} = \begin{pmatrix} \Sigma_{11} & 0 \\ 0 & \Sigma_{22} \end{pmatrix}$$

by hypothesis. Since $f(z_1)$ is the joint pdf of $z_1$ and $f(z_2)$ is the joint pdf of $z_2$, the above identity implies that $z_1$ and $z_2$ are independent (why?). Here, we have used: $X$ and $Y$ are independent iff $f(x, y) = f_X(x)f_Y(y)$.

Multivariate Normal Distributions

Fact 3. Let $z \sim N(u, \sigma^2 I_r)$ and let $C = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}$, of size $q \times r$ with $q \le r$, have full row rank. Then $B_1 z$ and $B_2 z$ are independent if $B_1 B_2' = 0$.

proof. Since $\operatorname{Cov}(B_1 z, B_2 z) = E(B_1(z - u)(z - u)'B_2') = \sigma^2 B_1 B_2' = 0$, by Fact 2, the desired conclusion follows.

Definition. Let $z$ be an $r$-dimensional random vector and let $A$ be an $r \times r$ symmetric matrix. Then $z'Az$ is called a quadratic form.

Multivariate Normal Distributions

Fact 4. Let $E(z) = u$ and $\operatorname{Var}(z) = \Sigma$. Then $E(z'Az) = u'Au + \operatorname{tr}(A\Sigma)$.

proof. For $u = 0$, we have $E(z'Az) = E(\operatorname{tr}(Azz')) = \operatorname{tr}(AE(zz')) = \operatorname{tr}(A\Sigma)$. For $u \ne 0$, we have

$$\operatorname{tr}(A\Sigma) \overset{\text{why?}}{=} E((z - u)'A(z - u)) \overset{\text{why?}}{=} E(z'Az) - 2u'Au + u'Au,$$

and hence the desired conclusion holds.

Multivariate Normal Distributions

Fact 5. If $z \sim N(0, I_r)$ and $M$ is an $r \times r$ orthogonal projection matrix, then $z'Mz \sim \chi^2(r(M))$, where $r(M)$ denotes the rank of $M$ and $\chi^2(k)$ denotes the chi-square distribution with $k$ degrees of freedom. [The background knowledge of the chi-square distribution can be found in Chi+F.pdf.]

proof. Denote $r(M)$ by $q$. Let $\{o_1, \dots, o_q\}$ be an orthonormal basis for $C(M)$. Then, we have shown that

$$M = OO' = \sum_{i=1}^q o_i o_i',$$

where $O = [o_1, \dots, o_q]$; note that $O'O = I_q$.

Since $O'$ has full row rank, $O'z \sim N(0, O'O) = N(0, I_q)$, yielding that $o_i'z$, $i = 1, \dots, q$, are i.i.d. $N(0,1)$ distributed. In addition, we have

$$z'OO'z = \sum_{i=1}^q (o_i'z)^2 \sim \chi^2(q) \quad \text{(see Chi+F.pdf)},$$

which completes the proof.
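A simulation sketch of Fact 5 (an addition, assuming NumPy/SciPy are available): a rank-$q$ projection $M$ is built from a random orthonormal basis, and the empirical distribution of $z'Mz$ is compared with $\chi^2(q)$.

```python
# Simulate z'Mz for z ~ N(0, I_r) and compare with chi-square(q).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
r, q = 8, 3
O, _ = np.linalg.qr(rng.normal(size=(r, q)))   # orthonormal basis o_1, ..., o_q
M = O @ O.T                                    # orthogonal projection, rank q

z = rng.standard_normal((200000, r))
quad = np.einsum('ni,ij,nj->n', z, M, z)       # z'Mz for each draw
print(np.mean(quad), np.var(quad))             # approx q and 2q
print(stats.kstest(quad, 'chi2', args=(q,)).statistic)  # small KS distance
```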

Multivariate Normal Distributions

Fact 6. Let $z \sim N(0, \Sigma)$. Then $z'\Sigma^{-1}z \sim \chi^2(r)$.

proof. Since $z \sim N(0, \Sigma)$, we have $z \overset{d}{=} A\varepsilon$, in which $AA' = \Sigma$ and $\varepsilon \sim N(0, I_k)$ for some $k \ge r$. Here, $A$ is an $r \times k$ matrix of full row rank. This implies

$$z'\Sigma^{-1}z \overset{d}{=} \varepsilon'A'(AA')^{-1}A\varepsilon.$$

Here, $\overset{d}{=}$ means "is equivalent in distribution to". Note that $A'(AA')^{-1}A$ is symmetric and idempotent. Therefore, it is an orthogonal projection matrix with rank $r$ (why?). By Fact 5, $\varepsilon'A'(AA')^{-1}A\varepsilon \sim \chi^2(r)$, which gives the desired conclusion.

Gaussian Regression

D. Assume $\varepsilon$ in $y = X\beta + \varepsilon$ obeys $\varepsilon \sim N(0, \sigma^2 I_n)$.

D1. $\hat\beta = (X'X)^{-1}X'\varepsilon + \beta \sim N(\beta, (X'X)^{-1}\sigma^2)$.

D2. Please convince yourself of this result!!

$$\hat\sigma^2 = \frac{1}{n - k - 1}\varepsilon'(I - M_k)\varepsilon, \qquad \frac{\hat\sigma^2}{\sigma^2} = \frac{\varepsilon'(I - M_k)\varepsilon}{(n - k - 1)\sigma^2} \overset{d}{=} \frac{\chi^2(n - k - 1)}{n - k - 1}.$$

(Here $I$ is $I_n$, but I sometimes drop the subscript $n$ when no confusion is possible.) Recall that $M_k = X(X'X)^{-1}X'$ and

$$X = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1k} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \cdots & x_{nk} \end{pmatrix}.$$

D3. Hypothesis testing. [The background knowledge of hypothesis testing can be found in 統計顯著性.pdf.]

(a) F test. Consider the null hypothesis

$$H_0: \beta_1 = \beta_2 = \cdots = \beta_k = 0 \quad \text{(i.e., the regression is unimportant)}$$

against

$$H_A: H_0 \text{ is wrong} \quad \text{(the alternative hypothesis)}.$$

Gaussian Regression

Test statistic:

$$T_1 = \frac{SS_{Reg}/k}{SS_{Res}/(n - k - 1)}.$$

$T_1$ contrasts these two kinds of contributions: it is the per-degree-of-freedom contribution of the regression divided by the per-degree-of-freedom contribution of the model residuals. When $T_1$ is large, we tend to reject $H_0$, since the contribution of the regression is then not negligible. But what counts as "large"? This must be decided from the distribution of $T_1$, in particular the distribution of $T_1$ under $H_0$. More precisely, under $H_0$, $T_1$ should not be too large; if we can obtain the distribution of $T_1$ under $H_0$, we can find the value of $c$ for which $P_{H_0}(0 \le T_1 \le c) = 95\%$ (this percentage can be adjusted to individual needs).

That is, $T_1$ falls in $(0, c)$ with probability as high as 95%, so when $T_1 \ge c$ we should strongly suspect that $H_0$ may be wrong (because something that is very unlikely under $H_0$ has happened). We can therefore take $T_1 \ge c$ (or $T_1 < c$) as a testing rule, i.e., reject $H_0$ if $T_1 \ge c$ and do not reject $H_0$ if $T_1 < c$. The probability of committing a Type I error with this testing rule is 5%. [5% is called the significance level of the test, and such a test is called an $\alpha$-level test with $\alpha$ = 5%.]

Truth vs. action:

                          Truth: H_0        Truth: H_A
  Do not reject H_0       O.K.              Type II error
  Reject H_0              Type I error      O.K.

More on statistical testing can be found in the article 統計顯著性 ("statistical significance") by Professor 黃文璋.

Gaussian Regression

How to derive the distribution of $T_1$ under $H_0$?

(i) $\dfrac{SS_{Reg}}{k} \overset{\text{under } H_0}{=} \dfrac{\varepsilon'(M_k - M_0)\varepsilon}{k} \overset{d}{=} \dfrac{\sigma^2\chi^2(k)}{k}$, by Fact 5.

(ii) $\dfrac{SS_{Res}}{n - k - 1} = \hat\sigma^2 \overset{d}{=} \dfrac{\sigma^2\chi^2(n - k - 1)}{n - k - 1}$, by D2.

(iii) $SS_{Reg}$ and $SS_{Res}$ are independent. This is because, under $H_0$, $SS_{Reg} = \varepsilon'O_{Reg}O_{Reg}'\varepsilon$, where $O_{Reg}$ consists of an orthonormal basis of $C((I - M_0)X) \subseteq C(X)$, and $SS_{Res} = \varepsilon'O_{Res}O_{Res}'\varepsilon$,

where $O_{Res}$ consists of an orthonormal basis of $C^\perp(X)$. Moreover, since $O_{Res}'O_{Reg} = 0$, by Fact 3, $O_{Reg}'\varepsilon$ and $O_{Res}'\varepsilon$ are independent, and hence $SS_{Reg}$ and $SS_{Res}$ are independent (why?).

(iv) Combining (i)-(iii), we obtain $T_1 \overset{H_0}{\sim} F(k, n - k - 1)$, where $F(k, n - k - 1)$ is called the F-distribution with $k$ and $n - k - 1$ degrees of freedom. [Why? Because $T_1$ (under $H_0$) is a ratio of two independent chi-square random variables, each divided by its corresponding degrees of

freedom. For the background knowledge of F-distributions, see Chi+F.pdf.]

(v) ($\alpha$-level) Testing rule: Reject $H_0$ if $T_1 \ge f_{1-\alpha}(k, n - k - 1)$, where

$$P(F(k, n - k - 1) > f_{1-\alpha}(k, n - k - 1)) = \alpha.$$

$f_{1-\alpha}(k, n - k - 1)$ is called the upper critical value for the $F(k, n - k - 1)$ distribution. Tables for F tests at the $\alpha$ = 0.05, 0.1 and 0.01 levels are given in Chi+F.pdf.
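A sketch of the F test in Python (an addition): `scipy.stats.f.ppf` supplies the upper critical value $f_{1-\alpha}(k, n - k - 1)$, so no printed table is needed; the data are simulated under $H_0$ for illustration.

```python
# Overall F test of H_0: beta_1 = ... = beta_k = 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, k, alpha = 40, 3, 0.05
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = 1.0 + rng.normal(size=n)                   # H_0 holds: only the intercept matters

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
fitted = X @ beta_hat
SS_Reg = np.sum((fitted - y.mean()) ** 2)
SS_Res = np.sum((y - fitted) ** 2)
T1 = (SS_Reg / k) / (SS_Res / (n - k - 1))
crit = stats.f.ppf(1 - alpha, k, n - k - 1)    # f_{1-alpha}(k, n-k-1)
print(T1, crit, T1 >= crit)                    # reject H_0 iff T1 >= crit
```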


Gaussian Regression

(b) Wald test. Consider the linear parametric hypothesis

$$H_0: D\beta = \gamma,$$

where $D$ and $\gamma$ are known, $D$ is a $q \times (k+1)$ matrix with $1 \le q \le k + 1$, and $\gamma$ is a $q \times 1$ vector.

Example: If $\beta = (\beta_1, \dots, \beta_4)'$, then $H_0: \{\beta_1 = \beta_3,\ \beta_2 = \beta_4\}$ corresponds to

$$D = \begin{pmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \end{pmatrix} \quad \text{and} \quad \gamma = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

with $H_A$: $\beta_1 \ne \beta_3$ or $\beta_2 \ne \beta_4$. In general, $H_A$: $H_0$ is wrong.

By suitably choosing $D$ and $\gamma$, Wald tests are much more flexible than F tests.

Test statistic:

$$W_1 = \frac{(D\hat\beta - \gamma)'E^{-1}(D\hat\beta - \gamma)}{\hat\sigma^2 q}, \quad \text{where } E = D(X'X)^{-1}D'.$$

What is the distribution of $W_1$ under $H_0$?

(i) $D\hat\beta - \gamma \overset{H_0}{\sim} N(0, \sigma^2 E)$ (why? $D\hat\beta - \gamma \overset{\text{under } H_0}{=} D(\hat\beta - \beta)$).

(ii) $\dfrac{(D\hat\beta - \gamma)'E^{-1}(D\hat\beta - \gamma)}{\sigma^2} \sim \chi^2(q)$ (by Fact 6).

(iii) $\hat\beta$ and $\hat\sigma^2$ are independent. (why? We've argued this previously!!)

(iv) $\dfrac{\hat\sigma^2}{\sigma^2} \overset{d}{=} \dfrac{\chi^2(n - k - 1)}{n - k - 1}$. (We've already shown this!!)

(v) By (i)-(iv), $W_1 \overset{H_0}{\sim} F(q, n - k - 1)$.

(vi) Now you can set an $\alpha$, find the critical value from the F table, and establish your $\alpha$-level test!!
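A Python sketch of the Wald test (an addition) using the example $\beta_1 = \beta_3$, $\beta_2 = \beta_4$; here the illustrative model has $p = 4$ columns and no intercept, so the residual degrees of freedom are $n - p$ (in the slides' notation with an intercept, $n - k - 1$).

```python
# Wald test of H_0: D beta = gamma.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, alpha = 60, 0.05
X = rng.normal(size=(n, 4))
y = X @ np.array([1.0, -0.5, 1.0, -0.5]) + rng.normal(size=n)  # H_0 is true

D = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0]])
gamma = np.zeros(2)
q, p = D.shape

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / (n - p)
E = D @ XtX_inv @ D.T                            # E = D (X'X)^{-1} D'
diff = D @ beta_hat - gamma
W1 = diff @ np.linalg.solve(E, diff) / (sigma2_hat * q)
print(W1 >= stats.f.ppf(1 - alpha, q, n - p))    # reject H_0?
```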

Homework 1

1. Consider the regression model

$$y = X\beta + \varepsilon, \tag{1}$$

where $E(\varepsilon) = 0$ and $E(\varepsilon\varepsilon') = \Sigma > 0$.

(1) Show that the generalized LSE $\hat\beta^{(w)} = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y$ is the best linear unbiased estimator (BLUE).

(2) Show that BLUEs are unique.

2. Assume in (1),

$$y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \quad X = \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix} \quad \text{and} \quad \Sigma = \operatorname{diag}(\sigma_1^2, \dots, \sigma_n^2).$$

Let $\hat\beta^{(w)} = \begin{pmatrix} \hat\beta_0^{(w)} \\ \hat\beta_1^{(w)} \end{pmatrix}$. Show that

$$\hat\beta_0^{(w)} = \bar y^{(w)} - \bar x^{(w)}\hat\beta_1^{(w)}$$

and

$$\hat\beta_1^{(w)} = \frac{\sum_{i=1}^n \sigma_i^{-2}(x_i - \bar x^{(w)})y_i}{\sum_{i=1}^n \sigma_i^{-2}(x_i - \bar x^{(w)})^2},$$

where

$$\bar y^{(w)} = \frac{\sum_{i=1}^n y_i/\sigma_i^2}{\sum_{i=1}^n 1/\sigma_i^2} \quad \text{and} \quad \bar x^{(w)} = \frac{\sum_{i=1}^n x_i/\sigma_i^2}{\sum_{i=1}^n 1/\sigma_i^2},$$

which are called the weighted means of $\{y_1, \dots, y_n\}$ and $\{x_1, \dots, x_n\}$, respectively.

3. Assume in (1), $\beta = (\beta_0, \beta_1, \dots, \beta_{10})'$ and $\varepsilon \sim N(0, \sigma^2 I_n)$ with $n = 50$ and $\sigma^2$ unknown. Please construct an $\alpha$-level test, $\alpha$ = 5%, of the null hypothesis

$$H_0: \beta_1 - \beta_3 = 2, \quad 2\beta_2 + \beta_4 = 4 \quad \text{and} \quad \beta_1 - 4\beta_2 + 2\beta_4 = 7,$$

against the alternative

$$H_A: \beta_1 - \beta_3 \ne 2, \quad 2\beta_2 + \beta_4 \ne 4 \quad \text{or} \quad \beta_1 - 4\beta_2 + 2\beta_4 \ne 7.$$

4. Let

$$z = \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \sim N\left(\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}, \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}\right) = N(u, \Sigma),$$

where $z_1$ is $r_1$-dimensional and $z_2$ is $r_2$-dimensional.

(i) Show that $A\Sigma A' = D$, where

$$A = \begin{pmatrix} I_{r_1} & 0 \\ -\Sigma_{21}\Sigma_{11}^{-1} & I_{r_2} \end{pmatrix} \quad \text{and} \quad D = \begin{pmatrix} \Sigma_{11} & 0 \\ 0 & \Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12} \end{pmatrix},$$

where $0$ is the zero matrix.

(ii) Use (i) to obtain the conditional joint pdf of $z_2$ given $z_1$, noting that

$$f_{2|1}(z_2|z_1) = \frac{f(z_1, z_2)}{f_1(z_1)},$$

where $f_{2|1}(z_2|z_1)$ is the conditional joint pdf of $z_2$ given $z_1$, $f(z_1, z_2)$ is the joint pdf of $z$, and $f_1(z_1)$ is the joint pdf of $z_1$.

(iii) Use (ii) to find $E(z_2|z_1)$ and $\operatorname{Var}(z_2|z_1)$.

5. Let $X_1 \sim N(0,1)$. Define

$$X_2 = \begin{cases} X_1 & \text{if } Z = 1, \\ -X_1 & \text{if } Z = -1, \end{cases}$$

where $Z$ is a random variable independent of $X_1$ with distribution $P(Z = 1) = P(Z = -1) = 1/2$.

(i) Find the distribution of $X_2$. (ii) Find $\operatorname{Cov}(X_1, X_2)$. (iii) Does $\begin{pmatrix} X_1 \\ X_2 \end{pmatrix}$ have a bivariate normal distribution?

Gaussian Regression

(c) T-test. Consider the hypothesis $H_0: \beta_j = b$, where $1 \le j \le k$ and $b$ is known, against the alternative $H_A: \beta_j \ne b$. We have:

(i) $\hat\beta - \beta \sim N(0, (X'X)^{-1}\sigma^2)$ [see D1], and hence

$$\hat\beta_j - b \overset{H_0}{=} e_j'(\hat\beta - \beta) \sim N(0, e_j'(X'X)^{-1}e_j\,\sigma^2),$$

where $e_j = (0, \dots, 0, 1, 0, \dots, 0)'$, whose $j$th component is 1 and whose other components are zeros.

(ii) $\dfrac{\hat\sigma^2}{\sigma^2} \overset{d}{=} \dfrac{\chi^2(n - k - 1)}{n - k - 1}$. [see D2]

(iii) $\hat\sigma^2$ and $\hat\beta_j$ are independent. (why?)

(iv) By (i)-(iii),

$$T = \frac{\dfrac{\hat\beta_j - b}{\sqrt{e_j'(X'X)^{-1}e_j\,\sigma^2}}}{\sqrt{\dfrac{\hat\sigma^2}{\sigma^2}}} = \frac{\hat\beta_j - b}{\sqrt{e_j'(X'X)^{-1}e_j\,\hat\sigma^2}} \overset{H_0}{\sim} t(n - k - 1),$$

where $t(n - k - 1)$ is the t-distribution with $n - k - 1$ degrees of freedom (see p. 6 of Student's t-distribution.pdf).

(v) Testing rule: Reject $H_0$ if $|T| > t_{\alpha/2}(n - k - 1)$. We have

$$P_{H_0}(|T| > t_{\alpha/2}(n - k - 1)) = \alpha,$$

and hence this is a level-$\alpha$ test.
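A Python sketch of the T-test (an addition under an illustrative setup): the statistic $T$ and the critical value $t_{\alpha/2}(n - k - 1)$ are computed directly.

```python
# T test of H_0: beta_j = b.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, k, alpha, j, b = 50, 2, 0.05, 1, 0.0          # test H_0: beta_1 = 0
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([2.0, 0.0, 1.0]) + rng.normal(size=n)   # H_0 is true

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / (n - k - 1)
se = np.sqrt(XtX_inv[j, j] * sigma2_hat)         # sqrt(e_j'(X'X)^{-1}e_j sigma_hat^2)
T = (beta_hat[j] - b) / se
print(abs(T) > stats.t.ppf(1 - alpha / 2, n - k - 1))    # reject H_0?
```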

E. Interval Estimation

We first recall some results on point estimation:

(i) $E(\hat\beta) = \beta$ and $E(\hat\sigma^2) = \sigma^2$ (unbiasedness).

(ii) $\operatorname{Var}(\hat\beta) = (X'X)^{-1}\sigma^2$.

(iii) $\hat\beta$ is BLUE!!

(iv) $\operatorname{Var}(\hat\sigma^2) = \dfrac{2\sigma^4}{n - k - 1} \to 0$ as $n \to \infty$ (under the normal assumption) [which is a desirable result, because it shows that the estimation quality gets better and better as the sample size gets larger and larger!!]

To see this, note first that

(a) $\dfrac{\hat\sigma^2}{\sigma^2} \overset{d}{=} \dfrac{\chi^2(n - k - 1)}{n - k - 1}$,

(b) $E(\chi^2(n - k - 1)) = n - k - 1$,

(c) $\operatorname{Var}(\chi^2(n - k - 1)) = 2(n - k - 1)$.

By (a)-(c), $\operatorname{Var}(\hat\sigma^2) = \dfrac{2\sigma^4}{n - k - 1}$ follows.

However, if the normal assumption fails to hold, how should we calculate $\operatorname{Var}(\hat\sigma^2)$? Some ideas:

$$\hat\sigma^2 = \frac{1}{n - k - 1}y'(I - M_k)y = \frac{1}{n - k - 1}\varepsilon'(I - M_k)\varepsilon = \frac{1}{n - k - 1}\sum_{i=1}^n\sum_{j=1}^n A_{ij}\varepsilon_i\varepsilon_j,$$

where $[A_{ij}]_{1 \le i,j \le n} = A = I - M_k$.

It is clear that

$$E(\hat\sigma^2) = \frac{1}{n - k - 1}\sum_{i=1}^n\sum_{j=1}^n A_{ij}E(\varepsilon_i\varepsilon_j) \overset{\text{why?}}{=} \frac{1}{n - k - 1}\sum_{i=1}^n A_{ii}\sigma^2 = \frac{\sigma^2}{n - k - 1}\operatorname{tr}(I - M_k) = \sigma^2.$$

Moreover, we have

$$E(\hat\sigma^4) = \left(\frac{1}{n - k - 1}\right)^2\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n\sum_{l=1}^n A_{ij}A_{kl}E(\varepsilon_i\varepsilon_j\varepsilon_k\varepsilon_l)$$
$$= \left(\frac{1}{n - k - 1}\right)^2\sum_{i=1}^n A_{ii}^2 E(\varepsilon_i^4) \quad (i = j = k = l)$$

$$+ \left(\frac{1}{n - k - 1}\right)^2\sum_{i=1}^n\sum_{\substack{k=1 \\ k \ne i}}^n A_{ii}A_{kk}E(\varepsilon_i^2)E(\varepsilon_k^2) \quad (i = j \ne k = l)$$
$$+ \left(\frac{1}{n - k - 1}\right)^2\sum_{i=1}^n\sum_{\substack{j=1 \\ j \ne i}}^n A_{ij}^2 E(\varepsilon_i^2)E(\varepsilon_j^2) \quad (i = k \ne j = l)$$
$$+ \left(\frac{1}{n - k - 1}\right)^2\sum_{i=1}^n\sum_{\substack{j=1 \\ j \ne i}}^n A_{ij}A_{ji}E(\varepsilon_i^2)E(\varepsilon_j^2) \quad (i = l \ne j = k),$$

where

$$\sum_{i=1}^n\sum_{\substack{j=1 \\ j \ne i}}^n A_{ij}A_{ji}E(\varepsilon_i^2)E(\varepsilon_j^2) = \sum_{i=1}^n\sum_{\substack{j=1 \\ j \ne i}}^n A_{ij}^2 E(\varepsilon_i^2)E(\varepsilon_j^2), \quad \text{since } A_{ij}A_{ji} = A_{ij}^2 \text{ ($A$ is symmetric)}.$$

$$= \left(\frac{1}{n - k - 1}\right)^2\left(E(\varepsilon_1^4) - 3\sigma^4\right)\sum_{i=1}^n A_{ii}^2 + \sigma^4\left(\frac{1}{n - k - 1}\right)^2\left[\sum_{i=1}^n\sum_{k=1}^n A_{ii}A_{kk} + 2\sum_{i=1}^n\sum_{j=1}^n A_{ij}^2\right]$$
$$= \frac{1}{(n - k - 1)^2}\left(E(\varepsilon_1^4) - 3\sigma^4\right)\sum_{i=1}^n A_{ii}^2 + \sigma^4 + \frac{2\sigma^4}{n - k - 1}.$$

Note.

(i) $E(\varepsilon_1^4) - 3\sigma^4 = 0$ if $\varepsilon$ is normal.

(ii) $\sum_{i=1}^n\sum_{k=1}^n A_{ii}A_{kk} = (\operatorname{tr}(A))^2 = (\operatorname{tr}(I - M_k))^2 = (n - k - 1)^2$.

(iii) $\sum_{i=1}^n\sum_{j=1}^n A_{ij}^2 = \operatorname{tr}(A^2) = \operatorname{tr}((I - M_k)^2) = \operatorname{tr}(I - M_k) = n - k - 1$.

Hence

$$\operatorname{Var}(\hat\sigma^2) = \frac{1}{(n - k - 1)^2}\left(E(\varepsilon_1^4) - 3\sigma^4\right)\sum_{i=1}^n A_{ii}^2 + \frac{2\sigma^4}{n - k - 1} = (\mathrm{I}) + (\mathrm{II}).$$

Question: Will (I) converge to zero as $n \to \infty$?

Answer. Yes, because

$$\sum_{i=1}^n A_{ii}^2 \le \sum_{i=1}^n A_{ii} = \operatorname{tr}(A) = \operatorname{tr}(I - M_k) = n - k - 1.$$

To see this, we note that the idempotency of $A$ yields $A_{ii} = \sum_{j=1}^n A_{ij}^2 \ge A_{ii}^2$ (which also yields $0 \le A_{ii} \le 1$).

Interval Estimation

We now get back to interval estimation.

(i) The first goal is to find an interval $I_\alpha$ such that $P(\beta_i \in I_\alpha) = 1 - \alpha$, where $\alpha$ is small and is decided by the users; $1 - \alpha$ is called a confidence level.

Question: How to construct $I_\alpha$?

(a) $\dfrac{\hat\beta_i - \beta_i}{\sqrt{e_i'(X'X)^{-1}e_i\,\hat\sigma^2}} \sim t(n - k - 1)$.

(b) $P\left(\beta_i \in \left(\hat\beta_i - t_{1-\alpha/2}(n - k - 1)\sqrt{e_i'(X'X)^{-1}e_i\,\hat\sigma^2},\ \hat\beta_i + t_{1-\alpha/2}(n - k - 1)\sqrt{e_i'(X'X)^{-1}e_i\,\hat\sigma^2}\right)\right) = 1 - \alpha$.
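A sketch of the interval in (b) (an addition; the design and coefficients are illustrative assumptions):

```python
# Level-(1-alpha) confidence interval I_alpha for beta_i.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, k, alpha, i = 50, 2, 0.05, 1
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([2.0, 1.0, -1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / (n - k - 1)
half = stats.t.ppf(1 - alpha / 2, n - k - 1) * np.sqrt(XtX_inv[i, i] * sigma2_hat)
print(beta_hat[i] - half, beta_hat[i] + half)    # covers beta_1 = 1 with prob. 0.95
```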

(c) Does the interval described in (b) have the shortest length? To answer this question, we need to solve the following problem:

minimize $b - a$ subject to $F(b) - F(a) = 1 - \alpha$,

where $F(\cdot)$ denotes the distribution function of the $t(n - k - 1)$ distribution, and

$$P\left(a < \frac{\hat\beta_i - \beta_i}{\sqrt{e_i'(X'X)^{-1}e_i\,\hat\sigma^2}} \le b\right) = F(b) - F(a) = 1 - \alpha.$$

By the Lagrange method, define

$$g(a, b, \lambda) = b - a - \lambda(F(b) - F(a) - (1 - \alpha))$$

and let $Dg(a, b, \lambda) = 0$, where

$$Dg = \begin{pmatrix} \partial g/\partial a \\ \partial g/\partial b \\ \partial g/\partial \lambda \end{pmatrix}.$$

The last identity yields

$$f(b) = f(a) = \frac{1}{\lambda}, \quad F(b) - F(a) = 1 - \alpha, \tag{**}$$

where $f(\cdot)$ is the pdf of the $t(n - k - 1)$ distribution. Since the pdf of $t(n - k - 1)$ is symmetric and strictly decreasing (increasing) when $x \ge 0$ (when $x \le 0$), (**) implies $b = -a$ and $b > 0$. As a result, the unique solution to (**) is $(-b, b)$ with $2F(b) = 2 - \alpha$, i.e., $(-t_{1-\alpha/2}(n - k - 1), t_{1-\alpha/2}(n - k - 1))$.

To check whether $2t_{1-\alpha/2}(n - k - 1)$ minimizes $b - a$, we still need to consider the so-called bordered Hessian matrix evaluated at

$$s = \begin{pmatrix} a \\ b \\ \lambda \end{pmatrix} = \begin{pmatrix} -t_{1-\alpha/2}(n - k - 1) \\ t_{1-\alpha/2}(n - k - 1) \\ 1/f(t_{1-\alpha/2}(n - k - 1)) \end{pmatrix}.$$

Note that the bordered Hessian matrix is defined by

$$D^2 g = \begin{pmatrix} \frac{\partial^2 g}{\partial a\,\partial a} & \frac{\partial^2 g}{\partial a\,\partial b} & \frac{\partial^2 g}{\partial a\,\partial \lambda} \\ \frac{\partial^2 g}{\partial b\,\partial a} & \frac{\partial^2 g}{\partial b\,\partial b} & \frac{\partial^2 g}{\partial b\,\partial \lambda} \\ \frac{\partial^2 g}{\partial \lambda\,\partial a} & \frac{\partial^2 g}{\partial \lambda\,\partial b} & \frac{\partial^2 g}{\partial \lambda\,\partial \lambda} \end{pmatrix},$$

where $\frac{\partial^2 g}{\partial\lambda\,\partial\lambda} = 0$, and it is straightforward to show that (writing $t$ for $t_{1-\alpha/2}(n - k - 1)$)

$$D^2 g\big|_s = \begin{pmatrix} \dfrac{f'(-t)}{f(t)} & 0 & f(-t) \\ 0 & -\dfrac{f'(t)}{f(t)} & -f(t) \\ f(-t) & -f(t) & 0 \end{pmatrix}.$$

Since the principal submatrix

$$\begin{pmatrix} \dfrac{f'(-t)}{f(t)} & 0 \\ 0 & -\dfrac{f'(t)}{f(t)} \end{pmatrix}$$

is positive definite, it follows that $2t_{1-\alpha/2}(n - k - 1)$ minimizes $b - a$ subject to $F(b) - F(a) = 1 - \alpha$.

Interval Estimation

(ii) The second goal is to find a $(k+1)$-dimensional set $V_\alpha$ such that $P(\beta \in V_\alpha) = 1 - \alpha$.

Question: How to construct $V_\alpha$?

(a) $\dfrac{(\hat\beta - \beta)'X'X(\hat\beta - \beta)}{\sigma^2} \sim \chi^2(k + 1)$. (by Fact 6)

(b) $\dfrac{(\hat\beta - \beta)'X'X(\hat\beta - \beta)}{(k + 1)\hat\sigma^2} \sim F(k + 1, n - k - 1)$.

(c) $V_\alpha = \left\{\beta : a \le \dfrac{(\hat\beta - \beta)'X'X(\hat\beta - \beta)}{(k + 1)\hat\sigma^2} \le b\right\}$, where $F(b) - F(a) = 1 - \alpha$ and $F(\cdot)$ is the distribution function of $F(k + 1, n - k - 1)$.

(d) It can be shown that the volume of the larger ellipsoid is

$$\frac{\pi^{\frac{k+1}{2}}}{\Gamma\left(\frac{k+3}{2}\right)}\left((k + 1)\hat\sigma^2 b\right)^{\frac{k+1}{2}}(\det(X'X))^{-1/2},$$

and that of the smaller one is

$$\frac{\pi^{\frac{k+1}{2}}}{\Gamma\left(\frac{k+3}{2}\right)}\left((k + 1)\hat\sigma^2 a\right)^{\frac{k+1}{2}}(\det(X'X))^{-1/2}.$$

Hence the volume of $V_\alpha$ is minimized by minimizing $b^{\frac{k+1}{2}} - a^{\frac{k+1}{2}}$ subject to $F(b) - F(a) = 1 - \alpha$. However, in general, this minimization problem does not have a closed-form solution, but it can be shown that when $k = 1$, $a = 0$ and $b = F_{1-\alpha}(k + 1, n - k - 1)$, and when both $n$ and $k$ are large with $n \gg k$, $a \approx 0$ and $b \approx F_{1-\alpha}(k + 1, n - k - 1)$. Note also that, unlike the t-distributions, when $d_1 > 1$ the pdfs of F-distributions have very small values near the origin.


F. Another look at $\hat\beta$

Let $X_k = (X_{k-1}, x_k)$. We have

$$M_k y = (X_{k-1}, x_k)\begin{pmatrix} \hat\beta_{k-1} \\ \hat\beta_k \end{pmatrix} \overset{\text{why?}}{=} (X_{k-1}, (I - M_{k-1})x_k)\begin{pmatrix} (X_{k-1}'X_{k-1})^{-1} & 0 \\ 0 & \dfrac{1}{x_k'(I - M_{k-1})x_k} \end{pmatrix}\begin{pmatrix} X_{k-1}'y \\ x_k'(I - M_{k-1})y \end{pmatrix},$$

because $C(X_k)$ and $C(X_{k-1}, (I - M_{k-1})x_k)$ are the same.

yielding

$$X_{k-1}\hat\beta_{k-1} + x_k\hat\beta_k = X_{k-1}\left[(X_{k-1}'X_{k-1})^{-1}X_{k-1}'y - (X_{k-1}'X_{k-1})^{-1}X_{k-1}'x_k\,\beta_k^*\right] + x_k\,\beta_k^*,$$

where

$$\beta_k^* = \frac{x_k'(I - M_{k-1})y}{x_k'(I - M_{k-1})x_k}.$$

In addition, since $X_k$ is of full rank, we obtain

$$\hat\beta_k = \beta_k^* = \frac{x_k'(I - M_{k-1})y}{x_k'(I - M_{k-1})x_k}.$$

This shows that $\hat\beta_k$ is equivalent to the LSE of the simple regression of $(I - M_{k-1})y$ on $(I - M_{k-1})x_k$. As a result, $\hat\beta_k$ can only be viewed as the marginal contribution of $x_k$ to $y$ when the effects of the other variables are removed in advance.
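A numerical sketch of this equivalence (an addition; the setup is illustrative): the last coefficient of the full regression matches the simple-regression LSE of $(I - M_{k-1})y$ on $(I - M_{k-1})x_k$.

```python
# beta_hat_k from the full fit equals the residual-on-residual slope.
import numpy as np

rng = np.random.default_rng(9)
n = 40
X_km1 = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # X_{k-1}
x_k = rng.normal(size=n)
X = np.column_stack([X_km1, x_k])
y = X @ np.array([1.0, 0.5, -0.5, 2.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
M_km1 = X_km1 @ np.linalg.solve(X_km1.T @ X_km1, X_km1.T)
rx = x_k - M_km1 @ x_k                           # (I - M_{k-1}) x_k
ry = y - M_km1 @ y                               # (I - M_{k-1}) y
print(beta_hat[-1], (rx @ ry) / (rx @ rx))       # the two numbers agree
```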

G. Model Selection

Mallows' $C_p$: Let $X_p$ be a submodel of $X_k$.

Can we construct a measure to describe its prediction performance? Let $M_p$ be the orthogonal projection matrix onto $C(X_p)$. Then $M_p y$ can be used to predict new observations

$$y_{New} = X_k\beta + \varepsilon_{New},$$

where $\varepsilon_{New}$ and $\varepsilon$ are independent but have the same distribution. The performance of $M_p y$ can be measured by

$$E\|y_{New} - M_p y\|^2 = E\|X_k\beta + \varepsilon_{New} - M_p y\|^2 \overset{(*)}{=} n\sigma^2 + E\|X_k\beta - M_p y\|^2,$$

(*): since $\varepsilon_{New}$ and $y$ are independent.

Let $X_k = (X_p, X_{\bar p})$, $\beta = \begin{pmatrix} \beta_p \\ \beta_{\bar p} \end{pmatrix}$. Moreover, we have

$$E\|X_k\beta - M_p y\|^2 = E\|X_p\beta_p + X_{\bar p}\beta_{\bar p} - M_p(X_p\beta_p + X_{\bar p}\beta_{\bar p} + \varepsilon)\|^2$$
$$= E\|(I - M_p)X_{\bar p}\beta_{\bar p} - M_p\varepsilon\|^2 \overset{\text{why?}}{=} p\sigma^2 + \beta_{\bar p}'X_{\bar p}'(I - M_p)X_{\bar p}\beta_{\bar p}.$$

Hence,

$$E\|y_{New} - M_p y\|^2 = (n + p)\sigma^2 + \beta_{\bar p}'X_{\bar p}'(I - M_p)X_{\bar p}\beta_{\bar p}.$$

To estimate this expectation, we start by considering $SS_{Res(p)} = y'(I - M_p)y$. Note first that

$$E(SS_{Res(p)}) = E\left[(X_{\bar p}\beta_{\bar p} + \varepsilon)'(I - M_p)(X_{\bar p}\beta_{\bar p} + \varepsilon)\right] = \beta_{\bar p}'X_{\bar p}'(I - M_p)X_{\bar p}\beta_{\bar p} + E(\varepsilon'(I - M_p)\varepsilon)$$
$$= \beta_{\bar p}'X_{\bar p}'(I - M_p)X_{\bar p}\beta_{\bar p} + (n - p)\sigma^2.$$

Therefore,

$$E(SS_{Res(p)} + 2p\hat\sigma^2) = \beta_{\bar p}'X_{\bar p}'(I - M_p)X_{\bar p}\beta_{\bar p} + (n + p)\sigma^2 = E\|y_{New} - M_p y\|^2.$$

Now, Mallows' $C_p$ is defined by

$$C_p = SS_{Res(p)} + 2p\hat\sigma^2,$$

which is an unbiased estimate of $E\|y_{New} - M_p y\|^2$.
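A Python sketch of $C_p$ over nested submodels (an addition; estimating $\sigma^2$ from the full model and the simulated design are illustrative assumptions):

```python
# Mallows' C_p = SS_Res(p) + 2 p sigma_hat^2 over nested submodels.
import numpy as np

rng = np.random.default_rng(10)
n, k = 80, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X[:, :3] @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)  # 2 active regressors

resid_full = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
sigma2_hat = resid_full @ resid_full / (n - k - 1)              # from the full model

for p in range(1, k + 2):                        # X_p = first p columns of X
    Xp = X[:, :p]
    rp = y - Xp @ np.linalg.lstsq(Xp, y, rcond=None)[0]
    print(p, rp @ rp + 2 * p * sigma2_hat)       # C_p is smallest near p = 3
```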

H. Prediction

(a) How to predict $E(y_{n+1}) = x_{n+1}'\beta$ when $x_{n+1} = (1, x_{n+1,1}, \dots, x_{n+1,k})'$ is available?

Point prediction: $x_{n+1}'\hat\beta$.

Prediction interval (under normality):

(i) $x_{n+1}'(\hat\beta - \beta) \sim N(0, x_{n+1}'(X'X)^{-1}x_{n+1}\sigma^2)$.

(ii) $\dfrac{x_{n+1}'(\hat\beta - \beta)}{\sqrt{x_{n+1}'(X'X)^{-1}x_{n+1}\hat\sigma^2}} \sim t(n - k - 1)$. (Sometimes I use $X_k$ in place of $X$, in particular when the model selection issue is taken into account.)

(iii) Please construct a $(1 - \alpha)$-level prediction interval by yourself.

(iv) What if the normal assumption is violated?

(b) How to predict $y_{n+1}$?

Point prediction: $x_{n+1}'\hat\beta$. (Still, we have this guy.)

Prediction interval (under normality):

(i) $y_{n+1} - x_{n+1}'\hat\beta = \varepsilon_{n+1} - x_{n+1}'(\hat\beta - \beta) \sim N(0, (1 + x_{n+1}'(X'X)^{-1}x_{n+1})\sigma^2)$; $y_{n+1} - x_{n+1}'\hat\beta$ is called the prediction error.

(ii) $\dfrac{y_{n+1} - x_{n+1}'\hat\beta}{\sqrt{(1 + x_{n+1}'(X'X)^{-1}x_{n+1})\hat\sigma^2}} \sim t(n - k - 1)$.

(iii) Please construct your own $(1 - \alpha)$-level prediction interval for $y_{n+1}$.
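A sketch of the prediction interval for $y_{n+1}$ implied by (i)-(ii) (an addition; $x_{n+1}$ and the simulated model are illustrative):

```python
# (1-alpha)-level prediction interval for y_{n+1}.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, k, alpha = 50, 2, 0.05
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)
x_new = np.array([1.0, 0.3, -0.2])               # x_{n+1}

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / (n - k - 1)
half = stats.t.ppf(1 - alpha / 2, n - k - 1) * np.sqrt(
    (1 + x_new @ XtX_inv @ x_new) * sigma2_hat)
print(x_new @ beta_hat - half, x_new @ beta_hat + half)
```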

Towards Large Sample Theory I

Motivation: Consider again $y = X\beta + \varepsilon$. If the $\varepsilon_t$'s are not normally distributed, how do we make inference about $\beta$ and $\sigma^2$? How do we perform prediction?

$Q_1$: Is $\hat\beta = (X'X)^{-1}X'y \to \beta$ in probability?

$Q_2$: Is $\hat\sigma^2 = \dfrac{1}{n - (k+1)}y'(I - M_k)y \to \sigma^2$ in probability?

$Q_3$: If the answer to $Q_1$ is yes, what is the limiting distribution of $\hat\beta$?

$Q_4$: How do we construct confidence intervals for $\beta$ based on the answer to $Q_3$?

$Q_5$: How do we test linear or even nonlinear hypotheses without normality?

$Q_6$: How do we do prediction without normality?

We first answer $Q_1$ in the special case where

$$X = \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix}.$$

Definition. A sequence of r.v.'s $\{Z_n\}$ is said to converge in probability to a r.v. $Z$ (which can be a non-random constant), denoted by $Z_n \overset{Pr.}{\to} Z$, if for any $\varepsilon > 0$,

$$\lim_{n \to \infty} P(|Z_n - Z| > \varepsilon) = 0.$$

Remark. A sequence of random vectors $\{Z_n = (Z_{1n}, \dots, Z_{kn})'\}$ is said to converge in probability to a random vector $Z = (Z_1, \dots, Z_k)'$ if $Z_{in} \overset{Pr.}{\to} Z_i$, $i = 1, \dots, k$, which is denoted by $Z_n \overset{Pr.}{\to} Z$.

An answer to $Q_1$: Since

$$\operatorname{Var}(\hat\beta) = (X'X)^{-1}\sigma^2 = \frac{\sigma^2}{nS_{xx}}\begin{pmatrix} S_{xx} + n\bar x^2 & -n\bar x \\ -n\bar x & n \end{pmatrix},$$

we have

$$P(|\hat\beta_0 - \beta_0| > \varepsilon) \overset{(*)}{\le} \frac{\sigma^2}{\varepsilon^2}\cdot\frac{S_{xx} + n\bar x^2}{nS_{xx}} \to 0 \quad \text{if } \frac{\bar x^2}{S_{xx}} \to 0,$$

(*): by Chebyshev's inequality: let $E(X) = u$ and $\operatorname{Var}(X) = \sigma^2$; then $P(|X - u| > \varepsilon) \le \sigma^2/\varepsilon^2$. And

$$P(|\hat\beta_1 - \beta_1| > \varepsilon) \le \frac{\sigma^2}{\varepsilon^2}\cdot\frac{1}{S_{xx}} \to 0 \quad \text{if } S_{xx} \to \infty,$$

noting that $S_{xx} = \sum_{i=1}^n (x_i - \bar x)^2$ and $\bar x = \frac{1}{n}\sum_{i=1}^n x_i$. As a result, to ensure $\hat\beta \overset{Pr.}{\to} \beta$, we need

$$\frac{1}{n} \to 0, \quad \frac{\bar x^2}{S_{xx}} \to 0 \quad \text{and} \quad \frac{1}{S_{xx}} \to 0 \quad \text{as } n \to \infty.$$

Remark. (i) Please give a heuristic explanation of why $\bar x^2/S_{xx} \to 0$ is needed for $\hat\beta_0$ to converge to $\beta_0$ in probability. (ii) Please explain why $(\hat\beta_0, \hat\beta_1)$ are positively (negatively) correlated when $\bar x < 0$ ($\bar x > 0$). (iii) What are sufficient conditions for $\hat\beta \to \beta$ in general cases?

For $Q_2$, we have shown previously that $\operatorname{Var}(\hat\sigma^2)$ converges to 0 as $n \to \infty$.

Therefore, by Chebyshev's inequality, $\hat\sigma^2 \overset{Pr.}{\to} \sigma^2$.

Before answering $Q_3$, let us consider the so-called spectral decomposition for symmetric matrices. Let $A$ be a $k \times k$ symmetric matrix. Then there exist real numbers $\lambda_1, \dots, \lambda_k$ and a $k$-dimensional orthogonal matrix $P = (p_1, \dots, p_k)$ satisfying $P'P = PP' = I$ and $Ap_i = \lambda_i p_i$ such that

$$A = PDP', \quad \text{where } D = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_k \end{pmatrix}.$$

Remark. (1) $\lambda_i$ is called an eigenvalue of $A$ and $p_i$ is the eigenvector corresponding to $\lambda_i$.

(2) Let $A$ be positive definite. Then $\lambda_i > 0$ for $i = 1, \dots, k$.

proof. $0 \overset{\text{p.d.}}{<} p_i'Ap_i = p_i'PDP'p_i \overset{\text{why?}}{=} \lambda_i$, by the spectral decomposition.

(3) Let $A$ be positive definite. Define $A^{1/2} = PD^{1/2}P'$,

where

$$D^{1/2} = \begin{pmatrix} \lambda_1^{1/2} & & \\ & \ddots & \\ & & \lambda_k^{1/2} \end{pmatrix}.$$

Then, we have $(A^{1/2})^2 = A$.

(4) Let $A$ be positive definite, and define $\lambda_{\max}(A) = \max\{\lambda_1, \dots, \lambda_k\}$, $\lambda_{\min}(A) = \min\{\lambda_1, \dots, \lambda_k\}$.

Then,

$$\lambda_{\max}(A) = \sup_{\|a\|=1} a'Aa, \quad \lambda_{\min}(A) = \inf_{\|a\|=1} a'Aa.$$

proof. As shown before, $\lambda_i = p_i'Ap_i \le \sup_{\|a\|=1} a'Aa$. Moreover, for any $a \in \mathbb{R}^k$ with $\|a\| = 1$, we have $a = Pb$, where $\|b\| = 1$. Thus,

$$a'Aa = b'P'PDP'Pb = b'Db = \sum_{i=1}^k \lambda_i b_i^2 \le \lambda_{\max}(A),$$

where $b = (b_1, \dots, b_k)'$. This yields $\lambda_{\max}(A) = \sup_{\|a\|=1} a'Aa$. The second statement can be proven similarly.

(5) Let $A$ be positive definite. Then $\lambda_{\max}(A^{-1}) = \dfrac{1}{\lambda_{\min}(A)}$.

(6) Let $B$ be any real matrix. Define the spectral norm of $B$ as follows:

$$\|B\| = \left(\sup_{\|a\|=1} a'B'Ba\right)^{1/2} = (\lambda_{\max}(B'B))^{1/2}.$$

We have:

(i) If $B$ is symmetric with eigenvalues $\lambda_1, \dots, \lambda_k$, then $\|B\| = \max\{|\lambda_1|, \dots, |\lambda_k|\}$.

(ii) $\|AB\| \le \|A\|\|B\|$, where $A$ is another real matrix whose number of columns is the same as the number of rows of $B$.

(iii) $\|A + B\| \le \|A\| + \|B\|$, where $A$ and $B$ have the same numbers of rows and columns.

(iv) If $B$ is positive definite, $\|B\| \le \operatorname{tr}(B) = \sum_{i=1}^k \lambda_i$, where $\lambda_i$, $i = 1, \dots, k$, are the eigenvalues of $B$.

(7) Let $X$ be the design matrix of a regression model, i.e.,

$$X = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1k} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \cdots & x_{nk} \end{pmatrix}.$$

Then,

$$\lambda_{\max}(X'X) = \sup_{\|a\|=1}\sum_{i=1}^n (a'x_i)^2 \quad \text{and} \quad \lambda_{\min}(X'X) = \inf_{\|a\|=1}\sum_{i=1}^n (a'x_i)^2.$$

(8) Let $x \sim N(0, \Sigma)$ be a $p$-dimensional multivariate normal vector. Then $\Sigma^{-1/2}x \sim N(0, I)$, and hence $x'\Sigma^{-1}x \sim \chi^2(p)$, which has been shown previously in a different way.
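A quick numerical check of facts (1)-(6) (an addition): `numpy.linalg.eigh` returns the spectral decomposition of a symmetric matrix, from which $A^{1/2}$, the spectral norm, and $\lambda_{\max}(A^{-1}) = 1/\lambda_{\min}(A)$ can all be verified.

```python
# Spectral decomposition, A^{1/2}, and the spectral norm.
import numpy as np

rng = np.random.default_rng(12)
B = rng.normal(size=(4, 4))
A = B.T @ B + 4 * np.eye(4)                        # positive definite
lam, P = np.linalg.eigh(A)                         # A = P diag(lam) P'
A_half = P @ np.diag(np.sqrt(lam)) @ P.T           # A^{1/2} = P D^{1/2} P'
print(np.allclose(A_half @ A_half, A))             # True
print(np.isclose(np.linalg.norm(A, 2), lam.max())) # ||A|| = lambda_max for p.d. A
print(np.isclose(1 / lam.min(),
                 np.linalg.eigvalsh(np.linalg.inv(A)).max()))  # fact (5)
```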

We now revisit the question of what makes $\hat\beta \overset{Pr.}{\to} \beta$ in general cases. The answer to this question is simple. Since $\operatorname{Var}(\hat\beta) = (X'X)^{-1}\sigma^2$, we only need to show that each diagonal element of $(X'X)^{-1}$ converges to 0. $(\star)$

To show $(\star)$, note first that

$$X'X = T'\left((T')^{-1}X'XT^{-1}\right)T,$$

where

$$T = \begin{pmatrix} 1 & \bar x_1 & \cdots & \bar x_k \\ 0 & 1 & & \\ \vdots & & \ddots & \\ 0 & & & 1 \end{pmatrix} \quad \text{and} \quad T^{-1} = \begin{pmatrix} 1 & -\bar x_1 & \cdots & -\bar x_k \\ 0 & 1 & & \\ \vdots & & \ddots & \\ 0 & & & 1 \end{pmatrix}.$$

Moreover, we have

$$(T')^{-1}X'XT^{-1} = \begin{pmatrix} n & 0 \\ 0 & \mathring{X}'\left(I - \frac{E}{n}\right)\mathring{X} \end{pmatrix},$$

where

$$\mathring{X} = \begin{pmatrix} x_{11} & \cdots & x_{1k} \\ \vdots & & \vdots \\ x_{n1} & \cdots & x_{nk} \end{pmatrix} \quad \text{and} \quad E = \begin{pmatrix} 1 & \cdots & 1 \\ \vdots & & \vdots \\ 1 & \cdots & 1 \end{pmatrix},$$

noting that

$$\left(I - \frac{E}{n}\right)\mathring{X} = \begin{pmatrix} x_{11} - \bar x_1 & \cdots & x_{1k} - \bar x_k \\ \vdots & & \vdots \\ x_{n1} - \bar x_1 & \cdots & x_{nk} - \bar x_k \end{pmatrix},$$

whose rows are the centered data vectors.

Hence

$$(X'X)^{-1} = T^{-1}\begin{pmatrix} \frac{1}{n} & 0 \\ 0 & \left(\mathring{X}'(I - \frac{E}{n})\mathring{X}\right)^{-1} \end{pmatrix}(T^{-1})',$$

yielding

$$(X'X)^{-1} = \begin{pmatrix} \frac{1}{n} + \bar x'D^{-1}\bar x & -\bar x'D^{-1} \\ -D^{-1}\bar x & D^{-1} \end{pmatrix},$$

where $(T^{-1})' = (T')^{-1}$, $\bar x = (\bar x_1, \dots, \bar x_k)'$, and $D = \mathring{X}'(I - \frac{E}{n})\mathring{X}$.

This implies, for each $1 \le i \le k + 1$,

$$(X'X)^{-1}_{ii} \le \max\left\{\frac{1}{n} + \bar x'D^{-1}\bar x,\ \lambda_{\max}(D^{-1})\right\} \le \max\left\{\frac{1}{n} + \frac{\|\bar x\|^2}{\lambda_{\min}(D)},\ \frac{1}{\lambda_{\min}(D)}\right\},$$

where $\lambda_{\max}(D^{-1}) = \dfrac{1}{\lambda_{\min}(D)}$, which converges to 0 if

(i) $\lambda_{\min}\left(\mathring{X}'(I - \frac{E}{n})\mathring{X}\right) \to \infty$,

(ii) $\dfrac{\sum_{i=1}^k \bar x_i^2}{\lambda_{\min}\left(\mathring{X}'(I - \frac{E}{n})\mathring{X}\right)} \to 0$.

Please compare these two conditions with the answer to $Q_1$.

This is illustrated below. [Figure omitted.]

The above conditions require:

(i) even in the direction in which the data are most narrowly spread (viewed from the location $(\bar x_1, \dots, \bar x_k)$), the data must have a sufficiently large sum of squares (information);

(ii) the squared distance of the data's center from the origin is negligible compared with $\lambda_{\min}\left(\mathring{X}'(I - \frac{E}{n})\mathring{X}\right)$.

Toward Large Sample Theory II

$Q_3$: Note first that

$$\hat\beta - \beta = (X'X)^{-1}X'(y - X\beta) = (X'X)^{-1}X'\varepsilon = \left(\sum_{i=1}^n x_i x_i'\right)^{-1}\sum_{i=1}^n x_i\varepsilon_i,$$

noting that we only consider

$$X = \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix}.$$

Since for $\varepsilon \sim N(0, \sigma^2 I)$ we have

$$\hat\beta - \beta \sim N(0, \sigma^2(X'X)^{-1}),$$

it is natural to conjecture that when $\varepsilon$ is not normally distributed,

$$\frac{(X'X)^{1/2}}{\sigma}(\hat\beta - \beta) \overset{d}{\to} N(0, I). \tag{$\star\star$}$$

But what is $\overset{d}{\to}$?

Definition. A sequence of random vectors $\{x_n\}$ is said to converge to a random vector $x$ in distribution if

$$P(x_n \le c) \to P(x \le c) \equiv F(c) \quad \text{as } n \to \infty,$$

for all continuity points of $F(\cdot)$, the distribution function of $x$; this is denoted by $x_n \overset{d}{\to} x$.

Remark. Cramér-Wold Device: $x_n \overset{d}{\to} x \iff a'x_n \overset{d}{\to} a'x$ for any $\|a\| = 1$. Therefore, $(\star\star)$ holds iff

$$a'\frac{(X'X)^{1/2}}{\sigma}(\hat\beta - \beta) = a'\left(\sum_{i=1}^n x_i x_i'\right)^{-1/2}\sum_{i=1}^n x_i\frac{\varepsilon_i}{\sigma} = \sum_{i=1}^n (w_{1n} + w_{2n}x_i)\frac{\varepsilon_i}{\sigma} \overset{d}{\to} N(0, 1),$$

where $(w_{1n}, w_{2n}) = a'\left(\sum_{i=1}^n x_i x_i'\right)^{-1/2}$.

Lindeberg's Central Limit Theorem (for the sum of independent random variables). Let $Z_{1n}, \dots, Z_{nn}$ be a sequence of independent r.v.'s with $E(Z_{in}) = 0$ and $\sum_{i=1}^n E(Z_{in}^2) = \sum_{i=1}^n \sigma_{in}^2 = 1$ for all $n$. If for any $\delta > 0$,

$$\sum_{i=1}^n E\left(Z_{in}^2 I_{|Z_{in}| > \delta}\right) \to 0 \quad \text{as } n \to \infty \quad \text{(Lindeberg's condition)},$$

then $\sum_{i=1}^n Z_{in} \overset{d}{\to} N(0, 1)$. (No single term dominates, so after this "uniform" mixing the characteristics of the original distributions vanish, and a normal distribution emerges.)

Remark. (1) Lindeberg's condition implies $\max_{1 \le i \le n}\sigma_{in}^2 \to 0$ as $n \to \infty$. To see this, we note that for any $\delta > 0$,

$$\max_{1 \le i \le n}\sigma_{in}^2 = \max_{1 \le i \le n}E(Z_{in}^2) \le \max_{1 \le i \le n}E\left(Z_{in}^2 I_{|Z_{in}| > \delta}\right) + \delta^2.$$

Since the first term converges to 0 by Lindeberg's condition and since $\delta$ can be arbitrarily small, the desired conclusion follows.

(2) Lindeberg's condition $\iff$ CLT $+$ $\max_{1 \le i \le n}\sigma_{in}^2 \to 0$ as $n \to \infty$.

Now we are in a position to check Lindeberg's condition for

$$Z_{in} = (w_{1n} + w_{2n}x_i)\frac{\varepsilon_i}{\sigma} \quad (\text{denoted by } v_{in}\varepsilon_i).$$

(i) $E(v_{in}\varepsilon_i) = 0$. (easy)

(ii) $\sum_{i=1}^n E(v_{in}^2\varepsilon_i^2) = 1$. (easy, but why?)

(iii) Assume $E^{1/2}(\varepsilon_1^4) \le C_1 < \infty$. Then, for some constants $C_1, C_2, C_3$,

$$\sum_{i=1}^n E\left[v_{in}^2\varepsilon_i^2 I_{\{v_{in}^2\varepsilon_i^2 > \delta^2\}}\right] = \sum_{i=1}^n v_{in}^2 E\left[\varepsilon_i^2 I_{\{v_{in}^2\varepsilon_i^2 > \delta^2\}}\right] \overset{\text{why?}}{\le} \sum_{i=1}^n v_{in}^2 E^{1/2}(\varepsilon_i^4)P^{1/2}(v_{in}^2\varepsilon_i^2 > \delta^2)$$

$$\le C_1\sum_{i=1}^n v_{in}^2\frac{E^{1/2}(v_{in}^2\varepsilon_i^2)}{\delta} \le \left(C_2\sum_{i=1}^n v_{in}^2\right)\max_{1 \le i \le n}|v_{in}| \le C_3\max_{1 \le i \le n}|v_{in}|.$$

Therefore, Lindeberg's condition holds for $v_{in}\varepsilon_i$ if

$$\max_{1 \le i \le n}v_{in}^2 = \max_{1 \le i \le n}\left(a'\left(\sum_{j=1}^n x_j x_j'\right)^{-1/2}x_i\right)^2 \le a'\left(\sum_{j=1}^n x_j x_j'\right)^{-1}a\left(1 + \max_{1 \le i \le n}x_i^2\right)$$

$$\overset{(\dagger)}{\le} \lambda_{\max}\left(\left(\sum_{i=1}^n x_i x_i'\right)^{-1}\right)\left(1 + \max_{1 \le i \le n}x_i^2\right) = \frac{1 + \max_{1 \le i \le n}x_i^2}{\lambda_{\min}\left(\sum_{i=1}^n x_i x_i'\right)} \to 0, \quad \text{as } n \to \infty,$$

where $(\dagger)$: To see this, we have by the spectral decomposition, for a symmetric matrix $A$, $A = PDP'$, $D = \operatorname{diag}(\lambda_1, \dots, \lambda_k)$ with $0 < \lambda_1 \le \lambda_2 \le \cdots \le \lambda_k$, and $P = (p_1, \dots, p_k)$ satisfying $Ap_i =$

$\lambda_i p_i$, $p_i'p_j = 0$ for $i \ne j$, $p_i'p_i = 1$. Hence,

$$\lambda_k = p_k'Ap_k = p_k'PDP'p_k = (0, \dots, 0, 1)\begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_k \end{pmatrix}\begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} = \lambda_k \le \sup_{\|a\|=1}a'Aa.$$

On the other hand, for any $a \in \mathbb{R}^k$ with $\|a\| = 1$, we can express it as $a = Pb$ with $\|b\| = 1$. Thus,

$$a'Aa = b'P'PDP'Pb = b'Db = \sum_{i=1}^k \lambda_i b_i^2 \le \lambda_k,$$

where $b = (b_1, \dots, b_k)'$. As a result, $\lambda_k = \sup_{\|a\|=1}a'Aa$. Similarly, it can be shown that $\lambda_1 = \inf_{\|a\|=1}a'Aa$.

To give a more comprehensive sufficient condition, we note that

$$\lambda_{\min}\left(\sum_{i=1}^n x_i x_i'\right) = \lambda_{\min}\left(\begin{pmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{pmatrix}\right) = \lambda_{\min}\left(\begin{pmatrix} 1 & 0 \\ \bar x & 1 \end{pmatrix}\begin{pmatrix} n & 0 \\ 0 & S_{xx} \end{pmatrix}\begin{pmatrix} 1 & \bar x \\ 0 & 1 \end{pmatrix}\right)$$

where $S_{xx} = \sum (x_i - \bar x)^2$,

$$\overset{\text{why?}}{\ge} \min\{n, S_{xx}\}\,\lambda_{\min}\left(\begin{pmatrix} 1 & 0 \\ \bar x & 1 \end{pmatrix}\begin{pmatrix} 1 & \bar x \\ 0 & 1 \end{pmatrix}\right) \overset{(\ddagger)}{\ge} C\min\{n, S_{xx}\}, \quad \text{provided } |\bar x| < \infty.$$

Explanations:

(why?): $\lambda_{\min}(B'AB) \ge \lambda_{\min}(B'B)\lambda_{\min}(A)$.

$(\ddagger)$: if $|\bar x| < \infty$, $\lambda_{\min}\left(\begin{pmatrix} 1 & \bar x \\ \bar x & 1 + \bar x^2 \end{pmatrix}\right)$ is bounded away from 0. (We will show this later.)

In view of this, a set of more transparent sufficient conditions for the Lindeberg condition is:

(i) $\dfrac{\max_{1 \le i \le n}x_i^2}{n} \to 0$,

(ii) $S_{xx} \to \infty$ [this one is also needed for $\hat\beta \overset{Pr.}{\to} \beta$],

(iii) $\dfrac{\max_{1 \le i \le n}x_i^2}{S_{xx}} \to 0$.

Can you answer $Q_3$ under general multiple regression models?

In fact, for general multiple regression ($k \ge 1$), it is not difficult to show that Lindeberg's condition holds when

$$\frac{1 + \max_{1 \le i \le n}\sum_{j=1}^k x_{ij}^2}{\lambda_{\min}(X'X)} \to 0 \quad \text{as } n \to \infty. \tag{2}$$

(Compare this with the $k = 1$ case.)

A further question is whether we can obtain conditions analogous to (i), (ii), (iii) of the $k = 1$ case under which (2) holds. To answer this question, we need a little linear algebra.

(1) Let

$$T = \begin{pmatrix} 1 & c_1 & \cdots & c_k \\ 0 & & & \\ \vdots & & I_k & \\ 0 & & & \end{pmatrix} = \begin{pmatrix} 1 & c' \\ 0 & I_k \end{pmatrix},$$

where $c = (c_1, \dots, c_k)'$ and $I_k$ is the $k$-dimensional identity matrix. Then, we have

$$\lambda_{\min}(T'T) \ge \frac{1}{2 + c'c}. \tag{$\sharp$}$$

proof. Since $(\sharp)$ holds trivially when $c = 0$, we only consider the case $c \ne 0$. Note first that

$$E = T'T = \begin{pmatrix} 1 & c' \\ c & cc' + I_k \end{pmatrix},$$

and the eigenvalues of $E$ are those $\lambda$'s satisfying

$$\det(E - \lambda I_{k+1}) = 0, \tag{$\sharp\sharp$}$$

where $I_{k+1}$ is the $(k+1)$-dimensional identity matrix.

In addition,

$$\det(E - \lambda I_{k+1}) = \det\begin{pmatrix} 1 - \lambda & c' \\ c & cc' + (1 - \lambda)I_k \end{pmatrix} = \begin{cases} \det\begin{pmatrix} 0 & c' \\ c & cc' \end{pmatrix} & \text{if } \lambda = 1, \\[2mm] \det\begin{pmatrix} 1 - \lambda & 0' \\ c & \left(1 - \frac{1}{1-\lambda}\right)cc' + (1 - \lambda)I_k \end{pmatrix} & \text{if } \lambda \ne 1. \end{cases}$$

For $\lambda = 1$,

$$\det\begin{pmatrix} 0 & c' \\ c & cc' \end{pmatrix} = \begin{cases} -\|c\|^2 \ne 0 & \text{if } k = 1, \\ 0 & \text{if } k > 1. \end{cases}$$

For $\lambda \ne 1$,

$$\det\begin{pmatrix} 1 - \lambda & 0' \\ c & \left(1 - \frac{1}{1-\lambda}\right)cc' + (1 - \lambda)I_k \end{pmatrix} = (1 - \lambda)\det\left(\left(1 - \frac{1}{1-\lambda}\right)cc' + (1 - \lambda)I_k\right)$$

(because this is a block triangular matrix)

$$= (1 - \lambda)^{k+1}\det\left(I_k + \left(\frac{1}{1-\lambda} - \frac{1}{(1-\lambda)^2}\right)cc'\right) \quad (\det(aA_k) = a^k\det(A_k))$$
$$= (1 - \lambda)^{k+1}\left(1 + \left(\frac{1}{1-\lambda} - \frac{1}{(1-\lambda)^2}\right)c'c\right) \quad \text{(please try to prove } \det(A + bb') = \det(A)(1 + b'A^{-1}b)\text{)}$$
$$= (1 - \lambda)^{k-1}\left(\lambda^2 - (2 + c'c)\lambda + 1\right).$$

Therefore, the roots of $(\sharp\sharp)$ are

$$\lambda = 1 \quad \text{or} \quad \lambda = \frac{(2 + c'c) \pm \sqrt{(2 + c'c)^2 - 4}}{2},$$

yielding

$$\lambda_{\min}(T'T) = \min\left\{1, \frac{2 + c'c}{2}\left(1 - \sqrt{1 - \frac{4}{(2 + c'c)^2}}\right)\right\} \ge \min\left\{1, \frac{1}{2 + c'c}\right\} = \frac{1}{2 + c'c},$$

since $\sqrt{1 - x} \le 1 - \frac{x}{2}$.

Thus the proof of $(\sharp)$ is complete.

(2) We have shown previously that

$$X'X = T'\begin{pmatrix} n & 0 \\ 0 & D \end{pmatrix}T, \quad \text{where } T = \begin{pmatrix} 1 & \bar x' \\ 0 & I_k \end{pmatrix} \text{ with } \bar x = (\bar x_1, \dots, \bar x_k)', \text{ and } D = \mathring{X}'\left(I - \frac{E}{n}\right)\mathring{X}.$$

By $\lambda_{\min}(B'AB) \ge \lambda_{\min}(B'B)\lambda_{\min}(A)$ and $(\sharp)$, we obtain

$$\lambda_{\min}(X'X) \ge \lambda_{\min}(T'T)\,\lambda_{\min}\left(\begin{pmatrix} n & 0 \\ 0 & D \end{pmatrix}\right) \ge \frac{1}{2 + \sum_{i=1}^k \bar x_i^2}\min\{n, \lambda_{\min}(D)\} \ge \frac{\min\{n, \lambda_{\min}(D)\}}{V}.$$

Here, we assume $2 + \sum_{i=1}^k \bar x_i^2 < V < \infty$ (to keep the discussion focused).

(3) Finally, in order for (2) to hold, we give the following sufficient conditions:

(i)' $\dfrac{\max_{1 \le i \le n}\sum_{j=1}^k x_{ij}^2}{n} \to 0$,

(ii)' $\lambda_{\min}(D) \to \infty$ (I have explained its meaning before),

(iii)' $\dfrac{\max_{1 \le i \le n}\sum_{j=1}^k x_{ij}^2}{\lambda_{\min}(D)} \to 0$.

It is evident that (i)', (ii)', (iii)' correspond to (i), (ii), (iii).

Large Sample Theory III

$Q_4$ and $Q_5$: How does one construct confidence intervals (CIs) and testing rules when $\varepsilon$ is not normal?

Some basic probabilistic tools.

(a) Slutsky's Theorem. If $X_n \overset{d}{\to} X$, $Y_n \overset{pr.}{\to} a$ and $Z_n \overset{pr.}{\to} b$, where $a$ is a vector of real numbers and $b$ is a real number, then

$$Y_n'X_n + Z_n \overset{d}{\to} a'X + b.$$

Corollary. If $X_n \overset{d}{\to} X$ and $Y_n - X_n \overset{pr.}{\to} 0$, then $Y_n \overset{d}{\to} X$.

proof. Since $Y_n = X_n - (X_n - Y_n)$, the conclusion follows immediately from Slutsky's Theorem.

(b) Big-O and small-o notation for a sequence of random vectors. Let $\{a_n\}$ be a sequence of positive numbers. We say $X_n = O_p(a_n)$, where $\{X_n\}$ is a sequence of random vectors, if for any $\varepsilon > 0$ there exist $0 < M_\varepsilon < \infty$ and a positive integer $N$ such that for all $n \ge N$,

$$P\left(\frac{\|X_n\|}{a_n} > M_\varepsilon\right) < \varepsilon,$$

and $X_n = o_p(a_n)$ if $\dfrac{\|X_n\|}{a_n} \overset{pr.}{\to} 0$.

(c) (For a sequence of vectors of real numbers.) Let $\{w_n\}$ be a sequence of vectors of real numbers and $\{a_n\}$ a sequence of positive numbers. We say $w_n = O(a_n)$ if there exist $0 < M < \infty$ and a positive integer $N$ such that for all $n \ge N$,

$$\frac{\|w_n\|}{a_n} < M,$$

and $w_n = o(a_n)$ if $\|w_n\|/a_n \to 0$.

(d) Some rules. Let $X_n = o_p(1), O_p(1), o(1)$ or $O(1)$, and $Y_n = o_p(1), O_p(1), o(1)$ or $O(1)$.

For $+$ (the entry is the order of $X_n + Y_n$):

          o_p    O_p    o      O
  o_p     o_p    O_p    o_p    O_p
  O_p            O_p    O_p    O_p
  o                     o      O
  O                            O

For $\times$ (product):

          o_p    O_p    o      O
  o_p     o_p    o_p    o_p    o_p
  O_p            O_p    o_p    O_p
  o                     o      o
  O                            O

(e) If $X_n = O_p(a_n)$, then $X_n/a_n = O_p(1)$; if $X_n = o_p(a_n)$, then $X_n/a_n = o_p(1)$.

(f) If $X_n \overset{d}{\to} X$, then $X_n = O_p(1)$; and if $E\|X_n\|^q \le K < \infty$ for some $q > 0$ and for all $n$, then $X_n = O_p(1)$.

(g) If $X_n \overset{pr.}{\to} X$ and $Y_n \overset{pr.}{\to} Y$, then $\begin{pmatrix} X_n \\ Y_n \end{pmatrix} \overset{pr.}{\to} \begin{pmatrix} X \\ Y \end{pmatrix}$. If $X_n \overset{d}{\to} X$ and $Y_n \overset{d}{\to} Y$, then $\begin{pmatrix} X_n \\ Y_n \end{pmatrix} \overset{d}{\to} \begin{pmatrix} X \\ Y \end{pmatrix}$, provided $\{X_n\}$ and $\{Y_n\}$ are independent.

(h) A continuous mapping theorem. If $X_n \overset{pr. \text{ or } d}{\to} X$ and $f(\cdot)$ is a continuous function, then $f(X_n) \overset{pr. \text{ or } d}{\to} f(X)$.

(I) A delta method. If $\sqrt{n}(Z_n - u) \overset{d}{\to} N(0_{k \times 1}, V_{k \times k})$ and

$$f(\cdot) = \begin{pmatrix} f_1(\cdot) \\ \vdots \\ f_m(\cdot) \end{pmatrix} : \mathbb{R}^k \to \mathbb{R}^m$$

is a sufficiently smooth function, then

$$\sqrt{n}(f(Z_n) - f(u)) \overset{d}{\to} N\left(0_{m \times 1}, (\nabla f(u))'V(\nabla f(u))\right), \tag{$\flat$}$$

where

$$\nabla f(\cdot) = \begin{pmatrix} \frac{\partial f_1(\cdot)}{\partial x_1} & \cdots & \frac{\partial f_m(\cdot)}{\partial x_1} \\ \vdots & & \vdots \\ \frac{\partial f_1(\cdot)}{\partial x_k} & \cdots & \frac{\partial f_m(\cdot)}{\partial x_k} \end{pmatrix}$$

is a $k \times m$ matrix.

Sketch of the proof:

$$f(Z_n) \approx f(u) + (\nabla f(u))'(Z_n - u) \quad \text{(by Taylor's Theorem)},$$

which yields

$$\sqrt{n}(f(Z_n) - f(u)) \approx (\nabla f(u))'\sqrt{n}(Z_n - u).$$

This and the CLT for $Z_n$ (given as an assumption) lead to the desired conclusion.

We are now ready to answer $Q_4$ & $Q_5$.

(1) An alternative version of the CLT for $\hat\beta$. Recall that

$$\frac{(X'X)^{1/2}}{\sigma}(\hat\beta - \beta) \overset{d}{\to} N(0, I)$$

under suitable conditions (what are they?). Assume

$$\hat R_n = \frac{1}{n}X'X = \frac{1}{n}\sum_{i=1}^n x_i x_i' \overset{n \to \infty}{\longrightarrow} R,$$

where $R$ is a positive definite matrix.

Then, it can be shown that

$$\frac{1}{\sigma}R^{1/2}\sqrt{n}(\hat\beta - \beta) \overset{d}{\to} N(0, I). \tag{$\flat\flat$}$$

By $(\flat\flat)$, we have

$$\sqrt{n}(\hat\beta - \beta) \overset{d}{\to} N(0, R^{-1}\sigma^2). \tag{$\flat\flat\flat$}$$

(2) Consider the problem of testing a nonlinear null hypothesis

$$H_0: \beta_0 + \beta_1^2 = d, \quad \text{for some known } d,$$

against the alternative hypothesis $H_A: \beta_0 + \beta_1^2 \ne d$.

Additional Materials

(1) $\left\|\left(\hat R_n^{1/2} - R^{1/2}\right)\dfrac{\sqrt{n}(\hat\beta - \beta)}{\sigma}\right\| \le \dfrac{1}{\sigma}\left\|\hat R_n^{1/2} - R^{1/2}\right\|\left\|\sqrt{n}(\hat\beta - \beta)\right\|$ (since $\|Ax\|^2 = x'A'Ax \le \|A\|^2\|x\|^2$).

(2) $\left\|\hat R_n^{1/2} - R^{1/2}\right\| = o(1)$ (it's obvious).

(3) $E\left\|\sqrt{n}(\hat\beta - \beta)\right\|^2 \overset{\text{why?}}{=} \operatorname{tr}\left(\left(\frac{X'X}{n}\right)^{-1}\right)\sigma^2 = \operatorname{tr}(\hat R_n^{-1})\sigma^2 \overset{n \to \infty}{\longrightarrow} \operatorname{tr}(R^{-1})\sigma^2 < \infty$, since $R$ is p.d.

(4) By (1), (2) and (3), we have

$$\left\|\left(\hat R_n^{1/2} - R^{1/2}\right)\frac{\sqrt{n}(\hat\beta - \beta)}{\sigma}\right\| = o(1)O_p(1) = o_p(1),$$

yielding that $\dfrac{1}{\sigma}R^{1/2}\sqrt{n}(\hat\beta - \beta)$ and $\dfrac{1}{\sigma}\hat R_n^{1/2}\sqrt{n}(\hat\beta - \beta)$ have the same limiting distribution (by Slutsky's Theorem), which is $N(0, I)$.

To simplify the discussion, we again assume that

$$X = \begin{pmatrix} 1 & X_1 \\ \vdots & \vdots \\ 1 & X_n \end{pmatrix}, \quad \text{hence } \hat\beta = \begin{pmatrix} \hat\beta_0 \\ \hat\beta_1 \end{pmatrix} \text{ and } \beta = \begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix}.$$

Set $f(\beta) = \beta_0 + \beta_1^2$. Then $\nabla f(\beta) = \begin{pmatrix} 1 \\ 2\beta_1 \end{pmatrix}$. By the $\delta$-method and $(\flat\flat\flat)$, we obtain

$$\sqrt{n}(f(\hat\beta) - f(\beta)) \overset{H_0}{=} \sqrt{n}(f(\hat\beta) - d) \overset{d}{\to} N\left(0, (1, 2\beta_1)R^{-1}\begin{pmatrix} 1 \\ 2\beta_1 \end{pmatrix}\sigma^2\right),$$

which implies

$$\frac{\sqrt{n}(f(\hat\beta) - d)}{\sigma\sqrt{(1, 2\beta_1)R^{-1}\begin{pmatrix} 1 \\ 2\beta_1 \end{pmatrix}}} \overset{d}{\to} N(0, 1). \tag{***}$$

Moreover, it holds that

$$\hat\sigma\sqrt{(1, 2\hat\beta_1)\hat R_n^{-1}\begin{pmatrix} 1 \\ 2\hat\beta_1 \end{pmatrix}} \overset{pr.}{\to} \sigma\sqrt{(1, 2\beta_1)R^{-1}\begin{pmatrix} 1 \\ 2\beta_1 \end{pmatrix}}, \quad \text{since } \hat\beta_1 \overset{pr.}{\to} \beta_1 \text{ and } \hat R_n \to R.$$

This, (***) and Slutsky's Theorem together imply

$$\frac{\sqrt{n}(f(\hat\beta) - d)}{\hat\sigma\sqrt{(1, 2\hat\beta_1)\hat R_n^{-1}\begin{pmatrix} 1 \\ 2\hat\beta_1 \end{pmatrix}}} \overset{d}{\to} N(0, 1).$$

This result enables us to construct the following testing rule: reject $H_0$ if

$$f(\hat\beta) = \hat\beta_0 + \hat\beta_1^2 > d + 1.96\,\frac{\hat\sigma}{\sqrt{n}}\sqrt{(1, 2\hat\beta_1)\hat R_n^{-1}\begin{pmatrix} 1 \\ 2\hat\beta_1 \end{pmatrix}}$$

or

$$f(\hat\beta) < d - 1.96\,\frac{\hat\sigma}{\sqrt{n}}\sqrt{(1, 2\hat\beta_1)\hat R_n^{-1}\begin{pmatrix} 1 \\ 2\hat\beta_1 \end{pmatrix}},$$

which is an asymptotic level-5% test, i.e.,

$$P_{H_0}(\text{reject } H_0) \overset{n \to \infty}{\longrightarrow} 5\%.$$
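A simulation sketch of this rule (an addition; the non-normal error distribution and sample sizes are illustrative assumptions): under $H_0$ the rejection rate should be close to 5%.

```python
# Empirical level of the delta-method test of H_0: beta_0 + beta_1^2 = d.
import numpy as np

rng = np.random.default_rng(13)
n, reps, d = 400, 2000, 1.0 + 0.5 ** 2             # beta = (1, 0.5)', so H_0 holds
rej = 0
for _ in range(reps):
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    y = 1.0 + 0.5 * x + rng.standard_t(df=8, size=n)   # non-normal errors
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    s2 = np.sum((y - X @ b) ** 2) / (n - 2)
    grad = np.array([1.0, 2 * b[1]])               # gradient of f at beta_hat
    se = np.sqrt(s2 * grad @ XtX_inv @ grad)       # = sigma_hat sqrt(g' R_n^{-1} g) / sqrt(n)
    rej += abs(b[0] + b[1] ** 2 - d) > 1.96 * se
print(rej / reps)                                  # approx 0.05
```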

(3) Consider the problem of testing the linear hypothesis

$$H_0: D_{q \times k}\,\beta_{k \times 1} = \gamma_{q \times 1} \quad \text{against} \quad H_A: \text{not } H_0,$$

where $D$ and $\gamma$ are known. Set $f(\beta) = D\beta$. Then, by the $\delta$-method and the CLT for $\hat\beta$, we have under $H_0$,

$$\sqrt{n}(f(\hat\beta) - \gamma) \overset{d}{\to} N(0, DR^{-1}D'\sigma^2),$$

and hence, by the continuous mapping theorem,

$$\frac{n(f(\hat\beta) - \gamma)'(DR^{-1}D')^{-1}(f(\hat\beta) - \gamma)}{\sigma^2} \overset{d}{\to} \chi^2(q).$$

This, $\hat\sigma^2 \overset{pr.}{\to} \sigma^2$, $\hat R_n \overset{n \to \infty}{\longrightarrow} R$ (some algebraic manipulations are needed!!), and Slutsky's theorem further give

$$w_1 = \frac{n(f(\hat\beta) - \gamma)'(D\hat R_n^{-1}D')^{-1}(f(\hat\beta) - \gamma)}{\hat\sigma^2} \overset{d}{\to} \chi^2(q).$$

Therefore, the following testing rule: reject $H_0$ if $w_1 > \chi^2_{1-\alpha}(q)$ or $w_1 < \chi^2_{\beta}(q)$, with $\alpha + \beta = 0.05$, is an asymptotic level-5% test. (Note that $(\alpha, \beta)$ is not unique.) Please compare this asymptotic test with its counterpart derived from the finite-sample theory under normal assumptions.
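A sketch of the chi-square version of $w_1$ (an addition; for simplicity only the upper critical value is used, i.e., the two-sided rule above with its lower cutoff probability set to zero):

```python
# Asymptotic Wald test of H_0: D beta = gamma with chi-square critical value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(14)
n, alpha = 500, 0.05
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, 2.0]) + rng.exponential(size=n) - 1.0  # non-normal errors

D = np.array([[0.0, 1.0, -1.0]])                   # H_0: beta_1 = beta_2 (true here)
gamma = np.zeros(1)
q = D.shape[0]

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
s2 = np.sum((y - X @ b) ** 2) / (n - 3)
diff = D @ b - gamma
w1 = n * diff @ np.linalg.solve(D @ (n * XtX_inv) @ D.T, diff) / s2
print(w1, stats.chi2.ppf(1 - alpha, q))            # reject H_0 iff w1 exceeds the cutoff
```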


More information

論文與專利寫作暨學術 倫理期末報告 班級 : 碩化一甲學號 :MA 姓名 : 林郡澤老師 : 黃常寧

論文與專利寫作暨學術 倫理期末報告 班級 : 碩化一甲學號 :MA 姓名 : 林郡澤老師 : 黃常寧 論文與專利寫作暨學術 倫理期末報告 班級 : 碩化一甲學號 :MA540117 姓名 : 林郡澤老師 : 黃常寧 About 85% of the world s energy requirements are currently satisfied by exhaustible fossil fuels that have detrimental consequences on human health

More information

FUNDAMENTALS OF FLUID MECHANICS. Chapter 8 Pipe Flow. Jyh-Cherng. Shieh Department of Bio-Industrial

FUNDAMENTALS OF FLUID MECHANICS. Chapter 8 Pipe Flow. Jyh-Cherng. Shieh Department of Bio-Industrial Chapter 8 Pipe Flow FUNDAMENTALS OF FLUID MECHANICS Jyh-Cherng Shieh Department of Bio-Industrial Mechatronics Engineering National Taiwan University 1/1/009 1 MAIN TOPICS General Characteristics of Pipe

More information

Multiple sequence alignment (MSA)

Multiple sequence alignment (MSA) Multiple sequence alignment (MSA) From pairwise to multiple A T _ A T C A... A _ C A T _ A... A T _ G C G _... A _ C G T _ A... A T C A C _ A... _ T C G A G A... Relationship of sequences (Tree) NODE

More information

ApTutorGroup. SAT II Chemistry Guides: Test Basics Scoring, Timing, Number of Questions Points Minutes Questions (Multiple Choice)

ApTutorGroup. SAT II Chemistry Guides: Test Basics Scoring, Timing, Number of Questions Points Minutes Questions (Multiple Choice) SAT II Chemistry Guides: Test Basics Scoring, Timing, Number of Questions Points Minutes Questions 200-800 60 85 (Multiple Choice) PART A ----------------------------------------------------------------

More information

14-A Orthogonal and Dual Orthogonal Y = A X

14-A Orthogonal and Dual Orthogonal Y = A X 489 XIV. Orthogonal Transform and Multiplexing 14-A Orthogonal and Dual Orthogonal Any M N discrete linear transform can be expressed as the matrix form: 0 1 2 N 1 0 1 2 N 1 0 1 2 N 1 y[0] 0 0 0 0 x[0]

More information

Numbers and Fundamental Arithmetic

Numbers and Fundamental Arithmetic 1 Numbers and Fundamental Arithmetic Hey! Let s order some pizzas for a party! How many pizzas should we order? There will be 1 people in the party. Each people will enjoy 3 slices of pizza. Each pizza

More information

pseudo-code-2012.docx 2013/5/9

pseudo-code-2012.docx 2013/5/9 Pseudo-code 偽代碼 & Flow charts 流程圖 : Sum Bubble sort 1 Prime factors of Magic square Total & Average Bubble sort 2 Factors of Zodiac (simple) Quadratic equation Train fare 1+2+...+n

More information

Permutation Tests for Difference between Two Multivariate Allometric Patterns

Permutation Tests for Difference between Two Multivariate Allometric Patterns Zoological Studies 38(1): 10-18 (1999) Permutation Tests for Difference between Two Multivariate Allometric Patterns Tzong-Der Tzeng and Shean-Ya Yeh* Institute of Oceanography, National Taiwan University,

More information

適應控制與反覆控制應用在壓電致動器之研究 Adaptive and Repetitive Control of Piezoelectric Actuators

適應控制與反覆控制應用在壓電致動器之研究 Adaptive and Repetitive Control of Piezoelectric Actuators 行 精 類 行 年 年 行 立 林 參 理 劉 理 論 理 年 行政院國家科學委員會補助專題研究計畫成果報告 適應控制與反覆控制應用在壓電致動器之研究 Adaptive and Repetitive Control of Piezoelectric Actuators 計畫類別 : 個別型計畫 整合型計畫 計畫編號 :NSC 97-2218-E-011-015 執行期間 :97 年 11 月 01

More information

統計學 Spring 2011 授課教師 : 統計系余清祥日期 :2011 年 3 月 22 日第十三章 : 變異數分析與實驗設計

統計學 Spring 2011 授課教師 : 統計系余清祥日期 :2011 年 3 月 22 日第十三章 : 變異數分析與實驗設計 統計學 Spring 2011 授課教師 : 統計系余清祥日期 :2011 年 3 月 22 日第十三章 : 變異數分析與實驗設計 Chapter 13, Part A Analysis of Variance and Experimental Design Introduction to Analysis of Variance Analysis of Variance and the Completely

More information

Linear Algebra 18 Orthogonality

Linear Algebra 18 Orthogonality Linear Algebra 18 Orthogonality Wei-Shi Zheng, wszheng@ieeeorg, 2011 1 What Do You Learn from This Note We still observe the unit vectors we have introduced in Chapter 1: 1 0 0 e 1 = 0, e 2 = 1, e 3 =

More information

STAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method.

STAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method. STAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method. Rebecca Barter May 5, 2015 Linear Regression Review Linear Regression Review

More information

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015 Part IB Statistics Theorems with proof Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Chap. 4 Force System Resultants

Chap. 4 Force System Resultants Chap. 4 Force System Resultants Chapter Outline Moment of a Force Scalar Formation Cross Product Moment of Force Vector Formulation Principle of Moments Moment of a Force about a Specified xis Moment of

More information

基因演算法 學習速成 南台科技大學電機系趙春棠講解

基因演算法 學習速成 南台科技大學電機系趙春棠講解 基因演算法 學習速成 南台科技大學電機系趙春棠講解 % 以下程式作者 : 清大張智星教授, 摘自 Neuro-Fuzzy and Soft Computing, J.-S. R. Jang, C.-T. Sun, and E. Mizutani 讀者可自張教授網站下載該書籍中的所有 Matlab 程式 % 主程式 : go_ga.m % 這是書中的一個範例, 了解每一個程式指令後, 大概就對 基因演算法,

More information

[y i α βx i ] 2 (2) Q = i=1

[y i α βx i ] 2 (2) Q = i=1 Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation

More information

磁振影像原理與臨床研究應用 課程內容介紹 課程內容 參考書籍. Introduction of MRI course 磁振成像原理 ( 前 8 週 ) 射頻脈衝 組織對比 影像重建 脈衝波序 影像假影與安全 等

磁振影像原理與臨床研究應用 課程內容介紹 課程內容 參考書籍. Introduction of MRI course 磁振成像原理 ( 前 8 週 ) 射頻脈衝 組織對比 影像重建 脈衝波序 影像假影與安全 等 磁振影像原理與臨床研究應用 盧家鋒助理教授國立陽明大學物理治療暨輔助科技學系 alvin4016@ym.edu.tw 課程內容介紹 Introduction of MRI course 2 課程內容 磁振成像原理 ( 前 8 週 ) 射頻脈衝 組織對比 影像重建 脈衝波序 影像假影與安全 等 磁振影像技術與分析技術文獻討論 對比劑增強 功能性影像 擴散張量影像 血管攝影 常用分析方式 等 磁振影像於各系統應用

More information

Digital Integrated Circuits Lecture 5: Logical Effort

Digital Integrated Circuits Lecture 5: Logical Effort Digital Integrated Circuits Lecture 5: Logical Effort Chih-Wei Liu VLSI Signal Processing LAB National Chiao Tung University cwliu@twins.ee.nctu.edu.tw DIC-Lec5 cwliu@twins.ee.nctu.edu.tw 1 Outline RC

More information

Ma 3/103: Lecture 24 Linear Regression I: Estimation

Ma 3/103: Lecture 24 Linear Regression I: Estimation Ma 3/103: Lecture 24 Linear Regression I: Estimation March 3, 2017 KC Border Linear Regression I March 3, 2017 1 / 32 Regression analysis Regression analysis Estimate and test E(Y X) = f (X). f is the

More information

So far our focus has been on estimation of the parameter vector β in the. y = Xβ + u

So far our focus has been on estimation of the parameter vector β in the. y = Xβ + u Interval estimation and hypothesis tests So far our focus has been on estimation of the parameter vector β in the linear model y i = β 1 x 1i + β 2 x 2i +... + β K x Ki + u i = x iβ + u i for i = 1, 2,...,

More information

ON FINITE DIMENSIONAL APPROXIMATION IN NONPARAMETRIC REGRESSION

ON FINITE DIMENSIONAL APPROXIMATION IN NONPARAMETRIC REGRESSION Journal of the Chinese Statistical Association Vol. 54, (206) 86 204 ON FINITE DIMENSIONAL APPROXIMATION IN NONPARAMETRIC REGRESSION Wen Hsiang Wei Department of Statistics, Tung Hai University, Taiwan

More information

CH 5 More on the analysis of consumer behavior

CH 5 More on the analysis of consumer behavior 個體經濟學一 M i c r o e c o n o m i c s (I) CH 5 More on the analysis of consumer behavior Figure74 An increase in the price of X, P x P x1 P x2, P x2 > P x1 Assume = 1 and m are fixed. m =e(p X2,, u 1 ) m=e(p

More information

Ph.D. Qualified Examination

Ph.D. Qualified Examination Ph.D. Qualified Examination (Taxonomy) 1. Under what condition, Lamarckism is reasonable? 2. What is the impact to biology and taxonomy after Darwin published Origin of Species? 3. Which categories do

More information

5.1 Consistency of least squares estimates. We begin with a few consistency results that stand on their own and do not depend on normality.

5.1 Consistency of least squares estimates. We begin with a few consistency results that stand on their own and do not depend on normality. 88 Chapter 5 Distribution Theory In this chapter, we summarize the distributions related to the normal distribution that occur in linear models. Before turning to this general problem that assumes normal

More information

Lecture Note on Linear Algebra 16. Eigenvalues and Eigenvectors

Lecture Note on Linear Algebra 16. Eigenvalues and Eigenvectors Lecture Note on Linear Algebra 16. Eigenvalues and Eigenvectors Wei-Shi Zheng, wszheng@ieee.org, 2011 November 18, 2011 1 What Do You Learn from This Note In this lecture note, we are considering a very

More information

3 Multiple Linear Regression

3 Multiple Linear Regression 3 Multiple Linear Regression 3.1 The Model Essentially, all models are wrong, but some are useful. Quote by George E.P. Box. Models are supposed to be exact descriptions of the population, but that is

More information

Advanced Econometrics I

Advanced Econometrics I Lecture Notes Autumn 2010 Dr. Getinet Haile, University of Mannheim 1. Introduction Introduction & CLRM, Autumn Term 2010 1 What is econometrics? Econometrics = economic statistics economic theory mathematics

More information

Lecture Notes on Propensity Score Matching

Lecture Notes on Propensity Score Matching Lecture Notes on Propensity Score Matching Jin-Lung Lin This lecture note is intended solely for teaching. Some parts of the notes are taken from various sources listed below and no originality is claimed.

More information

國立交通大學 電子工程學系電子研究所碩士班 碩士論文

國立交通大學 電子工程學系電子研究所碩士班 碩士論文 國立交通大學 電子工程學系電子研究所碩士班 碩士論文 萃取接觸阻抗係數方法之比較研究 CBKR 結構與改良式 TLM 結構 A Comparison Study of the Specific Contact Resistivity Extraction Methods: CBKR Method and Modified TLM Method 研究生 : 曾炫滋 指導教授 : 崔秉鉞教授 中華民國一

More information

1 Appendix A: Matrix Algebra

1 Appendix A: Matrix Algebra Appendix A: Matrix Algebra. Definitions Matrix A =[ ]=[A] Symmetric matrix: = for all and Diagonal matrix: 6=0if = but =0if 6= Scalar matrix: the diagonal matrix of = Identity matrix: the scalar matrix

More information

Learning to Recommend with Location and Context

Learning to Recommend with Location and Context Learning to Recommend with Location and Context CHENG, Chen A Thesis Submitted in Partial Fulfilment of the Requirements for the Degree of Doctor of Philosophy in Computer Science and Engineering The Chinese

More information

CHAPTER 2. Energy Bands and Carrier Concentration in Thermal Equilibrium

CHAPTER 2. Energy Bands and Carrier Concentration in Thermal Equilibrium CHAPTER 2 Energy Bands and Carrier Concentration in Thermal Equilibrium 光電特性 Ge 被 Si 取代, 因為 Si 有較低漏電流 Figure 2.1. Typical range of conductivities for insulators, semiconductors, and conductors. Figure

More information

Answers: ( HKMO Heat Events) Created by: Mr. Francis Hung Last updated: 23 November see the remark

Answers: ( HKMO Heat Events) Created by: Mr. Francis Hung Last updated: 23 November see the remark 9 50450 8 4 5 8-9 04 6 Individual 6 (= 8 ) 7 6 8 4 9 x =, y = 0 (= 8 = 8.64) 4 4 5 5-6 07 + 006 4 50 5 0 Group 6 6 7 0 8 *4 9 80 0 5 see the remark Individual Events I Find the value of the unit digit

More information

個體經濟學二. Ch10. Price taking firm. * Price taking firm: revenue = P(x) x = P x. profit = total revenur total cost

個體經濟學二. Ch10. Price taking firm. * Price taking firm: revenue = P(x) x = P x. profit = total revenur total cost Ch10. Price taking firm 個體經濟學二 M i c r o e c o n o m i c s (I I) * Price taking firm: revenue = P(x) x = P x profit = total revenur total cost Short Run Decision:SR profit = TR(x) SRTC(x) Figure 82: AR

More information

FUNDAMENTALS OF FLUID MECHANICS Chapter 3 Fluids in Motion - The Bernoulli Equation

FUNDAMENTALS OF FLUID MECHANICS Chapter 3 Fluids in Motion - The Bernoulli Equation FUNDAMENTALS OF FLUID MECHANICS Chater 3 Fluids in Motion - The Bernoulli Equation Jyh-Cherng Shieh Deartment of Bio-Industrial Mechatronics Engineering National Taiwan University 09/8/009 MAIN TOPICS

More information

MLES & Multivariate Normal Theory

MLES & Multivariate Normal Theory Merlise Clyde September 6, 2016 Outline Expectations of Quadratic Forms Distribution Linear Transformations Distribution of estimates under normality Properties of MLE s Recap Ŷ = ˆµ is an unbiased estimate

More information

REAXYS NEW REAXYS. RAEXYS 教育訓練 PPT HOW YOU THINK HOW YOU WORK

REAXYS NEW REAXYS. RAEXYS 教育訓練 PPT HOW YOU THINK HOW YOU WORK REAXYS HOW YOU THINK HOW YOU WORK RAEXYS 教育訓練 PPT Karen.liu@elsevier.com NEW REAXYS. 1 REAXYS 化學資料庫簡介 CONTENTS 收錄內容 & 界面更新 資料庫建置原理 個人化功能 Searching in REAXYS 主題 1 : 化合物搜尋 主題 2 : 反應搜尋 主題 3 : 合成計畫 主題 4 :

More information

Algebraic Algorithms in Combinatorial Optimization

Algebraic Algorithms in Combinatorial Optimization Algebraic Algorithms in Combinatorial Optimization CHEUNG, Ho Yee A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Philosophy in Computer Science and Engineering

More information

Multivariate Analysis and Likelihood Inference

Multivariate Analysis and Likelihood Inference Multivariate Analysis and Likelihood Inference Outline 1 Joint Distribution of Random Variables 2 Principal Component Analysis (PCA) 3 Multivariate Normal Distribution 4 Likelihood Inference Joint density

More information

在雲層閃光放電之前就開始提前釋放出離子是非常重要的因素 所有 FOREND 放電式避雷針都有離子加速裝置支援離子產生器 在產品設計時, 為增加電場更大範圍, 使用電極支援大氣離子化,

在雲層閃光放電之前就開始提前釋放出離子是非常重要的因素 所有 FOREND 放電式避雷針都有離子加速裝置支援離子產生器 在產品設計時, 為增加電場更大範圍, 使用電極支援大氣離子化, FOREND E.S.E 提前放電式避雷針 FOREND Petex E.S.E 提前放電式避雷針由 3 個部分組成 : 空中末端突針 離子產生器和屋頂連接管 空中末端突針由不鏽鋼製造, 有適合的直徑, 可以抵抗強大的雷擊電流 離子產生器位於不鏽鋼針體內部特別的位置, 以特別的樹脂密封, 隔絕外部環境的影響 在暴風雨閃電期間, 大氣中所引起的電場增加, 離子產生器開始活化以及產生離子到周圍的空氣中

More information

Sparse Learning Under Regularization Framework

Sparse Learning Under Regularization Framework Sparse Learning Under Regularization Framework YANG, Haiqin A Thesis Submitted in Partial Fulfilment of the Requirements for the Degree of Doctor of Philosophy in Computer Science and Engineering The Chinese

More information

心智科學大型研究設備共同使用服務計畫身體 心靈與文化整合影像研究中心. fmri 教育講習課程 I. Hands-on (2 nd level) Group Analysis to Factorial Design

心智科學大型研究設備共同使用服務計畫身體 心靈與文化整合影像研究中心. fmri 教育講習課程 I. Hands-on (2 nd level) Group Analysis to Factorial Design 心智科學大型研究設備共同使用服務計畫身體 心靈與文化整合影像研究中心 fmri 教育講習課程 I Hands-on (2 nd level) Group Analysis to Factorial Design 黃從仁助理教授臺灣大學心理學系 trhuang@ntu.edu.tw Analysis So+ware h"ps://goo.gl/ctvqce Where are we? Where are

More information

Asymptotic Statistics-III. Changliang Zou

Asymptotic Statistics-III. Changliang Zou Asymptotic Statistics-III Changliang Zou The multivariate central limit theorem Theorem (Multivariate CLT for iid case) Let X i be iid random p-vectors with mean µ and and covariance matrix Σ. Then n (

More information

Digital Image Processing

Digital Image Processing Dgtal Iage Processg Chater 08 Iage Coresso Dec. 30, 00 Istructor:Lh-Je Kau( 高立人 ) Deartet of Electroc Egeerg Natoal Tae Uversty of Techology /35 Lh-Je Kau Multeda Coucato Grou Natoal Tae Uv. of Techology

More information

Ch2. Atoms, Molecules and Ions

Ch2. Atoms, Molecules and Ions Ch2. Atoms, Molecules and Ions The structure of matter includes: (1)Atoms: Composed of electrons, protons and neutrons.(2.2) (2)Molecules: Two or more atoms may combine with one another to form an uncharged

More information

Chapter 13. Enzyme Kinetics ( 動力學 ) and Specificity ( 特異性 專一性 ) Biochemistry by. Reginald Garrett and Charles Grisham

Chapter 13. Enzyme Kinetics ( 動力學 ) and Specificity ( 特異性 專一性 ) Biochemistry by. Reginald Garrett and Charles Grisham Chapter 13 Enzyme Kinetics ( 動力學 ) and Specificity ( 特異性 專一性 ) Biochemistry by Reginald Garrett and Charles Grisham Y.T.Ko class version 2016 1 Essential Question What are enzymes? Features, Classification,

More information

Peter Hoff Linear and multilinear models April 3, GLS for multivariate regression 5. 3 Covariance estimation for the GLM 8

Peter Hoff Linear and multilinear models April 3, GLS for multivariate regression 5. 3 Covariance estimation for the GLM 8 Contents 1 Linear model 1 2 GLS for multivariate regression 5 3 Covariance estimation for the GLM 8 4 Testing the GLH 11 A reference for some of this material can be found somewhere. 1 Linear model Recall

More information

Notes on Random Vectors and Multivariate Normal

Notes on Random Vectors and Multivariate Normal MATH 590 Spring 06 Notes on Random Vectors and Multivariate Normal Properties of Random Vectors If X,, X n are random variables, then X = X,, X n ) is a random vector, with the cumulative distribution

More information

第二章 : Hydrostatics and Atmospheric Stability. Ben Jong-Dao Jou Autumn 2010

第二章 : Hydrostatics and Atmospheric Stability. Ben Jong-Dao Jou Autumn 2010 第二章 : Hydrostatics and Atmospheric Stability Ben Jong-Dao Jou Autumn 2010 Part I: Hydrostatics 1. Gravity 2. Geopotential: The concept of geopotential is used in measurement of heights in the atmosphere

More information

允許學生個人 非營利性的圖書館或公立學校合理使用本基金會網站所提供之各項試題及其解答 可直接下載而不須申請. 重版 系統地複製或大量重製這些資料的任何部分, 必須獲得財團法人臺北市九章數學教育基金會的授權許可 申請此項授權請電郵

允許學生個人 非營利性的圖書館或公立學校合理使用本基金會網站所提供之各項試題及其解答 可直接下載而不須申請. 重版 系統地複製或大量重製這些資料的任何部分, 必須獲得財團法人臺北市九章數學教育基金會的授權許可 申請此項授權請電郵 注意 : 允許學生個人 非營利性的圖書館或公立學校合理使用本基金會網站所提供之各項試題及其解答 可直接下載而不須申請 重版 系統地複製或大量重製這些資料的任何部分, 必須獲得財團法人臺北市九章數學教育基金會的授權許可 申請此項授權請電郵 ccmp@seed.net.tw Notice: Individual students, nonprofit libraries, or schools are

More information

Chapter 7 Propositional and Predicate Logic

Chapter 7 Propositional and Predicate Logic Chapter 7 Propositional and Predicate Logic 1 What is Artificial Intelligence? A more difficult question is: What is intelligence? This question has puzzled philosophers, biologists and psychologists for

More information

Lecture Note on Linear Algebra 14. Linear Independence, Bases and Coordinates

Lecture Note on Linear Algebra 14. Linear Independence, Bases and Coordinates Lecture Note on Linear Algebra 14 Linear Independence, Bases and Coordinates Wei-Shi Zheng, wszheng@ieeeorg, 211 November 3, 211 1 What Do You Learn from This Note Do you still remember the unit vectors

More information

奈米微污染控制工作小組 協辦單位 台灣賽默飛世爾科技股份有限公司 報名方式 本參訪活動由郭啟文先生負責 報名信箱

奈米微污染控制工作小組 協辦單位 台灣賽默飛世爾科技股份有限公司 報名方式 本參訪活動由郭啟文先生負責 報名信箱 SEMI AMC TF2 微污染防治專案為提升國內微污染量測之技術水平, 特舉辦離子層析儀之原理教學與儀器參訪活動, 邀請 AMC TF2 成員參加, 期盼讓 SEMI 會員對於離子層析技術於微汙染防治之儀器功能及未來之應用有進一步的認識 以實作參訪及教學模式, 使 SEMI 會員深刻瞭解離子層析技術之概念與原理, 並於活動中結合產業專家, 進行研討, 提出未來應用與發展方向之建議 參訪日期 中華民國

More information

Regression and Statistical Inference

Regression and Statistical Inference Regression and Statistical Inference Walid Mnif wmnif@uwo.ca Department of Applied Mathematics The University of Western Ontario, London, Canada 1 Elements of Probability 2 Elements of Probability CDF&PDF

More information