Economic Review (Otaru University of Commerce), Vol. 66, No. 2 & 3, December 2015

Expository supplement I to the paper "Asymptotic expansions for the estimators of Lagrange multipliers and associated parameters by the maximum likelihood and weighted score methods"

Haruhiko Ogasawara

This article gives the first half of an expository supplement to Ogasawara (2015).

1. Asymptotic expansions of the estimators with restrictions for model identification

In this section, the estimator $\hat{\theta}_{\mathrm{W}}$ with restrictions for model identification is dealt with, jointly with the associated estimator $\hat{\eta}_{\mathrm{W}}$ of the Lagrange multipliers. Recall that in this case, in (A.4), the estimators satisfy the weighted score equations together with the restrictions $h(\hat{\theta}_{\mathrm{W}}) = 0$. Expanding these equations about the population value $\theta$, retaining the quadratic term in $\hat{\theta}_{\mathrm{W}} - \theta$ with the factor $1/2$, the cubic term with the factor $1/6$ built from $\partial^3 l$, and the corresponding second- and third-order derivatives of $h$, gives a stochastic expansion of $\{(\hat{\theta}_{\mathrm{W}} - \theta)', \hat{\eta}_{\mathrm{W}}'\}'$ whose successive terms are of orders $O_p(n^{-1/2})$, $O_p(n^{-1})$ and $O_p(n^{-3/2})$, with remainder $O_p(n^{-2})$. (S.1)

From the above result, pre-multiplication by $\Lambda^* = \mathbf{I}^{*-1}$, the inverse of the augmented information matrix, and collection of terms of equal order give an explicit expansion in which the arrays $\mathbf{J}^{(3)}$ and $\mathbf{J}^{(4)}$ of higher-order derivatives of $l$ appear through their expectations $\mathrm{E}(\mathbf{J}^{(3)})$ and $\mathrm{E}(\mathbf{J}^{(4)})$ and the centered quantity $\mathbf{J}^{(3)} - \mathrm{E}(\mathbf{J}^{(3)})$, again with the factors $1/2$ and $1/6$ on the quadratic and cubic parts. (S.2)

Recalling $\Lambda^{(1)} \mathbf{M} \Lambda^{(1)} = \Lambda^{(1)}$, Equation (S.2) gives the following result: the expansion of $\hat{\theta}_{\mathrm{W}} - \theta$ in powers of $n^{-1/2}$, with leading term $n^{-1/2}\Lambda^{(1)} l^{(1)}$ and with the $O_p(n^{-1})$ and $O_p(n^{-3/2})$ terms, denoted by (A) and (B), involving the second- and third-order arrays, respectively. (S.3)

In (S.3), $\Lambda^{(i)} = O(1)$ and $l^{(i)} = O_p(n^{1/2})$, $i = 1, 2, 3$. Equations (S.4) and (S.5) define the arrays in these terms: $\Lambda^{(1)}_{\mathrm{W}} = \Lambda^{(1)}_{\mathrm{ML}} = \Lambda^{(1)}$ with $l^{(1)} = \partial l / \partial\theta$; the second- and third-order arrays $\Lambda^{(2)}_{\mathrm{ML}}$ and $\Lambda^{(3)}_{\mathrm{ML}} = (\Lambda^{(31)}_{\mathrm{ML}}, \Lambda^{(32)}_{\mathrm{ML}}, \Lambda^{(33)}_{\mathrm{ML}}, \Lambda^{(34)}_{\mathrm{ML}})$ are partitioned into $q \times q$ and $q \times q^*$ submatrices, where $q^* = q(q+1)/2$, and the corresponding vectors $l^{(2)}$ and $l^{(3)}$ collect $\partial l / \partial\theta$, $v(\mathbf{M})$ and $\mathrm{vec}\{\mathbf{J}^{(3)} - \mathrm{E}(\mathbf{J}^{(3)})\}$. (S.4), (S.5)

Equation (S.6) gives the elements of these arrays. For $1 \le i \le j \le q$ and $k^* = 1, \ldots, q^*$, the element $(\Lambda^{(2)}_{\mathrm{ML}})_{(ij)k^*}$ is written through $\mathrm{E}(\mathbf{J}^{(3)})$ and $\partial^2 h / \partial\theta\,\partial\theta'$; for $1 \le i \le j \le q$, $1 \le k^* \le l^* \le q^*$ and $m^* = 1, \ldots, q^*$, the elements of $\Lambda^{(31)}_{\mathrm{ML}}, \ldots, \Lambda^{(34)}_{\mathrm{ML}}$ are written through $\mathrm{E}(\mathbf{J}^{(3)})$ and $\mathrm{E}(\mathbf{J}^{(4)})$ with the derivatives of $h$ up to the third order and the factor $1/6$ on the cubic part. The weighted score arrays $\Lambda^{(2)}_{\mathrm{W}}$ and $\Lambda^{(3)}_{\mathrm{W}}$ differ from the ML ones through additional terms involving $\Lambda \mathbf{q}^*$, for $i = 1, \ldots, q$. (S.6)

Equation (S.3) is alternatively written as
$$\hat{\theta}_{\mathrm{W}} - \theta = \sum_{i=1}^{3} n^{-i/2}\, w_{(i)} + O_p(n^{-2}), \qquad \mathrm{(S.7)}$$
where the $O_p(1)$ terms $w_{(i)}$ collect the contributions of (S.3) of the corresponding orders; in particular, $w_{(1)} = n^{-1/2}\Lambda^{(1)}_{\mathrm{W}} l^{(1)} = n^{-1/2}\Lambda^{(1)}_{\mathrm{ML}} l^{(1)}$.

2. Properties of augmented matrices

Define column-wise $q$ blocks of a matrix $\mathbf{A}$ as $\mathbf{A} = (\mathbf{A}_{(1)}, \ldots, \mathbf{A}_{(q)})$, i.e., $\mathbf{A}_{(j)}$ ($j = 1, \ldots, q$) is the $j$-th column-wise block. Similarly, define row-wise $p$ blocks of $\mathbf{A}$ as $\mathbf{A}' = (\mathbf{A}^{(1)\prime}, \ldots, \mathbf{A}^{(p)\prime})$, i.e., $\mathbf{A}^{(i)}$ ($i = 1, \ldots, p$) is the $i$-th row-wise block. Let $\mathbf{B} = (\mathbf{B}_{(1)}, \mathbf{B}_{(2)})$, where $\mathbf{B}_{(1)}$ and $\mathbf{B}_{(2)}$ are $b \times q$ and $b \times r$ submatrices, respectively, with $q > r$, $q = r$ or $q < r$. Post-multiply $\mathbf{B}$ by
$$\mathbf{G} = \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} \\ \mathbf{C} & \mathbf{I}_{(r)} \end{pmatrix}.$$
Then $\mathbf{B}\mathbf{G}$ becomes $(\mathbf{B}_{(1)} + \mathbf{B}_{(2)}\mathbf{C} : \mathbf{B}_{(2)})$, with the colon being used to show separation for clarity. Matrix $\mathbf{B}$ is restored from $\mathbf{B}\mathbf{G}$ by subtracting $\mathbf{B}_{(2)}\mathbf{C}$ from the first column-wise block of $\mathbf{B}\mathbf{G}$, which is given by $\mathbf{B}\mathbf{G}\mathbf{G}^* = \mathbf{B}$ with
$$\mathbf{G}^* = \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} \\ -\mathbf{C} & \mathbf{I}_{(r)} \end{pmatrix},$$
since $\mathbf{G}^* = \mathbf{G}^{-1}$. That is, we have the elementary results:

Lemma S1.
$$\begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{C} \\ \mathbf{O} & \mathbf{I}_{(r)} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{I}_{(q)} & -\mathbf{C} \\ \mathbf{O} & \mathbf{I}_{(r)} \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} \\ \mathbf{C} & \mathbf{I}_{(r)} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} \\ -\mathbf{C} & \mathbf{I}_{(r)} \end{pmatrix}.$$

Let $\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}$ be a non-singular $(q+r) \times (q+r)$ matrix, where the $q \times q$ diagonal block $\mathbf{B}$ may be asymmetric and singular.

Lemma S2. The first column-wise and row-wise blocks of
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} \quad \text{and} \quad \begin{pmatrix} \mathbf{B} + \mathbf{D}\mathbf{C} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1}$$
are equal, respectively. Equivalently, the two matrices are different only in their second diagonal blocks.

Proof. Since
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix} \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} \\ \mathbf{C} & \mathbf{I}_{(r)} \end{pmatrix} = \begin{pmatrix} \mathbf{B} + \mathbf{D}\mathbf{C} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix},$$
we have, using Lemma S1,
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} \\ \mathbf{C} & \mathbf{I}_{(r)} \end{pmatrix} \begin{pmatrix} \mathbf{B} + \mathbf{D}\mathbf{C} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1}.$$
Noting that pre-multiplication of a matrix by $\begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} \\ \mathbf{C} & \mathbf{I}_{(r)} \end{pmatrix}$ does not change the first row-wise block of the multiplied matrix, we find that the first row-wise blocks of the inverted matrices are equal.

Similarly, from
$$\begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{D} \\ \mathbf{O} & \mathbf{I}_{(r)} \end{pmatrix} \begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix} = \begin{pmatrix} \mathbf{B} + \mathbf{D}\mathbf{C} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix},$$
we have, using Lemma S1,
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{B} + \mathbf{D}\mathbf{C} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{D} \\ \mathbf{O} & \mathbf{I}_{(r)} \end{pmatrix},$$
which shows that the first column-wise blocks of the inverted matrices are equal, since the post-multiplication does not change the first column-wise block. Q.E.D.

When $\mathbf{B}$ is non-singular and $q \ge r$, $\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1}$ is straightforwardly obtained by the formula for the inverse of a partitioned square matrix,
$$\begin{pmatrix} \mathbf{E} & \mathbf{F} \\ \mathbf{G} & \mathbf{H} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{E}^{-1} + \mathbf{E}^{-1}\mathbf{F}\mathbf{J}\mathbf{G}\mathbf{E}^{-1} & -\mathbf{E}^{-1}\mathbf{F}\mathbf{J} \\ -\mathbf{J}\mathbf{G}\mathbf{E}^{-1} & \mathbf{J} \end{pmatrix}, \qquad \mathbf{J} = (\mathbf{H} - \mathbf{G}\mathbf{E}^{-1}\mathbf{F})^{-1},$$
under the assumption of the existence of the inverses, which gives
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{B}^{-1} - \mathbf{B}^{-1}\mathbf{D}(\mathbf{C}\mathbf{B}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{B}^{-1} & \mathbf{B}^{-1}\mathbf{D}(\mathbf{C}\mathbf{B}^{-1}\mathbf{D})^{-1} \\ (\mathbf{C}\mathbf{B}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{B}^{-1} & -(\mathbf{C}\mathbf{B}^{-1}\mathbf{D})^{-1} \end{pmatrix}.$$

When $\mathbf{B}$ is possibly singular, while $\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}$ with $q \ge r$ and $\mathbf{B} + \mathbf{D}\mathbf{C}$ are non-singular, we have

Lemma S3. Let $\mathbf{A} = \mathbf{B} + \mathbf{D}\mathbf{C}$. Then
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}^{-1} & \mathbf{A}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1} \\ (\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}^{-1} & \mathbf{I}_{(r)} - (\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1} \end{pmatrix}.$$

Proof. Using the result in the proof of Lemma S2 and the formula for the inverse of a partitioned matrix,
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} \\ \mathbf{C} & \mathbf{I}_{(r)} \end{pmatrix} \begin{pmatrix} \mathbf{A} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} \\ \mathbf{C} & \mathbf{I}_{(r)} \end{pmatrix} \begin{pmatrix} \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}^{-1} & \mathbf{A}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1} \\ (\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}^{-1} & -(\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1} \end{pmatrix}$$
$$= \begin{pmatrix} \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}^{-1} & \mathbf{A}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1} \\ (\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}^{-1} & \mathbf{I}_{(r)} - (\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1} \end{pmatrix}. \qquad \text{Q.E.D.}$$
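The block formulas above are easy to probe numerically. The following is a minimal Python sketch (not part of the original derivation; the sizes $q = 4$, $r = 2$ and the random construction are arbitrary choices) verifying Lemma S3 with a singular $\mathbf{B}$:

```python
import numpy as np

rng = np.random.default_rng(0)
q, r = 4, 2
# Singular q x q block B (rank 2 < q), with generic C (r x q) and D (q x r).
B = rng.standard_normal((q, 2)) @ rng.standard_normal((2, q))
C = rng.standard_normal((r, q))
D = rng.standard_normal((q, r))

M = np.block([[B, D], [C, np.zeros((r, r))]])   # augmented matrix
A = B + D @ C                                   # non-singular for generic data
Ai = np.linalg.inv(A)
S = np.linalg.inv(C @ Ai @ D)                   # (C A^{-1} D)^{-1}

# Right-hand side of Lemma S3.
rhs = np.block([
    [Ai - Ai @ D @ S @ C @ Ai, Ai @ D @ S],
    [S @ C @ Ai,               np.eye(r) - S],
])
assert np.allclose(np.linalg.inv(M), rhs)
print("Lemma S3 verified")
```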

In the case of the augmented information matrix per observation, with $q > r$, we have

Theorem S1. Let
$$\mathbf{I}^* = \begin{pmatrix} \mathbf{I} & \mathbf{H}' \\ \mathbf{H} & \mathbf{O} \end{pmatrix} \equiv \begin{pmatrix} \mathbf{I} & \mathbf{C} \\ \mathbf{C}' & \mathbf{O} \end{pmatrix},$$
where the information matrix $\mathbf{I}$ may be singular, while $\mathbf{A} \equiv \mathbf{I} + \mathbf{C}\mathbf{C}'$ and $\mathbf{C}'\mathbf{A}^{-1}\mathbf{C}$ are non-singular. Then
$$\mathbf{I}^{*-1} = \begin{pmatrix} \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{A}^{-1} & \mathbf{A}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1} \\ (\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{A}^{-1} & \mathbf{I}_{(r)} - (\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1} \end{pmatrix}.$$

Proof. Use Lemma S3 with $\mathbf{B} = \mathbf{I}$ and $\mathbf{D} = \mathbf{C}$. Q.E.D.

It is known that the asymptotic covariance matrix of $\{n^{1/2}(\hat{\theta} - \theta)', n^{-1/2}\hat{\eta}'\}'$ under correct model specification is given by $\mathbf{I}^{*-1} \begin{pmatrix} \mathbf{I} & \mathbf{O} \\ \mathbf{O} & \mathbf{O} \end{pmatrix} \mathbf{I}^{*-1}$, whose alternative expression (see Silvey, 1959, Lemma 6; Jennrich, 1974; Lee, 1979) is given by the following:

Corollary S1. When $\mathbf{I}$ in the augmented matrix may be singular,
$$\mathbf{I}^{*-1} \begin{pmatrix} \mathbf{I} & \mathbf{O} \\ \mathbf{O} & \mathbf{O} \end{pmatrix} \mathbf{I}^{*-1} = \begin{pmatrix} \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{A}^{-1} & \mathbf{O} \\ \mathbf{O} & (\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1} - \mathbf{I}_{(r)} \end{pmatrix}.$$

Proof. Write $\mathbf{P}$ and $\mathbf{Q}$ for the first diagonal and off-diagonal blocks of $\mathbf{I}^{*-1}$ in Theorem S1, and use $\mathbf{I} = \mathbf{A} - \mathbf{C}\mathbf{C}'$. Since $\mathbf{P}\mathbf{C} = \mathbf{O}$, we have $\mathbf{P}\mathbf{I} = \mathbf{P}\mathbf{A} = \mathbf{I}_{(q)} - \mathbf{Q}\mathbf{C}'$, so that $\mathbf{P}\mathbf{I}\mathbf{P} = \mathbf{P}$ and, noting $\mathbf{C}'\mathbf{Q} = \mathbf{I}_{(r)}$, $\mathbf{P}\mathbf{I}\mathbf{Q} = \mathbf{Q} - \mathbf{Q}(\mathbf{C}'\mathbf{Q}) = \mathbf{O}$. Further,
$$\mathbf{Q}'\mathbf{I}\mathbf{Q} = \mathbf{Q}'\mathbf{A}\mathbf{Q} - (\mathbf{Q}'\mathbf{C})(\mathbf{C}'\mathbf{Q}) = (\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1} - \mathbf{I}_{(r)}. \qquad \text{Q.E.D.}$$
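A numerical illustration, again with arbitrary sizes of my own choosing ($q = 4$, $r = 2$) and a rank-deficient information matrix, checks both Theorem S1 and the block-diagonal covariance expression of Corollary S1:

```python
import numpy as np

rng = np.random.default_rng(1)
q, r = 4, 2
# Singular, symmetric PSD "information" matrix I0 (rank q - r), constraint block C.
X = rng.standard_normal((q, q - r))
I0 = X @ X.T
C = rng.standard_normal((q, r))

A = I0 + C @ C.T
Ai = np.linalg.inv(A)
S = np.linalg.inv(C.T @ Ai @ C)
P = Ai - Ai @ C @ S @ C.T @ Ai

# Theorem S1: explicit inverse of the augmented matrix.
Istar = np.block([[I0, C], [C.T, np.zeros((r, r))]])
rhs = np.block([[P, Ai @ C @ S], [S @ C.T @ Ai, np.eye(r) - S]])
assert np.allclose(np.linalg.inv(Istar), rhs)

# Corollary S1: the sandwich is block diagonal with blocks P and S - I_r.
E = np.block([[I0, np.zeros((q, r))], [np.zeros((r, q)), np.zeros((r, r))]])
assert np.allclose(rhs @ E @ rhs,
                   np.block([[P, np.zeros((q, r))],
                             [np.zeros((r, q)), S - np.eye(r)]]))
print("Theorem S1 and Corollary S1 verified")
```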

When $\mathbf{I}$ is non-singular, since $\mathbf{I}^{*-1}$ is obtained by the formula for the inverse of a partitioned matrix as
$$\mathbf{I}^{*-1} = \begin{pmatrix} \mathbf{I}^{-1} - \mathbf{I}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{I}^{-1} & \mathbf{I}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1} \\ (\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{I}^{-1} & -(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1} \end{pmatrix},$$
it follows that
$$\mathbf{I}^{*-1} \begin{pmatrix} \mathbf{I} & \mathbf{O} \\ \mathbf{O} & \mathbf{O} \end{pmatrix} \mathbf{I}^{*-1} = \begin{pmatrix} \mathbf{I}^{-1} - \mathbf{I}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{I}^{-1} & \mathbf{O} \\ \mathbf{O} & (\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1} \end{pmatrix},$$
which is also known (Aitchison & Silvey, 1958).

The result $(\mathbf{I}^{*-1})^{(11)}\, \mathbf{I}\, (\mathbf{I}^{*-1})^{(11)} = (\mathbf{I}^{*-1})^{(11)}$ in Corollary S1, where $(\mathbf{I}^{*-1})^{(11)}$ denotes the first diagonal block of $\mathbf{I}^{*-1}$, shows that $\mathbf{I}$ is a generalized inverse of $(\mathbf{I}^{*-1})^{(11)}$. However, the reverse is not true (Jennrich, 1974), which is shown as follows:
$$\mathbf{I}\,(\mathbf{I}^{*-1})^{(11)}\,\mathbf{I} = (\mathbf{A} - \mathbf{C}\mathbf{C}')\{\mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{A}^{-1}\}(\mathbf{A} - \mathbf{C}\mathbf{C}')$$
$$= \{\mathbf{I}_{(q)} - \mathbf{C}(\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{A}^{-1}\}(\mathbf{A} - \mathbf{C}\mathbf{C}') = \mathbf{I} - \mathbf{C}\{(\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1} - \mathbf{I}_{(r)}\}\mathbf{C}',$$
which indicates that $(\mathbf{I}^{*-1})^{(11)}$ is not a generalized inverse of $\mathbf{I}$.

Noting that Theorem S1 holds when $\mathbf{I}$ is non-singular as well as when $\mathbf{I}$ is singular, we have

Corollary S2. When $\mathbf{I}$ is non-singular and $\mathbf{A} = \mathbf{I} + \mathbf{C}\mathbf{C}'$,
$$\begin{pmatrix} \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{A}^{-1} & \mathbf{A}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1} \\ (\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{A}^{-1} & \mathbf{I}_{(r)} - (\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1} \end{pmatrix} = \begin{pmatrix} \mathbf{I}^{-1} - \mathbf{I}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{I}^{-1} & \mathbf{I}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1} \\ (\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{I}^{-1} & -(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1} \end{pmatrix}.$$

Proof. An alternative direct proof of Corollary S2 is given as follows. First, we derive the equality of the second diagonal blocks. Since
$$\mathbf{C}'\mathbf{A}^{-1}\mathbf{C} = \mathbf{C}'(\mathbf{I} + \mathbf{C}\mathbf{C}')^{-1}\mathbf{C} = \mathbf{C}'\{\mathbf{I}^{-1} - \mathbf{I}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C} + \mathbf{I}_{(r)})^{-1}\mathbf{C}'\mathbf{I}^{-1}\}\mathbf{C} = \mathbf{C}'\mathbf{I}^{-1}\mathbf{C}\,(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C} + \mathbf{I}_{(r)})^{-1},$$
we have
$$\mathbf{I}_{(r)} - (\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1} = \mathbf{I}_{(r)} - (\mathbf{C}'\mathbf{I}^{-1}\mathbf{C} + \mathbf{I}_{(r)})(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1} = -(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1}.$$
The equality of the lower-left (or upper-right) blocks is given as follows. Since $\mathbf{C}'\mathbf{A}^{-1} = (\mathbf{C}'\mathbf{I}^{-1}\mathbf{C} + \mathbf{I}_{(r)})^{-1}\mathbf{C}'\mathbf{I}^{-1}$,
$$(\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{A}^{-1} = (\mathbf{C}'\mathbf{I}^{-1}\mathbf{C} + \mathbf{I}_{(r)})(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1}(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C} + \mathbf{I}_{(r)})^{-1}\mathbf{C}'\mathbf{I}^{-1} = (\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{I}^{-1},$$
where the commutativity of $(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1}$ and $(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C} + \mathbf{I}_{(r)})^{-1}$ is used. The equality of the first diagonal blocks is given as follows. Using $\mathbf{A}^{-1}\mathbf{C} = \mathbf{I}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C} + \mathbf{I}_{(r)})^{-1}$ and the result just obtained,
$$\mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{A}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{A}^{-1} = \mathbf{I}^{-1} - \mathbf{I}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C} + \mathbf{I}_{(r)})^{-1}\{\mathbf{I}_{(r)} + (\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1}\}\mathbf{C}'\mathbf{I}^{-1}$$
$$= \mathbf{I}^{-1} - \mathbf{I}^{-1}\mathbf{C}(\mathbf{C}'\mathbf{I}^{-1}\mathbf{C})^{-1}\mathbf{C}'\mathbf{I}^{-1}. \qquad \text{Q.E.D.}$$
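As a sketch with arbitrary example values (a random non-singular $\mathbf{I}$ of order $q = 4$ and $r = 2$ restrictions), the two sides of Corollary S2 can be compared numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
q, r = 4, 2
X = rng.standard_normal((q, q))
I0 = X @ X.T + q * np.eye(q)        # non-singular, symmetric PD information
C = rng.standard_normal((q, r))

def aug_blocks(B):
    """Blocks of the Theorem S1 pattern with B in place of A."""
    Bi = np.linalg.inv(B)
    S = np.linalg.inv(C.T @ Bi @ C)
    return Bi - Bi @ C @ S @ C.T @ Bi, Bi @ C @ S, S

P_A, Q_A, S_A = aug_blocks(I0 + C @ C.T)   # A-version (left-hand side)
P_I, Q_I, S_I = aug_blocks(I0)             # I-version (right-hand side)

lhs = np.block([[P_A, Q_A], [Q_A.T, np.eye(r) - S_A]])
rhs = np.block([[P_I, Q_I], [Q_I.T, -S_I]])
assert np.allclose(lhs, rhs)
print("Corollary S2 verified")
```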

A different expression of the result of Lemma S3 when $\mathbf{B}$ is non-singular is given from the result before Lemma S3 as follows.

Lemma S4. When $\mathbf{B}$ is non-singular and $\mathbf{A} = \mathbf{B} + \mathbf{D}\mathbf{C}$ as in Lemma S3,
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}^{-1} & \mathbf{A}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1} \\ (\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}^{-1} & \mathbf{I}_{(r)} - (\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1} \end{pmatrix} = \begin{pmatrix} \mathbf{B}^{-1} - \mathbf{B}^{-1}\mathbf{D}(\mathbf{C}\mathbf{B}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{B}^{-1} & \mathbf{B}^{-1}\mathbf{D}(\mathbf{C}\mathbf{B}^{-1}\mathbf{D})^{-1} \\ (\mathbf{C}\mathbf{B}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{B}^{-1} & -(\mathbf{C}\mathbf{B}^{-1}\mathbf{D})^{-1} \end{pmatrix}.$$

Let $\mathbf{C}$ and $\mathbf{D}$ in Lemmas S2, S3 and S4 be partitioned as $\mathbf{C} = (\mathbf{C}_{(1)}', \mathbf{C}_{(2)}')'$ and $\mathbf{D} = (\mathbf{D}_{(1)}, \mathbf{D}_{(2)})$, where $\mathbf{C}_{(1)}'$ and $\mathbf{D}_{(1)}$ are $q \times s$ submatrices, and $\mathbf{C}_{(2)}'$ and $\mathbf{D}_{(2)}$ are $q \times t$ submatrices with $s + t = r$, after possible simultaneous permutations of the rows of $\mathbf{C}$ and the columns of $\mathbf{D}$. Assume that $\mathbf{B} + \mathbf{D}_{(2)}\mathbf{C}_{(2)}$ is non-singular, where $\mathbf{B}$ may or may not be singular. The quantity $t$ ($t \le r$) is not necessarily the minimum value to yield a non-singular $\mathbf{B} + \mathbf{D}_{(2)}\mathbf{C}_{(2)}$ when $\mathbf{B}$ is singular.

Lemma S5. The first and second row-wise blocks of
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} \quad \text{and} \quad \begin{pmatrix} \mathbf{B} + \mathbf{D}_{(2)}\mathbf{C}_{(2)} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1},$$
corresponding to the first $q$ rows and the following $s$ rows, are equal, respectively. Similarly, the first and second column-wise blocks of the matrices, corresponding to the first $q$ columns and the following $s$ columns, are equal.

Proof. From the identity
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix} \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} & \mathbf{O} \\ \mathbf{O} & \mathbf{I}_{(s)} & \mathbf{O} \\ \mathbf{C}_{(2)} & \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix} = \begin{pmatrix} \mathbf{B} + \mathbf{D}_{(2)}\mathbf{C}_{(2)} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix},$$
we have
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} & \mathbf{O} \\ \mathbf{O} & \mathbf{I}_{(s)} & \mathbf{O} \\ \mathbf{C}_{(2)} & \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix} \begin{pmatrix} \mathbf{B} + \mathbf{D}_{(2)}\mathbf{C}_{(2)} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1},$$
where the pre-multiplication changes only the last $t$ rows. Similarly, from the identity
$$\begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} & \mathbf{D}_{(2)} \\ \mathbf{O} & \mathbf{I}_{(s)} & \mathbf{O} \\ \mathbf{O} & \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix} \begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix} = \begin{pmatrix} \mathbf{B} + \mathbf{D}_{(2)}\mathbf{C}_{(2)} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix},$$
using Lemma S1 we have
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{B} + \mathbf{D}_{(2)}\mathbf{C}_{(2)} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} & \mathbf{D}_{(2)} \\ \mathbf{O} & \mathbf{I}_{(s)} & \mathbf{O} \\ \mathbf{O} & \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix},$$
where the post-multiplication changes only the last $t$ columns. The above results show the required equalities. Q.E.D.

Define $\mathbf{A}_{(2)} = \mathbf{B} + \mathbf{D}_{(2)}\mathbf{C}_{(2)}$. Then, using Lemma S5, we have

Theorem S2. The three submatrices other than the second diagonal block on the right-hand side of the identity
$$\begin{pmatrix} \mathbf{B} + \mathbf{D}_{(2)}\mathbf{C}_{(2)} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{A}_{(2)}^{-1} - \mathbf{A}_{(2)}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}_{(2)}^{-1} & \mathbf{A}_{(2)}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1} \\ (\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}_{(2)}^{-1} & -(\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1} \end{pmatrix}$$
are unchanged irrespective of the choice of $\mathbf{C}_{(2)}$ and $\mathbf{D}_{(2)}$.

Theorem S3. Theorem S2 holds even if $\mathbf{B} + \mathbf{D}_{(2)}\mathbf{C}_{(2)}$ is replaced by $\mathbf{B} + \mathbf{D}_{(2)}\mathbf{E}\mathbf{C}_{(2)}$, where $\mathbf{E}$ is a non-singular $t \times t$ matrix.

Proof. From the identity
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix} \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} & \mathbf{O} \\ \mathbf{O} & \mathbf{I}_{(s)} & \mathbf{O} \\ \mathbf{E}\mathbf{C}_{(2)} & \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix} = \begin{pmatrix} \mathbf{B} + \mathbf{D}_{(2)}\mathbf{E}\mathbf{C}_{(2)} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix},$$
we obtain
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} & \mathbf{O} \\ \mathbf{O} & \mathbf{I}_{(s)} & \mathbf{O} \\ \mathbf{E}\mathbf{C}_{(2)} & \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix} \begin{pmatrix} \mathbf{B} + \mathbf{D}_{(2)}\mathbf{E}\mathbf{C}_{(2)} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1}.$$
Similarly,
$$\begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} & \mathbf{D}_{(2)}\mathbf{E} \\ \mathbf{O} & \mathbf{I}_{(s)} & \mathbf{O} \\ \mathbf{O} & \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix} \begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix} = \begin{pmatrix} \mathbf{B} + \mathbf{D}_{(2)}\mathbf{E}\mathbf{C}_{(2)} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}$$
gives
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{B} + \mathbf{D}_{(2)}\mathbf{E}\mathbf{C}_{(2)} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} & \mathbf{D}_{(2)}\mathbf{E} \\ \mathbf{O} & \mathbf{I}_{(s)} & \mathbf{O} \\ \mathbf{O} & \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix}.$$
These two results show that $\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1}$ and $\begin{pmatrix} \mathbf{B} + \mathbf{D}_{(2)}\mathbf{E}\mathbf{C}_{(2)} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1}$ are the same except for their second diagonal blocks. Q.E.D.
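The invariance claimed in Theorems S2 and S3 can be probed numerically as well; in the following sketch the sizes $q = 4$, $s = t = 1$ and the random blocks are arbitrary choices, and the three blocks built from $\mathbf{A}_{(2)} = \mathbf{B} + \mathbf{D}_{(2)}\mathbf{E}\mathbf{C}_{(2)}$ are compared with those of the original inverse for two non-singular choices of $\mathbf{E}$:

```python
import numpy as np

rng = np.random.default_rng(3)
q, s, t = 4, 1, 1
r = s + t
B = rng.standard_normal((q, q - 1)) @ rng.standard_normal((q - 1, q))  # singular B
C = rng.standard_normal((r, q))
D = rng.standard_normal((q, r))
C2, D2 = C[s:, :], D[:, s:]                      # C_(2), D_(2)

M = np.block([[B, D], [C, np.zeros((r, r))]])
Mi = np.linalg.inv(M)

def blocks(E):
    """Three submatrices of Theorems S2/S3 from A_(2) = B + D_(2) E C_(2)."""
    A2i = np.linalg.inv(B + D2 @ E @ C2)
    S = np.linalg.inv(C @ A2i @ D)
    return A2i - A2i @ D @ S @ C @ A2i, A2i @ D @ S, S @ C @ A2i

for E in (np.eye(t), rng.uniform(1.0, 2.0, (t, t))):
    P, Q, R = blocks(E)
    # Whatever the non-singular E, the three blocks agree with the
    # corresponding blocks of the inverse of the original augmented matrix.
    assert np.allclose(P, Mi[:q, :q])
    assert np.allclose(Q, Mi[:q, q:])
    assert np.allclose(R, Mi[q:, :q])
print("Theorems S2 and S3 verified")
```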

Crowder's (1984, Lemma 6) result for the augmented matrix, derived in a different way, is a special case of Theorem S3 when $\mathbf{B} = \mathbf{I}$ and $\mathbf{D} = \mathbf{C}' = \mathbf{H}'$. Note that the arbitrariness of $\mathbf{E}$ corresponds to the arbitrary expression of the restrictions, $\mathbf{E}\mathbf{h} = \mathbf{0}$ with $\mathbf{E}$ being non-singular.

Using Lemmas S1, S5 and Theorem S2, we have

Corollary S3. When $\mathbf{B}$ is possibly singular,
$$\begin{pmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{C} & \mathbf{O} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{I}_{(q)} & \mathbf{O} & \mathbf{O} \\ \mathbf{O} & \mathbf{I}_{(s)} & \mathbf{O} \\ \mathbf{C}_{(2)} & \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix} \begin{pmatrix} \mathbf{A}_{(2)}^{-1} - \mathbf{A}_{(2)}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}_{(2)}^{-1} & \mathbf{A}_{(2)}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1} \\ (\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}_{(2)}^{-1} & -(\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1} \end{pmatrix}$$
$$= \begin{pmatrix} \mathbf{A}_{(2)}^{-1} - \mathbf{A}_{(2)}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}_{(2)}^{-1} & \mathbf{A}_{(2)}^{-1}\mathbf{D}(\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1} \\ (\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1}\mathbf{C}\mathbf{A}_{(2)}^{-1} & \begin{pmatrix} \mathbf{O} & \mathbf{O} \\ \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix} - (\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1} \end{pmatrix}.$$

Equating the results of Lemma S3 and Corollary S3, we find that
$$\begin{pmatrix} \mathbf{O} & \mathbf{O} \\ \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix} - (\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1} = \mathbf{I}_{(r)} - (\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1},$$
or equivalently
$$(\mathbf{C}\mathbf{A}^{-1}\mathbf{D})^{-1} = \begin{pmatrix} \mathbf{I}_{(s)} & \mathbf{O} \\ \mathbf{O} & \mathbf{O} \end{pmatrix} + (\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1}.$$
When $\mathbf{B}$ is non-singular,
$$(\mathbf{C}\mathbf{A}_{(2)}^{-1}\mathbf{D})^{-1} = \begin{pmatrix} \mathbf{O} & \mathbf{O} \\ \mathbf{O} & \mathbf{I}_{(t)} \end{pmatrix} + (\mathbf{C}\mathbf{B}^{-1}\mathbf{D})^{-1}.$$
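The last two identities admit the same kind of numerical check; a minimal sketch of this (with arbitrary sizes $q = 4$, $s = t = 1$ and generic random blocks) is:

```python
import numpy as np

rng = np.random.default_rng(4)
q, s, t = 4, 1, 1
r = s + t
B = rng.standard_normal((q, q))                  # non-singular (generic)
C = rng.standard_normal((r, q))
D = rng.standard_normal((q, r))
A = B + D @ C                                    # full A of Lemma S3
A2 = B + D[:, s:] @ C[s:, :]                     # A_(2) of Corollary S3

lhs = np.linalg.inv(C @ np.linalg.inv(A) @ D)
mid = np.linalg.inv(C @ np.linalg.inv(A2) @ D)
rhs = np.linalg.inv(C @ np.linalg.inv(B) @ D)

Es = np.zeros((r, r)); Es[:s, :s] = np.eye(s)    # diag(I_s, O)
Et = np.zeros((r, r)); Et[s:, s:] = np.eye(t)    # diag(O, I_t)
assert np.allclose(lhs, Es + mid)   # (C A^{-1} D)^{-1} = diag(I_s, O) + (C A2^{-1} D)^{-1}
assert np.allclose(mid, Et + rhs)   # (C A2^{-1} D)^{-1} = diag(O, I_t) + (C B^{-1} D)^{-1}
print("Equating identities verified")
```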

3. Applications of the general formulas to the example

3.1 Non-studentized estimators

In the example, $m_i$ successes are observed in $n_i$ independent Bernoulli trials with population proportion $p_i$ in the $i$-th group ($i = 1, 2$), $n = n_1 + n_2$, and the restriction equates the two proportions to a common value $\pi$. The log-likelihood with $\theta = (p_1, p_2)'$ is
$$l = \sum_{i=1}^{2} \{m_i \log p_i + (n_i - m_i)\log(1 - p_i)\},$$
whose first and second derivatives are the standard binomial expressions; under the restriction $p_1 = p_2 = \pi$, the expected information per observation is
$$-\mathrm{E}\!\left(n^{-1}\frac{\partial^2 l}{\partial\theta\,\partial\theta'}\right) = \frac{1}{\pi(1-\pi)} \begin{pmatrix} c_1 & 0 \\ 0 & c_2 \end{pmatrix} = \mathbf{I}, \qquad c_1 = n_1/n = O(1), \quad c_2 = n_2/n = O(1).$$
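As a quick symbolic check of the last display, the following sketch (an addition to the text; it restates the two-group binomial log-likelihood above) differentiates $l$ and evaluates the expected Hessian at $p_1 = p_2 = \pi$:

```python
import sympy as sp

p1, p2, pi, n1, n2, m1, m2 = sp.symbols('p1 p2 pi n1 n2 m1 m2', positive=True)
# Two independent binomial samples: m_i successes out of n_i trials.
l = m1*sp.log(p1) + (n1 - m1)*sp.log(1 - p1) \
    + m2*sp.log(p2) + (n2 - m2)*sp.log(1 - p2)

n = n1 + n2
H = sp.hessian(l, (p1, p2))
# Expected Hessian: E(m_i) = n_i * p_i; then impose the restriction p1 = p2 = pi.
EH = H.subs({m1: n1*p1, m2: n2*p2}).subs({p1: pi, p2: pi})
info = (-EH / n).applyfunc(sp.simplify)          # information per observation
expected = sp.diag(n1/n, n2/n) / (pi*(1 - pi))   # diag(c1, c2)/{pi(1-pi)}
assert (info - expected).applyfunc(sp.simplify) == sp.zeros(2, 2)
print(info)
```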

With $\mathbf{M} = n^{-1}\mathrm{E}\{(\partial l/\partial\theta)(\partial l/\partial\theta)'\}$, direct evaluation shows $\mathbf{M} = \mathbf{I}$ under correct specification. The augmented matrix is $\mathbf{I}^* = \begin{pmatrix} \mathbf{I} & \mathbf{H}' \\ \mathbf{H} & \mathbf{O} \end{pmatrix}$, and $\Lambda^* = \mathbf{I}^{*-1}$ is obtained explicitly: it is a symmetric matrix whose elements are rational functions of $c_1$, $c_2$ and $\pi(1-\pi)$, with patterned entries involving $c_1c_2$ and the contrast $(c_2, -c_1)$. The vector $\Lambda\mathbf{q}^*$ reduces to a fixed vector in $(c_1, c_2)$ multiplied by a scalar $k(\cdot)$ that is a rational function of $c_1c_2$ and $\pi(1-\pi)$.

The nonzero elements of $\mathbf{J}^{(3)}$ and of $\mathbf{J}^{(3)} - \mathrm{E}(\mathbf{J}^{(3)})$ are the binomial third-derivative quantities, with numerators linear in $m_i$ and $n_i - m_i$ and denominators that are third powers of $\pi$ and $1 - \pi$; their expectations carry the factor $(1 - 2\pi)$ divided by $\{\pi(1-\pi)\}^2$. The nonzero elements of $\mathrm{E}(\mathbf{J}^{(4)})$ carry the factor $6$ with third powers of $\pi(1-\pi)$ in the denominators.

Substituting these quantities into (S.4)-(S.6) gives the elements of $\Lambda^{(2)}_{\mathrm{ML}}$ for $1 \le i \le j \le q$ and $k^* = 1, \ldots, q^*$, and those of $\Lambda^{(31)}_{\mathrm{ML}}, \ldots, \Lambda^{(34)}_{\mathrm{ML}}$ for the corresponding index ranges, each as an explicit function of $c_1$, $c_2$ and $\pi$, with the factor $1/6$ on the cubic part of $\Lambda^{(34)}_{\mathrm{ML}}$ split into a first and a second term that are evaluated separately. The weighted score arrays $\Lambda^{(2)}_{\mathrm{W}}$ and $\Lambda^{(3)}_{\mathrm{W}}$ add to the ML ones the terms proportional to $k(\cdot)$ arising from $\Lambda\mathbf{q}^*$, and consequently differ from them by explicit multiples of $k(\cdot)$ in each element ($i, j = 1, 2$).

3.2 Studentized estimators

3.2.1 $t_{\mathrm{W}}$ and $\check{t}_{\mathrm{W}}$

Using the unrestricted parameters rather than the restricted one for differentiation, the studentizing quantities are obtained as follows.

The augmented expected information takes the form $\mathbf{I}^* = \begin{pmatrix} \mathbf{I} & \mathbf{H}' \\ \mathbf{H} & \mathbf{O} \end{pmatrix}$ with symmetric $\mathbf{I}$ whose elements are written in $c_1$, $c_2$ and $\pi(1-\pi)$, and its inverse is written as
$$\mathbf{I}^{*-1} = \begin{pmatrix} \mathbf{A} & \mathbf{b} \\ \mathbf{b}' & d \end{pmatrix},$$
where the $2 \times 2$ block $\mathbf{A}$, the vector $\mathbf{b}$ and the scalar $d$ are explicit functions of $c_1$, $c_2$ and $\pi$. Since direct differentiation of $\mathbf{I}^{*-1}$ with respect to $\theta_k$ is tedious, we use the formula
$$\frac{\partial \mathbf{I}^{*-1}}{\partial \theta_k} = -\mathbf{I}^{*-1}\, \frac{\partial \mathbf{I}^{*}}{\partial \theta_k}\, \mathbf{I}^{*-1},$$
since $\partial \mathbf{I}^{*}/\partial \theta_k$ looks simple; the required elementwise derivatives $\partial(\mathbf{I}^{*-1})_{ij}/\partial\theta_k$ then follow after evaluation, each involving the powers $\{\pi(1-\pi)\}^{-1/2}$ and $\{\pi(1-\pi)\}^{-3/2}$ ($j = 1, 2$).
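The derivative-of-an-inverse identity holds for any smooth non-singular matrix function, and a finite-difference check is immediate; the sketch below (an illustration with an arbitrary 3 x 3 example, not the matrix of the example above) shows it:

```python
import numpy as np

def A(t):
    """An arbitrary smooth, non-singular matrix function of a scalar t."""
    return np.array([[2.0 + t, t**2, 0.0],
                     [1.0,     3.0,  np.sin(t)],
                     [0.0,     t,    4.0]])

t, h = 0.7, 1e-6
Ai = np.linalg.inv(A(t))
dA = (A(t + h) - A(t - h)) / (2 * h)             # numerical dA/dt
dAi_fd = (np.linalg.inv(A(t + h)) - np.linalg.inv(A(t - h))) / (2 * h)
dAi_formula = -Ai @ dA @ Ai                      # d(A^{-1})/dt = -A^{-1}(dA/dt)A^{-1}
assert np.allclose(dAi_fd, dAi_formula, atol=1e-6)
print("d(A^-1)/dt identity verified")
```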

In $t_{\mathrm{W}} = n^{1/2}\,\hat{i}_{\mathrm{W}}^{-1/2}(\hat{\pi}_{\mathrm{W}} - \pi)$, the quantity $\hat{i}_{\mathrm{W}}^{-1}/n$ is the first (equivalently, the second) diagonal element of $\hat{\mathbf{I}}^{*-1}$, i.e., $\mathbf{I}^{*-1}$ with $\theta$ replaced by the weighted score estimates, while $\check{t}_{\mathrm{W}}$ is defined using the restricted value instead. That is, $\hat{\pi}_{\mathrm{W}}$ and $t_{\mathrm{W}}$ reduce to those of a single group proportion with size $n$ and pseudocounts $4k$ in total. The first and the non-zero second derivatives of the relevant diagonal element with respect to $\theta_j$ and $\theta_{k^*}$ are evaluated from the formula of the previous subsection, with the factors $1/4$ and $3/8$ and the powers $\{\pi(1-\pi)\}^{-3/2}$ and $\{\pi(1-\pi)\}^{-5/2}$ appearing ($j, k = 1, 2$); incidentally, the resulting quantity is a scalar in the case of a single group.

3.2.2 $\tilde{t}_{\mathrm{W}}$

Here $\tilde{t}_{\mathrm{W}}$ is studentized by the estimated asymptotic variance $\mathrm{avar}(n^{1/2}\hat{\pi}_{\mathrm{W}})$. Expanding the reciprocal square root of the estimated variance about its population counterpart gives a stochastic expansion of $\tilde{t}_{\mathrm{W}}$ with remainder $O_p(n^{-3/2})$, whose coefficients are obtained from the first and second derivatives derived above. Note that while $t_{\mathrm{W}} = n^{1/2}\,\hat{i}_{\mathrm{W}}^{-1/2}(\hat{\pi}_{\mathrm{W}} - \pi)$, $\tilde{t}_{\mathrm{W}}$ uses the weighted sample sizes and proportions, with $\hat{\pi}_{\mathrm{W}} = \pi + O_p(n^{-1/2})$.

4. Results using special properties of the example

4.1 Partial derivatives with respect to $\hat{p}_1$ and $\hat{p}_2$

Note that $\hat{\pi} = c_1\hat{p}_1 + c_2\hat{p}_2$, so that the first-order partial derivatives with respect to $(\hat{p}_1, \hat{p}_2)'$ follow directly; in particular, $\partial\hat{\pi}/\partial(\hat{p}_1, \hat{p}_2)' = (c_1, c_2)$.

The second- and third-order partial derivatives of the statistics of interest with respect to $\hat{p}_1$ and $\hat{p}_2$ follow in the same way: they are arrays whose elements are polynomials in $c_1$ and $c_2$ divided by powers of $\pi(1-\pi)$, with patterned sign structures built from $(c_2, -c_1)$-type contrasts and products such as $c_1c_2$, $c_1^2c_2$ and $c_1c_2^2$. These arrays are the ingredients for the asymptotic cumulants below.

4.2 Asymptotic cumulants of $\hat{\pi}$ and $\hat{\eta}$ by maximum likelihood

Since $\hat{\pi} = c_1\hat{p}_1 + c_2\hat{p}_2 = (m_1 + m_2)/n$ is a usual sample proportion of size $n$, its cumulants are the standard binomial ones:
$$\kappa_2(\hat{\pi}) = \frac{\pi(1-\pi)}{n}, \qquad \alpha_3(\hat{\pi}) = \frac{1 - 2\pi}{\{n\pi(1-\pi)\}^{1/2}}, \qquad \alpha_4(\hat{\pi}) = \frac{1 - 6\pi(1-\pi)}{n\pi(1-\pi)}.$$
On the other hand, the Lagrange multiplier estimator $\hat{\eta}$ involves the contrast $c_1c_2(\hat{p}_1 - \hat{p}_2)$ divided by $\hat{\pi}(1-\hat{\pi}) = (c_1\hat{p}_1 + c_2\hat{p}_2)\{1 - (c_1\hat{p}_1 + c_2\hat{p}_2)\}$, so we use the partial derivatives of Section 4.1 together with
$$n\,\mathrm{cov}(\hat{\mathbf{p}}) = \pi(1-\pi)\,\mathrm{diag}(c_1^{-1}, c_2^{-1})$$
under the restriction, and the multivariate third- and fourth-order cumulant arrays of $\hat{\mathbf{p}} = (\hat{p}_1, \hat{p}_2)'$, whose non-zero elements are of the $\mathrm{vec\,diag}(c_1^{-2}, c_2^{-2})$ and $\mathrm{diag}(c_1^{-3}, c_2^{-3})$ forms multiplied by the binomial factors $(1 - 2\pi)$ and $1 - 6\pi(1-\pi)$. Collecting the delta-method terms by orders of $n^{-1/2}$ gives the first four asymptotic cumulants, the first and second with remainders $O(n^{-2})$, the third and fourth with remainders one order smaller than their leading terms; the coefficients, including the terms labelled (A) and (B), are polynomials in $c_1$, $c_2$ and $\pi$ divided by powers of $\pi(1-\pi)$, and simplify using $c_1 + c_2 = 1$ and the identity $4c_1c_2 = 1 - (c_1 - c_2)^2$.
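The standardized cumulants quoted for $\hat\pi$ can be confirmed directly; the following sketch (an addition, with arbitrary example values) compares them with the exact skewness and excess kurtosis of a binomial proportion via scipy:

```python
import numpy as np
from scipy.stats import binom

n, pi = 200, 0.3
# Exact skewness and excess kurtosis of m ~ Bin(n, pi); pi_hat = m/n has the
# same standardized cumulants because these are scale invariant.
skew, kurt = binom.stats(n, pi, moments='sk')
alpha3 = (1 - 2*pi) / np.sqrt(n * pi * (1 - pi))
alpha4 = (1 - 6*pi*(1 - pi)) / (n * pi * (1 - pi))
assert np.isclose(skew, alpha3) and np.isclose(kurt, alpha4)
print(f"alpha3 = {alpha3:.6f}, alpha4 = {alpha4:.6f}")
```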

4.3 Asymptotic cumulants of $t$ and $\tilde{t}$ by maximum likelihood

Let $t = n^{1/2}(\hat{\pi} - \pi)/\{\hat{\pi}(1 - \hat{\pi})\}^{1/2}$, where $\hat{\pi}$ is the MLE. The asymptotic cumulants of $t$ reduce to those of the studentized sample proportion (see, e.g., Ogasawara, 2012): writing $p = \pi$ and $q = 1 - p$ for brevity, the asymptotic mean is of order $n^{-1/2}$, the higher-order variance term is of order $n^{-1}$, and the third and fourth standardized cumulants are of orders $n^{-1/2}$ and $n^{-1}$, respectively, each coefficient being a rational function of $(1 - 2p)$ and powers of $(pq)^{1/2}$, with remainders one order of $n^{-1}$ smaller; the asymptotic covariances $n\,\mathrm{acov}(\cdot, \cdot)$ between $t$ and the non-studentized quantities involve the same ingredients.

For the second statistic, noting the variance of $\hat{p}_1 - \hat{p}_2$ under the restriction, we find that $\tilde{t}$ is the studentized version of the contrast $c_1c_2(\hat{p}_1 - \hat{p}_2)$, i.e., of the Lagrange multiplier estimator. Its first-, second- and third-order partial derivatives with respect to $(\hat{p}_1, \hat{p}_2)'$, evaluated as in Section 4.1, involve $\{\pi(1-\pi)\}^{-1/2}$, $\{\pi(1-\pi)\}^{-3/2}$ and $\{\pi(1-\pi)\}^{-5/2}$ with the factors $1/2$, $3/4$ and $3/8$. Substituting them, together with $n\,\mathrm{cov}(\hat{\mathbf{p}})$ and the higher-order cumulant arrays of $\hat{\mathbf{p}}$, into the delta-method expansions gives $\kappa_1(\tilde{t})$ to $\alpha_4(\tilde{t})$ with the terms (A) and (B) collected as before; a quadratic form appearing in the third-cumulant computation vanishes identically, which simplifies $\alpha_3(\tilde{t})$ to a single $O(n^{-1/2})$ term. The asymptotic covariances $n\,\mathrm{acov}(\tilde{t}, \cdot)$ with the non-studentized quantities are obtained as by-products (recall the definitions of (A) and (B) above).

4.4 Asymptotic cumulants of $\hat{\pi}_{\mathrm{W}}$ and $\hat{\eta}_{\mathrm{W}}$ by the weighted score method

With the pseudocounts of the weighted score method, each group receives $k$ pseudo-successes and $k$ pseudo-failures, so that
$$n_{\mathrm{W}} = n + 4k, \qquad c_{1\mathrm{W}} = \frac{n_1 + 2k}{n + 4k}, \quad c_{2\mathrm{W}} = \frac{n_2 + 2k}{n + 4k}, \qquad p_{1\mathrm{W}} = \frac{m_1 + k}{n_1 + 2k}, \quad p_{2\mathrm{W}} = \frac{m_2 + k}{n_2 + 2k},$$
and
$$\hat{\pi}_{\mathrm{W}} = c_{1\mathrm{W}}p_{1\mathrm{W}} + c_{2\mathrm{W}}p_{2\mathrm{W}} = \frac{m_1 + m_2 + 2k}{n + 4k} = \hat{\pi} + n^{-1}\,2k(1 - 2\hat{\pi}) + O_p(n^{-2}).$$
In the following, only the asymptotic cumulants different from those by ML are shown. From the above expansion,
$$\kappa_1(\hat{\pi}_{\mathrm{W}}) = \pi + n^{-1}\,2k(1 - 2\pi) + O(n^{-2}),$$
while the higher-order terms of the second and third cumulants of $\hat{\pi}_{\mathrm{W}}$ are shifted from those of $\hat{\pi}$ by amounts proportional to $k$ of the next order of $n^{-1}$ (recall the corresponding ML results). The multiplier estimator is formed from the weighted quantities, $\hat{\eta}_{\mathrm{W}} \propto n_{\mathrm{W}}c_{1\mathrm{W}}c_{2\mathrm{W}}(p_{1\mathrm{W}} - p_{2\mathrm{W}})$, and we use the expansions
$$c_{1\mathrm{W}} = c_1 + n^{-1}\,2k(1 - 2c_1) + O(n^{-2}), \qquad p_{1\mathrm{W}} = \hat{p}_1 + n^{-1}c_1^{-1}k(1 - 2\hat{p}_1) + O_p(n^{-2}),$$
with the corresponding expressions for $c_{2\mathrm{W}}$ and $p_{2\mathrm{W}}$. Substituting these into the expansions of Sections 4.2 and 4.3 and collecting the terms (A) and (B) gives the asymptotic cumulants of the weighted score quantities, where $c_1 + c_2 = 1$ and $\hat{\pi} = c_1\hat{p}_1 + c_2\hat{p}_2$ are used.
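To make the pseudocount construction concrete, here is a small sketch (the formulas are those displayed above; the data values and the choice $k = 1/2$ are arbitrary) computing the weighted score quantities and confirming the stated $O_p(n^{-1})$ relation between $\hat\pi_{\mathrm{W}}$ and $\hat\pi$:

```python
import numpy as np

def weighted_score_quantities(m1, n1, m2, n2, k=0.5):
    """Pseudocount quantities of the weighted score method:
    each group gets k pseudo-successes and k pseudo-failures."""
    n = n1 + n2
    nW = n + 4*k
    c1W, c2W = (n1 + 2*k) / nW, (n2 + 2*k) / nW
    p1W, p2W = (m1 + k) / (n1 + 2*k), (m2 + k) / (n2 + 2*k)
    piW = c1W * p1W + c2W * p2W          # = (m1 + m2 + 2k) / (n + 4k)
    return nW, (c1W, c2W), (p1W, p2W), piW

m1, n1, m2, n2, k = 23, 80, 31, 120, 0.5
n = n1 + n2
_, _, _, piW = weighted_score_quantities(m1, n1, m2, n2, k)
pi_hat = (m1 + m2) / n
# pi_W = pi_hat + 2k(1 - 2 pi_hat)/n + O(n^{-2})
approx = pi_hat + 2*k*(1 - 2*pi_hat) / n
assert abs(piW - (m1 + m2 + 2*k) / (n + 4*k)) < 1e-12
assert abs(piW - approx) < 10 / n**2     # remainder is O(n^{-2})
print(f"pi_hat = {pi_hat:.5f}, pi_W = {piW:.5f}")
```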

References

Aitchison, J., & Silvey, S. D. (1958). Maximum-likelihood estimation of parameters subject to restraints. The Annals of Mathematical Statistics, 29, 813-828.

Crowder, M. (1984). On constrained maximum likelihood estimation with non-i.i.d. observations. Annals of the Institute of Statistical Mathematics, 36, Part A, 239-249.

Jennrich, R. I. (1974). Simplified formulae for standard errors in maximum likelihood factor analysis. British Journal of Mathematical and Statistical Psychology, 27, 122-131.

Lee, S.-Y. (1979). Constrained estimation in covariance structure analysis. Biometrika, 66, 539-545.

Ogasawara, H. (2012). Cornish-Fisher expansions using sample cumulants and monotonic transformations. Journal of Multivariate Analysis, 103, 1-18.

Ogasawara, H. (2015). Asymptotic expansions for the estimators of Lagrange multipliers and associated parameters by the maximum likelihood and weighted score methods. Submitted for publication.

Silvey, S. D. (1959). The Lagrangian multiplier test. The Annals of Mathematical Statistics, 30, 389-407.
