Matrix Algebra, Class Notes (part 2) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved.


1 Converting Matrices Into (Long) Vectors

Convention: Let A be a T × m matrix; the notation vec(A) will mean the Tm-element column vector whose first set of T elements is the first column of A, that is a.1, using the dot notation for columns; the second set of T elements is the second column a.2, and so on. Thus A = [a.1, a.2, …, a.m] in the dot notation.

An immediate consequence of the above Convention is that the vec of a product of two matrices contains a Kronecker product with the identity (remember to transpose the second matrix and write it before the Kronecker product).

Exercise: Let A, B be T × m and m × q respectively. Then, using the Kronecker product notation, we have

vec(AB) = (B′ ⊗ I) vec(A) = (I ⊗ A) vec(B).   (1)

For example, if

A = [ a₁  a₂ ]      B = [ b₁  b₂ ]
    [ a₃  a₄ ],         [ b₃  b₄ ],

then

AB = [ a₁b₁ + a₂b₃   a₁b₂ + a₂b₄ ]
     [ a₃b₁ + a₄b₃   a₃b₂ + a₄b₄ ],

so vec(AB) = (a₁b₁ + a₂b₃, a₃b₁ + a₄b₃, a₁b₂ + a₂b₄, a₃b₂ + a₄b₄)′. The claim vec(AB) = (B′ ⊗ I₂) vec(A), where B′ ⊗ I₂ is the 4 × 4 matrix whose 2 × 2 blocks are bⱼ I₂ arranged in the pattern of B′, can then be verified by direct multiplication.

1.1 Proposition 1

Vec of a product of three matrices has a Kronecker product with the identity unless the vec of the middle matrix is used: Let A₁, A₂, A₃ be conformably dimensioned matrices. Then

vec(A₁A₂A₃) = (I ⊗ A₁A₂) vec(A₃)   (2)
            = (A₃′ ⊗ A₁) vec(A₂)
            = (A₃′A₂′ ⊗ I) vec(A₁).
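As a quick numerical check of (1), the following minimal numpy sketch verifies both equalities on a random instance; note that vec stacks columns, which corresponds to flattening in Fortran order (numpy's default is row-major).

```python
import numpy as np

# Verify vec(AB) = (B' kron I) vec(A) = (I kron A) vec(B).
rng = np.random.default_rng(0)
T, m, q = 3, 4, 2
A = rng.standard_normal((T, m))
B = rng.standard_normal((m, q))

def vec(M):
    # vec stacks the COLUMNS of M, hence Fortran (column-major) order.
    return M.flatten(order="F")

lhs = vec(A @ B)
assert np.allclose(lhs, np.kron(B.T, np.eye(T)) @ vec(A))
assert np.allclose(lhs, np.kron(np.eye(q), A) @ vec(B))
print("vec(AB) identities verified")
```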

1.2 Proposition 2

Vec of a sum of two matrices is the sum of the vecs: Let A, B be T × n. Then

vec(A + B) = vec(A) + vec(B).   (3)

Corollary: Let A, B, C, D be conformably dimensioned matrices. Then, from (1),

vec[(A + B)(C + D)] = [(I ⊗ A) + (I ⊗ B)][vec(C) + vec(D)]   (4)
                    = [(C′ ⊗ I) + (D′ ⊗ I)][vec(A) + vec(B)].

1.3 Proposition 3

Trace of a product of two matrices in the vec notation (prime of the vec of a prime): Let A, B be conformably dimensioned matrices. Then

tr(AB) = vec(A′)′ vec(B) = vec(B′)′ vec(A).   (5)

For the example above, tr(AB) = (a₁b₁ + a₂b₃) + (a₃b₂ + a₄b₄) = [a₁ a₂ a₃ a₄](b₁, b₃, b₂, b₄)′, which illustrates the first part of Proposition 3, since vec(A′) = (a₁, a₂, a₃, a₄)′ and vec(B) = (b₁, b₃, b₂, b₄)′.

1.4 Proposition 4

Trace of a product of three matrices involves the prime of a vec and a Kronecker product with the identity: Let A₁, A₂, A₃ be conformably dimensioned matrices. Then

tr(A₁A₂A₃) = vec(A₁′)′ (A₃′ ⊗ I) vec(A₂)   (6)
           = vec(A₁′)′ (I ⊗ A₂) vec(A₃)
           = vec(A₂′)′ (I ⊗ A₃) vec(A₁)
           = vec(A₂′)′ (A₁′ ⊗ I) vec(A₃)
           = vec(A₃′)′ (A₂′ ⊗ I) vec(A₁).
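A numpy sketch to verify (5) and the first line of (6) on random conformable matrices:

```python
import numpy as np

# tr(AB) = vec(A')'vec(B), and tr(A1 A2 A3) = vec(A1')' (A3' kron I) vec(A2).
rng = np.random.default_rng(1)
A1 = rng.standard_normal((3, 4))   # T x n
A2 = rng.standard_normal((4, 5))   # n x q
A3 = rng.standard_normal((5, 3))   # q x T
B = rng.standard_normal((4, 3))

vec = lambda M: M.flatten(order="F")

assert np.isclose(np.trace(A1 @ B), vec(A1.T) @ vec(B))
# (6) is vec(A1')' vec(A2 A3), with vec(A2 A3) = (A3' kron I_n) vec(A2).
assert np.isclose(np.trace(A1 @ A2 @ A3),
                  vec(A1.T) @ np.kron(A3.T, np.eye(4)) @ vec(A2))
print("trace identities verified")
```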

2 Basics of Vector and Matrix Differentiation

In the derivation of least squares we minimize the error sum of squares by using the first-order conditions, i.e., we differentiate the error sum of squares. This can be formulated in matrix notation. Sometimes we need to differentiate quantities like tr(AX) with respect to the elements of X, or quantities like Ax and z′Ax with respect to the elements of (the vectors) x and/or z. Although no fundamentally new concept is involved in carrying out such matrix differentiations, they can seem cumbersome. With simple rules and conventions, matrix differentiation becomes a powerful tool for research.

Convention of using the same numerator subscript along each row: Let y = ψ(x), where y is a T × 1 vector and x is an n × 1 vector. The symbol

∂y/∂x = [∂yᵢ/∂xⱼ],  i = 1, 2, …, T and j = 1, 2, …, n,   (1)

denotes the T × n matrix of first-order partial derivatives, the so-called Jacobian matrix of the transformation from x to y. For example, if y = (y₁, y₂, y₃)′ and x = (x₁, x₂)′, the convention states that

∂y/∂x = [ ∂y₁/∂x₁  ∂y₁/∂x₂ ]
        [ ∂y₂/∂x₁  ∂y₂/∂x₂ ]
        [ ∂y₃/∂x₁  ∂y₃/∂x₂ ]

is, in general, a T × n (here 3 × 2) matrix. Observe that the i-th row contains the derivatives of the i-th element of y with respect to the elements of x. The numerator is y, and its subscript is fixed along each row according to the convention. When both x and y are vectors, ∂y/∂x has as many rows as there are rows in y and as many columns as there are rows in x. In particular, if y is 1 × 1 (i.e., a scalar), then the above Convention implies that ∂y/∂x is a row vector. If we wish to represent it as a column vector we may do so by writing ∂y/∂x′, or (∂y/∂x)′. On the other hand, if x is a scalar, ∂y/∂x is a column vector. In general, if Y is a T × p matrix and x is an n × 1 vector, then ∂Y/∂x is a Tp × n matrix:

∂Y/∂x = ∂vec(Y)/∂x,   (2)

where ∂/∂x applied to each scalar element makes a 1 × n ROW vector, matching the dimension of x′.

In the context of Taylor series and elsewhere it may be convenient to depart from this convention and let ∂y/∂x be a column vector. We now state some useful results involving matrix differentiation. Instead of formal proofs, we indicate some analogies with traditional calculus and make some other comments to help the reader remember the results.

In (scalar) calculus, if y = 3x then ∂y/∂x = 3. Similarly, in matrix algebra, the derivative of a matrix times a vector is the matrix: Let y be a T × 1 vector, x an n × 1 vector, and A a T × n matrix which does not depend on x. Then

y = Ax implies ∂y/∂x = A.   (3)

Note that the i-th element of y is given by yᵢ = Σₖ aᵢₖxₖ (k = 1, …, n). Hence ∂yᵢ/∂xⱼ = aᵢⱼ.

Exercise 1: If y = a′x = x′a, where a and x are T × 1 vectors, show that ∂y/∂x is a row vector according to our convention, and that it equals a′.

Exercise 2: If S = y′y, where y is a T × 1 vector, show that ∂S/∂β = 0, where β and 0 are p × 1 vectors, since S is not a function of β at all. This is useful in the context of minimizing the error sum of squares u′u in the regression model y = Xβ + u.

Exercise 3: If S = y′Xβ, where y is a T × 1 vector, X is a T × p matrix, and β is a p × 1 vector, show that ∂S/∂β = y′X. This is also used in minimizing the residual sum of squares in regression.

Chain Rule in Matrix Differentiation: In calculus, if y = f(x) and x = g(θ), then ∂y/∂θ = (∂y/∂x)(∂x/∂θ). Similarly, in matrix algebra we have the following. If y = Ax, as in (3) above, except that the vector x is now a function of another set of variables, say those contained in the r-element column vector θ, then we have (the T × r matrix)

∂y/∂θ = (∂y/∂x)(∂x/∂θ) = A (∂x/∂θ),  where ∂x/∂θ is an n × r matrix.   (4)
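A finite-difference numpy sketch makes the convention concrete: the Jacobian of y = Ax, recovered column by column, is exactly A, as in (3).

```python
import numpy as np

# Recover the Jacobian of y = Ax numerically; row i holds dy_i/dx_j.
rng = np.random.default_rng(2)
T, n = 3, 4
A = rng.standard_normal((T, n))
x0 = rng.standard_normal(n)
eps = 1e-6

J = np.empty((T, n))
for j in range(n):
    dx = np.zeros(n)
    dx[j] = eps
    J[:, j] = (A @ (x0 + dx) - A @ x0) / eps

assert np.allclose(J, A, atol=1e-4)
print("dy/dx = A verified")
```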

Convention for second-order derivatives (vec the matrix of first partials before taking second partials): Let y = ψ(x), where y is a T × 1 vector and x is an n × 1 vector. By the symbol ∂²y/∂x∂x we shall mean (the Tn × n matrix)

∂²y/∂x∂x = ∂/∂x vec[∂y/∂x],   (5)

so that the second-order partial is a matrix of dimension (Tn) × n. Operationally, one first converts the T × n matrix of first partials illustrated by (1) above into a long Tn × 1 vector, and then computes the second derivatives of each element of the long vector with respect to the n elements of x written along the n columns, giving n columns for each of the Tn rows.

Chain rule for second-order partials w.r.t. θ: Let y = Ax be as in (3) above. Then

∂²y/∂θ∂θ = (A ⊗ I_r) ∂²x/∂θ∂θ.   (6)

Exercise: True or False? ∂²y/∂θ∂θ is of dimension Tnr × nr, (A ⊗ I_r) is of dimension Tn × n, and ∂²x/∂θ∂θ is nr × r.

First-order partial when both x and A are functions of θ: Let y = Ax, where y is T × 1, A is T × n, x is n × 1, and both A and x depend on the r-element vector θ. Then the T × r matrix is

∂y/∂θ = (x′ ⊗ I_T) ∂A/∂θ + A (∂x/∂θ),   (7)

where ∂A/∂θ = ∂vec(A)/∂θ is Tn × r.

Next we consider the differentiation of bilinear and quadratic forms.

Derivative of a bilinear form involves the first vector prime times the matrix (or the second vector prime times the matrix prime): Let y = z′Ax, where y is a scalar, z is T × 1, A is T × n, x is n × 1, and A is independent of z and x. Then the 1 × T vector of derivatives is

∂y/∂z = x′A′,  and the 1 × n vector is  ∂y/∂x = z′A.

First derivative of a quadratic form: Let y = x′Ax, where x is n × 1 and A is an n × n square matrix independent of x. Then the 1 × n vector is

∂y/∂x = x′(A + A′).
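The two gradient formulas are easy to confirm numerically; a forward-difference numpy sketch:

```python
import numpy as np

# Check dy/dx = z'A for the bilinear form y = z'Ax,
# and dy/dx = x'(Q + Q') for the quadratic form y = x'Qx.
rng = np.random.default_rng(3)
T, n = 3, 4
A = rng.standard_normal((T, n))
z = rng.standard_normal(T)
x = rng.standard_normal(n)
eps = 1e-6

e = np.eye(n)
g_bilinear = np.array([(z @ A @ (x + eps*e[j]) - z @ A @ x) / eps for j in range(n)])
assert np.allclose(g_bilinear, z @ A, atol=1e-4)

Q = rng.standard_normal((n, n))
g_quadratic = np.array(
    [((x + eps*e[j]) @ Q @ (x + eps*e[j]) - x @ Q @ x) / eps for j in range(n)])
assert np.allclose(g_quadratic, x @ (Q + Q.T), atol=1e-4)
print("bilinear and quadratic form gradients verified")
```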

Exercise 1: If A is a symmetric matrix, show that ∂y/∂x = 2x′A.

Exercise 2: Show that ∂(β′X′Xβ)/∂β = 2β′X′X.

Exercise 3: In a regression model y = Xβ + u, minimize u′u and show that a necessary condition for a minimum is 0 = 2β′X′X − 2y′X; solve for β. (A numerical sketch follows below.)

Second derivative of a quadratic form: Let A, y, and x be as above. Then

∂²y/∂x∂x = A + A′,

and, for the special case where A is symmetric, ∂²y/∂x∂x = 2A.

Derivatives of a bilinear form with respect to θ: If y = z′Ax, where z is T × 1, A is T × n, x is n × 1, and both z and x are functions of the r-element vector θ, while A is independent of θ, then

∂y/∂θ = x′A′ (∂z/∂θ) + z′A (∂x/∂θ),

where ∂y/∂θ is 1 × r, ∂z/∂θ is T × r, and ∂x/∂θ is n × r. The second derivative is as follows:

∂²y/∂θ∂θ = (∂z/∂θ)′ A (∂x/∂θ) + (∂x/∂θ)′ A′ (∂z/∂θ)
         + (x′A′ ⊗ I) ∂²z/∂θ∂θ + (z′A ⊗ I) ∂²x/∂θ∂θ.

Derivatives of a quadratic form with respect to θ: Consider the quadratic form y = x′Ax, where x is n × 1, A is n × n, and x is a function of the r-element vector θ, while A is independent of θ. Then

∂y/∂θ = x′(A′ + A)(∂x/∂θ),
∂²y/∂θ∂θ = (∂x/∂θ)′(A + A′)(∂x/∂θ) + (x′[A + A′] ⊗ I) ∂²x/∂θ∂θ.
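For Exercise 3, a minimal numpy sketch that solves the first-order condition 0 = 2β′X′X − 2y′X, i.e., the normal equations X′Xβ = X′y:

```python
import numpy as np

# OLS from the first-order condition: X'X b = X'y.
rng = np.random.default_rng(4)
T, p = 50, 3
X = rng.standard_normal((T, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(T)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # b = (X'X)^{-1} X'y
print("beta_hat =", beta_hat)                  # close to beta_true
```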

Derivatives of a symmetric quadratic form with respect to θ: Consider the same situation as in the preceding result, but suppose in addition that A is symmetric. Then

∂y/∂θ = 2x′A (∂x/∂θ),
∂²y/∂θ∂θ = 2(∂x/∂θ)′ A (∂x/∂θ) + (2x′A ⊗ I) ∂²x/∂θ∂θ.

First derivative of a bilinear form w.r.t. the matrix: Let y = a′Xb, where a and b are n × 1 vectors of constants and X is an n × n square matrix. Then the n × n matrix is

∂y/∂X = ab′.

First derivative of a quadratic form w.r.t. the matrix: Let y = a′Xa, where a is an n × 1 vector of constants and X is an n × n symmetric square matrix. Then the n × n matrix is

∂y/∂X = 2aa′ − diag(aa′),

where diag(·) denotes the diagonal matrix based on the diagonal elements of the indicated matrix expression.
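The diag(·) correction appears because a symmetric X has only n(n+1)/2 free elements: perturbing x_ij for i ≠ j necessarily perturbs x_ji as well. A finite-difference numpy sketch using symmetric perturbations:

```python
import numpy as np

# For symmetric X, a perturbation of x_ij (i != j) also moves x_ji, so
# d(a'Xa)/dx_ij = 2 a_i a_j off the diagonal and a_i^2 on it: 2aa' - diag(aa').
rng = np.random.default_rng(5)
n = 4
a = rng.standard_normal(n)
X = rng.standard_normal((n, n))
X = (X + X.T) / 2
eps = 1e-6

G = np.empty((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = eps
        E[j, i] = eps          # keep the perturbed matrix symmetric
        G[i, j] = (a @ (X + E) @ a - a @ X @ a) / eps

aa = np.outer(a, a)
assert np.allclose(G, 2*aa - np.diag(np.diag(aa)), atol=1e-4)
print("d(a'Xa)/dX for symmetric X verified")
```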

3 Differentiation of the Trace of a Matrix

Convention: If it is desired to differentiate, say, tr(AB) with respect to the elements of A, the operation involved will be interpreted as the rematricization of a vector, in the following sense. Note that ∂tr(AB)/∂vec(A) is a vector (exercise: what dimension?). Having evaluated it, we put the resulting vector in matrix form, expressed by ∂tr(AB)/∂A.

Let A be a square matrix of order m. Then

∂tr(A)/∂A = I.

If the elements of A are functions of the r-element vector θ, then

∂tr(A)/∂θ = [∂tr(A)/∂vec(A)] [∂vec(A)/∂θ] = vec(I)′ ∂vec(A)/∂θ.

Differentiation of the trace of a product of a number of matrices: Differentiating the trace of products of matrices with respect to the elements of one of the matrix factors is a special case of differentiating (1) a linear form a′x, where a is a vector, or (2) nonlinear forms, including the bilinear form z′Ax, where A is a matrix and z and x are appropriately dimensioned vectors, and the quadratic form x′Ax. Hence the second-order partials easily follow from the corresponding results regarding these forms.

(1) Trace of a linear form. Derivative of tr(AX) w.r.t. X is A′: Let A be T × n and X be n × T; then

∂tr(AX)/∂X = A′.

Exercise: Verify that ∂tr(A′B)/∂B = A.
Exercise: Verify this by first choosing A = I, the identity matrix.

If X is a function of the elements of the vector θ, then

∂tr(AX)/∂θ = [∂tr(AX)/∂vec(X)] [∂vec(X)/∂θ] = vec(A′)′ ∂vec(X)/∂θ.

(2) Trace of nonlinear forms: Let A be T × n, X be n × q, and B be q × T; then

∂tr(AXB)/∂X = A′B′.

If X is a function of the r-element vector θ, then

∂tr(AXB)/∂θ = vec(A′B′)′ ∂vec(X)/∂θ.

Derivatives of the trace of four matrices (skip the matrix being differentiated with respect to, and prime all the others): Let A be T × n, X be n × q, B be q × r, and Z be r × T; then

∂tr(AXBZ)/∂X = A′Z′B′,  and  ∂tr(AXBZ)/∂Z = B′X′A′.

If X and Z are functions of the r-element vector θ, then

∂tr(AXBZ)/∂θ = vec(A′Z′B′)′ ∂vec(X)/∂θ + vec(B′X′A′)′ ∂vec(Z)/∂θ.
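A numpy sketch checking ∂tr(AXB)/∂X = A′B′ element by element:

```python
import numpy as np

# Perturb each element of X and compare with the closed form A'B'.
rng = np.random.default_rng(6)
T, n, q = 2, 3, 4
A = rng.standard_normal((T, n))
X = rng.standard_normal((n, q))
B = rng.standard_normal((q, T))
eps = 1e-6

G = np.empty((n, q))
for i in range(n):
    for j in range(q):
        E = np.zeros((n, q))
        E[i, j] = eps
        G[i, j] = (np.trace(A @ (X + E) @ B) - np.trace(A @ X @ B)) / eps

assert np.allclose(G, A.T @ B.T, atol=1e-4)
print("d tr(AXB)/dX = A'B' verified")
```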

Let A be T × T, X be q × T, and B be q × q; then

∂tr(AX′BX)/∂X = B′XA′ + BXA.

Exercise: Verify that ∂tr(X′AXB)/∂X = AXB + A′XB′.

If X is a function of the r-element vector θ, then, since tr(AX′BX) = vec(X)′(A′ ⊗ B)vec(X),

∂tr(AX′BX)/∂θ = vec(X)′ [(A′ ⊗ B) + (A ⊗ B′)] ∂vec(X)/∂θ.

Derivative of the trace of a power is that power times the matrix raised to one lower power, transposed:

∂tr(Aⁿ)/∂A = n(Aⁿ⁻¹)′.

4 Differentiation of Determinants

Let A be a square matrix of order T; then

∂|A|/∂A = C,

where C is the matrix of cofactors of A, defined as C = (c_ij), whose (i,j)-th element is c_ij = (−1)^(i+j) det(A_ij), where A_ij denotes the (T−1) × (T−1) matrix obtained by deleting the i-th row and j-th column of A. If the elements of A are functions of the r elements of the vector θ, then

∂|A|/∂θ = vec(C)′ ∂vec(A)/∂θ.

Derivative of the log of a determinant is simply the transpose of its inverse:

∂ log(det A)/∂A = (A⁻¹)′.

If A is symmetric,

∂ log(det A)/∂A = 2A⁻¹ − diag(A⁻¹)  (note: a_ij = a_ji),

where diag(W) denotes a matrix based on the selection of the diagonal terms only of W.
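A numpy sketch confirming ∂ log(det A)/∂A = (A⁻¹)′ by finite differences (the test matrix is made diagonally dominant so its determinant stays positive):

```python
import numpy as np

# Check d log det(A) / dA = (A^{-1})' entry by entry.
rng = np.random.default_rng(7)
n = 4
A = n * np.eye(n) + 0.5 * rng.standard_normal((n, n))   # diagonally dominant, det > 0
eps = 1e-6

G = np.empty((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = eps
        G[i, j] = (np.log(np.linalg.det(A + E)) - np.log(np.linalg.det(A))) / eps

assert np.allclose(G, np.linalg.inv(A).T, atol=1e-4)
print("d log det(A)/dA = (A^{-1})' verified")
```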

5 Further Matrix Derivative Formulas for (Long) Vec and Inverse Matrices

Derivative of a product of three matrices with respect to the vec operator applied to the middle matrix involves a Kronecker product of the transpose of the last matrix and the first matrix:

∂vec(AXB)/∂vec(X) = B′ ⊗ A.

Derivative of the inverse matrix with respect to the elements of the original matrix:

∂(X⁻¹)/∂x_ij = −X⁻¹ Z X⁻¹,

where Z is a matrix of mostly zeroes, except for a 1 in the (i,j)-th position. In this formula, if x_ij = x_ji, making X a symmetric matrix, the same formula holds except that the matrix Z has one more 1 in the symmetric (j,i)-th position. Why minus? When the matrix X is 1 × 1, or scalar, we know from elementary calculus that the derivative of (1/X) is −1/X². The above formula is a generalization of this. Exercise: Verify the above formulas with simple examples.
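The exercise is easily done numerically; a numpy sketch for one (i, j) position:

```python
import numpy as np

# Check d(X^{-1})/dx_ij = -X^{-1} Z X^{-1}, with Z having a single 1 at (i, j).
rng = np.random.default_rng(8)
n, i, j = 3, 0, 2
X = n * np.eye(n) + rng.standard_normal((n, n))   # safely invertible
eps = 1e-6

Z = np.zeros((n, n))
Z[i, j] = 1.0
numeric = (np.linalg.inv(X + eps * Z) - np.linalg.inv(X)) / eps
analytic = -np.linalg.inv(X) @ Z @ np.linalg.inv(X)

assert np.allclose(numeric, analytic, atol=1e-4)
print("d(X^{-1})/dx_ij verified")
```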

6 Some Matrix Results for Multivariate Normal Variables

If x ~ N(µ, V), i.e., x is an n-dimensional (multivariate) normal with mean vector µ and n × n covariance matrix V, a linear transformation of x is also normal. Consider the linear transformation y = Ax + b, where y and b are n × 1 vectors and A is an n × n matrix. Then

x ~ N(µ, V) and y = Ax + b  imply  y ~ N(Aµ + b, AVA′).   (1)

If x and y are jointly multivariate normal random variables,

[ x ]       ( [ E(x) ]   [ V_xx  V_xy ] )
[ y ]  ~  N( [ E(y) ] ,  [ V_yx  V_yy ] ),   (2)

then the distribution of x conditional on y is also normal:

x | y ~ N( E(x) + V_xy V_yy⁻¹ (y − E(y)),  V_xx − V_xy V_yy⁻¹ V_yx ).   (3)

Similarly, y conditional on x is also normal:

y | x ~ N( E(y) + V_yx V_xx⁻¹ (x − E(x)),  V_yy − V_yx V_xx⁻¹ V_xy ).   (4)

If x is a p × 1 vector of multivariate normal variables with mean vector µ and covariance matrix V, that is, x ~ N(µ, V), and A and B are p × p symmetric matrices of constants, it can be shown that

E[(x′Ax)(x′Bx)] = tr(AV) tr(BV) + 2 tr(AVBV) + (µ′Aµ) tr(BV) + (µ′Bµ) tr(AV)
                + 4µ′(AVB)µ + (µ′Aµ)(µ′Bµ),

Cov[(x′Ax), (x′Bx)] = 2 tr(AVBV) + 4µ′(AVB)µ,

var(x′Ax) = 2 tr[(AV)²] + 4µ′(AVA)µ.

For a proof see Graybill (1983, Matrices with Applications in Statistics, Wadsworth, Belmont, Calif., p. 367).
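A Monte Carlo sketch checking the variance formula var(x′Ax) = 2 tr[(AV)²] + 4µ′AVAµ on a random instance:

```python
import numpy as np

# Simulate x ~ N(mu, V) and compare the sample variance of x'Ax with theory.
rng = np.random.default_rng(9)
p = 3
M = rng.standard_normal((p, p))
V = M @ M.T + np.eye(p)                      # a valid covariance matrix
A = rng.standard_normal((p, p))
A = (A + A.T) / 2                            # the formula assumes A symmetric
mu = rng.standard_normal(p)

x = rng.multivariate_normal(mu, V, size=500_000)
q = np.einsum("ti,ij,tj->t", x, A, x)        # x'Ax for each draw

theory = 2 * np.trace(A @ V @ A @ V) + 4 * mu @ A @ V @ A @ mu
print("simulated:", q.var(), " theory:", theory)   # agree to about 1%
```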

7 Taylor Series in Matrix Notation

In calculus, Taylor's theorem is described as follows. We use the operator notation

(h ∂/∂x + k ∂/∂y) f(x₀, y₀) = h f_x(x₀, y₀) + k f_y(x₀, y₀),

where f_x denotes ∂f/∂x and f_y denotes ∂f/∂y. Similarly, for the second power,

(h ∂/∂x + k ∂/∂y)² f(x₀, y₀) = h² f_xx(x₀, y₀) + 2hk f_xy(x₀, y₀) + k² f_yy(x₀, y₀),

where f_xx = ∂²f/∂x², f_xy = ∂²f/∂x∂y, and f_yy = ∂²f/∂y². In general,

(h ∂/∂x + k ∂/∂y)ⁿ f(x₀, y₀)

is obtained by evaluating a binomial expansion of the n-th power. Given that the first n derivatives of a function f(x, y) exist in a closed region, and that the (n+1)-st derivative exists in an open region, Taylor's theorem states that

f(x₀ + h, y₀ + k) = f(x₀, y₀) + (h ∂/∂x + k ∂/∂y) f(x₀, y₀)
                  + (1/2!)(h ∂/∂x + k ∂/∂y)² f(x₀, y₀) + ⋯
                  + (1/n!)(h ∂/∂x + k ∂/∂y)ⁿ f(x₀, y₀)
                  + (1/(n+1)!)(h ∂/∂x + k ∂/∂y)ⁿ⁺¹ f(x₀ + αh, y₀ + αk),

where 0 < α < 1, x = x₀ + h, and y = y₀ + k. The term involving the (n+1)-th partial is called the remainder term. When one writes this in matrix notation it is usually intended as an approximation, and all terms containing higher than second-order partials are ignored.

Let x be a p × 1 vector with elements x₁, x₂, …, x_p. Now f(x) is a function of p variables. Let ∂f(x)/∂x denote a p × 1 vector with elements ∂f(x)/∂xᵢ, where i = 1, 2, …, p. Similarly, let ∂²f(x)/∂x∂x′ denote a p × p matrix with (i,j)-th element ∂²f(x)/∂xᵢ∂xⱼ. Now Taylor's approximation is

f(x) = f(x⁰) + Σᵢ (xᵢ − xᵢ⁰) ∂f(x⁰)/∂xᵢ + ½ Σᵢ Σⱼ (xᵢ − xᵢ⁰) [∂²f(x⁰)/∂xᵢ∂xⱼ] (xⱼ − xⱼ⁰),

with the sums over i, j = 1, …, p; that is,

f(x) = f(x⁰) + (x − x⁰)′ [∂f(x⁰)/∂x] + ½ (x − x⁰)′ [∂²f(x⁰)/∂x∂x′] (x − x⁰)

in the matrix notation. If the second-derivative term is to be the remainder term, we replace ∂²f(x⁰) by ∂²f(x̄), where x̄ = αx⁰ + (1 − α)x, with 0 ≤ α ≤ 1.
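A numpy sketch of the second-order matrix-form approximation for f(x) = exp(a′x), whose gradient is a·f(x) and whose Hessian is aa′·f(x):

```python
import numpy as np

# Second-order Taylor approximation of f(x) = exp(a'x) around x0;
# the omitted terms are third order in the step d.
rng = np.random.default_rng(14)
p = 3
a = rng.standard_normal(p)
x0 = rng.standard_normal(p)
d = 1e-2 * rng.standard_normal(p)

f = lambda x: np.exp(a @ x)
grad = a * f(x0)                   # df/dx at x0
hess = np.outer(a, a) * f(x0)      # d2f/dx dx' at x0

approx = f(x0) + d @ grad + 0.5 * d @ hess @ d
print("approximation error:", abs(f(x0 + d) - approx))   # O(|d|^3)
```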

8 Matrix Inverse by Recursion

If we want to compute the inverse (A − zI)⁻¹ recursively, let us write it as

(A − zI)⁻¹ = −(zI − A)⁻¹ = −(1/Δ)[I zⁿ⁻¹ + B₁ zⁿ⁻² + ⋯ + B_{n−1}],

where

Δ = det(zI − A) = zⁿ + a_{n−1} zⁿ⁻¹ + ⋯ + a₀

is a polynomial in the complex number z of the so-called z-transform. This is often interpreted as z = L⁻¹, where L is the lag operator in time series (Lx_t = x_{t−1}). Multiplying both sides by Δ, we get polynomials in z on both sides. Equating the like powers of z, we obtain the recursion

B₁ = A + a_{n−1} I,
B_k = AB_{k−1} + a_{n−k} I,  for k = 2, …, n−1,

so B₂ = AB₁ + a_{n−2} I, and finally, since B_n is absent above, we have

B_n = 0 = AB_{n−1} + a₀ I.

9 Matrix Inversion When Two Terms Are Involved

Let G be an n × n matrix defined by

G = [A + BDB′]⁻¹,   (1)

where A and D are nonsingular matrices of order n and m respectively, and B is n × m. Then

G = A⁻¹ − A⁻¹B[D⁻¹ + B′A⁻¹B]⁻¹B′A⁻¹.   (2)

The result is verified directly by showing that G⁻¹G = I. Let E = D⁻¹ + B′A⁻¹B. Then

G⁻¹G = I − BE⁻¹B′A⁻¹ + BDB′A⁻¹ − BDB′A⁻¹BE⁻¹B′A⁻¹
     = I + [−BE⁻¹ + BD − BDB′A⁻¹BE⁻¹]B′A⁻¹
     = I + BD[−D⁻¹E⁻¹ + I − B′A⁻¹BE⁻¹]B′A⁻¹
     = I + BD[I − EE⁻¹]B′A⁻¹ = I,

where (I − I) = 0 eliminates the second term. An important special case arises when B = b is an n × 1 vector and D = 1:

(A + bb′)⁻¹ = A⁻¹ − A⁻¹bb′A⁻¹ / (1 + b′A⁻¹b).   (3)

Many other useful results on matrix inversion are found in Jazwinski (1970).
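A numpy sketch checking both (2) and the rank-one special case (3):

```python
import numpy as np

# Verify the two-term inversion formula and its rank-one (Sherman-Morrison) case.
rng = np.random.default_rng(10)
n, m = 5, 2
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                       # nonsingular A
D = np.diag(rng.uniform(0.5, 2.0, m))         # nonsingular D
B = rng.standard_normal((n, m))

Ai = np.linalg.inv(A)
E = np.linalg.inv(D) + B.T @ Ai @ B
G = Ai - Ai @ B @ np.linalg.inv(E) @ B.T @ Ai
assert np.allclose(G, np.linalg.inv(A + B @ D @ B.T))

b = rng.standard_normal(n)                    # special case: B = b, D = 1
lhs = np.linalg.inv(A + np.outer(b, b))
rhs = Ai - (Ai @ np.outer(b, b) @ Ai) / (1 + b @ Ai @ b)
assert np.allclose(lhs, rhs)
print("two-term inversion formulas verified")
```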

10 Useful Results on Normal / Chi-square Distributions

1. If y is normal, N(µ, σ²), then z = b + ay is normal, N(aµ + b, a²σ²), for a ≠ 0.

2. If y is N(µ, σ²), then z = (y − µ)/σ is N(0, 1).

3. If y and z are independent normal variables, then y + z is normal.

4. If each yᵢ, where i = 1, 2, …, n, is independent N(µ, σ²), then defining ȳ = Σᵢ yᵢ/n implies ȳ is N(µ, σ²/n).

5. If y is a random variable with mean µ and variance σ², then for n sufficiently large the mean ȳ from a random sample of size n is approximately normal, N(µ, σ²/n).

Chi-square Distributions

6. If y is N(0, 1), then y² is χ²₁.

7. If z₁ is χ²_n and z₂ is χ²_m, and z₁ and z₂ are independent, then z₁ + z₂ is χ²_{m+n}.

8. If y₁, y₂, …, y_n are each independent N(0, 1), then Σᵢ yᵢ² is distributed as χ²_n.

9. If z₁, z₂, …, z_n are each independent N(µ, σ²), then Σᵢ (zᵢ − µ)²/σ² is distributed as χ²_n.

10. If the sample variance of a random sample y₁, y₂, …, y_n is σ̂² = Σᵢ (yᵢ − ȳ)²/(n − 1), and each yᵢ is N(µ, σ²), then (n − 1)σ̂²/σ² is χ²_{n−1}.

11. If the (n × 1) vector y is distributed N(0, I_n), then y′y is distributed as χ²_n.

12. If y is an (n × 1) vector distributed as N(0, I_n) and A is an (n × n) symmetric idempotent matrix of rank r, then y′Ay is distributed as χ²_r.

13. If the (n × 1) vector z is distributed as N(0, σ²I_n) and A is an (n × n) symmetric idempotent matrix of rank r, then (z′Az/σ²) is distributed as χ²_r.

14. Let y be an (n × 1) vector that is distributed as N(δ, σ²I_n), A an (n × n) symmetric idempotent matrix such that Aδ = 0, and B an (m × n) matrix with BA = 0. Then By is distributed independently of the quadratic form (y′Ay).

15. If y is an (n × 1) vector that is distributed as N(0, σ²I_n) and A and B are idempotent (n × n) matrices of rank r and s with AB = 0, then y′Ay is distributed independently of the quadratic form y′By.

16. If y is an (n × 1) vector distributed as N(0, σ²I_n) and A and B are idempotent (n × n) matrices of rank r and s respectively, with AB = 0, then u, the ratio of y′Ay/σ² and y′By/σ², each divided by its rank, is distributed as F_{r,s}.

17. If y is F_{n,m}, then z = 1/y is distributed as F_{m,n}.

18. If the (n × 1) vector e is distributed as N(µ, A), then e′A⁻¹e is distributed as noncentral χ²_{n,γ} with γ = 0.5 µ′A⁻¹µ.

19. If the (n × 1) vector z is distributed as N(µ, σ²I), then W = z′Az/σ², where A is a symmetric idempotent matrix of rank k, is distributed as noncentral χ²_{k,γ} with noncentrality parameter γ = µ′Aµ/(2σ²). If B is an (n × n) symmetric idempotent matrix of rank q with BA = 0, Bµ = 0, and Z = z′Bz/σ², then the quantity u = (W/Z)(q/k) is distributed as noncentral F_{k,q,γ}.

20. If the random variable z has a χ²_{k,γ} distribution that is independent of w, which is distributed as χ²_q, then u = qz/(kw) is distributed as F_{k,q,γ}.

21. Let the (k × 1) vector β̂ be distributed as N(β, σ²(X′X)⁻¹), so that β̂ᵢ is distributed as N(βᵢ, σ²cᵢᵢ), where cᵢᵢ is the i-th diagonal element of (X′X)⁻¹ = C. Consequently (β̂ᵢ − βᵢ)/(σ cᵢᵢ^½) is distributed as N(0, 1), and it is independent of (T − k)σ̂²/σ², which is distributed as χ²_{T−k}. Therefore

v = (β̂ᵢ − βᵢ) / (σ̂ cᵢᵢ^½)

is distributed as Student's t_{T−k}.

22. If the (n × 1) random vector z is N(0, σ²I_n), the expected value of the quadratic form z′Az/σ² is equal to tr A. Therefore, if A is an (n × n) symmetric idempotent matrix of rank r, then E(z′Az)/σ² = r.

23. The reciprocal of a central χ² random variable with r degrees of freedom has expected value E[(χ²_r)⁻¹] = 1/(r − 2), for r > 2.

24. The central χ² random variable with r degrees of freedom has variance 2r.

25. The square of the reciprocal of a central χ² random variable with r degrees of freedom has expected value E[(χ²_r)⁻²] = 1/[(r − 2)(r − 4)], for r > 4.

26. Any noncentral χ² random variable with r degrees of freedom and noncentrality parameter γ may be represented as a central χ² random variable with (r + 2j) degrees of freedom (conditional on j), where j is a Poisson random variable with parameter γ.
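A Monte Carlo sketch of items 12, 22, and 24: for a projection matrix A of rank r, the simulated mean and variance of y′Ay should be close to r and 2r.

```python
import numpy as np

# A = X(X'X)^{-1}X' is symmetric, idempotent, of rank r; y'Ay ~ chi-square(r).
rng = np.random.default_rng(11)
n, r = 6, 3
X = rng.standard_normal((n, r))
A = X @ np.linalg.inv(X.T @ X) @ X.T

y = rng.standard_normal((200_000, n))
q = np.einsum("ti,ij,tj->t", y, A, y)

print("mean:", round(q.mean(), 3), "vs r  =", r)       # items 12 and 22
print("var :", round(q.var(), 3),  "vs 2r =", 2 * r)   # item 24
```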

11 Matrix Algebra Review for Normal Theory in Statistics

1. The characteristic roots of an (n × n) matrix A are the n roots of the polynomial |A − γI| = 0, where γ is a scalar.

2. For an (n × n) orthogonal matrix C (where C′C = CC′ = I), C′ is also orthogonal.

3. If C is orthogonal, then its determinant |C| is either 1 or −1.

4. If A is an (n × n) symmetric matrix, then there exists an (n × n) orthogonal matrix P such that P′AP is a diagonal matrix whose diagonal elements are the characteristic roots of A, and the rank of A is equal to the number of nonzero roots.

5. If A is an (n × n) symmetric matrix, then A is positive definite if and only if all its characteristic roots are positive, where a positive definite matrix is one for which the quadratic form y′Ay is positive for all y ≠ 0.

6. If A is an (n × n) positive definite matrix, then |A| > 0, the rank of A is equal to n, and A is nonsingular.

7. If A is an (n × n) positive definite matrix and P is an (n × m) matrix with rank m, then P′AP is positive definite.

8. If A is an (n × n) positive definite matrix, then there exists a positive definite matrix A^½ such that A^{−½}AA^{−½} = I and A^{−½}A^{−½} = A⁻¹. Also we write A^{−½} = (A^½)⁻¹ and A^½A^½ = A. In particular, if P is an orthogonal matrix such that

P′AP = diag(γ₁, …, γ_n),

where the γᵢ are the characteristic roots of A, then A^½ = P diag(γ₁^½, …, γ_n^½) P′.

9. Given an (n × n) symmetric idempotent matrix A (i.e., A = A′ and AA = A), if A is of rank r, then A has r characteristic roots equal to 1 and (n − r) roots equal to zero, the rank of A is equal to tr A, and there is an orthogonal matrix C such that

C′AC = [ I_r      0     ]
       [ 0    0_{n−r} ].
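Item 8 translates directly into a numpy sketch of the spectral matrix square root:

```python
import numpy as np

# Build A^{1/2} = P diag(sqrt(gamma)) P' from the spectral decomposition of a p.d. A.
rng = np.random.default_rng(12)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                       # positive definite

gamma, P = np.linalg.eigh(A)                  # P'AP = diag(gamma), P orthogonal
A_half = P @ np.diag(np.sqrt(gamma)) @ P.T

assert np.allclose(A_half @ A_half, A)        # A^{1/2} A^{1/2} = A
A_half_inv = np.linalg.inv(A_half)
assert np.allclose(A_half_inv @ A @ A_half_inv, np.eye(n))   # A^{-1/2} A A^{-1/2} = I
print("spectral square root verified")
```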

10. The identity matrix is the only nonsingular idempotent matrix, and a symmetric idempotent matrix is positive semi-definite.

11. If A and B are (n × n) symmetric matrices and B is positive definite, there exists a nonsingular matrix Q such that Q′AQ = Λ and Q′BQ = I, where Λ is a diagonal matrix [Rao (1973, p. 41)].

12. If A and B are two symmetric matrices, a necessary and sufficient condition for an orthogonal matrix C to exist such that C′AC = Λ and C′BC = M, where Λ and M are diagonal, is that A and B commute, i.e., AB = BA.

13. If A is a symmetric (n × n) matrix with characteristic roots λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_n and corresponding characteristic vectors p₁, p₂, …, p_n, then

A = λ₁p₁p₁′ + ⋯ + λ_np_np_n′,   I = p₁p₁′ + p₂p₂′ + ⋯ + p_np_n′,

and sup_y (y′Ay/y′y) = λ₁ and inf_y (y′Ay/y′y) = λ_n, where y is a column vector.

14. The characteristic roots of A are those of BAB⁻¹, where B is a nonsingular matrix.

15. If B is an (n × n) nonsingular matrix and η is an (n × 1) column vector, then max_y (y′ηη′y / y′By) = η′B⁻¹η.
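For a positive definite B, item 15 can be checked numerically: the maximizer is y proportional to B⁻¹η, and random directions never exceed the bound.

```python
import numpy as np

# Check max_y (y'nn'y)/(y'By) = n'B^{-1}n for positive definite B and vector n = eta.
rng = np.random.default_rng(13)
p = 5
M = rng.standard_normal((p, p))
B = M @ M.T + np.eye(p)                       # positive definite
eta = rng.standard_normal(p)

bound = eta @ np.linalg.inv(B) @ eta
y_star = np.linalg.solve(B, eta)              # maximizer: y* = B^{-1} eta
ratio_at_star = (y_star @ eta) ** 2 / (y_star @ B @ y_star)
assert np.isclose(ratio_at_star, bound)

Y = rng.standard_normal((10_000, p))          # random y stay below the bound
ratios = (Y @ eta) ** 2 / np.einsum("ti,ij,tj->t", Y, B, Y)
assert ratios.max() <= bound + 1e-9
print("max_y (y'nn'y)/(y'By) = n'B^{-1}n verified")
```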
