LINEAR MODELS IN STATISTICS
LINEAR MODELS IN STATISTICS, Second Edition. Alvin C. Rencher and G. Bruce Schaalje, Department of Statistics, Brigham Young University, Provo, Utah
2 Matrix Algebra

If we write a linear model such as (1.2) for each of n observations in a dataset, the n resulting models can be expressed in a single compact matrix expression. Then the estimation and testing results can be more easily obtained using matrix theory. In the present chapter, we review the elements of matrix theory needed in the remainder of the book. Proofs that seem instructive are included or called for in the problems. For other proofs, see Graybill (1969), Searle (1982), Harville (1997), Schott (1997), or any general text on matrix theory. We begin with some basic definitions in Section 2.1.

2.1 MATRIX AND VECTOR NOTATION

2.1.1 Matrices, Vectors, and Scalars

A matrix is a rectangular or square array of numbers or variables. We use uppercase boldface letters to represent matrices. In this book, all elements of matrices will be real numbers or variables representing real numbers. For example, the height (in inches) and weight (in pounds) for three students can be listed in a 3 x 2 matrix

    A = [ height_1  weight_1
          height_2  weight_2
          height_3  weight_3 ],    (2.1)

with one row per student. To represent the elements of A as variables, we use

    A = (a_ij) = [ a_11  a_12
                   a_21  a_22
                   a_31  a_32 ].    (2.2)

The first subscript in a_ij indicates the row; the second identifies the column. The notation A = (a_ij) represents a matrix by means of a typical element.

Linear Models in Statistics, Second Edition, by Alvin C. Rencher and G. Bruce Schaalje. Copyright © 2008 John Wiley & Sons, Inc.
Similarly,

    b′b = b_1^2 + b_2^2 + ... + b_p^2,    (2.20)

    bb′ = [ b_1^2    b_1b_2   ...  b_1b_p
            b_2b_1   b_2^2    ...  b_2b_p
            ...
            b_pb_1   b_pb_2   ...  b_p^2 ].    (2.21)

Thus b′b is a sum of squares and bb′ is a (symmetric) square matrix. The square root of the sum of squares of the elements of a p x 1 vector b is the distance from the origin to the point b and is also referred to as the length of b:

    Length of b = sqrt(b′b) = sqrt( Σ_{i=1}^p b_i^2 ).    (2.22)

If j is an n x 1 vector of 1s as defined in (2.6), then by (2.20) and (2.21), we have

    j′j = n,    jj′ = J,    (2.23)

where J is an n x n square matrix of 1s as illustrated in (2.7). If a is n x 1 and A is n x p, then

    a′j = j′a = Σ_{i=1}^n a_i,    (2.24)

    j′A = ( Σ_i a_i1, Σ_i a_i2, ..., Σ_i a_ip ),    Aj = ( Σ_j a_1j, Σ_j a_2j, ..., Σ_j a_nj )′.    (2.25)

Thus a′j is the sum of the elements in a, j′A contains the column sums of A, and Aj contains the row sums of A. Note that in a′j, the vector j is n x 1; in j′A, the vector j is n x 1; and in Aj, the vector j is p x 1.
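These vector products can be checked numerically. A small NumPy sketch (the vectors and matrix here are arbitrary illustrations, not the book's examples):

```python
import numpy as np

b = np.array([3.0, 1.0, 2.0])            # a p x 1 vector, p = 3
btb = b @ b                              # b'b = 9 + 1 + 4 = 14, a sum of squares
bbt = np.outer(b, b)                     # bb', a symmetric p x p matrix
length = np.sqrt(btb)                    # length of b, as in (2.22)

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])               # n x p with n = 3, p = 2
j3 = np.ones(3)                          # j: n x 1 vector of 1s
col_sums = j3 @ A                        # j'A gives the column sums of A
row_sums = A @ np.ones(2)                # Aj gives the row sums of A
```

Note that `np.outer` is needed for bb′; the plain `b @ b` contracts to the scalar b′b.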
Note that DA ≠ AD. However, in the special case where the diagonal matrix is the identity, (2.29) and (2.30) become

    IA = AI = A.    (2.32)

If A is rectangular, (2.32) still holds, but the two identity matrices are of different sizes.

If A is a symmetric matrix and y is a vector, the product

    y′Ay = Σ_i a_ii y_i^2 + Σ_{i≠j} a_ij y_i y_j    (2.33)

is called a quadratic form. If x is n x 1, y is p x 1, and A is n x p, the product

    x′Ay = Σ_ij a_ij x_i y_j    (2.34)

is called a bilinear form.

Hadamard Product of Two Matrices or Two Vectors

Sometimes a third type of product, called the elementwise or Hadamard product, is useful. If two matrices or two vectors are of the same size (conformal for addition), the Hadamard product is found by simply multiplying corresponding elements:

    (a_ij b_ij) = [ a_11 b_11  a_12 b_12  ...  a_1p b_1p
                    a_21 b_21  a_22 b_22  ...  a_2p b_2p
                    ...
                    a_n1 b_n1  a_n2 b_n2  ...  a_np b_np ].

2.3 PARTITIONED MATRICES

It is sometimes convenient to partition a matrix into submatrices. For example, a partitioning of a matrix A into four (square or rectangular) submatrices of appropriate sizes can be indicated symbolically as follows:

    A = [ A_11  A_12
          A_21  A_22 ].
Finally, we note that if a matrix is partitioned as A = (A_1, A_2), then

    A′ = (A_1, A_2)′ = [ A_1′
                         A_2′ ].

2.4 RANK

Before defining the rank of a matrix, we first introduce the notion of linear independence and dependence. A set of vectors a_1, a_2, ..., a_n is said to be linearly dependent if scalars c_1, c_2, ..., c_n (not all zero) can be found such that

    c_1 a_1 + c_2 a_2 + ... + c_n a_n = 0.    (2.40)

If no coefficients c_1, c_2, ..., c_n can be found that satisfy (2.40), the set of vectors a_1, a_2, ..., a_n is said to be linearly independent. By (2.37) this can be restated as follows: the columns of A are linearly independent if Ac = 0 implies c = 0. (If a set of vectors includes 0, the set is linearly dependent.) If (2.40) holds, then at least one of the vectors a_i can be expressed as a linear combination of the other vectors in the set. Among linearly independent vectors there is no redundancy of this type.

The rank of any square or rectangular matrix A is defined as

    rank(A) = number of linearly independent columns of A
            = number of linearly independent rows of A.

It can be shown that the number of linearly independent columns of any matrix is always equal to the number of linearly independent rows. If a matrix A has a single nonzero element, with all other elements equal to 0, then rank(A) = 1. The vector 0 and the matrix O have rank 0.

Suppose that a rectangular matrix A is n x p of rank p, where p < n. (We typically shorten this statement to "A is n x p of rank p < n.") Then A has maximum possible rank and is said to be of full rank. In general, the maximum possible rank of an n x p matrix A is min(n, p). Thus, in a rectangular matrix, the rows or columns (or both) are linearly dependent. We illustrate this in the following example.

Example 2.4a. The rank of

    A = [ 1  2  3
          1  0  2 ]
is 2 because the two rows are linearly independent (neither row is a multiple of the other). Hence, by the definition of rank, the number of linearly independent columns is also 2. Therefore, the columns are linearly dependent, and by (2.40) there exist constants c_1, c_2, and c_3, not all zero, such that

    c_1 a_1 + c_2 a_2 + c_3 a_3 = 0,    (2.41)

where a_1, a_2, a_3 are the columns of A. By (2.37), we can write (2.41) in the form

    Ac = 0.    (2.42)

The solution to (2.42) is given by any multiple of c = (4, 1, -2)′. In this case, the product Ac is equal to 0, even though A ≠ O and c ≠ 0. This is possible because of the linear dependence of the column vectors of A.

We can extend (2.42) to products of matrices. It is possible to find A ≠ O and B ≠ O such that

    AB = O.    (2.43)

We can also exploit the linear dependence of rows or columns of a matrix to create expressions such as AB = CB, where A ≠ C. Thus in a matrix equation, we cannot, in general, cancel a matrix from both sides of the equation. There are two exceptions to this rule: (1) if B is a full-rank square matrix, then AB = CB implies A = C; (2) the other special case occurs when the expression holds for all possible values of the matrix common to both sides of the equation; for example, if

    Ax = Bx for all possible values of x,    (2.44)

then A = B. To see this, let x = (1, 0, ..., 0)′. Then, by (2.37), the first column of A equals the first column of B. Now let x = (0, 1, 0, ..., 0)′, and the second column of A equals the second column of B. Continuing in this fashion, we obtain A = B.
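The rank of a matrix, and a nonzero vector c satisfying Ac = 0, can be computed directly. A NumPy sketch using the matrix of Example 2.4a (the SVD-based null-space extraction is one standard approach, not the book's):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [1.0, 0.0, 2.0]])          # 2 x 3: rows independent, columns dependent
r = np.linalg.matrix_rank(A)             # 2 = min(n, p), the maximum possible rank

# A nonzero c with Ac = 0 is the right singular vector belonging
# to the zero singular value (last row of Vt in the full SVD).
_, _, Vt = np.linalg.svd(A)
c = Vt[-1]                               # proportional to (4, 1, -2)
```

Any scalar multiple of `c` also satisfies Ac = 0, illustrating that the null vector is unique only up to scale.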
Example 2.4b. We illustrate the existence of matrices A, B, and C such that AB = CB, where A ≠ C. The equality AB = CB is equivalent to (A - C)B = O, which, as in (2.43), can hold with A - C ≠ O provided that B is singular. Thus, choosing any singular B and any A and C whose nonzero difference satisfies (A - C)B = O, we obtain AB = CB even though A ≠ C.

The following theorem gives a general case and two special cases for the rank of a product of two matrices.

Theorem 2.4

(i) If the matrices A and B are conformal for multiplication, then rank(AB) ≤ rank(A) and rank(AB) ≤ rank(B).

(ii) Multiplication by a full-rank square matrix does not change the rank; that is, if B and C are full-rank square matrices, rank(BA) = rank(AC) = rank(A).

(iii) For any matrix A, rank(A′A) = rank(AA′) = rank(A′) = rank(A).

2.5 INVERSE

A full-rank square matrix is said to be nonsingular. A nonsingular matrix A has a unique inverse, denoted by A^{-1}, with the property that

    A A^{-1} = A^{-1} A = I.    (2.45)
If A is square and less than full rank, then it does not have an inverse and is said to be singular. Note that full-rank rectangular matrices do not have inverses as in (2.45).

From the definition in (2.45), it is clear that A is the inverse of A^{-1}:

    (A^{-1})^{-1} = A.    (2.46)

We can now prove Theorem 2.4(ii).

PROOF. If B is a full-rank square (nonsingular) matrix, there exists a matrix B^{-1} such that B^{-1}B = I. Then, by Theorem 2.4(i), we have

    rank(A) = rank(B^{-1}BA) ≤ rank(BA) ≤ rank(A).

Thus both inequalities become equalities, and rank(A) = rank(BA). Similarly, rank(A) = rank(AC) for C nonsingular.

In applications, inverses are typically found by computer. Many calculators also compute inverses. Algorithms for hand calculation of inverses of small matrices can be found in texts on matrix algebra.

If B is nonsingular and AB = CB, then we can multiply on the right by B^{-1} to obtain A = C. (If B is singular or rectangular, we can't cancel it from both sides of AB = CB; see Example 2.4b and the paragraph preceding the example.) Similarly, if A is nonsingular, the system of equations Ax = c has the unique solution

    x = A^{-1} c,    (2.47)
since we can multiply on the left by A^{-1} to obtain

    A^{-1} A x = A^{-1} c,
    I x = A^{-1} c.

Two properties of inverses are given in the next two theorems.

Theorem 2.5a. If A is nonsingular, then A′ is nonsingular and its inverse can be found as

    (A′)^{-1} = (A^{-1})′.    (2.48)

Theorem 2.5b. If A and B are nonsingular matrices of the same size, then AB is nonsingular and

    (AB)^{-1} = B^{-1} A^{-1}.    (2.49)

We now give the inverses of some special matrices. If A is symmetric and nonsingular and is partitioned as

    A = [ A_11  A_12
          A_21  A_22 ],

and if B = A_22 - A_21 A_11^{-1} A_12, then, provided A_11^{-1} and B^{-1} exist, the inverse of A is given by

    A^{-1} = [ A_11^{-1} + A_11^{-1} A_12 B^{-1} A_21 A_11^{-1}    -A_11^{-1} A_12 B^{-1}
               -B^{-1} A_21 A_11^{-1}                               B^{-1} ].    (2.50)

As a special case of (2.50), consider the symmetric nonsingular matrix

    A = [ A_11   a_12
          a_12′  a_22 ],

in which A_11 is square, a_22 is a 1 x 1 matrix, and a_12 is a vector. Then if A_11^{-1} exists, A^{-1} can be expressed as

    A^{-1} = (1/b) [ b A_11^{-1} + A_11^{-1} a_12 a_12′ A_11^{-1}    -A_11^{-1} a_12
                     -a_12′ A_11^{-1}                                 1 ],    (2.51)
where b = a_22 - a_12′ A_11^{-1} a_12. As another special case of (2.50), we have

    [ A_11  O    ]^{-1}     [ A_11^{-1}  O
    [ O     A_22 ]       =  [ O          A_22^{-1} ].    (2.52)

If a square matrix of the form B + cc′ is nonsingular, where c is a vector and B is a nonsingular matrix, then

    (B + cc′)^{-1} = B^{-1} - (B^{-1} c c′ B^{-1}) / (1 + c′ B^{-1} c).    (2.53)

In more generality, if A, B, and A + PBQ are nonsingular, then

    (A + PBQ)^{-1} = A^{-1} - A^{-1} P B (B + B Q A^{-1} P B)^{-1} B Q A^{-1}.    (2.54)

Both (2.53) and (2.54) can be easily verified (Problems 2.33 and 2.34).

2.6 POSITIVE DEFINITE MATRICES

A quadratic form can be written using a nonsymmetric matrix; for example,

    3y_1^2 + y_2^2 + 2y_3^2 + 4y_1y_2 + 5y_1y_3 - 6y_2y_3 = y′Ay,

where

    A = [ 3  4   5
          0  1  -6
          0  0   2 ].

However, the same quadratic form can also be expressed in terms of the symmetric matrix

    (1/2)(A + A′) = [ 3     2    5/2
                      2     1   -3
                      5/2  -3    2 ].
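The rank-one update identity (2.53) is easy to verify numerically. A NumPy sketch with an arbitrary nonsingular B and vector c (not the book's example):

```python
import numpy as np

B = np.array([[3.0, 1.0],
              [1.0, 2.0]])               # nonsingular (arbitrary choice)
c = np.array([1.0, 2.0])

Binv = np.linalg.inv(B)
lhs = np.linalg.inv(B + np.outer(c, c))  # direct inverse of B + cc'
rhs = Binv - (Binv @ np.outer(c, c) @ Binv) / (1.0 + c @ Binv @ c)  # eq. (2.53)
```

When B^{-1} is already available, the right-hand side requires no new matrix inversion, which is why (2.53) is widely used for updating inverses.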
In general, any quadratic form y′Ay can be expressed as

    y′Ay = y′ [ (A + A′)/2 ] y,    (2.55)

and thus the matrix of a quadratic form can always be chosen to be symmetric (and thereby unique).

The sums of squares we will encounter in regression (Chapters 6-11) and analysis of variance (Chapters 12-15) can be expressed in the form y′Ay, where y is an observation vector. Such quadratic forms remain positive (or at least nonnegative) for all possible values of y. We now consider quadratic forms of this type.

If the symmetric matrix A has the property y′Ay > 0 for all possible y except y = 0, then the quadratic form y′Ay is said to be positive definite, and A is said to be a positive definite matrix. If y′Ay ≥ 0 for all y and there is at least one y ≠ 0 such that y′Ay = 0, then y′Ay and A are said to be positive semidefinite.

Example 2.6. To illustrate a positive definite matrix, consider

    A = [ 2  -1
         -1   3 ]

and the associated quadratic form

    y′Ay = 2y_1^2 - 2y_1y_2 + 3y_2^2 = 2(y_1 - (1/2)y_2)^2 + (5/2)y_2^2,

which is clearly positive as long as y_1 and y_2 are not both zero.

To illustrate a positive semidefinite matrix, consider the quadratic form

    (2y_1 - y_2)^2 + (3y_1 - y_3)^2 + (3y_2 - 2y_3)^2,

which can be expressed as y′Ay, with

    A = [ 13  -2  -3
          -2  10  -6
          -3  -6   5 ].

If 2y_1 = y_2, 3y_1 = y_3, and 3y_2 = 2y_3, then (2y_1 - y_2)^2 + (3y_1 - y_3)^2 + (3y_2 - 2y_3)^2 = 0. Thus y′Ay = 0 for any multiple of y = (1, 2, 3)′. Otherwise y′Ay > 0 (except for y = 0).
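Definiteness can be checked numerically through eigenvalues (a criterion made precise later in the chapter). A NumPy sketch, assuming the two matrices of Example 2.6, A = [[2, -1], [-1, 3]] and the 3 x 3 semidefinite matrix:

```python
import numpy as np

A_pd = np.array([[2.0, -1.0],
                 [-1.0, 3.0]])           # positive definite
A_psd = np.array([[13.0, -2.0, -3.0],
                  [-2.0, 10.0, -6.0],
                  [-3.0, -6.0, 5.0]])    # positive semidefinite

eig_pd = np.linalg.eigvalsh(A_pd)        # all eigenvalues strictly positive
eig_psd = np.linalg.eigvalsh(A_psd)      # eigenvalues nonnegative, one is zero
y = np.array([1.0, 2.0, 3.0])            # y'Ay = 0 for this nonzero y
```

The vector y = (1, 2, 3)′ gives a zero quadratic form, confirming that A_psd is semidefinite rather than definite.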
Theorem 2.6a

(i) If A is positive definite, then all its diagonal elements a_ii are positive.
(ii) If A is positive semidefinite, then all a_ii ≥ 0.

PROOF

(i) Let y = (0, ..., 0, 1, 0, ..., 0)′ with a 1 in the ith position and 0s elsewhere. Then y′Ay = a_ii > 0.
(ii) Let y = (0, ..., 0, 1, 0, ..., 0)′ with a 1 in the ith position and 0s elsewhere. Then y′Ay = a_ii ≥ 0.

Theorem 2.6b. Let P be a nonsingular matrix.

(i) If A is positive definite, then P′AP is positive definite.
(ii) If A is positive semidefinite, then P′AP is positive semidefinite.

PROOF

(i) To show that y′P′APy > 0 for y ≠ 0, note that y′(P′AP)y = (Py)′A(Py). Since A is positive definite, (Py)′A(Py) > 0 provided that Py ≠ 0. By (2.47), Py = 0 only if y = 0, since P^{-1}Py = P^{-1}0 = 0. Thus y′P′APy > 0 if y ≠ 0.
(ii) See the problems.

Corollary 1. Let A be a p x p positive definite matrix and let B be a k x p matrix of rank k ≤ p. Then BAB′ is positive definite.

Theorem 2.6c. A symmetric matrix A is positive definite if and only if there exists a nonsingular matrix P such that A = P′P.
PROOF. We prove the "if" part only. Suppose A = P′P for nonsingular P. Then

    y′Ay = y′P′Py = (Py)′(Py).

This is a sum of squares [see (2.20)] and is positive unless Py = 0. By (2.47), Py = 0 only if y = 0.

Corollary 1. A positive definite matrix is nonsingular.

One method of factoring a positive definite matrix A into a product P′P as in Theorem 2.6c is provided by the Cholesky decomposition (Seber and Lee 2003), by which A can be factored uniquely into A = T′T, where T is a nonsingular upper triangular matrix.

For any square or rectangular matrix B, the matrix B′B is positive definite or positive semidefinite.

Theorem 2.6d. Let B be an n x p matrix.

(i) If rank(B) = p, then B′B is positive definite.
(ii) If rank(B) < p, then B′B is positive semidefinite.

PROOF

(i) To show that y′B′By > 0 for y ≠ 0, we note that y′B′By = (By)′(By), which is a sum of squares and is thereby positive unless By = 0. By (2.37), we can express By in the form

    By = y_1 b_1 + y_2 b_2 + ... + y_p b_p,

where the b_i are the columns of B. This linear combination is not 0 (for any y ≠ 0) because rank(B) = p, and the columns of B are therefore linearly independent [see (2.40)].

(ii) If rank(B) < p, then we can find y ≠ 0 such that

    By = y_1 b_1 + y_2 b_2 + ... + y_p b_p = 0,

since the columns of B are linearly dependent [see (2.40)]. Hence y′B′By = 0 for this y ≠ 0, while y′B′By ≥ 0 for all y.
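The Cholesky factorization A = T′T mentioned above is available directly in NumPy, which returns the lower-triangular factor L = T′. A sketch with an arbitrary positive definite matrix:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])               # positive definite (arbitrary choice)
L = np.linalg.cholesky(A)                # lower triangular, A = L L'
T = L.T                                  # upper triangular, so A = T'T as in the text
```

Since T is triangular with positive diagonal, it is nonsingular, and A = T′T exhibits the factorization of Theorem 2.6c.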
Note that if B is a square matrix, the matrix BB = B^2 is not necessarily positive semidefinite. For example, let

    B = [ 1  -2
          1  -2 ].

Then

    B^2 = [ -1  2          B′B = [  2  -4
            -1  2 ],               -4   8 ].

In this case, B^2 is not positive semidefinite, but B′B is positive semidefinite, since

    y′B′By = 2(y_1 - 2y_2)^2 ≥ 0.

Theorem 2.6e. If A is positive definite, then A^{-1} is positive definite.

PROOF. By Theorem 2.6c, A = P′P, where P is nonsingular. By Theorems 2.5a and 2.5b,

    A^{-1} = (P′P)^{-1} = P^{-1}(P′)^{-1} = P^{-1}(P^{-1})′,

which is positive definite by Theorem 2.6c.

Theorem 2.6f. If A is positive definite and is partitioned in the form

    A = [ A_11  A_12
          A_21  A_22 ],

where A_11 and A_22 are square, then A_11 and A_22 are positive definite.

PROOF. We can write A_11, for example, as A_11 = (I, O) A (I, O)′, where I is the same size as A_11. Then by Corollary 1 to Theorem 2.6b, A_11 is positive definite.

2.7 SYSTEMS OF EQUATIONS

The system of n (linear) equations in p unknowns

    a_11 x_1 + a_12 x_2 + ... + a_1p x_p = c_1
    a_21 x_1 + a_22 x_2 + ... + a_2p x_p = c_2
    ...
    a_n1 x_1 + a_n2 x_2 + ... + a_np x_p = c_n    (2.56)
can be written in matrix form as

    Ax = c,    (2.57)

where A is n x p, x is p x 1, and c is n x 1. If n = p and A is nonsingular, then by (2.47) there exists a unique solution vector x obtained as x = A^{-1}c. If n > p, so that A has more rows than columns, then Ax = c typically has no solution. If n < p, so that A has fewer rows than columns, then Ax = c has an infinite number of solutions.

If the system of equations Ax = c has one or more solution vectors, it is said to be consistent. If the system has no solution, it is said to be inconsistent.

To illustrate the structure of a consistent system of equations Ax = c, suppose that A is p x p of rank r < p. Then the rows of A are linearly dependent, and there exists some b such that [see (2.38)]

    b′A = b_1 a_1′ + b_2 a_2′ + ... + b_p a_p′ = 0′,

where a_i′ is the ith row of A. Then we must also have

    b′c = b_1 c_1 + b_2 c_2 + ... + b_p c_p = 0,

since multiplication of Ax = c by b′ gives b′Ax = b′c, or 0′x = b′c. Otherwise, if b′c ≠ 0, there is no x such that Ax = c. Hence, in order for Ax = c to be consistent, the same linear relationships, if any, that exist among the rows of A must exist among the elements (rows) of c. This is formalized by comparing the rank of A with the rank of the augmented matrix (A, c). The notation (A, c) indicates that c has been appended to A as an additional column.

Theorem 2.7. The system of equations Ax = c has at least one solution vector x if and only if rank(A) = rank(A, c).

PROOF. Suppose that rank(A) = rank(A, c), so that appending c does not change the rank. Then c is a linear combination of the columns of A; that is, there exists some x such that

    x_1 a_1 + x_2 a_2 + ... + x_p a_p = c,

which, by (2.37), can be written as Ax = c. Thus x is a solution.

Conversely, suppose that there exists a solution vector x such that Ax = c. In general, rank(A) ≤ rank(A, c) (Harville 1997). But since there exists an x such that Ax = c, we have

    rank(A, c) = rank(A, Ax) = rank[A(I, x)] ≤ rank(A)    [by Theorem 2.4(i)].
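The rank criterion of Theorem 2.7, and the unique solution (2.47) in the nonsingular case, can be sketched in NumPy (the matrices here are arbitrary illustrations, not from the text):

```python
import numpy as np

def consistent(A, c):
    """Theorem 2.7: Ax = c is consistent iff rank(A) = rank(A, c)."""
    aug = np.column_stack([A, c])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])               # rank 1
c_in = np.array([1.0, 2.0, 3.0])         # equals the first column: consistent
c_out = np.array([1.0, 2.0, 4.0])        # not in the column space: inconsistent

# For a nonsingular square A the unique solution is x = inv(A) c, eq. (2.47);
# np.linalg.solve computes it without forming the inverse explicitly.
A_sq = np.array([[2.0, 1.0],
                 [1.0, 3.0]])
c_sq = np.array([3.0, 5.0])
x = np.linalg.solve(A_sq, c_sq)
```

`solve` is preferred over `inv(A) @ c` in practice for numerical stability, though the two agree in exact arithmetic.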
2.8 GENERALIZED INVERSE

A generalized inverse of an n x p matrix A is any matrix A^- that satisfies

    A A^- A = A.    (2.58)

A generalized inverse is not unique except when A is nonsingular, in which case A^- = A^{-1}. A generalized inverse is also called a conditional inverse.

Every matrix, whether square or rectangular, has a generalized inverse. This holds even for vectors. For example, let

    x = [ 1
          2
          3
          4 ].

Then x_1^- = (1, 0, 0, 0) is a generalized inverse of x satisfying (2.58). Other examples are x_2^- = (0, 1/2, 0, 0), x_3^- = (0, 0, 1/3, 0), and x_4^- = (0, 0, 0, 1/4). For each x_i^-, we have

    x x_i^- x = x (1) = x,    i = 1, 2, 3, 4.

In this illustration, x is a column vector and x_i^- is a row vector. This pattern is generalized in the following theorem.

Theorem 2.8a. If A is n x p, any generalized inverse A^- is p x n.

In the following example we give two illustrations of generalized inverses of a singular matrix.

Example 2.8. Consider the 3 x 3 matrix A given in (2.59), whose third row is the sum of its first two rows and whose second row is not a multiple of the first; hence A has rank 2. Two generalized inverses A_1^- and A_2^- of A are given in (2.60), and it is easily verified that A A_1^- A = A and A A_2^- A = A.
Theorem 2.8b. Suppose A is n x p of rank r and that A is partitioned as

    A = [ A_11  A_12
          A_21  A_22 ],

where A_11 is r x r of rank r. Then a generalized inverse of A is given by

    A^- = [ A_11^{-1}  O
            O          O ],

where the three O matrices are of appropriate sizes so that A^- is p x n.

PROOF. By multiplication of partitioned matrices, as in (2.35), we obtain

    A A^- A = [ A_11  A_12
                A_21  A_21 A_11^{-1} A_12 ].

To show that A_21 A_11^{-1} A_12 = A_22, multiply A by

    B = [ I                 O
          -A_21 A_11^{-1}   I ],

where O and I are of appropriate sizes, to obtain

    BA = [ A_11  A_12
           O     A_22 - A_21 A_11^{-1} A_12 ].

The matrix B is nonsingular, and the rank of BA is therefore r = rank(A) [see Theorem 2.4(ii)]. In BA, the submatrix A_11 is of rank r, and the columns headed by A_12 are therefore linear combinations of the columns headed by A_11. By a comment following Example 2.3, this relationship can be expressed as

    [ A_12                            [ A_11
      A_22 - A_21 A_11^{-1} A_12 ] =    O    ] Q    (2.61)
for some matrix Q. By (2.27), the right side of (2.61) becomes

    [ A_11        [ A_11 Q
      O    ] Q =    O      ].

Thus A_22 - A_21 A_11^{-1} A_12 = O, or

    A_22 = A_21 A_11^{-1} A_12.

Corollary 1. Suppose that A is n x p of rank r and that A is partitioned as in Theorem 2.8b, where A_22 is r x r of rank r. Then a generalized inverse of A is given by

    A^- = [ O  O
            O  A_22^{-1} ],

where the three O matrices are of appropriate sizes so that A^- is p x n.

The nonsingular submatrix need not be in the A_11 or A_22 position, as in Theorem 2.8b or its corollary. Theorem 2.8b can be extended to the following algorithm for finding a conditional inverse A^- for any n x p matrix A of rank r (Searle 1982):

1. Find any nonsingular r x r submatrix C. It is not necessary that the elements of C occupy adjacent rows and columns in A.
2. Find C^{-1} and (C^{-1})′.
3. Replace the elements of C by the elements of (C^{-1})′.
4. Replace all other elements in A by zeros.
5. Transpose the resulting matrix.

Theorem 2.8c. Let A be n x p of rank r, let A^- be any generalized inverse of A, and let (A′A)^- be any generalized inverse of A′A. Then

(i) rank(A^- A) = rank(A A^-) = rank(A) = r.
(ii) (A^-)′ is a generalized inverse of A′; that is, (A′)^- = (A^-)′.
(iii) A = A(A′A)^- A′A and A′ = A′A(A′A)^- A′.
(iv) (A′A)^- A′ is a generalized inverse of A; that is, A^- = (A′A)^- A′.
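The defining property (2.58) is easy to check numerically. One convenient generalized inverse is the Moore-Penrose inverse provided by NumPy; a sketch with a hypothetical rank-2 matrix (not the A of (2.59), whose entries were not preserved here):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])          # third row = row 1 + row 2, so rank(A) = 2
Aminus = np.linalg.pinv(A)               # Moore-Penrose inverse: one valid A^-
```

Any other matrix satisfying A A^- A = A would serve equally well as a generalized inverse; the Moore-Penrose choice is simply the one NumPy computes.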
(v) A(A′A)^- A′ is symmetric, has rank r, and is invariant to the choice of (A′A)^-; that is, A(A′A)^- A′ remains the same, no matter what value of (A′A)^- is used.

A generalized inverse of a symmetric matrix is not necessarily symmetric. However, it is also true that a symmetric generalized inverse can always be found for a symmetric matrix.

Generalized Inverses and Systems of Equations

Generalized inverses can be used to find solutions to a system of equations.

Theorem 2.8d. If the system of equations Ax = c is consistent and if A^- is any generalized inverse for A, then x = A^- c is a solution.

PROOF. Since A A^- A = A, we have

    A A^- A x = A x.

Substituting Ax = c on both sides, we obtain

    A A^- c = c.

Writing this in the form A(A^- c) = c, we see that A^- c is a solution to Ax = c.

Different choices of A^- will result in different solutions for Ax = c.

Theorem 2.8e. If the system of equations Ax = c is consistent, then all possible solutions can be obtained in the following two ways:

(i) Use a specific A^- in x = A^- c + (I - A^- A)h, and use all possible values of the arbitrary vector h.
(ii) Use all possible values of A^- in x = A^- c if c ≠ 0.

PROOF. See Searle (1982, p. 238).

A necessary and sufficient condition for the system of equations Ax = c to be consistent can be given in terms of a generalized inverse of A (Graybill 1976).
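Theorems 2.8d and 2.8e(i) can be illustrated in NumPy for a small consistent but rank-deficient system (an arbitrary example, not from the text):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])               # rank 1
c = np.array([2.0, 4.0])                 # consistent: c = 2 * (first column)

Aminus = np.linalg.pinv(A)               # one particular generalized inverse
x0 = Aminus @ c                          # a solution (Theorem 2.8d)

h = np.array([5.0, -1.0])                # an arbitrary vector
x1 = x0 + (np.eye(2) - Aminus @ A) @ h   # another solution (Theorem 2.8e(i))
```

Because A(I - A^-A) = A - AA^-A = O by (2.58), every x of this form satisfies Ax = c; varying h sweeps out the full solution set.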
2.9 DETERMINANTS

We denote the determinant of a square matrix A by |A|. Some of its key properties are given in the following theorem.

Theorem 2.9a

(i) The determinant of a diagonal matrix is the product of its diagonal elements.
(ii) The determinant of a triangular matrix is the product of its diagonal elements.
(iii) If A is singular, |A| = 0. If A is nonsingular, |A| ≠ 0.
(iv) If A is positive definite, |A| > 0.
(v) |A′| = |A|.
(vi) If A is nonsingular, |A^{-1}| = 1/|A|.

Example 2.9a. We illustrate each of the properties in Theorem 2.9a with 2 x 2 matrices.

(i) Diagonal: | 2  0 ; 0  3 | = (2)(3) - (0)(0) = 6.

(ii) Triangular: | 2  1 ; 0  3 | = (2)(3) - (1)(0) = 6.

(iii) Singular: | 1  2 ; 3  6 | = (1)(6) - (3)(2) = 0;
     nonsingular: | 1  2 ; 3  4 | = (1)(4) - (3)(2) = -2.

(iv) Positive definite: | 3  -2 ; -2  4 | = (3)(4) - (-2)(-2) = 8.

(v) Transpose: | 3  2 ; -7  1 | = (3)(1) - (2)(-7) = 17, and | 3  -7 ; 2  1 | = (3)(1) - (-7)(2) = 17.

(vi) Inverse: for A = | 3  2 ; 1  4 |, |A| = 10 and A^{-1} = | 0.4  -0.2 ; -0.1  0.3 |, so |A^{-1}| = (0.4)(0.3) - (-0.2)(-0.1) = 0.1 = 1/|A|.

As a special case of (2.62), suppose that all diagonal elements are equal, say, D = diag(c, c, ..., c) = cI. Then

    |D| = |cI| = Π_{i=1}^n c = c^n.    (2.68)
By extension, if an n x n matrix A is multiplied by a scalar c, the determinant becomes

    |cA| = c^n |A|.    (2.69)

The determinant of certain partitioned matrices is given in the following theorem.

Theorem 2.9b. If the square matrix A is partitioned as

    A = [ A_11  A_12
          A_21  A_22 ],    (2.70)

and if A_11 and A_22 are square and nonsingular (but not necessarily the same size), then

    |A| = |A_11| |A_22 - A_21 A_11^{-1} A_12|    (2.71)
        = |A_22| |A_11 - A_12 A_22^{-1} A_21|.    (2.72)

Note the analogy of (2.71) and (2.72) to the case of the determinant of a 2 x 2 matrix:

    | a_11  a_12
      a_21  a_22 | = a_11 a_22 - a_12 a_21
                   = a_11 ( a_22 - a_21 a_12 / a_11 )
                   = a_22 ( a_11 - a_12 a_21 / a_22 ).

Corollary 1. Suppose

    A = [ A_11  O             A = [ A_11  A_12
          A_21  A_22 ]   or         O     A_22 ],

where A_11 and A_22 are square (but not necessarily the same size). Then in either case

    |A| = |A_11| |A_22|.    (2.73)
Corollary 2. Let

    A = [ A_11  O
          O     A_22 ],

where A_11 and A_22 are square (but not necessarily the same size). Then

    |A| = |A_11| |A_22|.    (2.74)

Corollary 3. If A has the form

    A = [ A_11   a_12
          a_12′  a_22 ],

where A_11 is a nonsingular matrix, a_12 is a vector, and a_22 is a 1 x 1 matrix, then

    |A| = |A_11| (a_22 - a_12′ A_11^{-1} a_12).    (2.75)

Corollary 4. If c is a vector and B is a nonsingular matrix, then

    |B + cc′| = |B| (1 + c′ B^{-1} c).    (2.76)

Theorem 2.9c. If A and B are square and the same size, then the determinant of the product is the product of the determinants:

    |AB| = |A| |B|.    (2.77)

Corollary 1

    |AB| = |BA|.    (2.78)

Corollary 2

    |A^2| = |A|^2.    (2.79)
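The bordered-determinant formula (2.75) and the product rule (2.77) can be verified numerically. A NumPy sketch with arbitrary matrices (not the book's examples):

```python
import numpy as np

A11 = np.array([[2.0, 1.0],
                [1.0, 3.0]])
a12 = np.array([1.0, 2.0])
a22 = 5.0
A = np.block([[A11, a12[:, None]],
              [a12[None, :], np.array([[a22]])]])   # bordered matrix as in (2.75)

lhs = np.linalg.det(A)
rhs = np.linalg.det(A11) * (a22 - a12 @ np.linalg.inv(A11) @ a12)   # eq. (2.75)

B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
det_prod = np.linalg.det(A11 @ B)                    # eq. (2.77): |AB| = |A||B|
```

Here |A11| = 5 and a_12′ A_11^{-1} a_12 = 7/5, so both sides of (2.75) equal 18.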
Example 2.9b. To illustrate Theorem 2.9c, let

    A = [ 1  2        B = [ 3  -2
          1  0 ],           1   2 ].

Then

    AB = [ 5   2
           3  -2 ],    |AB| = -16,

    |A| = -2,    |B| = 8,    |A| |B| = -16.

2.10 ORTHOGONAL VECTORS AND MATRICES

Two n x 1 vectors a and b are said to be orthogonal if

    a′b = a_1 b_1 + a_2 b_2 + ... + a_n b_n = 0.    (2.80)

Note that the term orthogonal applies to two vectors, not to a single vector. Geometrically, two orthogonal vectors are perpendicular to each other. This is illustrated in Figure 2.3 for the vectors x_1 = (4, 2)′ and x_2 = (-1, 2)′. Note that x_1′x_2 = (4)(-1) + (2)(2) = 0.

Figure 2.3 Two orthogonal (perpendicular) vectors.
Figure 2.4 Vectors a and b in 3-space.

The law of cosines applied to the sides of the triangle can be stated in vector form as

    cos θ = [ a′a + b′b - (b - a)′(b - a) ] / [ 2 sqrt( (a′a)(b′b) ) ]
          = [ a′a + b′b - (b′b + a′a - 2a′b) ] / [ 2 sqrt( (a′a)(b′b) ) ]
          = a′b / sqrt( (a′a)(b′b) ).    (2.81)

When θ = 90°, a′b = 0 since cos(90°) = 0. Thus a and b are perpendicular when a′b = 0.

If a′a = 1, the vector a is said to be normalized. A vector b can be normalized by dividing by its length, sqrt(b′b). Thus

    c = b / sqrt(b′b)    (2.82)

is normalized so that c′c = 1.

A set of p x 1 vectors c_1, c_2, ..., c_p that are normalized (c_i′c_i = 1 for all i) and mutually orthogonal (c_i′c_j = 0 for all i ≠ j) is said to be an orthonormal set of vectors. If the p x p matrix C = (c_1, c_2, ..., c_p) has orthonormal columns, C is called an orthogonal matrix. Since the elements of C′C are products of columns of
C [see Theorem 2.2c(i)], an orthogonal matrix C has the property

    C′C = I.    (2.83)

It can be shown that an orthogonal matrix C also satisfies

    CC′ = I.    (2.84)

Thus an orthogonal matrix C has orthonormal rows as well as orthonormal columns. It is also clear from (2.83) and (2.84) that C^{-1} = C′ if C is orthogonal.

Example 2.10. To illustrate an orthogonal matrix, we start with

    A = [ 1   1   1
          1  -2   0
          1   1  -1 ],

whose columns are mutually orthogonal but not orthonormal. To normalize the three columns, we divide by their respective lengths, sqrt(3), sqrt(6), and sqrt(2), to obtain the matrix

    C = [ 1/sqrt(3)   1/sqrt(6)    1/sqrt(2)
          1/sqrt(3)  -2/sqrt(6)    0
          1/sqrt(3)   1/sqrt(6)   -1/sqrt(2) ],

whose columns are orthonormal. Note that the rows of C are also orthonormal, so that C satisfies (2.84) as well as (2.83).

Multiplication of a vector by an orthogonal matrix has the effect of rotating axes; that is, if a point x is transformed to z = Cx, where C is orthogonal, then the distance from the origin to z is the same as the distance to x:

    z′z = (Cx)′(Cx) = x′C′Cx = x′Ix = x′x.    (2.85)

Hence, the transformation from x to z is a rotation.
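The normalization step of Example 2.10 and the length-preserving property (2.85) can be replayed in NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, -2.0, 0.0],
              [1.0, 1.0, -1.0]])         # mutually orthogonal columns
C = A / np.sqrt((A ** 2).sum(axis=0))    # divide each column by its length

x = np.array([1.0, 2.0, 3.0])            # an arbitrary point
z = C @ x                                # a rotation: the length of x is preserved
```

Both C′C = I and CC′ = I hold, confirming that the rows of C are orthonormal as well as the columns.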
Theorem 2.10. If the p x p matrix C is orthogonal and if A is any p x p matrix, then

(i) |C| = +1 or -1.
(ii) |C′AC| = |A|.
(iii) -1 ≤ c_ij ≤ 1, where c_ij is any element of C.

2.11 TRACE

The trace of an n x n matrix A = (a_ij), written tr(A), is the sum of its diagonal elements, tr(A) = Σ_{i=1}^n a_ii.

Theorem 2.11

(i) If A and B are n x n, then

    tr(A ± B) = tr(A) ± tr(B).    (2.86)

(ii) If A is n x p and B is p x n, then

    tr(AB) = tr(BA).    (2.87)

Note that in (2.87) n can be less than, equal to, or greater than p.

(iii) If A is n x p, then

    tr(A′A) = Σ_{i=1}^p a_i′ a_i,    (2.88)

where a_i is the ith column of A.

(iv) If A is n x p, then

    tr(AA′) = Σ_{i=1}^n a_i′ a_i,    (2.89)

where a_i′ is the ith row of A.
(v) If A = (a_ij) is an n x p matrix with representative element a_ij, then

    tr(A′A) = tr(AA′) = Σ_{i=1}^n Σ_{j=1}^p a_ij^2.    (2.90)

(vi) If A is any n x n matrix and P is any n x n nonsingular matrix, then

    tr(P^{-1}AP) = tr(A).    (2.91)

(vii) If A is any n x n matrix and C is any n x n orthogonal matrix, then

    tr(C′AC) = tr(A).    (2.92)

(viii) If A is n x p of rank r and A^- is a generalized inverse of A, then

    tr(A^- A) = tr(A A^-) = r.    (2.93)

PROOF. We prove parts (ii), (iii), and (vi).

(ii) By the definition of matrix multiplication, the ith diagonal element of E = AB is e_ii = Σ_k a_ik b_ki. Then

    tr(AB) = tr(E) = Σ_i e_ii = Σ_i Σ_k a_ik b_ki.

Similarly, the ith diagonal element of F = BA is f_ii = Σ_k b_ik a_ki, and

    tr(BA) = tr(F) = Σ_i f_ii = Σ_i Σ_k b_ik a_ki = Σ_k Σ_i a_ki b_ik = tr(E) = tr(AB).

(iii) By Theorem 2.2c(i), A′A is obtained as products of columns of A. If a_i is the ith column of A, then the ith diagonal element of A′A is a_i′a_i.

(vi) By (2.87), we obtain

    tr(P^{-1}AP) = tr(APP^{-1}) = tr(A).

Example 2.11. We illustrate parts (ii) and (viii) of Theorem 2.11.
(ii) Let A be 3 x 2 and B be 2 x 3, so that AB is 3 x 3 and BA is 2 x 2. For one such pair, the diagonal elements of AB are 9, -8, and 34, and those of BA are 3 and 32, so that

    tr(AB) = 9 - 8 + 34 = 35 = 3 + 32 = tr(BA).

(viii) Using A in (2.59) and A_1^- in (2.60), we obtain

    tr(A_1^- A) = 1 + 1 + 0 = 2 = rank(A),
    tr(A A_1^-) = 1 + 1 + 0 = 2 = rank(A).

2.12 EIGENVALUES AND EIGENVECTORS

2.12.1 Definition

For every square matrix A, a scalar λ and a nonzero vector x can be found such that

    Ax = λx,    (2.94)

Figure 2.5 An eigenvector x is transformed to λx.
where λ is an eigenvalue of A and x is an eigenvector. (These terms are sometimes referred to as characteristic root and characteristic vector, respectively.) Note that in (2.94), the vector x is transformed by A onto a multiple of itself, so that the point Ax is on the line passing through x and the origin. This is illustrated in Figure 2.5.

To find λ and x for a matrix A, we write (2.94) as

    (A - λI)x = 0.    (2.95)

By (2.37), (A - λI)x is a linear combination of the columns of A - λI, and by (2.40) and (2.95), these columns are linearly dependent. Thus the square matrix A - λI is singular, and by Theorem 2.9a(iii), we can solve for λ using

    |A - λI| = 0,    (2.96)

which is known as the characteristic equation.

If A is n x n, the characteristic equation (2.96) will have n roots; that is, A will have n eigenvalues λ_1, λ_2, ..., λ_n. The λ's will not necessarily all be distinct, or all nonzero, or even all real. (However, the eigenvalues of a symmetric matrix are real; see Theorem 2.12c.) After finding λ_1, λ_2, ..., λ_n using (2.96), the accompanying eigenvectors x_1, x_2, ..., x_n can be found using (2.95).

If an eigenvalue is 0, the corresponding eigenvector is not 0. To see this, note that if λ = 0, then (A - λI)x = 0 becomes Ax = 0, which has solutions x ≠ 0 because A is singular, and the columns are therefore linearly dependent. [The matrix A is singular because it has a zero eigenvalue; see (2.107).]

If we multiply both sides of (2.95) by a scalar k, we obtain

    k(A - λI)x = k0 = 0,

which can be rewritten as

    (A - λI)kx = 0    [by (2.12)].

Thus if x is an eigenvector of A, kx is also an eigenvector. Eigenvectors are therefore unique only up to multiplication by a scalar. (There are many solution vectors x because A - λI is singular; see Section 2.8.) Hence, the length of x is arbitrary, but its direction from the origin is unique; that is, the relative values of (ratios of) the elements of x = (x_1, x_2, ..., x_n)′ are unique. Typically, an eigenvector x is scaled to normalized form as in (2.82), so that x′x = 1.
Example 2.12. To illustrate eigenvalues and eigenvectors, consider the matrix

    A = [  1  2
          -1  4 ].

By (2.96), the characteristic equation is

    |A - λI| = | 1-λ   2
                 -1   4-λ | = (1 - λ)(4 - λ) + 2 = 0,

which becomes

    λ^2 - 5λ + 6 = (λ - 3)(λ - 2) = 0,

with roots λ_1 = 3 and λ_2 = 2. To find the eigenvector x_1 corresponding to λ_1 = 3, we use (2.95),

    (A - λ_1 I)x = 0,

    [ 1-3   2      [ x_1       [ 0
      -1   4-3 ]     x_2 ]  =    0 ],

which can be written as

    -2x_1 + 2x_2 = 0,
     -x_1 +  x_2 = 0.

The second equation is a multiple of the first, and either equation yields x_1 = x_2. The solution vector can be written with x_1 = x_2 = c as an arbitrary constant:

    x_1 = [ x_1      [ c          [ 1
            x_2 ]  =   c ]  =  c    1 ].

If c is set equal to 1/sqrt(2) to normalize the eigenvector, we obtain

    x_1 = [ 1/sqrt(2)
            1/sqrt(2) ].

Similarly, corresponding to λ_2 = 2, we obtain

    x_2 = [ 2/sqrt(5)
            1/sqrt(5) ].
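The computation of Example 2.12 can be repeated numerically. A NumPy sketch (note that `np.linalg.eig` returns unit-length eigenvectors, possibly with the opposite sign from those above):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-1.0, 4.0]])
eigvals, eigvecs = np.linalg.eig(A)      # columns of eigvecs are the eigenvectors
```

The returned order of the eigenvalues is not guaranteed, so comparisons should be made after sorting.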
2.12.2 Functions of a Matrix

If λ is an eigenvalue of the square matrix A with corresponding eigenvector x, then for certain functions g(A), an eigenvalue is given by g(λ) and x is the corresponding eigenvector of g(A) as well as of A. We illustrate some of these cases:

1. If λ is an eigenvalue of A, then cλ is an eigenvalue of cA, where c is an arbitrary constant such that c ≠ 0. This is easily demonstrated by multiplying the defining relationship Ax = λx by c:

    cAx = cλx.    (2.97)

Note that x is an eigenvector of A corresponding to λ, and x is also an eigenvector of cA corresponding to cλ.

2. If λ is an eigenvalue of A and x is the corresponding eigenvector of A, then cλ + k is an eigenvalue of the matrix cA + kI and x is an eigenvector of cA + kI, where c and k are scalars. To show this, we add kx to (2.97):

    cAx + kx = cλx + kx,
    (cA + kI)x = (cλ + k)x.    (2.98)

Thus cλ + k is an eigenvalue of cA + kI and x is the corresponding eigenvector of cA + kI. Note that (2.98) does not extend to A + B for arbitrary n x n matrices A and B; that is, A + B does not have λ_A + λ_B for an eigenvalue, where λ_A is an eigenvalue of A and λ_B is an eigenvalue of B.

3. If λ is an eigenvalue of A, then λ^2 is an eigenvalue of A^2. This can be demonstrated by multiplying the defining relationship Ax = λx by A:

    A(Ax) = A(λx),
    A^2 x = λAx = λ(λx) = λ^2 x.    (2.99)

Thus λ^2 is an eigenvalue of A^2, and x is the corresponding eigenvector of A^2. This can be extended to any power of A:

    A^k x = λ^k x;    (2.100)

that is, λ^k is an eigenvalue of A^k, and x is the corresponding eigenvector.
4. If λ is an eigenvalue of the nonsingular matrix A, then 1/λ is an eigenvalue of A^{-1}. To demonstrate this, we multiply Ax = λx by A^{-1} to obtain

    A^{-1}Ax = λA^{-1}x,
    x = λA^{-1}x,
    A^{-1}x = (1/λ)x.    (2.101)

Thus 1/λ is an eigenvalue of A^{-1}, and x is an eigenvector of both A and A^{-1}.

5. The results in (2.97) and (2.100) can be used to obtain eigenvalues and eigenvectors of a polynomial in A. For example, if λ is an eigenvalue of A, then

    (A^3 + 4A^2 - 3A + 5I)x = A^3x + 4A^2x - 3Ax + 5x
                            = λ^3x + 4λ^2x - 3λx + 5x
                            = (λ^3 + 4λ^2 - 3λ + 5)x.

Thus λ^3 + 4λ^2 - 3λ + 5 is an eigenvalue of A^3 + 4A^2 - 3A + 5I, and x is the corresponding eigenvector.

For certain matrices, property 5 can be extended to an infinite series. For example, if λ is an eigenvalue of A, then, by (2.98), 1 - λ is an eigenvalue of I - A. If I - A is nonsingular, then, by (2.101), 1/(1 - λ) is an eigenvalue of (I - A)^{-1}. If -1 < λ < 1, then 1/(1 - λ) can be represented by the series

    1/(1 - λ) = 1 + λ + λ^2 + λ^3 + ... .

Correspondingly, if all eigenvalues of A satisfy -1 < λ < 1, then

    (I - A)^{-1} = I + A + A^2 + A^3 + ... .    (2.102)

2.12.3 Products

It was noted in a comment following (2.98) that the eigenvalues of A + B are not of the form λ_A + λ_B, where λ_A is an eigenvalue of A and λ_B is an eigenvalue of B. Similarly, the eigenvalues of AB are not products of the form λ_A λ_B. However, the eigenvalues of AB are the same as those of BA.

Theorem 2.12a. If A and B are n x n or if A is n x p and B is p x n, then the (nonzero) eigenvalues of AB are the same as those of BA. If x is an eigenvector of AB, then Bx is an eigenvector of BA.
33 2.12 EIGENVALUES AND EIGENVECTORS 51

Two additional results involving eigenvalues of products are given in the following theorem.

Theorem 2.12b. Let A be any n × n matrix.

(i) If P is any n × n nonsingular matrix, then A and P⁻¹AP have the same eigenvalues.
(ii) If C is any n × n orthogonal matrix, then A and C'AC have the same eigenvalues.

Symmetric Matrices

Theorem 2.12c. Let A be an n × n symmetric matrix.

(i) The eigenvalues λ1, λ2, ..., λn of A are real.
(ii) The eigenvectors x1, x2, ..., xk of A corresponding to distinct eigenvalues λ1, λ2, ..., λk are mutually orthogonal; the eigenvectors x_{k+1}, x_{k+2}, ..., xn corresponding to the nondistinct eigenvalues can be chosen to be mutually orthogonal to each other and to the other eigenvectors; that is, xi'xj = 0 for i ≠ j.

Theorem 2.12d. If A is an n × n symmetric matrix with eigenvalues λ1, λ2, ..., λn and normalized eigenvectors x1, x2, ..., xn, then A can be expressed as

       A = CDC'   (2.103)
         = Σ_{i=1}^{n} λi xi xi',   (2.104)

where D = diag(λ1, λ2, ..., λn) and C is the orthogonal matrix C = (x1, x2, ..., xn). The result in either (2.103) or (2.104) is often called the spectral decomposition of A.

PROOF. By Theorem 2.12c(ii), C is orthogonal. Then by (2.84), I = CC', and multiplication by A gives

       A = ACC'.
34 52 MATRIX ALGEBRA

We now substitute C = (x1, x2, ..., xn) to obtain

       A = A(x1, x2, ..., xn)C'
         = (Ax1, Ax2, ..., Axn)C'   [by (2.28)]
         = (λ1x1, λ2x2, ..., λnxn)C'   [by (2.94)]
         = CDC',   (2.105)

since multiplication on the right by D = diag(λ1, λ2, ..., λn) multiplies the columns of C by the elements of D [see (2.30)]. Now writing C' in the form

       C' = (x1, x2, ..., xn)' = (x1'; x2'; ...; xn'),

(2.105) becomes

       A = (λ1x1, λ2x2, ..., λnxn)(x1'; x2'; ...; xn')   [by (2.39)]
         = λ1x1x1' + λ2x2x2' + ... + λnxnxn'.

Corollary 1. If A is symmetric and C and D are defined as in Theorem 2.12d, then C diagonalizes A:

       C'AC = D.   (2.106)

Theorem 2.12e. If A is any n × n matrix with eigenvalues λ1, λ2, ..., λn, then

(i)    |A| = Π_{i=1}^{n} λi,   (2.107)

(ii)   tr(A) = Σ_{i=1}^{n} λi.   (2.108)
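The spectral decomposition and the determinant and trace results (2.103)-(2.108) can be illustrated with a short NumPy sketch; the symmetric matrix A below is a hypothetical example, not one from the text:

```python
import numpy as np

# Hypothetical symmetric example.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, C = np.linalg.eigh(A)       # normalized eigenvectors in the columns of C
D = np.diag(lam)

assert np.allclose(C @ C.T, np.eye(3))                                    # C is orthogonal
assert np.allclose(C @ D @ C.T, A)                                        # (2.103)
assert np.allclose(sum(l * np.outer(x, x) for l, x in zip(lam, C.T)), A)  # (2.104)
assert np.allclose(C.T @ A @ C, D)                                        # (2.106)
assert np.isclose(np.prod(lam), np.linalg.det(A))                         # (2.107)
assert np.isclose(np.sum(lam), np.trace(A))                               # (2.108)
```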
35 2.12 EIGENVALUES AND EIGENVECTORS 53

Positive Definite and Semidefinite Matrices

Theorem 2.12f. Let A be n × n with eigenvalues λ1, λ2, ..., λn.

(i) If A is positive definite, then λi > 0 for i = 1, 2, ..., n.
(ii) If A is positive semidefinite, then λi ≥ 0 for i = 1, 2, ..., n. The number of eigenvalues λi for which λi > 0 is the rank of A.

PROOF. (i) For any λi, we have Axi = λixi. Multiplying by xi', we obtain

       xi'Axi = λi xi'xi,   λi = xi'Axi / xi'xi > 0.

In the second expression, xi'Axi is positive because A is positive definite, and xi'xi is positive because xi ≠ 0.

If a matrix A is positive definite, we can find a square root matrix A^{1/2} as follows. Since the eigenvalues of A are positive, we can substitute the square roots √λi for λi in the spectral decomposition of A in (2.103) to obtain

       A^{1/2} = CD^{1/2}C',   (2.109)

where D^{1/2} = diag(√λ1, √λ2, ..., √λn). The matrix A^{1/2} is symmetric and has the property

       A^{1/2}A^{1/2} = (A^{1/2})² = A.   (2.110)
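A quick NumPy sketch of the square root matrix (2.109)-(2.110), using an arbitrary positive definite example rather than anything from the text:

```python
import numpy as np

# Hypothetical positive definite example.
A = np.array([[5.0, 2.0],
              [2.0, 3.0]])
lam, C = np.linalg.eigh(A)
assert np.all(lam > 0)                       # Theorem 2.12f(i)

A_half = C @ np.diag(np.sqrt(lam)) @ C.T     # (2.109)
assert np.allclose(A_half, A_half.T)         # A^{1/2} is symmetric
assert np.allclose(A_half @ A_half, A)       # (2.110)
```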
36 54 MATRIX ALGEBRA

2.13 IDEMPOTENT MATRICES

A square matrix A is said to be idempotent if A² = A. Most idempotent matrices in this book are symmetric. Many of the sums of squares in regression (Chapters 6-11) and analysis of variance (Chapters 12-15) can be expressed as quadratic forms y'Ay. The idempotence of A or of a product involving A will be used to establish that y'Ay (or a multiple of y'Ay) has a chi-square distribution. An example of an idempotent matrix is the identity matrix I.

Theorem 2.13a. The only nonsingular idempotent matrix is the identity matrix I.

PROOF. If A is idempotent and nonsingular, then A² = A and the inverse A⁻¹ exists. If we multiply A² = A by A⁻¹, we obtain

       A⁻¹A² = A⁻¹A,   A = I.

Many of the matrices of quadratic forms we will encounter in later chapters are singular idempotent matrices. We now give some properties of such matrices.

Theorem 2.13b. If A is singular, symmetric, and idempotent, then A is positive semidefinite.

PROOF. Since A = A' and A = A², we have

       A = A² = AA = A'A,

which is positive semidefinite by Theorem 2.6d(ii).

If a is a real number such that a² = a, then a is either 0 or 1. The analogous property for matrices is that if A² = A, then the eigenvalues of A are 0s and 1s.

Theorem 2.13c. If A is an n × n symmetric idempotent matrix of rank r, then A has r eigenvalues equal to 1 and n − r eigenvalues equal to 0.

PROOF. By (2.99), if Ax = λx, then A²x = λ²x. Since A² = A, we have A²x = Ax = λx. Equating the right sides of A²x = λ²x and A²x = λx, we have

       λx = λ²x   or   (λ − λ²)x = 0.

But x ≠ 0, and therefore λ − λ² = 0, from which λ is either 0 or 1. By Theorem 2.13b, A is positive semidefinite, and therefore by Theorem 2.12f(ii), the number of nonzero eigenvalues is equal to rank(A). Thus r eigenvalues of A are equal to 1 and the remaining n − r eigenvalues are equal to 0.
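As an illustration of Theorem 2.13c, the n × n centering matrix I − (1/n)J (a standard example, introduced here for illustration rather than taken from this passage; J is the matrix of 1s) is symmetric and idempotent with n − 1 eigenvalues equal to 1:

```python
import numpy as np

n = 5
A = np.eye(n) - np.ones((n, n)) / n
assert np.allclose(A, A.T)                   # symmetric
assert np.allclose(A @ A, A)                 # idempotent

# Theorem 2.13c: r = n - 1 eigenvalues equal 1, and one equals 0.
lam = np.sort(np.linalg.eigvalsh(A))
assert np.allclose(lam, [0.0] + [1.0] * (n - 1))
assert np.linalg.matrix_rank(A) == n - 1
```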
37 2.13 IDEMPOTENT MATRICES 55

Theorem 2.13d. If A is symmetric and idempotent of rank r, then rank(A) = tr(A) = r.

PROOF. By Theorem 2.12e(ii), tr(A) = Σ_{i=1}^{n} λi, and by Theorem 2.13c, Σ_{i=1}^{n} λi = r.

Theorem 2.13e. If A is an n × n idempotent matrix, P is an n × n nonsingular matrix, and C is an n × n orthogonal matrix, then

(i) I − A is idempotent.
(ii) A(I − A) = O and (I − A)A = O.
(iii) P⁻¹AP is idempotent.
(iv) C'AC is idempotent. (If A is symmetric, C'AC is a symmetric idempotent matrix.)

Theorem 2.13f. Let A be n × p of rank r, let A⁻ be any generalized inverse of A, and let (A'A)⁻ be any generalized inverse of A'A. Then A⁻A, AA⁻, and A(A'A)⁻A' are all idempotent.

Theorem 2.13g. Suppose that the n × n symmetric matrix A can be written as A = Σ_{i=1}^{k} Ai for some k, where each Ai is an n × n symmetric matrix. Then any two of the following conditions imply the third condition.

(i) A is idempotent.
(ii) Each of A1, A2, ..., Ak is idempotent.
(iii) AiAj = O for i ≠ j.

Theorem 2.13h. If I = Σ_{i=1}^{k} Ai, where each n × n matrix Ai is symmetric of rank ri, and if n = Σ_{i=1}^{k} ri, then both of the following are true:

(i) Each of A1, A2, ..., Ak is idempotent.
(ii) AiAj = O for i ≠ j.
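Theorems 2.13d and 2.13e can be illustrated with the projection ("hat") matrix of regression, H = X(X'X)⁻¹X'; this is a hedged NumPy sketch, and the full-rank X below is a hypothetical design matrix, not one from the text:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [1.0, 5.0],
              [1.0, 7.0],
              [1.0, 8.0]])
H = X @ np.linalg.inv(X.T @ X) @ X.T

assert np.allclose(H @ H, H)                    # idempotent
assert np.allclose(H, H.T)                      # symmetric
# Theorem 2.13d: rank(H) = tr(H) = number of columns of X.
assert np.isclose(np.trace(H), 2.0)
assert np.linalg.matrix_rank(H) == 2
# Theorem 2.13e(i), (ii): I - H is idempotent and H(I - H) = O.
M = np.eye(4) - H
assert np.allclose(M @ M, M)
assert np.allclose(H @ M, np.zeros((4, 4)))
```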
38 56 MATRIX ALGEBRA

2.14 VECTOR AND MATRIX CALCULUS

2.14.1 Derivatives of Functions of Vectors and Matrices

Let u = f(x) be a function of the variables x1, x2, ..., xp in x = (x1, x2, ..., xp)', and let ∂u/∂x1, ∂u/∂x2, ..., ∂u/∂xp be the partial derivatives. We define ∂u/∂x as

       ∂u/∂x = (∂u/∂x1, ∂u/∂x2, ..., ∂u/∂xp)'.   (2.111)

Two specific functions of interest are u = a'x and u = x'Ax. Their derivatives with respect to x are given in the following two theorems.

Theorem 2.14a. Let u = a'x = x'a, where a = (a1, a2, ..., ap)' is a vector of constants. Then

       ∂u/∂x = ∂(a'x)/∂x = a.   (2.112)

PROOF.

       ∂u/∂xi = ∂(a1x1 + a2x2 + ... + apxp)/∂xi = ai.

Thus by (2.111) we obtain

       ∂u/∂x = (a1, a2, ..., ap)' = a.

Theorem 2.14b. Let u = x'Ax, where A is a symmetric matrix of constants. Then

       ∂u/∂x = ∂(x'Ax)/∂x = 2Ax.   (2.113)
39 2.14 VECTOR AND MATRIX CALCULUS 57

PROOF. We demonstrate that (2.113) holds for the special case in which A is 3 × 3. The illustration could be generalized to a symmetric A of any size. Let

       x = (x1, x2, x3)'   and   A = ( a11 a12 a13 ; a12 a22 a23 ; a13 a23 a33 ),

and denote the rows of A by a1', a2', a3'. Then

       x'Ax = x1²a11 + 2x1x2a12 + 2x1x3a13 + x2²a22 + 2x2x3a23 + x3²a33,

and we obtain

       ∂(x'Ax)/∂x1 = 2x1a11 + 2x2a12 + 2x3a13 = 2a1'x,
       ∂(x'Ax)/∂x2 = 2x1a12 + 2x2a22 + 2x3a23 = 2a2'x,
       ∂(x'Ax)/∂x3 = 2x1a13 + 2x2a23 + 2x3a33 = 2a3'x.

Thus by (2.111) and (2.27), we obtain

       ∂(x'Ax)/∂x = 2(a1'x, a2'x, a3'x)' = 2Ax.

Now let u = f(X) be a function of the variables x11, x12, ..., xpp in the p × p matrix X, and let ∂u/∂x11, ∂u/∂x12, ..., ∂u/∂xpp be the partial derivatives. Similarly to (2.111), we define

       ∂u/∂X = ( ∂u/∂x11 ... ∂u/∂x1p ; ... ; ∂u/∂xp1 ... ∂u/∂xpp ).   (2.114)

Two functions of interest of this type are u = tr(XA) and u = ln|X| for a positive definite matrix X.

Theorem 2.14c. Let u = tr(XA), where X is a p × p positive definite matrix and A is a p × p matrix of constants. Then

       ∂u/∂X = ∂[tr(XA)]/∂X = A + A' − diag(A).   (2.115)
40 58 MATRIX ALGEBRA

PROOF. Note that tr(XA) = Σ_{i=1}^{p} Σ_{j=1}^{p} xij aji [see the proof of Theorem 2.11(ii)]. Since xij = xji, we have ∂[tr(XA)]/∂xij = aji + aij if i ≠ j, and ∂[tr(XA)]/∂xii = aii. The result follows.

Theorem 2.14d. Let u = ln|X|, where X is a p × p positive definite matrix. Then

       ∂u/∂X = ∂ ln|X|/∂X = 2X⁻¹ − diag(X⁻¹).   (2.116)

PROOF. See Harville (1997, p. 36). See Problem 2.83 for a demonstration that this theorem holds for 2 × 2 matrices.

2.14.2 Derivatives Involving Inverse Matrices and Determinants

Let A be an n × n nonsingular matrix with elements aij that are functions of a scalar x. We define ∂A/∂x as the n × n matrix with ijth element ∂aij/∂x. The related derivative ∂A⁻¹/∂x is often of interest. If A is positive definite, the derivative (∂/∂x) log|A| is also often of interest.

Theorem 2.14e. Let A be nonsingular of order n with elements that are functions of x. Then

       ∂A⁻¹/∂x = −A⁻¹ (∂A/∂x) A⁻¹.   (2.117)

PROOF. Because A is nonsingular, we have A⁻¹A = I. Thus

       ∂(A⁻¹A)/∂x = (∂A⁻¹/∂x)A + A⁻¹(∂A/∂x) = O.

Hence

       (∂A⁻¹/∂x)A = −A⁻¹(∂A/∂x),   ∂A⁻¹/∂x = −A⁻¹(∂A/∂x)A⁻¹.

Theorem 2.14f. Let A be an n × n positive definite matrix whose elements are functions of a scalar x. Then

       ∂ log|A|/∂x = tr( A⁻¹ ∂A/∂x ).
41 2.14 VECTOR AND MATRIX CALCULUS 59

PROOF. Since A is positive definite, its spectral decomposition (Theorem 2.12d) can be written as A = CDC', where C is an orthogonal matrix and D is a diagonal matrix of the positive eigenvalues λi of A. Then A⁻¹ = CD⁻¹C' and

       ∂A/∂x = (∂C/∂x)DC' + C(∂D/∂x)C' + CD(∂C'/∂x).

Using Theorem 2.12e(i) and Theorem 2.11(i) and (ii), we have

       ∂ log|A|/∂x = ∂ log Π_{i=1}^{n} λi /∂x = Σ_{i=1}^{n} ∂ log λi /∂x
                   = Σ_{i=1}^{n} (1/λi)(∂λi/∂x) = tr[D⁻¹(∂D/∂x)]

and

       tr[A⁻¹(∂A/∂x)] = tr{CD⁻¹C'[(∂C/∂x)DC' + C(∂D/∂x)C' + CD(∂C'/∂x)]}
                      = tr[D⁻¹(∂D/∂x)] + tr[C'(∂C/∂x) + (∂C'/∂x)C].

Since C is orthogonal, C'C = I, which implies that

       ∂(C'C)/∂x = (∂C'/∂x)C + C'(∂C/∂x) = O.

Thus tr[A⁻¹(∂A/∂x)] = tr[D⁻¹(∂D/∂x)], and the result follows.
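The derivative results of this section lend themselves to finite-difference spot checks. This NumPy sketch verifies (2.112), (2.113), (2.117), and Theorem 2.14f; all matrices, vectors, and evaluation points are arbitrary hypothetical choices, not values from the text:

```python
import numpy as np

eps = 1e-6

# (2.112): the gradient of u = a'x is a.
a = np.array([1.0, -2.0, 3.0])
x0 = np.array([0.5, 0.5, 0.5])
g = np.array([(a @ (x0 + eps * e) - a @ (x0 - eps * e)) / (2 * eps)
              for e in np.eye(3)])
assert np.allclose(g, a)

# (2.113): for symmetric A, the gradient of u = x'Ax is 2Ax.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])
q = lambda x: x @ A @ x
g = np.array([(q(x0 + eps * e) - q(x0 - eps * e)) / (2 * eps)
              for e in np.eye(3)])
assert np.allclose(g, 2 * A @ x0, atol=1e-6)

# (2.117) and Theorem 2.14f for the family A(t) = A0 + t*B, so dA/dt = B.
A0 = np.array([[4.0, 1.0],
               [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])
Afun = lambda t: A0 + t * B
t0 = 0.5

num_inv = (np.linalg.inv(Afun(t0 + eps)) - np.linalg.inv(Afun(t0 - eps))) / (2 * eps)
ana_inv = -np.linalg.inv(Afun(t0)) @ B @ np.linalg.inv(Afun(t0))
assert np.allclose(num_inv, ana_inv, atol=1e-6)                      # (2.117)

num_logdet = (np.log(np.linalg.det(Afun(t0 + eps))) -
              np.log(np.linalg.det(Afun(t0 - eps)))) / (2 * eps)
ana_logdet = np.trace(np.linalg.inv(Afun(t0)) @ B)
assert np.isclose(num_logdet, ana_logdet, atol=1e-6)                 # Theorem 2.14f
```

Central differences are exact for linear and quadratic functions up to rounding, which is why tight tolerances suffice here.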
42 60 MATRIX ALGEBRA

2.14.3 Maximization or Minimization of a Function of a Vector

Consider a function u = f(x) of the p variables in x. In many cases we can find a maximum or minimum of u by solving the system of p equations

       ∂u/∂x = 0.

Occasionally the situation requires the maximization or minimization of the function u subject to q constraints on x. We denote the constraints as h1(x) = 0, h2(x) = 0, ..., hq(x) = 0 or, more succinctly, as h(x) = 0. Maximization or minimization of u subject to h(x) = 0 can often be carried out by the method of Lagrange multipliers. We denote a vector of q unknown constants (the Lagrange multipliers) by λ and let y = (x', λ')'. We then let

       v = u + λ'h(x).

The maximum or minimum of u subject to h(x) = 0 is obtained by solving the equations

       ∂v/∂y = 0

or, equivalently,

       ∂u/∂x + Σ_{j=1}^{q} λj (∂hj/∂x) = 0   and   h(x) = 0.

PROBLEMS

2.1 Prove Theorem 2.2a.

2.2 Let A = ( ).

(a) Find A'.
(b) Verify that (A')' = A, thus illustrating Theorem 2.1.
(c) Find A'A and AA'.

2.3 Let A = ( 3 ) and B = ( 3 2 ).
43 PROBLEMS 61

(a) Find AB and BA.
(b) Find |A|, |B|, and |AB|, and verify that Theorem 2.9c holds in this case.
(c) Find |BA| and compare it to |AB|.
(d) Find (AB)' and compare it to B'A'.
(e) Find tr(AB) and compare it to tr(BA).
(f) Find the eigenvalues of AB and of BA, thus illustrating Theorem 2.12a.

2.4 Let A = ( 3 4 ) and B = ( ).

(a) Find A + B and A − B.
(b) Find A' and B'.
(c) Find (A + B)' and A' + B', thus illustrating Theorem 2.2a(ii).

2.5 Verify the distributive law in (2.5), A(B + C) = AB + AC.

2.6 Let A = ( ), B = ( 3 7 ), C = ( ).

(a) Find AB and BA.
(b) Find B + C, AC, and A(B + C). Compare A(B + C) with AB + AC, thus illustrating (2.5).
(c) Compare (AB)' with B'A', thus illustrating Theorem 2.2b.
(d) Compare tr(AB) with tr(BA) and confirm that (2.87) holds in this case.
(e) Let a1' and a2' be the two rows of A. Find ( a1'B ; a2'B ) and compare with AB in part (a), thus illustrating (2.27).
(f) Let b1 and b2 be the two columns of B. Find (Ab1, Ab2) and compare with AB in part (a), thus illustrating (2.28).

2.7 Let A = ( 6 4 2 ) and B = ( ).

(a) Show that AB = O.
(b) Find a vector x such that Ax = 0.
(c) What is the rank of A and the rank of B?

2.8 If j is a vector of 1s, as defined in (2.6), show that

(a) j'a = a'j = Σi ai, as in (2.24).
(b) Aj is a column vector whose elements are the row sums of A, as in (2.25).
(c) j'A is a row vector whose elements are the column sums of A, as in (2.25).
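The identities of Problem 2.8, that is (2.24) and (2.25), can be checked numerically; a and A in this NumPy sketch are arbitrary hypothetical values:

```python
import numpy as np

a = np.array([3.0, 1.0, 4.0])          # hypothetical n x 1 vector
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])             # hypothetical n x p matrix
j = np.ones(3)                         # n x 1 vector of 1s

assert np.isclose(j @ a, a.sum())            # (2.24): j'a = sum of the elements of a
assert np.allclose(A @ j[:2], A.sum(axis=1)) if False else True
assert np.allclose(A.T @ j, A.sum(axis=0))   # (2.25): j'A gives the column sums of A
assert np.allclose(A @ np.ones(2), A.sum(axis=1))  # (2.25): Aj gives the row sums of A
```

Note that the j multiplying A on the right is p × 1 while the j multiplying on the left is n × 1, exactly as remarked after (2.25).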
44 62 MATRIX ALGEBRA

2.9 Prove Corollary 1 to Theorem 2.2b; that is, assuming that A, B, and C are conformal, show that (ABC)' = C'B'A'.

2.10 Prove Theorem 2.2c.

2.11 Use the matrix A in Problem 2.6 and let D1 = diag( 3, 5 ) and D2 = diag( 6 ). Find D1A and AD2, thus illustrating (2.29) and (2.30).

2.12 Let A = ( 1 2 3 ; 4 5 6 ; ) and D = diag(a, b, c). Find DA, AD, and DAD.

2.13 For y = (y1, y2, y3)' and the symmetric matrix

       A = ( a11 a12 a13 ; a12 a22 a23 ; a13 a23 a33 ),

express y'Ay in the form given in (2.33).

2.14 Let A = ( 2 ), B = ( 7 ), C = ( 4 ), x = ( ), y = ( 2 ), z = ( 2 ).

Find the following:

(a) Bx
(b) y'B
(c) x'Ax
(d) x'Cz
(e) √(x'x)
(f) x'y
(g) xx'
(h) xy'
(i) B'B
(j) yz'
(k) zy'
(l) y'y
(m) C'C

2.15 Use x, y, A, and B as defined in Problem 2.14.

(a) Find x + y and x − y.
45 PROBLEMS 63

(b) Find tr(A), tr(B), A + B, and tr(A + B).
(c) Find AB and BA.
(d) Find tr(AB) and tr(BA).
(e) Find |AB| and |BA|.
(f) Find (AB)' and B'A'.

2.16 Using B and x in Problem 2.14, find Bx as a linear combination of the columns of B, as in (2.37), and compare with Bx as found in Problem 2.14(a).

2.17 Let A = ( 2 5 ), B = ( 6 2 ), and let I be the identity matrix.

(a) Show that (AB)' = B'A', as in (2.26).
(b) Show that AI = A and that IB = B.
(c) Find |A|.
(d) Find A⁻¹.
(e) Find (A⁻¹)⁻¹ and compare with A, thus verifying (2.46).
(f) Find (A')⁻¹ and verify that it is equal to (A⁻¹)', as in Theorem 2.5a.

2.18 Let A and B be defined and partitioned as follows:

       A = ( 2 2 ; 3 2 ),   B = ( 2 2 ).

(a) Find AB as in (2.35), using the indicated partitioning.
(b) Check by finding AB in the usual way, ignoring the partitioning.

2.19 Partition the matrices A and B in Problem 2.18 as follows:

       A = (a1, A2),   B = ( b1' ; B2 ).

Repeat parts (a) and (b) of Problem 2.18. Note that in this case, (2.35) becomes AB = a1b1' + A2B2.
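The partitioned-product identity AB = a1b1' + A2B2 noted in Problem 2.19 can be spot-checked with random conformable blocks; this NumPy sketch uses arbitrary hypothetical matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))

a1, A2 = A[:, :1], A[:, 1:]        # a1 is the first column of A; A2 the rest
b1t, B2 = B[:1, :], B[1:, :]       # b1' is the first row of B; B2 the rest

assert np.allclose(a1 @ b1t + A2 @ B2, A @ B)
```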
46 64 MATRIX ALGEBRA

2.20 Let A = ( ) and b = ( ). Find Ab as a linear combination of the columns of A as in (2.37), and check the result by finding Ab in the usual way.

2.21 Show that each column of the product AB can be expressed as a linear combination of the columns of A, with coefficients arising from the corresponding column of B, as noted following Example 2.2.

2.22 Let A = ( ) and B = ( 3 2 ). Express the columns of AB as linear combinations of the columns of A.

2.23 Show that if a set of vectors includes 0, the set is linearly dependent, as noted following (2.4).

2.24 Suppose that A and B are n × n and that AB = O, as in (2.43). Show that A and B are both singular or one of them is O.

2.25 Let A = ( 3 2 2 ), B = ( ), C = ( ). Find AB and CB. Are they equal? What are the ranks of A, B, and C?

2.26 Let A = ( 3 2 2 ) and B = ( 2 ).

(a) Find a matrix C such that AB = CB. Is C unique?
(b) Find a vector x such that Ax = 0. Can you do this for B?

2.27 Let A = ( 4 2 3 ) and x = ( 2 3 ).

(a) Find a matrix B ≠ A such that Ax = Bx. Why is this possible? Can A and B be nonsingular? Can A − B be nonsingular?
(b) Find a matrix C ≠ O such that Cx = 0. Can C be nonsingular?

2.28 Prove Theorem 2.5a.

2.29 Prove Theorem 2.5b.

2.30 Use the matrix A in Problem 2.17, and let B = ( ). Find AB, (AB)⁻¹, and B⁻¹A⁻¹, and verify that Theorem 2.5b holds in this case.
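Theorem 2.5b, (AB)⁻¹ = B⁻¹A⁻¹, which Problem 2.30 asks the reader to verify, can be illustrated numerically; the nonsingular A and B below are hypothetical examples:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])

inv_AB = np.linalg.inv(A @ B)
assert np.allclose(inv_AB, np.linalg.inv(B) @ np.linalg.inv(A))   # (AB)^{-1} = B^{-1} A^{-1}
assert np.allclose(inv_AB @ (A @ B), np.eye(2))                   # it really is the inverse
```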
47 PROBLEMS 65

2.31 Show that the partitioned matrix A = ( A11 A12 ; A21 A22 ) has the inverse indicated in (2.50).

2.32 Show that the partitioned matrix A = ( A11 a12 ; a12' a22 ) has the inverse given in (2.51).

2.33 Show that B + cc' has the inverse indicated in (2.53).

2.34 Show that A + PBQ has the inverse indicated in (2.54).

2.35 Verify the identity given in (2.55).

2.36 Prove Theorem 2.6b(ii).

2.37 Prove Corollaries 1 and 2 of Theorem 2.6b.

2.38 Prove the "only if" part of Theorem 2.6c.

2.39 Prove Corollary 1 to Theorem 2.6c.

2.40 Compare the rank of the augmented matrix with the rank of the coefficient matrix for each of the following systems of equations. Find solutions where they exist.

(a) x1 + 2x2 + 3x3 = 6
    x1 − x2 = 2
    x1 − x3 =

(b) x1 − x2 + 2x3 = 2
    x1 − x2 − x3 =
    2x1 − 2x2 + x3 = 2

(c) x1 + x2 + x3 + x4 = 8
    x1 − x2 − x3 − x4 = 6
    3x1 + x2 + x3 + x4 =

2.41 Prove Theorem 2.8a.

2.42 For the matrices A, A1⁻, and A2⁻ in (2.59)-(2.61), show that AA1⁻A = A and AA2⁻A = A.

2.43 Show that A1⁻ in (2.60) can be obtained using Theorem 2.8b.

2.44 Show that A2⁻ in (2.61) can be obtained using the five-step algorithm following Theorem 2.8b.

2.45 Prove Theorem 2.8c.

2.46 Show that if A is symmetric, there exists a symmetric generalized inverse for A, as noted following Theorem 2.8c.

2.47 Let A = ( ).

(a) Find a symmetric generalized inverse for A.
48 66 MATRIX ALGEBRA

(b) Find a nonsymmetric generalized inverse for A.

2.48 (a) Show that if A is nonsingular, then A⁻ = A⁻¹.
(b) Show that if A is n × p of rank p < n, then (A'A)⁻¹A' is a left inverse of A, that is, (A'A)⁻¹A'A = I.

2.49 Prove Theorem 2.9a, parts (iv) and (vi).

2.50 Use A = ( 2 5 ; 3 ) from Problem 2.17 to illustrate (2.64), (2.66), and (2.67) in Theorem 2.9a.

2.51 (a) Multiply A in Problem 2.50 by A⁻¹ and verify that (2.69) holds in this case.
(b) Verify that (2.69) holds in general.

2.52 Prove Corollaries 1, 2, 3, and 4 of Theorem 2.9b.

2.53 Prove Corollaries 1 and 2 of Theorem 2.9c.

2.54 Use A in Problem 2.50 and let B = ( ).

(a) Find |A|, |B|, AB, and |AB|, and illustrate (2.77).
(b) Find |A|² and |A²|, and illustrate (2.79).

2.55 Use Theorem 2.9c and Corollary 1 of Theorem 2.9b to prove Theorem 2.9b.

2.56 Show that if C'C = I, then CC' = I, as in (2.84).

2.57 The columns of the following matrix A are mutually orthogonal: A = ( 2 ).

(a) Normalize the columns of A by dividing each column by its length; denote the resulting matrix by C.
(b) Show that C'C = CC' = I.

2.58 Prove Theorem 2.10a.

2.59 Prove Theorem 2.11, parts (i), (iv), (v), and (vii).

2.60 Use matrix B in Problem 2.26 to illustrate Theorem 2.11, parts (iii) and (iv).

2.61 Use matrix A in Problem 2.26 to illustrate Theorem 2.11(v), that is, tr(A'A) = tr(AA') = Σij aij².

2.62 Show that tr(A⁻A) = tr(AA⁻) = r = rank(A), as in (2.93).

2.63 Use A in (2.59) and A2⁻ in (2.61) to illustrate Theorem 2.11(viii), that is, tr(A⁻A) = tr(AA⁻) = r = rank(A).
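For Problems 2.62-2.63, the Moore-Penrose pseudoinverse is one convenient choice of generalized inverse A⁻; this NumPy sketch uses a hypothetical rank-2 matrix rather than the A of (2.59):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])          # rank 2 (row 2 = 2 * row 1)
Aminus = np.linalg.pinv(A)

assert np.allclose(A @ Aminus @ A, A)     # defining property of a generalized inverse
r = np.linalg.matrix_rank(A)
# As in Problems 2.62-2.63: tr(A^- A) = tr(A A^-) = rank(A).
assert np.isclose(np.trace(Aminus @ A), r)
assert np.isclose(np.trace(A @ Aminus), r)
```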
More informationLinear Algebra. Solving Linear Systems. Copyright 2005, W.R. Winfrey
Copyright 2005, W.R. Winfrey Topics Preliminaries Echelon Form of a Matrix Elementary Matrices; Finding A -1 Equivalent Matrices LU-Factorization Topics Preliminaries Echelon Form of a Matrix Elementary
More informationMATH2210 Notebook 2 Spring 2018
MATH2210 Notebook 2 Spring 2018 prepared by Professor Jenny Baglivo c Copyright 2009 2018 by Jenny A. Baglivo. All Rights Reserved. 2 MATH2210 Notebook 2 3 2.1 Matrices and Their Operations................................
More informationMatrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =
30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can
More informationChapter Two Elements of Linear Algebra
Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to
More informationLinear Algebra and Matrix Inversion
Jim Lambers MAT 46/56 Spring Semester 29- Lecture 2 Notes These notes correspond to Section 63 in the text Linear Algebra and Matrix Inversion Vector Spaces and Linear Transformations Matrices are much
More informationFundamentals of Engineering Analysis (650163)
Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is
More information1 Linear Algebra Problems
Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and
More informationEIGENVALUES AND SINGULAR VALUE DECOMPOSITION
APPENDIX B EIGENVALUES AND SINGULAR VALUE DECOMPOSITION B.1 LINEAR EQUATIONS AND INVERSES Problems of linear estimation can be written in terms of a linear matrix equation whose solution provides the required
More informationMath113: Linear Algebra. Beifang Chen
Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary
More informationLinear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.
POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems
More informationEquality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.
Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read
More informationChapter 1 Vector Spaces
Chapter 1 Vector Spaces Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 110 Linear Algebra Vector Spaces Definition A vector space V over a field
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More information8. Diagonalization.
8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard
More informationMatrix Algebra. Matrix Algebra. Chapter 8 - S&B
Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number
More informationFundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved
Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications to the natural
More informationBare minimum on matrix algebra. Psychology 588: Covariance structure and factor models
Bare minimum on matrix algebra Psychology 588: Covariance structure and factor models Matrix multiplication 2 Consider three notations for linear combinations y11 y1 m x11 x 1p b11 b 1m y y x x b b n1
More informationMatrix Algebra: Summary
May, 27 Appendix E Matrix Algebra: Summary ontents E. Vectors and Matrtices.......................... 2 E.. Notation.................................. 2 E..2 Special Types of Vectors.........................
More informationSection 3.3. Matrix Rank and the Inverse of a Full Rank Matrix
3.3. Matrix Rank and the Inverse of a Full Rank Matrix 1 Section 3.3. Matrix Rank and the Inverse of a Full Rank Matrix Note. The lengthy section (21 pages in the text) gives a thorough study of the rank
More informationChapter 2: Matrix Algebra
Chapter 2: Matrix Algebra (Last Updated: October 12, 2016) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). Write A = 1. Matrix operations [a 1 a n. Then entry
More informationMiscellaneous Results, Solving Equations, and Generalized Inverses. opyright c 2012 Dan Nettleton (Iowa State University) Statistics / 51
Miscellaneous Results, Solving Equations, and Generalized Inverses opyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 51 Result A.7: Suppose S and T are vector spaces. If S T and
More informationLecture 15 Review of Matrix Theory III. Dr. Radhakant Padhi Asst. Professor Dept. of Aerospace Engineering Indian Institute of Science - Bangalore
Lecture 15 Review of Matrix Theory III Dr. Radhakant Padhi Asst. Professor Dept. of Aerospace Engineering Indian Institute of Science - Bangalore Matrix An m n matrix is a rectangular or square array of
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationMAT Linear Algebra Collection of sample exams
MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system
More informationSAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra
1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that
More informationELEMENTARY LINEAR ALGEBRA
ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Corrected Version, 7th April 013 Comments to the author at keithmatt@gmail.com Chapter 1 LINEAR EQUATIONS 1.1
More informationMatrix Basic Concepts
Matrix Basic Concepts Topics: What is a matrix? Matrix terminology Elements or entries Diagonal entries Address/location of entries Rows and columns Size of a matrix A column matrix; vectors Special types
More informationCS 246 Review of Linear Algebra 01/17/19
1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector
More informationMath Linear Algebra Final Exam Review Sheet
Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of
More informationSection 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices
3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices 1 Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices Note. In this section, we define the product
More informationLinear Algebra. James Je Heon Kim
Linear lgebra James Je Heon Kim (jjk9columbia.edu) If you are unfamiliar with linear or matrix algebra, you will nd that it is very di erent from basic algebra or calculus. For the duration of this session,
More informationMATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018
Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry
More informationLecture 7: Positive Semidefinite Matrices
Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.
More informationExample Linear Algebra Competency Test
Example Linear Algebra Competency Test The 4 questions below are a combination of True or False, multiple choice, fill in the blank, and computations involving matrices and vectors. In the latter case,
More information