How do our representations change if we select another basis?


CHAPTER 6  Linear Mappings and Matrices

THEOREM 6.3: For any linear operators F, G in A(V),

    m(G ∘ F) = m(G) m(F)    or    [G ∘ F] = [G][F]

(Here G ∘ F denotes the composition of the maps G and F.)

6.3 Change of Basis

Let V be an n-dimensional vector space over a field K. We have shown that once we have selected a basis S of V, every vector v in V can be represented by means of an n-tuple in K^n, and every linear operator T in A(V) can be represented by an n × n matrix over K. We ask the following natural question: How do our representations change if we select another basis? In order to answer this question, we first need a definition.

DEFINITION: Let S = {u_1, u_2, ..., u_n} be a basis of a vector space V, and let S' = {v_1, v_2, ..., v_n} be another basis. (For reference, we will call S the old basis and S' the new basis.) Because S is a basis, each vector in the new basis S' can be written uniquely as a linear combination of the vectors in S; say,

    v_1 = a_11 u_1 + a_12 u_2 + ... + a_1n u_n
    v_2 = a_21 u_1 + a_22 u_2 + ... + a_2n u_n
    ..........................................
    v_n = a_n1 u_1 + a_n2 u_2 + ... + a_nn u_n

Let P be the transpose of the above matrix of coefficients; that is, let P = [p_ij], where p_ij = a_ji. Then P is called the change-of-basis matrix (or transition matrix) from the old basis S to the new basis S'.

The following remarks are in order.

Remark 1: The above change-of-basis matrix P may also be viewed as the matrix whose columns are, respectively, the coordinate column vectors of the new basis vectors v_i relative to the old basis S; namely,

    P = [ [v_1]_S, [v_2]_S, ..., [v_n]_S ]

Remark 2: Analogously, there is a change-of-basis matrix Q from the new basis S' to the old basis S.
Similarly, Q may be viewed as the matrix whose columns are, respectively, the coordinate column vectors of the old basis vectors u_i relative to the new basis S'; namely,

    Q = [ [u_1]_S', [u_2]_S', ..., [u_n]_S' ]

Remark 3: Because the vectors v_1, v_2, ..., v_n in the new basis S' are linearly independent, the matrix P is invertible (Problem 6.18). Similarly, Q is invertible. In fact, we have the following proposition (proved in Problem 6.18).

PROPOSITION 6.4: Let P and Q be the above change-of-basis matrices. Then Q = P^-1.

Now suppose S = {u_1, u_2, ..., u_n} is a basis of a vector space V, and suppose P = [p_ij] is any nonsingular matrix. Then the n vectors

    v_i = p_1i u_1 + p_2i u_2 + ... + p_ni u_n,    i = 1, 2, ..., n

corresponding to the columns of P, are linearly independent [Problem 6.21(a)]. Thus, they form another basis S' of V. Moreover, P will be the change-of-basis matrix from S to the new basis S'.
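The two remarks above can be sketched computationally. The following is a minimal plain-Python illustration (the `solve2` helper and `matmul2` are ours, not the text's; the pair of bases is the one used in Example 6.5): the columns of P are the coordinate vectors of the new basis in the old basis, the columns of Q are the coordinates of the old basis in the new one, and QP = I.

```python
from fractions import Fraction

def solve2(u1, u2, w):
    """Coordinates (x, y) of w in the basis {u1, u2} of R^2, by Cramer's rule."""
    det = u1[0] * u2[1] - u2[0] * u1[1]
    x = Fraction(w[0] * u2[1] - u2[0] * w[1], det)
    y = Fraction(u1[0] * w[1] - w[0] * u1[1], det)
    return (x, y)

def matmul2(A, B):
    """Product of two 2x2 matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Old basis S and new basis S' of R^2.
u1, u2 = (1, 2), (3, 5)
v1, v2 = (1, -1), (1, -2)

# Columns of P are the coordinate vectors [v_i]_S ...
c1 = solve2(u1, u2, v1)   # v1 = -8*u1 + 3*u2
c2 = solve2(u1, u2, v2)   # v2 = -11*u1 + 4*u2
P = [[c1[0], c2[0]], [c1[1], c2[1]]]

# ... and columns of Q are the coordinate vectors [u_i]_S'.
d1 = solve2(v1, v2, u1)
d2 = solve2(v1, v2, u2)
Q = [[d1[0], d2[0]], [d1[1], d2[1]]]

# Proposition 6.4: Q = P^-1, i.e., QP is the identity matrix.
assert matmul2(Q, P) == [[1, 0], [0, 1]]
```

The same construction works in any dimension; only the small 2 × 2 solver would need to be replaced by a general linear-system solver.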

EXAMPLE 6.5  Consider the following two bases of R^2:

    S = {u_1, u_2} = {(1, 2), (3, 5)}    and    S' = {v_1, v_2} = {(1, -1), (1, -2)}

(a) Find the change-of-basis matrix P from S to the new basis S'.

Write each of the new basis vectors of S' as a linear combination of the original basis vectors u_1 and u_2 of S. We have

    (1, -1) = x(1, 2) + y(3, 5)    or    x + 3y = 1,  2x + 5y = -1,    yielding x = -8, y = 3
    (1, -2) = x(1, 2) + y(3, 5)    or    x + 3y = 1,  2x + 5y = -2,    yielding x = -11, y = 4

Thus,

    v_1 = -8u_1 + 3u_2
    v_2 = -11u_1 + 4u_2        and hence    P = [ -8  -11 ]
                                                [  3    4 ]

Note that the coordinates of v_1 and v_2 are the columns, not rows, of the change-of-basis matrix P.

(b) Find the change-of-basis matrix Q from the new basis S' back to the old basis S.

Here we write each of the old basis vectors u_1 and u_2 of S as a linear combination of the new basis vectors v_1 and v_2 of S'. This yields

    u_1 = 4v_1 - 3v_2
    u_2 = 11v_1 - 8v_2         and hence    Q = [  4  11 ]
                                                [ -3  -8 ]

As expected from Proposition 6.4, Q = P^-1. (In fact, we could have obtained Q by simply finding P^-1.)

EXAMPLE 6.6  Consider the following two bases of R^3:

    E = {e_1, e_2, e_3} = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
    S = {u_1, u_2, u_3} = {(1, 0, 1), (2, 1, 2), (1, 2, 2)}

(a) Find the change-of-basis matrix P from the basis E to the basis S.

Because E is the usual basis, we can immediately write each basis element of S as a linear combination of the basis elements of E. Specifically,

    u_1 = (1, 0, 1) = e_1 + e_3
    u_2 = (2, 1, 2) = 2e_1 + e_2 + 2e_3        and hence    P = [ 1  2  1 ]
    u_3 = (1, 2, 2) = e_1 + 2e_2 + 2e_3                         [ 0  1  2 ]
                                                                [ 1  2  2 ]

Again, the coordinates of u_1, u_2, u_3 appear as the columns in P. Observe that P is simply the matrix whose columns are the basis vectors of S. This is true only because the original basis was the usual basis E.

(b) Find the change-of-basis matrix Q from the basis S to the basis E.

The definition of the change-of-basis matrix Q tells us to write each of the (usual) basis vectors in E as a linear combination of the basis elements of S.
This yields

    e_1 = (1, 0, 0) = -2u_1 + 2u_2 - u_3
    e_2 = (0, 1, 0) = -2u_1 + u_2              and hence    Q = [ -2  -2   3 ]
    e_3 = (0, 0, 1) = 3u_1 - 2u_2 + u_3                         [  2   1  -2 ]
                                                                [ -1   0   1 ]

We emphasize that to find Q, we need to solve three systems of linear equations, one system for each of e_1, e_2, e_3.
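Instead of solving three separate systems, Q = P^-1 can be computed once by Gauss-Jordan elimination on the augmented matrix [P | I]. A small exact-arithmetic sketch (the `invert` helper is ours, not the text's), applied to the matrix P of Example 6.6:

```python
from fractions import Fraction

def invert(M):
    """Invert a square matrix by row reducing [M | I] to [I | M^-1]."""
    n = len(M)
    A = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        # Find a row with a nonzero pivot and move it into place.
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        # Clear the pivot column in every other row.
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

# P from Example 6.6: its columns are the basis vectors of S.
P = [[1, 2, 1],
     [0, 1, 2],
     [1, 2, 2]]

Q = invert(P)   # should match the Q found above: [[-2,-2,3],[2,1,-2],[-1,0,1]]
```

Using `Fraction` keeps the row reduction exact, so the result can be compared entry by entry with the hand computation.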

Alternatively, we can find Q = P^-1 by forming the matrix M = [P, I] and row reducing M to row canonical form:

    M = [ 1  2  1 | 1  0  0 ]    ~    [ 1  0  0 | -2  -2   3 ]    = [I, P^-1]
        [ 0  1  2 | 0  1  0 ]         [ 0  1  0 |  2   1  -2 ]
        [ 1  2  2 | 0  0  1 ]         [ 0  0  1 | -1   0   1 ]

thus,

    Q = P^-1 = [ -2  -2   3 ]
               [  2   1  -2 ]
               [ -1   0   1 ]

(Here we have used the fact that Q is the inverse of P.)

The result in Example 6.6(a) is true in general. We state this result formally, because it occurs often.

PROPOSITION 6.5: The change-of-basis matrix from the usual basis E of K^n to any basis S of K^n is the matrix P whose columns are, respectively, the basis vectors of S.

Applications of the Change-of-Basis Matrix

First we show how a change of basis affects the coordinates of a vector in a vector space V. The following theorem is proved in Problem 6.22.

THEOREM 6.6: Let P be the change-of-basis matrix from a basis S to a basis S' in a vector space V. Then, for any vector v in V, we have

    P[v]_S' = [v]_S    and hence    P^-1 [v]_S = [v]_S'

Namely, if we multiply the coordinates of v in the original basis S by P^-1, we get the coordinates of v in the new basis S'.

Remark 1: Although P is called the change-of-basis matrix from the old basis S to the new basis S', we emphasize that it is P^-1 that transforms the coordinates of v in the original basis S into the coordinates of v in the new basis S'.

Remark 2: Because of the above theorem, many texts call Q = P^-1, not P, the transition matrix from the old basis S to the new basis S'. Some texts also refer to Q as the change-of-coordinates matrix.

We now give the proof of the above theorem for the special case that dim V = 3. Suppose P is the change-of-basis matrix from the basis S = {u_1, u_2, u_3} to the basis S' = {v_1, v_2, v_3}; say,

    v_1 = a_1 u_1 + a_2 u_2 + a_3 u_3
    v_2 = b_1 u_1 + b_2 u_2 + b_3 u_3        and hence    P = [ a_1  b_1  c_1 ]
    v_3 = c_1 u_1 + c_2 u_2 + c_3 u_3                         [ a_2  b_2  c_2 ]
                                                              [ a_3  b_3  c_3 ]

Now suppose v is in V and, say, v = k_1 v_1 + k_2 v_2 + k_3 v_3. Then, substituting for v_1, v_2, v_3 from above, we obtain

    v = k_1(a_1 u_1 + a_2 u_2 + a_3 u_3) + k_2(b_1 u_1 + b_2 u_2 + b_3 u_3) + k_3(c_1 u_1 + c_2 u_2 + c_3 u_3)
      = (a_1 k_1 + b_1 k_2 + c_1 k_3)u_1 + (a_2 k_1 + b_2 k_2 + c_2 k_3)u_2 + (a_3 k_1 + b_3 k_2 + c_3 k_3)u_3

Thus,

    [v]_S' = [ k_1 ]    and    [v]_S = [ a_1 k_1 + b_1 k_2 + c_1 k_3 ]
             [ k_2 ]                   [ a_2 k_1 + b_2 k_2 + c_2 k_3 ]
             [ k_3 ]                   [ a_3 k_1 + b_3 k_2 + c_3 k_3 ]

Accordingly,

    P[v]_S' = [ a_1  b_1  c_1 ] [ k_1 ]   = [ a_1 k_1 + b_1 k_2 + c_1 k_3 ]   = [v]_S
              [ a_2  b_2  c_2 ] [ k_2 ]     [ a_2 k_1 + b_2 k_2 + c_2 k_3 ]
              [ a_3  b_3  c_3 ] [ k_3 ]     [ a_3 k_1 + b_3 k_2 + c_3 k_3 ]

Finally, multiplying the equation [v]_S = P[v]_S' by P^-1, we get

    P^-1 [v]_S = P^-1 P [v]_S' = I [v]_S' = [v]_S'

The next theorem (proved in the solved problems) shows how a change of basis affects the matrix representation of a linear operator.

THEOREM 6.7: Let P be the change-of-basis matrix from a basis S to a basis S' in a vector space V. Then, for any linear operator T on V,

    [T]_S' = P^-1 [T]_S P

That is, if A and B are the matrix representations of T relative, respectively, to S and S', then

    B = P^-1 A P

EXAMPLE 6.7  Consider the following two bases of R^3:

    E = {e_1, e_2, e_3} = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
    S = {u_1, u_2, u_3} = {(1, 0, 1), (2, 1, 2), (1, 2, 2)}

The change-of-basis matrix P from E to S and its inverse P^-1 were obtained in Example 6.6.

(a) Write v = (1, 3, 5) as a linear combination of u_1, u_2, u_3, or, equivalently, find [v]_S.

One way to do this is to directly solve the vector equation v = xu_1 + yu_2 + zu_3; that is,

    [ 1 ]     [ 1 ]     [ 2 ]     [ 1 ]           x + 2y +  z = 1
    [ 3 ] = x [ 0 ] + y [ 1 ] + z [ 2 ]    or          y + 2z = 3
    [ 5 ]     [ 1 ]     [ 2 ]     [ 2 ]           x + 2y + 2z = 5

The solution is x = 7, y = -5, z = 4, so v = 7u_1 - 5u_2 + 4u_3.

On the other hand, we know that [v]_E = [1, 3, 5]^T, because E is the usual basis, and we already know P^-1. Therefore, by Theorem 6.6,

    [v]_S = P^-1 [v]_E = [ -2  -2   3 ] [ 1 ]   = [  7 ]
                         [  2   1  -2 ] [ 3 ]     [ -5 ]
                         [ -1   0   1 ] [ 5 ]     [  4 ]

Thus, again, v = 7u_1 - 5u_2 + 4u_3.

(b) Let

    A = [ 1   3  -2 ]
        [ 2  -4   1 ]
        [ 3  -1   2 ]

which may be viewed as a linear operator on R^3. Find the matrix B that represents A relative to the basis S.

The definition of the matrix representation of A relative to the basis S tells us to write each of A(u_1), A(u_2), A(u_3) as a linear combination of the basis vectors u_1, u_2, u_3 of S. This yields

    A(u_1) = (-1, 3, 5) = 11u_1 - 9u_2 + 6u_3
    A(u_2) = (1, 2, 9)  = 21u_1 - 14u_2 + 8u_3      and hence    B = [ 11  21  17 ]
    A(u_3) = (3, -4, 5) = 17u_1 - 8u_2 + 2u_3                        [ -9 -14  -8 ]
                                                                     [  6   8   2 ]

We emphasize that to find B, we need to solve three systems of linear equations, one system for each of A(u_1), A(u_2), A(u_3). On the other hand, because we know P and P^-1, we can use Theorem 6.7. That is,

    B = P^-1 A P = [ -2  -2   3 ] [ 1   3  -2 ] [ 1  2  1 ]   = [ 11  21  17 ]
                   [  2   1  -2 ] [ 2  -4   1 ] [ 0  1  2 ]     [ -9 -14  -8 ]
                   [ -1   0   1 ] [ 3  -1   2 ] [ 1  2  2 ]     [  6   8   2 ]

This, as expected, gives the same result.

6.4 Similarity

Suppose A and B are square matrices for which there exists an invertible matrix P such that B = P^-1 AP; then B is said to be similar to A, or B is said to be obtained from A by a similarity transformation. We show in the solved problems that similarity of matrices is an equivalence relation.

By Theorem 6.7 and the above remark, we have the following basic result.

THEOREM 6.8: Two matrices represent the same linear operator if and only if the matrices are similar.

That is, all the matrix representations of a linear operator T form an equivalence class of similar matrices.

A linear operator T is said to be diagonalizable if there exists a basis S of V such that T is represented by a diagonal matrix; the basis S is then said to diagonalize T. The preceding theorem gives us the following result.

THEOREM 6.9: Let A be the matrix representation of a linear operator T. Then T is diagonalizable if and only if there exists an invertible matrix P such that P^-1 AP is a diagonal matrix.

That is, T is diagonalizable if and only if its matrix representation can be diagonalized by a similarity transformation.

We emphasize that not every operator is diagonalizable. However, we will show (Chapter 10) that every linear operator can be represented by certain standard matrices called its normal or canonical forms. Such a discussion will require some theory of fields, polynomials, and determinants.
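The similarity transformation of Example 6.7 can be checked mechanically. A small sketch (the `matmul` helper is ours; the matrices are those printed in Examples 6.6 and 6.7): compute B = P^-1 A P and observe that a similarity invariant such as the trace is unchanged.

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Data from Examples 6.6-6.7.
P    = [[1, 2, 1], [0, 1, 2], [1, 2, 2]]        # columns are u1, u2, u3
Pinv = [[-2, -2, 3], [2, 1, -2], [-1, 0, 1]]    # computed in Example 6.6(b)
A    = [[1, 3, -2], [2, -4, 1], [3, -1, 2]]

# B = P^-1 A P is similar to A: same operator, expressed in the basis S.
B = matmul(matmul(Pinv, A), P)

# A similarity transformation preserves the trace.
assert sum(B[i][i] for i in range(3)) == sum(A[i][i] for i in range(3))
```

The same check with the determinant (another similarity invariant) anticipates the discussion of functions on similar matrices below.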
Functions and Similar Matrices

Suppose f is a function on square matrices that assigns the same value to similar matrices; that is, f(A) = f(B) whenever A is similar to B. Then f induces a function, also denoted by f, on linear operators T in the following natural way. We define

    f(T) = f([T]_S)

where S is any basis. By Theorem 6.8, the function is well defined.

The determinant (Chapter 8) is perhaps the most important example of such a function. The trace (Section 2.7) is another important example of such a function.

EXAMPLE 6.8  Consider the following linear operator F and bases E and S of R^2:

    F(x, y) = (2x + 3y, 4x - 5y),    E = {(1, 0), (0, 1)},    S = {(1, 2), (2, 5)}

By Example 6.1, the matrix representations of F relative to the bases E and S are, respectively,

    A = [ 2   3 ]    and    B = [  52  129 ]
        [ 4  -5 ]               [ -22  -55 ]

Using matrix A, we have

    (i) Determinant of F = det(A) = -10 - 12 = -22;    (ii) Trace of F = tr(A) = 2 - 5 = -3.

On the other hand, using matrix B, we have

    (i) Determinant of F = det(B) = -2860 + 2838 = -22;    (ii) Trace of F = tr(B) = 52 - 55 = -3.

As expected, both matrices yield the same result.

6.5 Matrices and General Linear Mappings

Last, we consider the general case of linear mappings from one vector space into another. Suppose V and U are vector spaces over the same field K and, say, dim V = m and dim U = n. Furthermore, suppose S = {v_1, v_2, ..., v_m} and S' = {u_1, u_2, ..., u_n} are arbitrary but fixed bases, respectively, of V and U.

Suppose F: V -> U is a linear mapping. Then the vectors F(v_1), F(v_2), ..., F(v_m) belong to U, and so each is a linear combination of the basis vectors in S'; say,

    F(v_1) = a_11 u_1 + a_12 u_2 + ... + a_1n u_n
    F(v_2) = a_21 u_1 + a_22 u_2 + ... + a_2n u_n
    .............................................
    F(v_m) = a_m1 u_1 + a_m2 u_2 + ... + a_mn u_n

DEFINITION: The transpose of the above matrix of coefficients, denoted by m_{S,S'}(F) or [F]_{S,S'}, is called the matrix representation of F relative to the bases S and S'. [We will use the simple notation m(F) and [F] when the bases are understood.]

The following theorem is analogous to Theorem 6.1 for linear operators.

THEOREM 6.10: For any vector v in V,    [F]_{S,S'} [v]_S = [F(v)]_S'.

That is, multiplying the coordinates of v in the basis S of V by [F], we obtain the coordinates of F(v) in the basis S' of U.

Recall that for any vector spaces V and U, the collection of all linear mappings from V into U is a vector space and is denoted by Hom(V, U). The following theorem is analogous to Theorem 6.2
for linear operators, where now we let M = M_{m,n} denote the vector space of all m × n matrices.

THEOREM 6.11: The mapping m: Hom(V, U) -> M defined by m(F) = [F] is a vector space isomorphism. That is, for any F, G in Hom(V, U) and any scalar k,

    (i)   m(F + G) = m(F) + m(G)    or    [F + G] = [F] + [G]
    (ii)  m(kF) = km(F)             or    [kF] = k[F]
    (iii) m is bijective (one-to-one and onto).
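Theorem 6.10 is easy to see in coordinates. The following is a minimal sketch with a hypothetical mapping F: R^3 -> R^2 of our own choosing (it is not from the text), represented relative to the usual bases, where the matrix of F has the vectors F(e_i) as columns:

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def F(x, y, z):
    # A hypothetical linear map R^3 -> R^2 used only for illustration.
    return (x + y, y + z)

# Matrix of F relative to the usual bases: columns are F(e_1), F(e_2), F(e_3).
M = [[1, 1, 0],
     [0, 1, 1]]

# Theorem 6.10: [F][v]_S = [F(v)]_S' (here both coordinate maps are trivial,
# because the bases are the usual ones).
v = (2, -1, 4)
assert tuple(matvec(M, v)) == F(*v)
```

With non-standard bases the same identity holds, but the vectors on both sides must first be converted to coordinates via the relevant change-of-basis matrices.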

Our next theorem is analogous to Theorem 6.3 for linear operators.

THEOREM 6.12: Let S, S', S'' be bases of vector spaces V, U, W, respectively. Let F: V -> U and G: U -> W be linear mappings. Then

    [G ∘ F]_{S,S''} = [G]_{S',S''} [F]_{S,S'}

That is, relative to the appropriate bases, the matrix representation of the composition of two mappings is the matrix product of the matrix representations of the individual mappings.

Next we show how the matrix representation of a linear mapping F: V -> U is affected when new bases are selected.

THEOREM 6.13: Let P be the change-of-basis matrix from a basis e to a basis e' in V, and let Q be the change-of-basis matrix from a basis f to a basis f' in U. Then, for any linear map F: V -> U,

    [F]_{e',f'} = Q^-1 [F]_{e,f} P

In other words, if A is the matrix representation of a linear mapping F relative to the bases e and f, and B is the matrix representation of F relative to the bases e' and f', then

    B = Q^-1 A P

Our last theorem shows that any linear mapping from one vector space V into another vector space U can be represented by a very simple matrix. We note that this theorem is analogous to the corresponding normal form theorem for m × n matrices.

THEOREM 6.14: Let F: V -> U be linear and, say, rank(F) = r. Then there exist bases of V and U such that the matrix representation of F has the form

    A = [ I_r  0 ]
        [ 0    0 ]

where I_r is the r-square identity matrix.

The above matrix A is called the normal or canonical form of the linear map F.

SOLVED PROBLEMS

Matrix Representation of Linear Operators

6.1. Consider the linear mapping F: R^2 -> R^2 defined by F(x, y) = (3x + 4y, 2x - 5y) and the following bases of R^2:

    E = {e_1, e_2} = {(1, 0), (0, 1)}    and    S = {u_1, u_2} = {(1, 2), (2, 3)}

(a) Find the matrix A representing F relative to the basis E.
(b) Find the matrix B representing F relative to the basis S.
(a) Because E is the usual basis, the rows of A are simply the coefficients in the components of F(x, y); that is, using (a, b) = ae_1 + be_2, we have

    F(e_1) = F(1, 0) = (3, 2) = 3e_1 + 2e_2
    F(e_2) = F(0, 1) = (4, -5) = 4e_1 - 5e_2        and so    A = [ 3   4 ]
                                                                  [ 2  -5 ]

Note that the coefficients of the basis vectors are written as columns in the matrix representation.

(b) First find F(u_1) and write it as a linear combination of the basis vectors u_1 and u_2. We have

    F(u_1) = F(1, 2) = (11, -8) = x(1, 2) + y(2, 3),    and so    x + 2y = 11
                                                                  2x + 3y = -8

Solve the system to obtain x = -49, y = 30. Therefore, F(u_1) = -49u_1 + 30u_2.

Next find F(u_2) and write it as a linear combination of the basis vectors u_1 and u_2. We have

    F(u_2) = F(2, 3) = (18, -11) = x(1, 2) + y(2, 3),    and so    x + 2y = 18
                                                                   2x + 3y = -11

Solve for x and y to obtain x = -76, y = 47. Hence, F(u_2) = -76u_1 + 47u_2.

Write the coefficients of u_1 and u_2 as columns to obtain    B = [ -49  -76 ]
                                                                  [  30   47 ]

(b') Alternatively, one can first find the coordinates of an arbitrary vector (a, b) in R^2 relative to the basis S. We have

    (a, b) = x(1, 2) + y(2, 3) = (x + 2y, 2x + 3y),    and so    x + 2y = a
                                                                 2x + 3y = b

Solve for x and y in terms of a and b to get x = -3a + 2b, y = 2a - b. Thus,

    (a, b) = (-3a + 2b)u_1 + (2a - b)u_2

Then use the formula for (a, b) to find the coordinates of F(u_1) and F(u_2) relative to S:

    F(u_1) = F(1, 2) = (11, -8) = -49u_1 + 30u_2
    F(u_2) = F(2, 3) = (18, -11) = -76u_1 + 47u_2        and so    B = [ -49  -76 ]
                                                                       [  30   47 ]

6.2. Consider the following linear operator G on R^2 and basis S:

    G(x, y) = (2x - 7y, 4x + 3y)    and    S = {u_1, u_2} = {(1, 3), (2, 5)}

(a) Find the matrix representation [G]_S of G relative to S.
(b) Verify [G]_S [v]_S = [G(v)]_S for the vector v = (4, -3) in R^2.

First find the coordinates of an arbitrary vector v = (a, b) in R^2 relative to the basis S. We have

    [ a ] = x [ 1 ] + y [ 2 ],    and so    x + 2y = a
    [ b ]     [ 3 ]     [ 5 ]               3x + 5y = b

Solve for x and y in terms of a and b to get x = -5a + 2b, y = 3a - b. Thus,

    (a, b) = (-5a + 2b)u_1 + (3a - b)u_2,    and so    [v]_S = [-5a + 2b, 3a - b]^T

(a) Using the formula for (a, b) and G(x, y) = (2x - 7y, 4x + 3y), we have

    G(u_1) = G(1, 3) = (-19, 13) = 121u_1 - 70u_2
    G(u_2) = G(2, 5) = (-31, 23) = 201u_1 - 116u_2        and so    [G]_S = [ 121  201 ]
                                                                            [ -70 -116 ]

(We emphasize that the coefficients of u_1 and u_2 are written as columns, not rows, in the matrix representation.)

(b) Use the formula (a, b) = (-5a + 2b)u_1 + (3a - b)u_2 to get

    v = (4, -3) = -26u_1 + 15u_2
    G(v) = G(4, -3) = (29, 7) = -131u_1 + 80u_2

Then    [v]_S = [-26, 15]^T    and    [G(v)]_S = [-131, 80]^T

Accordingly,

    [G]_S [v]_S = [ 121  201 ] [ -26 ]   = [ -131 ]   = [G(v)]_S
                  [ -70 -116 ] [  15 ]     [   80 ]

(This is expected from Theorem 6.1.)

6.3. Consider the following matrix A and basis S of R^2:

    A = [ 2  4 ]    and    S = {u_1, u_2} = { (1, -2)^T, (3, -7)^T }
        [ 5  6 ]

The matrix A defines a linear operator on R^2. Find the matrix B that represents the mapping A relative to the basis S.

First find the coordinates of an arbitrary vector (a, b)^T with respect to the basis S. We have

    [ a ] = x [  1 ] + y [  3 ]    or     x + 3y = a
    [ b ]     [ -2 ]     [ -7 ]         -2x - 7y = b

Solve for x and y in terms of a and b to obtain x = 7a + 3b, y = -2a - b. Thus,

    (a, b)^T = (7a + 3b)u_1 + (-2a - b)u_2

Then use the formula for (a, b)^T to find the coordinates of Au_1 and Au_2 relative to the basis S:

    Au_1 = [ 2  4 ] [  1 ] = [ -6 ] = -63u_1 + 19u_2
           [ 5  6 ] [ -2 ]   [ -7 ]

    Au_2 = [ 2  4 ] [  3 ] = [ -22 ] = -235u_1 + 71u_2
           [ 5  6 ] [ -7 ]   [ -27 ]

Writing the coordinates as columns yields    B = [ -63  -235 ]
                                                 [  19    71 ]

6.4. Find the matrix representation of each of the following linear operators F on R^3 relative to the usual basis E = {e_1, e_2, e_3} of R^3; that is, find [F] = [F]_E:

(a) F defined by F(x, y, z) = (x + 2y - 3z, 4x - 5y - 6z, 7x + 8y + 9z).
(b) F defined by a given 3 × 3 matrix A.
(c) F defined by F(e_1) = (1, 3, 5), F(e_2) = (2, 4, 6), F(e_3) = (7, 7, 7). (Theorem 5.2 states that a linear map is completely defined by its action on the vectors in a basis.)

(a) Because E is the usual basis, simply write the coefficients of the components of F(x, y, z) as rows:

    [F] = [ 1   2  -3 ]
          [ 4  -5  -6 ]
          [ 7   8   9 ]

(b) Because E is the usual basis, [F] = A, the matrix A itself.

(c) Here

    F(e_1) = (1, 3, 5) = e_1 + 3e_2 + 5e_3
    F(e_2) = (2, 4, 6) = 2e_1 + 4e_2 + 6e_3        and so    [F] = [ 1  2  7 ]
    F(e_3) = (7, 7, 7) = 7e_1 + 7e_2 + 7e_3                        [ 3  4  7 ]
                                                                   [ 5  6  7 ]

That is, the columns of [F] are the images of the usual basis vectors.

6.5. Let G be the linear operator on R^3 defined by G(x, y, z) = (2y + z, x - 4y, 3x).

(a) Find the matrix representation of G relative to the basis

    S = {w_1, w_2, w_3} = {(1, 1, 1), (1, 1, 0), (1, 0, 0)}

(b) Verify that [G][v] = [G(v)] for any vector v in R^3.

First find the coordinates of an arbitrary vector (a, b, c) in R^3 with respect to the basis S. Write (a, b, c) as a linear combination of w_1, w_2, w_3 using unknown scalars x, y, and z:

    (a, b, c) = x(1, 1, 1) + y(1, 1, 0) + z(1, 0, 0) = (x + y + z, x + y, x)

Set corresponding components equal to each other to obtain the system of equations

    x + y + z = a,    x + y = b,    x = c

Solve the system for x, y, z in terms of a, b, c to find x = c, y = b - c, z = a - b. Thus,

    (a, b, c) = cw_1 + (b - c)w_2 + (a - b)w_3,    or equivalently,    [(a, b, c)] = [c, b - c, a - b]^T

(a) Because G(x, y, z) = (2y + z, x - 4y, 3x),

    G(w_1) = G(1, 1, 1) = (3, -3, 3) = 3w_1 - 6w_2 + 6w_3
    G(w_2) = G(1, 1, 0) = (2, -3, 3) = 3w_1 - 6w_2 + 5w_3
    G(w_3) = G(1, 0, 0) = (0, 1, 3)  = 3w_1 - 2w_2 - w_3

Write the coordinates of G(w_1), G(w_2), G(w_3) as columns to get

    [G] = [  3   3   3 ]
          [ -6  -6  -2 ]
          [  6   5  -1 ]

(b) Write G(v) as a linear combination of w_1, w_2, w_3, where v = (a, b, c) is an arbitrary vector in R^3:

    G(v) = G(a, b, c) = (2b + c, a - 4b, 3a) = 3aw_1 + (-2a - 4b)w_2 + (-a + 6b + c)w_3

or equivalently,

    [G(v)] = [3a, -2a - 4b, -a + 6b + c]^T

Accordingly,

    [G][v] = [  3   3   3 ] [   c   ]   = [      3a      ]   = [G(v)]
             [ -6  -6  -2 ] [ b - c ]     [   -2a - 4b   ]
             [  6   5  -1 ] [ a - b ]     [ -a + 6b + c  ]

6.6. Consider the following matrix A and basis S = {u_1, u_2, u_3} of R^3 (A and the basis vectors as given). The matrix A defines a linear operator on R^3. Find the matrix B that represents the mapping A relative to the basis S. (Recall that A represents itself relative to the usual basis of R^3.)

First find the coordinates of an arbitrary vector (a, b, c) in R^3 with respect to the basis S: write [a, b, c]^T = x u_1 + y u_2 + z u_3 and solve the resulting linear system for x, y, z in terms of a, b, c.

Then use the formula for (a, b, c)^T to find the coordinates of Au_1, Au_2, Au_3 relative to the basis S; writing these coordinate vectors as columns yields the matrix B.

6.7. For each of the following linear transformations (operators) L on R^2, find the matrix A that represents L (relative to the usual basis of R^2):

(a) L is defined by L(1, 0) = (2, 4) and L(0, 1) = (5, 8).
(b) L is the rotation in R^2 counterclockwise by 90°.
(c) L is the reflection in R^2 about the line y = x.

(a) Because {(1, 0), (0, 1)} is the usual basis of R^2, write the images under L as columns to get

    A = [ 2  5 ]
        [ 4  8 ]

(b) Under the rotation L, we have L(1, 0) = (0, 1) and L(0, 1) = (-1, 0). Thus,

    A = [ 0  -1 ]
        [ 1   0 ]

(c) Under the reflection L, we have L(1, 0) = (0, 1) and L(0, 1) = (1, 0). Thus,

    A = [ 0  1 ]
        [ 1  0 ]

6.8. The set S = {e^t, te^t, t^2 e^t} is a basis of a vector space V of functions f: R -> R. Let D be the differential operator on V; that is, D(f) = df/dt. Find the matrix representation of D relative to the basis S.

Find the image of each basis function:

    D(e^t)      = e^t             = 1(e^t) + 0(te^t) + 0(t^2 e^t)
    D(te^t)     = e^t + te^t      = 1(e^t) + 1(te^t) + 0(t^2 e^t)
    D(t^2 e^t)  = 2te^t + t^2 e^t = 0(e^t) + 2(te^t) + 1(t^2 e^t)

and thus,

    [D] = [ 1  1  0 ]
          [ 0  1  2 ]
          [ 0  0  1 ]

6.9. Prove Theorem 6.1: Let T: V -> V be a linear operator, and let S be a (finite) basis of V. Then, for any vector v in V, [T]_S [v]_S = [T(v)]_S.

Suppose S = {u_1, u_2, ..., u_n}, and suppose, for i = 1, ..., n,

    T(u_i) = a_i1 u_1 + a_i2 u_2 + ... + a_in u_n = Σ_j a_ij u_j

Then [T]_S is the n-square matrix whose jth row is

    (a_1j, a_2j, ..., a_nj)                                          (1)

Now suppose

    v = k_1 u_1 + k_2 u_2 + ... + k_n u_n = Σ_i k_i u_i

Writing a column vector as the transpose of a row vector, we have

    [v]_S = [k_1, k_2, ..., k_n]^T                                   (2)

Furthermore, using the linearity of T,

    T(v) = T(Σ_i k_i u_i) = Σ_i k_i T(u_i) = Σ_i k_i (Σ_j a_ij u_j)
         = Σ_j (a_1j k_1 + a_2j k_2 + ... + a_nj k_n) u_j

Thus, [T(v)]_S is the column vector whose jth entry is

    a_1j k_1 + a_2j k_2 + ... + a_nj k_n                             (3)

On the other hand, the jth entry of [T]_S [v]_S is obtained by multiplying the jth row of [T]_S by [v]_S, that is, (1) by (2). But the product of (1) and (2) is (3). Hence, [T]_S [v]_S and [T(v)]_S have the same entries. Thus, [T]_S [v]_S = [T(v)]_S.

6.10. Prove Theorem 6.2: Let S = {u_1, u_2, ..., u_n} be a basis for V over K, and let M be the algebra of n-square matrices over K. Then the mapping m: A(V) -> M defined by m(T) = [T]_S is a vector space isomorphism. That is, for any F, G in A(V) and any k in K, we have (i) [F + G] = [F] + [G], (ii) [kF] = k[F], (iii) m is one-to-one and onto.

(i) Suppose, for i = 1, ..., n,

    F(u_i) = Σ_j a_ij u_j    and    G(u_i) = Σ_j b_ij u_j

Consider the matrices A = [a_ij] and B = [b_ij]. Then [F] = A^T and [G] = B^T. We have, for i = 1, ..., n,

    (F + G)(u_i) = F(u_i) + G(u_i) = Σ_j (a_ij + b_ij) u_j

Because A + B is the matrix (a_ij + b_ij), we have

    [F + G] = (A + B)^T = A^T + B^T = [F] + [G]

(ii) Also, for i = 1, ..., n,

    (kF)(u_i) = kF(u_i) = k Σ_j a_ij u_j = Σ_j (ka_ij) u_j

Because kA is the matrix (ka_ij), we have

    [kF] = (kA)^T = kA^T = k[F]

(iii) Finally, m is one-to-one, because a linear mapping is completely determined by its values on a basis. Also, m is onto, because each matrix A = [a_ij] in M is the image of the linear operator F defined by

    F(u_i) = Σ_j a_ij u_j,    i = 1, ..., n

Thus, the theorem is proved.

6.11. Prove Theorem 6.3: For any linear operators G, F in A(V), [G ∘ F] = [G][F].

Using the notation in Problem 6.10, we have

    (G ∘ F)(u_i) = G(F(u_i)) = G(Σ_j a_ij u_j) = Σ_j a_ij G(u_j)
                 = Σ_j a_ij (Σ_k b_jk u_k) = Σ_k (Σ_j a_ij b_jk) u_k

Recall that AB is the matrix AB = [c_ik], where c_ik = Σ_j a_ij b_jk. Accordingly,

    [G ∘ F] = (AB)^T = B^T A^T = [G][F]

The theorem is proved.
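The composition identity of Problem 6.11 is easy to check numerically. A small sketch with two hypothetical operators on R^2 of our own choosing (not from the text), represented relative to the usual basis:

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Hypothetical operators on R^2, relative to the usual basis:
#   F(x, y) = (x + y, y)   ->  [F] = [[1, 1], [0, 1]]
#   G(x, y) = (2x, x - y)  ->  [G] = [[2, 0], [1, -1]]
F_mat = [[1, 1], [0, 1]]
G_mat = [[2, 0], [1, -1]]

# Composing by hand: (G o F)(x, y) = G(x + y, y) = (2x + 2y, x),
# whose matrix relative to the usual basis is:
GF_direct = [[2, 2], [1, 0]]

# [G o F] = [G][F]
assert matmul(G_mat, F_mat) == GF_direct
```

Note the order: the matrix of the map applied second multiplies on the left, exactly as in the theorem.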

6.12. Let A be the matrix representation of a linear operator T. Prove that, for any polynomial f(t), we have that f(A) is the matrix representation of f(T). [Thus, f(T) = 0 if and only if f(A) = 0.]

Let φ be the mapping that sends an operator T into its matrix representation A. We need to prove that φ(f(T)) = f(A). Suppose f(t) = a_n t^n + ... + a_1 t + a_0. The proof is by induction on n, the degree of f(t).

Suppose n = 0. Recall that φ(I') = I, where I' is the identity mapping and I is the identity matrix. Thus,

    φ(f(T)) = φ(a_0 I') = a_0 φ(I') = a_0 I = f(A)

and so the theorem holds for n = 0.

Now assume the theorem holds for polynomials of degree less than n. Then, because φ is an algebra isomorphism,

    φ(f(T)) = φ(a_n T^n + a_{n-1} T^{n-1} + ... + a_1 T + a_0 I')
            = a_n φ(T) φ(T^{n-1}) + φ(a_{n-1} T^{n-1} + ... + a_1 T + a_0 I')
            = a_n A A^{n-1} + (a_{n-1} A^{n-1} + ... + a_1 A + a_0 I) = f(A)

and the theorem is proved.

Change of Basis

The coordinate vector [v]_S in this section will always denote a column vector; that is,

    [v]_S = [a_1, a_2, ..., a_n]^T

6.13. Consider the following bases of R^2:

    E = {e_1, e_2} = {(1, 0), (0, 1)}    and    S = {u_1, u_2} = {(1, 3), (1, 4)}

(a) Find the change-of-basis matrix P from the usual basis E to S.
(b) Find the change-of-basis matrix Q from S back to E.
(c) Find the coordinate vector [v]_S of v = (5, -3) relative to S.

(a) Because E is the usual basis, simply write the basis vectors in S as columns:

    P = [ 1  1 ]
        [ 3  4 ]

(b) Method 1. Use the definition of the change-of-basis matrix. That is, express each vector in E as a linear combination of the vectors in S. We do this by first finding the coordinates of an arbitrary vector v = (a, b) relative to S. We have

    (a, b) = x(1, 3) + y(1, 4) = (x + y, 3x + 4y)    or    x + y = a
                                                           3x + 4y = b

Solve for x and y to obtain x = 4a - b, y = -3a + b. Thus,

    v = (4a - b)u_1 + (-3a + b)u_2    and    [v]_S = [(a, b)]_S = [4a - b, -3a + b]^T

Using the above formula for [v]_S and writing the coordinates of the e_i as columns yields

    e_1 = (1, 0) = 4u_1 - 3u_2
    e_2 = (0, 1) = -u_1 + u_2        and    Q = [  4  -1 ]
                                                [ -3   1 ]

Method 2.
Because Q = P^-1, find P^-1, say by using the formula for the inverse of a 2 × 2 matrix. Thus,

    P^-1 = [  4  -1 ]
           [ -3   1 ]

(c) Method 1. Write v as a linear combination of the vectors in S, say by using the above formula for v = (a, b). We have v = (5, -3) = 23u_1 - 18u_2, and so [v]_S = [23, -18]^T.

Method 2. Use, from Theorem 6.6, the fact that [v]_S = P^-1 [v]_E and the fact that [v]_E = [5, -3]^T:

    [v]_S = P^-1 [v]_E = [  4  -1 ] [  5 ]   = [  23 ]
                         [ -3   1 ] [ -3 ]     [ -18 ]
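Both methods of part (c) can be replayed in a few lines of plain Python (the `matvec` helper is ours, not the text's), using the matrices P and Q = P^-1 from Problem 6.13:

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

P = [[1, 1], [3, 4]]     # columns are the basis vectors (1, 3) and (1, 4) of S
Q = [[4, -1], [-3, 1]]   # Q = P^-1

# Method 2 of part (c): [v]_S = P^-1 [v]_E for v = (5, -3).
coords = matvec(Q, [5, -3])

# Cross-check against Method 1: 23*(1, 3) - 18*(1, 4) = (5, -3).
assert (23 * 1 - 18 * 1, 23 * 3 - 18 * 4) == (5, -3)
```

The cross-check reconstructs v from its S-coordinates, confirming that the two methods agree.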

6.14. The vectors u_1 = (1, 2, 0), u_2 = (1, 3, 2), u_3 = (0, 1, 3) form a basis S of R^3. Find

(a) The change-of-basis matrix P from the usual basis E = {e_1, e_2, e_3} to S.
(b) The change-of-basis matrix Q from S back to E.

(a) Because E is the usual basis, simply write the basis vectors of S as columns:

    P = [ 1  1  0 ]
        [ 2  3  1 ]
        [ 0  2  3 ]

(b) Method 1. Express each basis vector of E as a linear combination of the basis vectors of S by first finding the coordinates of an arbitrary vector v = (a, b, c) relative to the basis S. We have

    [ a ]     [ 1 ]     [ 1 ]     [ 0 ]           x + y      = a
    [ b ] = x [ 2 ] + y [ 3 ] + z [ 1 ]    or    2x + 3y + z = b
    [ c ]     [ 0 ]     [ 2 ]     [ 3 ]           2y + 3z    = c

Solve for x, y, z to get x = 7a - 3b + c, y = -6a + 3b - c, z = 4a - 2b + c. Thus,

    v = (a, b, c) = (7a - 3b + c)u_1 + (-6a + 3b - c)u_2 + (4a - 2b + c)u_3

or

    [v]_S = [(a, b, c)]_S = [7a - 3b + c, -6a + 3b - c, 4a - 2b + c]^T

Using the above formula for [v]_S and then writing the coordinates of the e_i as columns yields

    e_1 = (1, 0, 0) = 7u_1 - 6u_2 + 4u_3
    e_2 = (0, 1, 0) = -3u_1 + 3u_2 - 2u_3        and    Q = [  7  -3   1 ]
    e_3 = (0, 0, 1) = u_1 - u_2 + u_3                       [ -6   3  -1 ]
                                                            [  4  -2   1 ]

Method 2. Find P^-1 by row reducing M = [P, I] to the form [I, P^-1]. Thus,

    Q = P^-1 = [  7  -3   1 ]
               [ -6   3  -1 ]
               [  4  -2   1 ]

6.15. Suppose the x-axis and y-axis in the plane R^2 are rotated counterclockwise 45° so that the new x'-axis and y'-axis are along the line y = x and the line y = -x, respectively.

(a) Find the change-of-basis matrix P.
(b) Find the coordinates of the point A(5, 6) under the given rotation.

(a) The unit vectors in the direction of the new x'- and y'-axes are

    u_1 = (1/√2, 1/√2)    and    u_2 = (-1/√2, 1/√2)

(The unit vectors in the direction of the original x and y axes are the usual basis of R^2.) Thus, write the coordinates of u_1 and u_2 as columns to obtain

    P = [ 1/√2  -1/√2 ]
        [ 1/√2   1/√2 ]

(b) Multiply the coordinates of the point by P^-1:

    [  1/√2   1/√2 ] [ 5 ]   = [ 11/√2 ]
    [ -1/√2   1/√2 ] [ 6 ]     [  1/√2 ]

(Because P is orthogonal, P^-1 is simply the transpose of P.)
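The rotation in Problem 6.15 can be checked with floating-point arithmetic; since 1/√2 is irrational, the comparison is made up to a small tolerance:

```python
import math

c = 1 / math.sqrt(2)
P = [[c, -c],
     [c,  c]]          # columns are the rotated unit vectors u1 and u2

# P is orthogonal, so P^-1 = P^T. New coordinates of the point A(5, 6):
x_new = P[0][0] * 5 + P[1][0] * 6   # first row of P^T times (5, 6)
y_new = P[0][1] * 5 + P[1][1] * 6   # second row of P^T times (5, 6)

# Expected: (11/sqrt(2), 1/sqrt(2)), as computed in part (b).
assert abs(x_new - 11 / math.sqrt(2)) < 1e-12
assert abs(y_new - 1 / math.sqrt(2)) < 1e-12
```

The tolerance-based assertions are the standard way to compare floating-point results; exact equality would be fragile here.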

6.16. The vectors u_1 = (1, 1, 0), u_2 = (0, 1, 1), u_3 = (1, 2, 2) form a basis S of R^3. Find the coordinates of an arbitrary vector v = (a, b, c) relative to the basis S.

Method 1. Express v as a linear combination of u_1, u_2, u_3 using unknowns x, y, z. We have

    (a, b, c) = x(1, 1, 0) + y(0, 1, 1) + z(1, 2, 2) = (x + z, x + y + 2z, y + 2z)

This yields the system

    x + z = a                 x + z = a                 x + z = a
    x + y + 2z = b     or     y + z = -a + b     or     y + z = -a + b
    y + 2z = c                y + 2z = c                z = a - b + c

Solving by back-substitution yields x = b - c, y = -2a + 2b - c, z = a - b + c. Thus,

    [v]_S = [b - c, -2a + 2b - c, a - b + c]^T

Method 2. Find P^-1 by row reducing M = [P, I] to the form [I, P^-1], where P is the change-of-basis matrix from the usual basis E to S or, in other words, the matrix whose columns are the basis vectors of S. Thus,

    P^-1 = [  0   1  -1 ]                          [  0   1  -1 ] [ a ]   [     b - c    ]
           [ -2   2  -1 ]    and    [v]_S = P^-1 = [ -2   2  -1 ] [ b ] = [ -2a + 2b - c ]
           [  1  -1   1 ]                          [  1  -1   1 ] [ c ]   [  a - b + c   ]

6.17. Consider the following bases of R^2:

    S = {u_1, u_2} = {(1, -2), (3, -4)}    and    S' = {v_1, v_2} = {(1, 3), (3, 8)}

(a) Find the coordinates of v = (a, b) relative to the basis S.
(b) Find the change-of-basis matrix P from S to S'.
(c) Find the coordinates of v = (a, b) relative to the basis S'.
(d) Find the change-of-basis matrix Q from S' back to S.
(e) Verify Q = P^-1.
(f) Show that, for any vector v = (a, b) in R^2, P^-1 [v]_S = [v]_S'. (See Theorem 6.6.)

(a) Let v = xu_1 + yu_2 for unknowns x and y; that is,

    [ a ] = x [  1 ] + y [  3 ]    or     x + 3y = a
    [ b ]     [ -2 ]     [ -4 ]         -2x - 4y = b

Solve for x and y in terms of a and b to get x = -2a - (3/2)b and y = a + (1/2)b. Thus,

    (a, b) = (-2a - (3/2)b)u_1 + (a + (1/2)b)u_2    or    [(a, b)]_S = [-2a - (3/2)b, a + (1/2)b]^T

(b) Use part (a) to write each of the basis vectors v_1 and v_2 of S' as a linear combination of the basis vectors u_1 and u_2 of S; that is,

    v_1 = (1, 3) = (-2 - 9/2)u_1 + (1 + 3/2)u_2 = -(13/2)u_1 + (5/2)u_2
    v_2 = (3, 8) = (-6 - 12)u_1 + (3 + 4)u_2 = -18u_1 + 7u_2

Then P is the matrix whose columns are the coordinates of v_1 and v_2 relative to the basis S; that is,

    P = [ -13/2  -18 ]
        [   5/2    7 ]

(c) Let v = xv_1 + yv_2 for unknown scalars x and y:

    [ a ] = x [ 1 ] + y [ 3 ]    or    x + 3y = a
    [ b ]     [ 3 ]     [ 8 ]          3x + 8y = b

Solve for x and y to get x = -8a + 3b and y = 3a - b. Thus,

    (a, b) = (-8a + 3b)v_1 + (3a - b)v_2    or    [(a, b)]_S' = [-8a + 3b, 3a - b]^T

(d) Use part (c) to express each of the basis vectors u_1 and u_2 of S as a linear combination of the basis vectors v_1 and v_2 of S':

    u_1 = (1, -2) = (-8 - 6)v_1 + (3 + 2)v_2 = -14v_1 + 5v_2
    u_2 = (3, -4) = (-24 - 12)v_1 + (9 + 4)v_2 = -36v_1 + 13v_2

Write the coordinates of u_1 and u_2 relative to S' as columns to obtain

    Q = [ -14  -36 ]
        [   5   13 ]

(e) QP = [ -14  -36 ] [ -13/2  -18 ]   = [ 1  0 ]   = I
         [   5   13 ] [   5/2    7 ]     [ 0  1 ]

(f) Use parts (a), (c), and (d) to obtain

    P^-1 [v]_S = Q [v]_S = [ -14  -36 ] [ -2a - (3/2)b ]   = [ -8a + 3b ]   = [v]_S'
                           [   5   13 ] [  a + (1/2)b  ]     [  3a - b  ]

6.18. Suppose P is the change-of-basis matrix from a basis {u_i} to a basis {w_i}, and suppose Q is the change-of-basis matrix from the basis {w_i} back to {u_i}. Prove that P is invertible and that Q = P^-1.

Suppose, for i = 1, 2, ..., n, that

    w_i = a_i1 u_1 + a_i2 u_2 + ... + a_in u_n = Σ_j a_ij u_j        (1)

and, for j = 1, 2, ..., n,

    u_j = b_j1 w_1 + b_j2 w_2 + ... + b_jn w_n = Σ_k b_jk w_k        (2)

Let A = [a_ij] and B = [b_jk]. Then P = A^T and Q = B^T. Substituting (2) into (1) yields

    w_i = Σ_j a_ij (Σ_k b_jk w_k) = Σ_k (Σ_j a_ij b_jk) w_k

Because {w_i} is a basis, Σ_j a_ij b_jk = δ_ik, where δ_ik is the Kronecker delta; that is, δ_ik = 1 if i = k but δ_ik = 0 if i ≠ k. Suppose AB = [c_ik]. Then c_ik = δ_ik. Accordingly, AB = I, and so

    QP = B^T A^T = (AB)^T = I^T = I

Thus, Q = P^-1.

6.19. Consider a finite sequence of vectors S = {u_1, u_2, ..., u_n}. Let S' be the sequence of vectors obtained from S by one of the following "elementary operations": (1) interchange two vectors, (2) multiply a vector by a nonzero scalar, (3) add a multiple of one vector to another vector. Show that S and S' span the same subspace W. Also, show that S' is linearly independent if and only if S is linearly independent.
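Part (e) of Problem 6.17 involves fractional entries, so exact rational arithmetic is the natural tool for the check. A short sketch (the `matmul` helper is ours, not the text's):

```python
from fractions import Fraction as Fr

def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Change-of-basis matrices from Problem 6.17:
P = [[Fr(-13, 2), Fr(-18)], [Fr(5, 2), Fr(7)]]   # from S to S'
Q = [[Fr(-14), Fr(-36)], [Fr(5), Fr(13)]]        # from S' back to S

# Part (e): QP = I, so Q = P^-1.
assert matmul(Q, P) == [[1, 0], [0, 1]]
```

With `Fraction` the products -14 * (-13/2) - 36 * (5/2) = 91 - 90 = 1 and so on come out exactly, with no floating-point rounding.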

Observe that, for each operation, the vectors in S' are linear combinations of vectors in S. Also, because each operation has an inverse operation of the same type, each vector in S is a linear combination of vectors in S'. Thus, S and S' span the same subspace W. Moreover, S' is linearly independent if and only if dim W = n, and this is true if and only if S is linearly independent.

6.20. Let A = [a_{ij}] and B = [b_{ij}] be row equivalent m x n matrices over a field K, and let v_1, v_2, ..., v_n be any vectors in a vector space V over K. For i = 1, 2, ..., m, let u_i and w_i be defined by
    u_i = a_{i1}v_1 + a_{i2}v_2 + ... + a_{in}v_n   and   w_i = b_{i1}v_1 + b_{i2}v_2 + ... + b_{in}v_n
Show that {u_i} and {w_i} span the same subspace of V.

Applying an "elementary operation" of Problem 6.19 to {u_i} is equivalent to applying an elementary row operation to the matrix A. Because A and B are row equivalent, B can be obtained from A by a sequence of elementary row operations. Hence, {w_i} can be obtained from {u_i} by the corresponding sequence of operations. Accordingly, {u_i} and {w_i} span the same space.

6.21. Suppose u_1, u_2, ..., u_n belong to a vector space V over a field K, and suppose P = [a_{ij}] is an n-square matrix over K. For i = 1, 2, ..., n, let v_i = a_{i1}u_1 + a_{i2}u_2 + ... + a_{in}u_n.
(a) Suppose P is invertible. Show that {u_i} and {v_i} span the same subspace of V. Hence, {u_i} is linearly independent if and only if {v_i} is linearly independent.
(b) Suppose P is singular (not invertible). Show that {v_i} is linearly dependent.
(c) Suppose {v_i} is linearly independent. Show that P is invertible.
(a) Because P is invertible, it is row equivalent to the identity matrix I. Hence, by Problem 6.20, {v_i} and {u_i} span the same subspace of V. Thus, one is linearly independent if and only if the other is linearly independent.
(b) Because P is not invertible, it is row equivalent to a matrix with a zero row.
This means {v_i} spans a subspace that has a spanning set with fewer than n elements. Thus, {v_i} is linearly dependent.
(c) This is the contrapositive of the statement of part (b), and so it follows from part (b).

6.22. Prove Theorem 6.6: Let P be the change-of-basis matrix from a basis S to a basis S' in a vector space V. Then, for any vector v in V, we have P[v]_{S'} = [v]_S, and hence, P^{-1}[v]_S = [v]_{S'}.

Suppose S = {u_1, ..., u_n} and S' = {w_1, ..., w_n}, and suppose, for i = 1, ..., n,
    w_i = a_{i1}u_1 + a_{i2}u_2 + ... + a_{in}u_n = Σ_{j=1}^{n} a_{ij}u_j          (1)
Then P is the n-square matrix whose jth row is
    (a_{1j}, a_{2j}, ..., a_{nj})          (2)
Also suppose v = k_1w_1 + k_2w_2 + ... + k_nw_n = Σ_{i=1}^{n} k_iw_i. Then
    [v]_{S'} = [k_1, k_2, ..., k_n]^T          (3)
Substituting for w_i in the equation for v, we obtain
    v = Σ_{i=1}^{n} k_iw_i = Σ_{i=1}^{n} k_i ( Σ_{j=1}^{n} a_{ij}u_j ) = Σ_{j=1}^{n} (a_{1j}k_1 + a_{2j}k_2 + ... + a_{nj}k_n) u_j
Accordingly, [v]_S is the column vector whose jth entry is
    a_{1j}k_1 + a_{2j}k_2 + ... + a_{nj}k_n          (4)
On the other hand, the jth entry of P[v]_{S'} is obtained by multiplying the jth row of P by [v]_{S'}, that is, (2) by (3). However, the product of (2) and (3) is (4). Hence, P[v]_{S'} and [v]_S have the same entries. Thus, P[v]_{S'} = [v]_S, as claimed.
Furthermore, multiplying the above by P^{-1} gives P^{-1}[v]_S = P^{-1}P[v]_{S'} = [v]_{S'}.
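Proposition 6.4 and Theorem 6.6 lend themselves to a quick numerical spot check. The sketch below is only an illustration: NumPy, the sample vector, and the reuse of the pair of bases S = {(1, -2), (3, -4)} and S' = {(1, 3), (3, 8)} are all assumptions of the sketch, not part of the text's argument.

```python
import numpy as np

# Columns of U are the basis vectors of S; columns of V are those of S'.
U = np.array([[1.0, 3.0], [-2.0, -4.0]])   # S  = {(1, -2), (3, -4)}
V = np.array([[1.0, 3.0], [3.0, 8.0]])     # S' = {(1, 3), (3, 8)}

# Change-of-basis matrix from S to S': its columns are [v1]_S and [v2]_S,
# found by solving U x = v_i (Remark 1 of the change-of-basis definition).
P = np.linalg.solve(U, V)
# Change-of-basis matrix from S' back to S.
Q = np.linalg.solve(V, U)

assert np.allclose(Q @ P, np.eye(2))   # Proposition 6.4: Q = P^{-1}

# Theorem 6.6: P [v]_{S'} = [v]_S for any v (here a sample vector).
v = np.array([7.0, -5.0])
v_S  = np.linalg.solve(U, v)           # coordinates of v relative to S
v_Sp = np.linalg.solve(V, v)           # coordinates of v relative to S'
assert np.allclose(P @ v_Sp, v_S)
```

For these two bases, P works out to [ -13/2 -18 ; 5/2 7 ], so the column-by-column linear solve reproduces the hand computation.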

Linear Operators and Change of Basis

6.23. Consider the linear transformation F on R^2 defined by F(x, y) = (5x - y, 2x + y) and the following bases of R^2:
    E = {e1, e2} = {(1, 0), (0, 1)}   and   S = {u1, u2} = {(1, 4), (2, 7)}
(a) Find the change-of-basis matrix P from E to S and the change-of-basis matrix Q from S back to E.
(b) Find the matrix A that represents F in the basis E.
(c) Find the matrix B that represents F in the basis S.
(a) Because E is the usual basis, simply write the vectors in S as columns to obtain the change-of-basis matrix P. Recall, also, that Q = P^{-1}. Thus,
    P = [ 1 2 ; 4 7 ]   and   Q = P^{-1} = [ -7 2 ; 4 -1 ]
(b) Write the coefficients of x and y in F(x, y) = (5x - y, 2x + y) as rows to get
    A = [ 5 -1 ; 2 1 ]
(c) Method 1. Find the coordinates of F(u1) and F(u2) relative to the basis S. This may be done by first finding the coordinates of an arbitrary vector (a, b) in R^2 relative to the basis S. We have
    (a, b) = x(1, 4) + y(2, 7) = (x + 2y, 4x + 7y),   and so   x + 2y = a, 4x + 7y = b
Solve for x and y in terms of a and b to get x = -7a + 2b, y = 4a - b. Then
    (a, b) = (-7a + 2b)u1 + (4a - b)u2
Now use the formula for (a, b) to obtain
    F(u1) = F(1, 4) = (1, 6) = 5u1 - 2u2
    F(u2) = F(2, 7) = (3, 11) = u1 + u2
and so
    B = [ 5 1 ; -2 1 ]
Method 2. By Theorem 6.7, B = P^{-1}AP. Thus,
    B = P^{-1}AP = [ -7 2 ; 4 -1 ][ 5 -1 ; 2 1 ][ 1 2 ; 4 7 ] = [ 5 1 ; -2 1 ]

6.24. Let A = [ 2 3 ; 4 -1 ]. Find the matrix B that represents the linear operator A relative to the basis S = {u1, u2} = {[1, 3]^T, [2, 5]^T}. [Recall that A defines a linear operator A: R^2 -> R^2 relative to the usual basis E of R^2.]
Method 1. Find the coordinates of A(u1) and A(u2) relative to the basis S by first finding the coordinates of an arbitrary vector [a, b]^T in R^2 relative to the basis S. By Problem 6.2,
    [a, b]^T = (-5a + 2b)u1 + (3a - b)u2
Using the formula for [a, b]^T, we obtain
    A(u1) = [ 2 3 ; 4 -1 ][ 1 ; 3 ] = [ 11 ; 1 ] = -53u1 + 32u2
and
    A(u2) = [ 2 3 ; 4 -1 ][ 2 ; 5 ] = [ 19 ; 3 ] = -89u1 + 54u2
Thus,
    B = [ -53 -89 ; 32 54 ]
Method 2. Use B = P^{-1}AP, where P is the change-of-basis matrix from the usual basis E to S. Thus, simply write the vectors in S (as columns) to obtain the change-of-basis matrix P and then use the formula

for P^{-1}. This gives
    P = [ 1 2 ; 3 5 ]   and   P^{-1} = [ -5 2 ; 3 -1 ]
Then
    B = P^{-1}AP = [ -5 2 ; 3 -1 ][ 2 3 ; 4 -1 ][ 1 2 ; 3 5 ] = [ -53 -89 ; 32 54 ]

6.25. Let A = [ 1 3 1 ; 2 -1 2 ; 1 1 -1 ]. Find the matrix B that represents the linear operator A relative to the basis
    S = {u1, u2, u3} = {[1, 1, 0]^T, [0, 1, 1]^T, [1, 2, 2]^T}
[Recall that A defines a linear operator A: R^3 -> R^3 relative to the usual basis E of R^3.]
Method 1. Find the coordinates of A(u1), A(u2), A(u3) relative to the basis S by first finding the coordinates of an arbitrary vector v = (a, b, c) in R^3 relative to the basis S. By Problem 6.16,
    [a, b, c]^T = (b - c)u1 + (-2a + 2b - c)u2 + (a - b + c)u3
Using this formula for [a, b, c]^T, we obtain
    A(u1) = [4, 1, 2]^T = -u1 - 8u2 + 5u3,   A(u2) = [4, 1, 0]^T = u1 - 6u2 + 3u3,   A(u3) = [9, 4, 1]^T = 3u1 - 11u2 + 6u3
Writing the coefficients of u1, u2, u3 as columns yields
    B = [ -1 1 3 ; -8 -6 -11 ; 5 3 6 ]
Method 2. Use B = P^{-1}AP, where P is the change-of-basis matrix from the usual basis E to S. The matrix P (whose columns are simply the vectors in S) and P^{-1} appear in Problem 6.16. Thus,
    B = P^{-1}AP = [ 0 1 -1 ; -2 2 -1 ; 1 -1 1 ][ 1 3 1 ; 2 -1 2 ; 1 1 -1 ][ 1 0 1 ; 1 1 2 ; 0 1 2 ] = [ -1 1 3 ; -8 -6 -11 ; 5 3 6 ]

6.26. Prove Theorem 6.7: Let P be the change-of-basis matrix from a basis S to a basis S' in a vector space V. Then, for any linear operator T on V, [T]_{S'} = P^{-1}[T]_S P.
Let v be a vector in V. Then, by Theorem 6.6, P[v]_{S'} = [v]_S. Therefore,
    P^{-1}[T]_S P[v]_{S'} = P^{-1}[T]_S[v]_S = P^{-1}[T(v)]_S = [T(v)]_{S'}
But [T]_{S'}[v]_{S'} = [T(v)]_{S'}. Hence,
    P^{-1}[T]_S P[v]_{S'} = [T]_{S'}[v]_{S'}
Because the mapping v -> [v]_{S'} is onto K^n, we have P^{-1}[T]_S PX = [T]_{S'}X for every X in K^n. Thus, P^{-1}[T]_S P = [T]_{S'}, as claimed.

Similarity of Matrices

6.27. Let A = [ 4 -2 ; 3 6 ] and P = [ 1 2 ; 3 4 ].
(a) Find B = P^{-1}AP. (b) Verify tr(B) = tr(A). (c) Verify det(B) = det(A).
(a) First find P^{-1} using the formula for the inverse of a 2 x 2 matrix. We have
    P^{-1} = [ -2 1 ; 3/2 -1/2 ]

Then
    B = P^{-1}AP = [ -2 1 ; 3/2 -1/2 ][ 4 -2 ; 3 6 ][ 1 2 ; 3 4 ] = [ 25 30 ; -27/2 -15 ]
(b) tr(A) = 4 + 6 = 10 and tr(B) = 25 - 15 = 10. Hence, tr(B) = tr(A).
(c) det(A) = 24 + 6 = 30 and det(B) = -375 + 405 = 30. Hence, det(B) = det(A).

6.28. Find the trace of each of the linear transformations F on R^3 in Problem 6.4.
Find the trace (sum of the diagonal elements) of any matrix representation of F, such as the matrix representation [F] = [F]_E of F relative to the usual basis E given in Problem 6.4.
(a) tr(F) = tr([F]) = 5 + 1 + 9 = 15.
(b) tr(F) = tr([F]) = 9.
(c) tr(F) = tr([F]) = · + 4 + · .

6.29. Write A ~ B if A is similar to B, that is, if there exists an invertible matrix P such that A = P^{-1}BP. Prove that ~ is an equivalence relation (on square matrices); that is,
(a) A ~ A, for every A. (b) If A ~ B, then B ~ A. (c) If A ~ B and B ~ C, then A ~ C.
(a) The identity matrix I is invertible, and I = I^{-1}. Because A = I^{-1}AI, we have A ~ A.
(b) Because A ~ B, there exists an invertible matrix P such that A = P^{-1}BP. Hence, B = PAP^{-1} = (P^{-1})^{-1}A(P^{-1}), and P^{-1} is also invertible. Thus, B ~ A.
(c) Because A ~ B, there exists an invertible matrix P such that A = P^{-1}BP, and because B ~ C, there exists an invertible matrix Q such that B = Q^{-1}CQ. Thus,
    A = P^{-1}BP = P^{-1}(Q^{-1}CQ)P = (P^{-1}Q^{-1})C(QP) = (QP)^{-1}C(QP)
and QP is also invertible. Thus, A ~ C.

6.30. Suppose B is similar to A, say B = P^{-1}AP. Prove
(a) B^n = P^{-1}A^nP, and so B^n is similar to A^n.
(b) f(B) = P^{-1}f(A)P, for any polynomial f(x), and so f(B) is similar to f(A).
(c) B is a root of a polynomial g(x) if and only if A is a root of g(x).
(a) The proof is by induction on n. The result holds for n = 1 by hypothesis. Suppose n > 1 and the result holds for n - 1. Then
    B^n = BB^{n-1} = (P^{-1}AP)(P^{-1}A^{n-1}P) = P^{-1}A^nP
(b) Suppose f(x) = a_nx^n + ... + a_1x + a_0. Using the left and right distributive laws and part (a), we have
    P^{-1}f(A)P = P^{-1}(a_nA^n + ... + a_1A + a_0I)P
                = P^{-1}(a_nA^n)P + ... + P^{-1}(a_1A)P + P^{-1}(a_0I)P
                = a_n(P^{-1}A^nP) + ... + a_1(P^{-1}AP) + a_0(P^{-1}IP)
                = a_nB^n + ... + a_1B + a_0I = f(B)
(c) By part (b), g(B) = 0 if and only if P^{-1}g(A)P = 0 if and only if g(A) = P0P^{-1} = 0.
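Because similar matrices represent the same operator in different bases, they share trace and determinant, and polynomials pass through the similarity. The following NumPy sketch checks these facts; the matrices A = [ 4 -2 ; 3 6 ] and P = [ 1 2 ; 3 4 ] and the sample polynomial are assumptions of the illustration.

```python
import numpy as np

A = np.array([[4.0, -2.0], [3.0, 6.0]])
P = np.array([[1.0, 2.0], [3.0, 4.0]])

B = np.linalg.inv(P) @ A @ P   # B = P^{-1} A P, so B is similar to A

# Similar matrices share trace and determinant.
assert np.isclose(np.trace(B), np.trace(A))
assert np.isclose(np.linalg.det(B), np.linalg.det(A))

# f(B) = P^{-1} f(A) P for any polynomial f, e.g. f(x) = x^2 - 3x + 2.
def f(M):
    return M @ M - 3 * M + 2 * np.eye(2)

assert np.allclose(f(B), np.linalg.inv(P) @ f(A) @ P)
```

Here both trace values come out to 10 and both determinants to 30, matching the hand verification.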
Matrix Representations of General Linear Mappings

6.31. Let F: R^3 -> R^2 be the linear map defined by F(x, y, z) = (3x + 2y - 4z, x - 5y + 3z).
(a) Find the matrix of F in the following bases of R^3 and R^2:
    S = {w1, w2, w3} = {(1, 1, 1), (1, 1, 0), (1, 0, 0)}   and   S' = {u1, u2} = {(1, 3), (2, 5)}

(b) Verify Theorem 6.10: The action of F is preserved by its matrix representation; that is, for any v in R^3, we have [F]_{S,S'}[v]_S = [F(v)]_{S'}.
(a) From Problem 6.2, (a, b) = (-5a + 2b)u1 + (3a - b)u2. Thus,
    F(w1) = F(1, 1, 1) = (1, -1) = -7u1 + 4u2
    F(w2) = F(1, 1, 0) = (5, -4) = -33u1 + 19u2
    F(w3) = F(1, 0, 0) = (3, 1) = -13u1 + 8u2
Write the coordinates of F(w1), F(w2), F(w3) as columns to get
    [F]_{S,S'} = [ -7 -33 -13 ; 4 19 8 ]
(b) If v = (x, y, z), then, by Problem 6.5, v = zw1 + (y - z)w2 + (x - y)w3. Also,
    F(v) = (3x + 2y - 4z, x - 5y + 3z) = (-13x - 20y + 26z)u1 + (8x + 11y - 15z)u2
Hence,
    [v]_S = (z, y - z, x - y)^T   and   [F(v)]_{S'} = [ -13x - 20y + 26z ; 8x + 11y - 15z ]
Thus,
    [F]_{S,S'}[v]_S = [ -7 -33 -13 ; 4 19 8 ][ z ; y - z ; x - y ] = [ -13x - 20y + 26z ; 8x + 11y - 15z ] = [F(v)]_{S'}

6.32. Let F: R^n -> R^m be the linear mapping defined as follows:
    F(x_1, x_2, ..., x_n) = (a_{11}x_1 + ... + a_{1n}x_n, a_{21}x_1 + ... + a_{2n}x_n, ..., a_{m1}x_1 + ... + a_{mn}x_n)
(a) Show that the rows of the matrix [F] representing F relative to the usual bases of R^n and R^m are the coefficients of the x_i in the components of F(x_1, ..., x_n).
(b) Find the matrix representation of each of the following linear mappings relative to the usual bases:
    (i) F: R^2 -> R^3 defined by F(x, y) = (x - y, x + 4y, 5x - 6y).
    (ii) F: R^4 -> R^2 defined by F(x, y, s, t) = (x - 4y + s - 5t, 5x + y - s - t).
    (iii) F: R^3 -> R^4 defined by F(x, y, z) = (x + y - 8z, x + y + z, 4x - 5z, 6y).
(a) We have
    F(1, 0, ..., 0) = (a_{11}, a_{21}, ..., a_{m1})
    F(0, 1, ..., 0) = (a_{12}, a_{22}, ..., a_{m2})
    .................................................
    F(0, 0, ..., 1) = (a_{1n}, a_{2n}, ..., a_{mn})
and thus
    [F] = [ a_{11} a_{12} ... a_{1n} ; a_{21} a_{22} ... a_{2n} ; ... ; a_{m1} a_{m2} ... a_{mn} ]
(b) By part (a), we need only look at the coefficients of the unknowns x, y, ... in F(x, y, ...). Thus,
    (i) [F] = [ 1 -1 ; 1 4 ; 5 -6 ],   (ii) [F] = [ 1 -4 1 -5 ; 5 1 -1 -1 ],   (iii) [F] = [ 1 1 -8 ; 1 1 1 ; 4 0 -5 ; 0 6 0 ]

6.33. Let A = [ 2 5 -3 ; 1 -4 7 ]. Recall that A determines a mapping F: R^3 -> R^2 defined by F(v) = Av, where vectors are written as columns. Find the matrix [F] that represents the mapping relative to the following bases of R^3 and R^2:
(a) The usual bases of R^3 and of R^2.
(b) S = {w1, w2, w3} = {(1, 1, 1), (1, 1, 0), (1, 0, 0)} and S' = {u1, u2} = {(1, 3), (2, 5)}.
(a) Relative to the usual bases, [F] is the matrix A.

(b) From Problem 6.2, (a, b) = (-5a + 2b)u1 + (3a - b)u2. Thus,
    F(w1) = [ 2 5 -3 ; 1 -4 7 ][ 1 ; 1 ; 1 ] = [ 4 ; 4 ] = -12u1 + 8u2
    F(w2) = [ 2 5 -3 ; 1 -4 7 ][ 1 ; 1 ; 0 ] = [ 7 ; -3 ] = -41u1 + 24u2
    F(w3) = [ 2 5 -3 ; 1 -4 7 ][ 1 ; 0 ; 0 ] = [ 2 ; 1 ] = -8u1 + 5u2
Writing the coefficients of F(w1), F(w2), F(w3) as columns yields
    [F] = [ -12 -41 -8 ; 8 24 5 ]

6.34. Consider the linear transformation T on R^2 defined by T(x, y) = (2x - 3y, x + 4y) and the following bases of R^2:
    E = {e1, e2} = {(1, 0), (0, 1)}   and   S = {u1, u2} = {(1, 3), (2, 5)}
(a) Find the matrix A representing T relative to the bases E and S.
(b) Find the matrix B representing T relative to the bases S and E.
(We can view T as a linear mapping from one space into another, each having its own basis.)
(a) From Problem 6.2, (a, b) = (-5a + 2b)u1 + (3a - b)u2. Hence,
    T(e1) = T(1, 0) = (2, 1) = -8u1 + 5u2
    T(e2) = T(0, 1) = (-3, 4) = 23u1 - 13u2
and so
    A = [ -8 23 ; 5 -13 ]
(b) We have
    T(u1) = T(1, 3) = (-7, 13) = -7e1 + 13e2
    T(u2) = T(2, 5) = (-11, 22) = -11e1 + 22e2
and so
    B = [ -7 -11 ; 13 22 ]

6.35. How are the matrices A and B in Problem 6.34 related?
By Theorem 6.13, the matrices A and B are equivalent to each other; that is, there exist nonsingular matrices P and Q such that B = Q^{-1}AP, where P is the change-of-basis matrix from E to S in the domain and Q is the change-of-basis matrix from S to E in the target space. Thus,
    P = [ 1 2 ; 3 5 ],   Q = [ -5 2 ; 3 -1 ],   Q^{-1} = [ 1 2 ; 3 5 ]
and
    Q^{-1}AP = [ 1 2 ; 3 5 ][ -8 23 ; 5 -13 ][ 1 2 ; 3 5 ] = [ -7 -11 ; 13 22 ] = B

6.36. Prove Theorem 6.14: Let F: V -> U be linear and, say, rank(F) = r. Then there exist bases of V and of U such that the matrix representation of F has the following form, where I_r is the r-square identity matrix:
    A = [ I_r 0 ; 0 0 ]
Suppose dim V = m and dim U = n. Let W be the kernel of F and U' the image of F. We are given that rank(F) = r. Hence, the dimension of the kernel of F is m - r. Let {w_1, ..., w_{m-r}} be a basis of the kernel of F and extend this to a basis of V:
    {v_1, ..., v_r, w_1, ..., w_{m-r}}
Set
    u_1 = F(v_1), u_2 = F(v_2), ..., u_r = F(v_r)

Then {u_1, ..., u_r} is a basis of U', the image of F. Extend this to a basis of U, say
    {u_1, ..., u_r, u_{r+1}, ..., u_n}
Observe that
    F(v_1) = u_1 = 1u_1 + 0u_2 + ... + 0u_r + 0u_{r+1} + ... + 0u_n
    F(v_2) = u_2 = 0u_1 + 1u_2 + ... + 0u_r + 0u_{r+1} + ... + 0u_n
    ...........................................................................
    F(v_r) = u_r = 0u_1 + 0u_2 + ... + 1u_r + 0u_{r+1} + ... + 0u_n
    F(w_1) = 0 = 0u_1 + 0u_2 + ... + 0u_r + 0u_{r+1} + ... + 0u_n
    ...........................................................................
    F(w_{m-r}) = 0 = 0u_1 + 0u_2 + ... + 0u_r + 0u_{r+1} + ... + 0u_n
Thus, the matrix of F in the above bases has the required form.

SUPPLEMENTARY PROBLEMS

Matrices and Linear Operators

6.37. Let F: R^2 -> R^2 be defined by F(x, y) = (4x + 5y, x - y).
(a) Find the matrix A representing F in the usual basis E.
(b) Find the matrix B representing F in the basis S = {u1, u2} = {( · , 4), ( · , 9)}.
(c) Find P such that B = P^{-1}AP.
(d) For v = (a, b), find [v]_S and [F(v)]_S. Verify that [F]_S[v]_S = [F(v)]_S.

6.38. Let A: R^2 -> R^2 be defined by the matrix A = [ 5 · ; · 4 ].
(a) Find the matrix B representing A relative to the basis S = {u1, u2} = {( · , · ), ( · , 8)}. (Recall that A represents the mapping A relative to the usual basis E.)
(b) For v = (a, b), find [v]_S and [A(v)]_S.

6.39. For each linear transformation L on R^2, find the matrix A representing L (relative to the usual basis of R^2):
(a) L is the rotation in R^2 counterclockwise by 45°.
(b) L is the reflection in R^2 about the line y = x.
(c) L is defined by L(1, 0) = ( · , 5) and L(0, 1) = ( · , · ).
(d) L is defined by L( · , · ) = ( · , · ) and L( · , · ) = (5, 4).

6.40. Find the matrix representing each linear transformation T on R^3 relative to the usual basis of R^3:
(a) T(x, y, z) = (x, y, 0).
(b) T(x, y, z) = (z, y + z, x + y + z).
(c) T(x, y, z) = (x - y - 4z, x + y + 4z, 6x - 8y + z).

6.41. Repeat Problem 6.40 using the basis S = {u1, u2, u3} = {( · , · , 0), ( · , · , · ), ( · , · , 5)}.

6.42. Let L be the linear transformation on R^3 defined by
    L(1, 0, 0) = ( · , · , · ),   L(0, 1, 0) = ( · , · , 5),   L(0, 0, 1) = ( · , · , · )
(a) Find the matrix A representing L relative to the usual basis of R^3.
(b) Find the matrix B representing L relative to the basis S in Problem 6.41.

6.43. Let D denote the differential operator; that is, D(f(t)) = df/dt. Each of the following sets is a basis of a vector space V of functions. Find the matrix representing D in each basis:
(a) {e^t, e^{3t}, te^{3t}}.   (b) {1, t, sin t, cos t}.   (c) {e^{5t}, te^{5t}, t^2 e^{5t}}.
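For a function-space basis such as the one in part (c), the matrix of D is built by differentiating each basis function and recording the coordinates of the result as a column. The sketch below verifies the matrix for {e^{5t}, te^{5t}, t^2 e^{5t}} by sampling; NumPy and the sampling grid are illustration devices, not part of the exercise.

```python
import numpy as np

# Basis from part (c): f1 = e^{5t}, f2 = t e^{5t}, f3 = t^2 e^{5t}.
f = [lambda t: np.exp(5*t),
     lambda t: t * np.exp(5*t),
     lambda t: t**2 * np.exp(5*t)]

# D(f1) = 5f1, D(f2) = f1 + 5f2, D(f3) = 2f2 + 5f3, so the jth column
# of M holds the coordinates of D(f_j) relative to the basis.
M = np.array([[5.0, 1.0, 0.0],
              [0.0, 5.0, 2.0],
              [0.0, 0.0, 5.0]])

# Exact derivatives of the basis functions, for checking.
df = [lambda t: 5 * np.exp(5*t),
      lambda t: (1 + 5*t) * np.exp(5*t),
      lambda t: (2*t + 5*t**2) * np.exp(5*t)]

ts = np.linspace(-1.0, 1.0, 7)
for j in range(3):
    lhs = df[j](ts)                                   # D applied to f_j
    rhs = sum(M[i, j] * f[i](ts) for i in range(3))   # same function via M
    assert np.allclose(lhs, rhs)
```

The upper-triangular shape reflects that differentiation maps each basis function into the span of itself and the earlier ones.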

6.44. Let D denote the differential operator on the vector space V of functions with basis S = {sin θ, cos θ}.
(a) Find the matrix A = [D]_S.
(b) Use A to show that D is a zero of f(t) = t^2 + 1.

6.45. Let V be the vector space of 2 x 2 matrices. Consider the following matrix M and usual basis E of V:
    M = [ a b ; c d ]   and   E = { [ 1 0 ; 0 0 ], [ 0 1 ; 0 0 ], [ 0 0 ; 1 0 ], [ 0 0 ; 0 1 ] }
Find the matrix representing each of the following linear operators T on V relative to E:
(a) T(A) = MA.   (b) T(A) = AM.   (c) T(A) = MA - AM.

6.46. Let 1_V and 0_V denote the identity and zero operators, respectively, on a vector space V. Show that, for any basis S of V:
(a) [1_V]_S = I, the identity matrix.   (b) [0_V]_S = 0, the zero matrix.

Change of Basis

6.47. Find the change-of-basis matrix P from the usual basis E of R^2 to a basis S, the change-of-basis matrix Q from S back to E, and the coordinates of v = (a, b) relative to S, for the following bases S:
(a) S = {( · , · ), ( · , 5)}.   (b) S = {( · , · ), ( · , 8)}.   (c) S = {( · , 5), ( · , · )}.   (d) S = {( · , · ), (4, 5)}.

6.48. Consider the bases S = {( · , · ), ( · , · )} and S' = {( · , · ), ( · , 4)} of R^2. Find the change-of-basis matrix: (a) P from S to S'. (b) Q from S' back to S.

6.49. Suppose that the x-axis and y-axis in the plane R^2 are rotated counterclockwise 30° to yield a new x'-axis and y'-axis for the plane. Find
(a) The unit vectors in the direction of the new x'-axis and y'-axis.
(b) The change-of-basis matrix P for the new coordinate system.
(c) The new coordinates of the points A( · , · ), B( · , 5), C(a, b).

6.50. Find the change-of-basis matrix P from the usual basis E of R^3 to a basis S, the change-of-basis matrix Q from S back to E, and the coordinates of v = (a, b, c) relative to S, where S consists of the vectors:
(a) u1 = ( · , · , 0), u2 = (0, · , · ), u3 = (0, · , · ).
(b) u1 = ( · , 0, · ), u2 = ( · , · , · ), u3 = ( · , · , 4).
(c) u1 = ( · , · , · ), u2 = ( · , · , 4), u3 = ( · , 5, 6).

6.51. Suppose S_1, S_2, S_3 are bases of V. Let P and Q be the change-of-basis matrices, respectively, from S_1 to S_2 and from S_2 to S_3. Prove that PQ is the change-of-basis matrix from S_1 to S_3.
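For the rotated-axes problem, the new basis vectors are the unit vectors along the rotated axes, and the change-of-basis matrix is the rotation matrix whose columns they form. A minimal sketch, assuming a rotation angle of 30° and a sample point:

```python
import numpy as np

theta = np.pi / 6   # assumed angle: 30 degrees, counterclockwise

# Columns of P are the unit vectors along the new x'- and y'-axes,
# expressed in the old coordinates.
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# P is orthogonal, so the new coordinates are [v]_new = P^{-1} v = P^T v.
assert np.allclose(np.linalg.inv(P), P.T)

v = np.array([2.0, -5.0])      # any point (a, b) in old coordinates
v_new = P.T @ v
assert np.allclose(P @ v_new, v)                              # same point, new description
assert np.isclose(np.linalg.norm(v_new), np.linalg.norm(v))   # lengths are unchanged
```

Orthogonality of P is what makes the coordinate change a transpose rather than a general inverse, which is why rotated coordinate systems are so convenient.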
Linear Operators and Change of Basis

6.52. Consider the linear operator F on R^2 defined by F(x, y) = (5x + y, x - y) and the following bases of R^2:
    S = {( · , · ), ( · , · )}   and   S' = {( · , · ), ( · , 4)}
(a) Find the matrix A representing F relative to the basis S.
(b) Find the matrix B representing F relative to the basis S'.
(c) Find the change-of-basis matrix P from S to S'.
(d) How are A and B related?

6.53. Let A: R^2 -> R^2 be defined by the matrix A = [ · · ; · · ]. Find the matrix B that represents the linear operator A relative to each of the following bases: (a) S = {( · , · )^T, ( · , 5)^T}. (b) S = {( · , · )^T, ( · , 4)^T}.
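Part (d) of exercises like the one above is answered by similarity: if P is the change-of-basis matrix from S to S', then B = P^{-1}AP. The sketch below demonstrates the method with hypothetical stand-in data (the operator F(x, y) = (5x + y, x - y) and the bases S = {(1, 2), (2, 3)} and S' = {(1, 3), (1, 4)} are assumptions chosen only to make the computation concrete):

```python
import numpy as np

# Hypothetical stand-in data; only the method matters here.
M = np.array([[5.0, 1.0], [1.0, -1.0]])   # F(x, y) = (5x + y, x - y) in the usual basis
U = np.array([[1.0, 2.0], [2.0, 3.0]])    # columns: basis S  = {(1, 2), (2, 3)}
V = np.array([[1.0, 1.0], [3.0, 4.0]])    # columns: basis S' = {(1, 3), (1, 4)}

A = np.linalg.solve(U, M @ U)   # [F]_S
B = np.linalg.solve(V, M @ V)   # [F]_{S'}
P = np.linalg.solve(U, V)       # change-of-basis matrix from S to S'

# The two representations are similar: B = P^{-1} A P.
assert np.allclose(B, np.linalg.solve(P, A @ P))
```

The identity holds for any invertible choice of U and V, since P^{-1}AP = (V^{-1}U)(U^{-1}MU)(U^{-1}V) = V^{-1}MV = B.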


More information

LINEAR ALGEBRA MICHAEL PENKAVA

LINEAR ALGEBRA MICHAEL PENKAVA LINEAR ALGEBRA MICHAEL PENKAVA 1. Linear Maps Definition 1.1. If V and W are vector spaces over the same field K, then a map λ : V W is called a linear map if it satisfies the two conditions below: (1)

More information

Topic 1: Matrix diagonalization

Topic 1: Matrix diagonalization Topic : Matrix diagonalization Review of Matrices and Determinants Definition A matrix is a rectangular array of real numbers a a a m a A = a a m a n a n a nm The matrix is said to be of order n m if it

More information

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number

More information

Exercise Sheet 1.

Exercise Sheet 1. Exercise Sheet 1 You can download my lecture and exercise sheets at the address http://sami.hust.edu.vn/giang-vien/?name=huynt 1) Let A, B be sets. What does the statement "A is not a subset of B " mean?

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix.

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. Basis Definition. Let V be a vector space. A linearly independent spanning set for V is called a basis.

More information

Linear Algebra. Session 8

Linear Algebra. Session 8 Linear Algebra. Session 8 Dr. Marco A Roque Sol 08/01/2017 Abstract Linear Algebra Range and kernel Let V, W be vector spaces and L : V W, be a linear mapping. Definition. The range (or image of L is the

More information

Prepared by: M. S. KumarSwamy, TGT(Maths) Page

Prepared by: M. S. KumarSwamy, TGT(Maths) Page Prepared by: M. S. KumarSwamy, TGT(Maths) Page - 50 - CHAPTER 3: MATRICES QUICK REVISION (Important Concepts & Formulae) MARKS WEIGHTAGE 03 marks Matrix A matrix is an ordered rectangular array of numbers

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW SPENCER BECKER-KAHN Basic Definitions Domain and Codomain. Let f : X Y be any function. This notation means that X is the domain of f and Y is the codomain of f. This means that for

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

Introduction. Vectors and Matrices. Vectors [1] Vectors [2]

Introduction. Vectors and Matrices. Vectors [1] Vectors [2] Introduction Vectors and Matrices Dr. TGI Fernando 1 2 Data is frequently arranged in arrays, that is, sets whose elements are indexed by one or more subscripts. Vector - one dimensional array Matrix -

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12 24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

Dot Products, Transposes, and Orthogonal Projections

Dot Products, Transposes, and Orthogonal Projections Dot Products, Transposes, and Orthogonal Projections David Jekel November 13, 2015 Properties of Dot Products Recall that the dot product or standard inner product on R n is given by x y = x 1 y 1 + +

More information

A PRIMER ON SESQUILINEAR FORMS

A PRIMER ON SESQUILINEAR FORMS A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

k is a product of elementary matrices.

k is a product of elementary matrices. Mathematics, Spring Lecture (Wilson) Final Eam May, ANSWERS Problem (5 points) (a) There are three kinds of elementary row operations and associated elementary matrices. Describe what each kind of operation

More information

Symmetric and anti symmetric matrices

Symmetric and anti symmetric matrices Symmetric and anti symmetric matrices In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if. A = A Because equal matrices have equal

More information

Definition 2.3. We define addition and multiplication of matrices as follows.

Definition 2.3. We define addition and multiplication of matrices as follows. 14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row

More information

ECON 186 Class Notes: Linear Algebra

ECON 186 Class Notes: Linear Algebra ECON 86 Class Notes: Linear Algebra Jijian Fan Jijian Fan ECON 86 / 27 Singularity and Rank As discussed previously, squareness is a necessary condition for a matrix to be nonsingular (have an inverse).

More information

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016 Linear Algebra Notes Lecture Notes, University of Toronto, Fall 2016 (Ctd ) 11 Isomorphisms 1 Linear maps Definition 11 An invertible linear map T : V W is called a linear isomorphism from V to W Etymology:

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x =

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x = Linear Algebra Review Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1 x x = 2. x n Vectors of up to three dimensions are easy to diagram.

More information

Solutions to Final Exam

Solutions to Final Exam Solutions to Final Exam. Let A be a 3 5 matrix. Let b be a nonzero 5-vector. Assume that the nullity of A is. (a) What is the rank of A? 3 (b) Are the rows of A linearly independent? (c) Are the columns

More information

ORIE 6300 Mathematical Programming I August 25, Recitation 1

ORIE 6300 Mathematical Programming I August 25, Recitation 1 ORIE 6300 Mathematical Programming I August 25, 2016 Lecturer: Calvin Wylie Recitation 1 Scribe: Mateo Díaz 1 Linear Algebra Review 1 1.1 Independence, Spanning, and Dimension Definition 1 A (usually infinite)

More information

Properties of Transformations

Properties of Transformations 6. - 6.4 Properties of Transformations P. Danziger Transformations from R n R m. General Transformations A general transformation maps vectors in R n to vectors in R m. We write T : R n R m to indicate

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

MATRICES. a m,1 a m,n A =

MATRICES. a m,1 a m,n A = MATRICES Matrices are rectangular arrays of real or complex numbers With them, we define arithmetic operations that are generalizations of those for real and complex numbers The general form a matrix of

More information

Section 9.2: Matrices.. a m1 a m2 a mn

Section 9.2: Matrices.. a m1 a m2 a mn Section 9.2: Matrices Definition: A matrix is a rectangular array of numbers: a 11 a 12 a 1n a 21 a 22 a 2n A =...... a m1 a m2 a mn In general, a ij denotes the (i, j) entry of A. That is, the entry in

More information

Math 3108: Linear Algebra

Math 3108: Linear Algebra Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323 Contents. Chapter 1. Slides 3 70 Chapter 2. Slides 71 118

More information

22m:033 Notes: 7.1 Diagonalization of Symmetric Matrices

22m:033 Notes: 7.1 Diagonalization of Symmetric Matrices m:33 Notes: 7. Diagonalization of Symmetric Matrices Dennis Roseman University of Iowa Iowa City, IA http://www.math.uiowa.edu/ roseman May 3, Symmetric matrices Definition. A symmetric matrix is a matrix

More information

MATH 2331 Linear Algebra. Section 2.1 Matrix Operations. Definition: A : m n, B : n p. Example: Compute AB, if possible.

MATH 2331 Linear Algebra. Section 2.1 Matrix Operations. Definition: A : m n, B : n p. Example: Compute AB, if possible. MATH 2331 Linear Algebra Section 2.1 Matrix Operations Definition: A : m n, B : n p ( 1 2 p ) ( 1 2 p ) AB = A b b b = Ab Ab Ab Example: Compute AB, if possible. 1 Row-column rule: i-j-th entry of AB:

More information

The Product of Like-Indexed Terms in Binary Recurrences

The Product of Like-Indexed Terms in Binary Recurrences Journal of Number Theory 96, 152 173 (2002) doi:10.1006/jnth.2002.2794 The Product of Like-Indexed Terms in Binary Recurrences F. Luca 1 Instituto de Matemáticas UNAM, Campus Morelia, Ap. Postal 61-3 (Xangari),

More information

Classification of a subclass of low-dimensional complex filiform Leibniz algebras

Classification of a subclass of low-dimensional complex filiform Leibniz algebras Linear Multilinear lgebra ISSN: 008-087 (Print) 56-59 (Online) Journal homepage: http://www.tfonline.com/loi/glma20 Classification of a subclass of low-dimensional complex filiform Leibniz algebras I.S.

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections )

10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections ) c Dr. Igor Zelenko, Fall 2017 1 10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections 7.2-7.4) 1. When each of the functions F 1, F 2,..., F n in right-hand side

More information

Duality of finite-dimensional vector spaces

Duality of finite-dimensional vector spaces CHAPTER I Duality of finite-dimensional vector spaces 1 Dual space Let E be a finite-dimensional vector space over a field K The vector space of linear maps E K is denoted by E, so E = L(E, K) This vector

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

Matrix representation of a linear map

Matrix representation of a linear map Matrix representation of a linear map As before, let e i = (0,..., 0, 1, 0,..., 0) T, with 1 in the i th place and 0 elsewhere, be standard basis vectors. Given linear map f : R n R m we get n column vectors

More information

MAC Module 2 Systems of Linear Equations and Matrices II. Learning Objectives. Upon completing this module, you should be able to :

MAC Module 2 Systems of Linear Equations and Matrices II. Learning Objectives. Upon completing this module, you should be able to : MAC 0 Module Systems of Linear Equations and Matrices II Learning Objectives Upon completing this module, you should be able to :. Find the inverse of a square matrix.. Determine whether a matrix is invertible..

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information