Matrices related to linear transformations


Math 4326, Fall 2017

We have encountered several ways in which matrices relate to linear transformations. In this note, I summarize the important facts and formulas we have encountered.

The matrix of a linear transformation from R^n to R^m

Theorem 1. Given a linear transformation T : R^n → R^m, there is an m × n matrix A for which T(v) = Av for all v in R^n. This matrix is

    A = ( T(e_1)  T(e_2)  ...  T(e_n) ),                                (1)

where {e_1, e_2, ..., e_n} is the standard basis for R^n.

Thus, the only linear transformations from R^n to R^m are matrix transformations, and we can calculate the range of T by finding the column space of A, and the kernel of T by finding the null space of A.

Transition matrices

Given two bases B = {v_1, v_2, ..., v_n} and C = {w_1, w_2, ..., w_n} for the same vector space V, we can change coordinates with respect to one basis to coordinates with respect to the other via multiplication by an appropriate matrix P. That is, [v]_C = P[v]_B for some matrix P. This matrix P is called the transition matrix from B to C. Our book uses the notation P_{C←B} to emphasize that we are changing from B-coordinates to C-coordinates using P. The formula for P_{C←B} is

    P_{C←B} = ( [v_1]_C  [v_2]_C  ...  [v_n]_C ).                       (2)

In words: to go from B to C, P is obtained by forming the C-coordinates of each B-basis vector. Also, you should note the similarity between this formula and the one above. To make it look even more similar, define T : V → R^n by T(v) = [v]_C. That is, let T be the coordinate map with respect to C-coordinates. As mentioned in class, the coordinate map is linear, so we can rewrite our matrix in the form

    P_{C←B} = ( T(v_1)  T(v_2)  ...  T(v_n) ).                          (3)

There are two ways to get the transition matrix from C to B. If we call this matrix Q, then Q = P_{B←C}. The direct approach is

    Q = P_{B←C} = ( [w_1]_B  [w_2]_B  ...  [w_n]_B ).

The indirect approach is Q = P^{-1}. That is,

    P_{B←C} = ( P_{C←B} )^{-1}.                                         (4)
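To see formulas (2) and (4) in action, here is a small numerical check. The two bases of R^2 and the test vector are made up for illustration; the key point is that computing a coordinate vector [v]_C amounts to solving Cx = v, so each column of the transition matrix can be found with a linear solve:

```python
import numpy as np

# Hypothetical bases of R^2 (not from the notes), stored as columns.
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])   # B = {(1,1), (1,-1)}
C = np.array([[2.0, 1.0],
              [1.0, 1.0]])    # C = {(2,1), (1,1)}

# Column j of P_{C<-B} is [v_j]_C, i.e. the solution of C x = v_j.
# Solving for all columns at once:
P = np.linalg.solve(C, B)

# Check the defining property [v]_C = P [v]_B on a sample vector.
v = np.array([3.0, 5.0])
v_B = np.linalg.solve(B, v)   # B-coordinates of v
v_C = np.linalg.solve(C, v)   # C-coordinates of v
assert np.allclose(P @ v_B, v_C)

# The transition matrix the other way is the inverse, as in formula (4).
Q = np.linalg.solve(B, C)     # P_{B<-C}
assert np.allclose(Q, np.linalg.inv(P))
```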

The matrix of a linear transformation from V to W, with respect to bases B for V and C for W

If we don't have T : R^n → R^m, then Theorem 1 does not apply. There are still matrices that represent linear transformations, but only if we use coordinate systems.

Theorem 2. Let T : V → W be a linear transformation. Suppose B = {v_1, v_2, ..., v_n} is a basis for V, and C is a basis for W. Then there is a matrix A for which

    [T(v)]_C = A[v]_B.

In fact, we have a formula for A:

    A = ( [T(v_1)]_C  [T(v_2)]_C  ...  [T(v_n)]_C ).                    (5)

We have notation for this matrix: it is usually denoted [T]_{B,C}. Again, you should note how similar (5) is to (3), and even to (1).

Proof of theorem: Let v be a vector in V. Then for some scalars c_1, c_2, ..., c_n, we have v = c_1 v_1 + c_2 v_2 + ... + c_n v_n. Given this,

    T(v) = T(c_1 v_1 + c_2 v_2 + ... + c_n v_n)
         = c_1 T(v_1) + c_2 T(v_2) + ... + c_n T(v_n).

Now

    [T(v)]_C = [c_1 T(v_1) + c_2 T(v_2) + ... + c_n T(v_n)]_C
             = c_1 [T(v_1)]_C + c_2 [T(v_2)]_C + ... + c_n [T(v_n)]_C
             = ( [T(v_1)]_C  [T(v_2)]_C  ...  [T(v_n)]_C ) (c_1, c_2, ..., c_n)^T
             = A[v]_B,

as desired.

A natural question to ask is how the range and kernel of T relate to A. It is NOT the case that the range of T is the column space of A, or that Ker(T) = Null(A). It is important that you understand why they are not equal, so take some time to think about this. What IS true is the following: the column space of A gives the coordinates of the vectors in Range(T), and Null(A) gives the coordinates of the vectors in Ker(T). To find a basis for the range of T, find a basis for the column space of A; these will be the coordinates of the basis vectors for the range. Similarly, to find a basis for the kernel of T, first find a basis for Null(A); these vectors are the coordinates of the basis for Ker(T).

As an example, consider a problem from Homework 9: T : P_2 → M_{2×2} defined by

    T(p(t)) = [ p(1)  p(2) ]
              [ p(2)  p(1) ].

We will calculate [T]_{B,C}, where B is the standard basis for P_2 and

C is the standard basis for M_{2×2}. We have

    T(1) = [ 1  1 ],   T(t) = [ 1  2 ],   T(t^2) = [ 1  4 ],
           [ 1  1 ]           [ 2  1 ]             [ 4  1 ]

so

    [T(1)]_C = (1, 1, 1, 1),   [T(t)]_C = (1, 2, 2, 1),   [T(t^2)]_C = (1, 4, 4, 1).

We now have the columns for A = [T]_{B,C}, giving

    A = [T]_{B,C} = [ 1  1  1 ]
                    [ 1  2  4 ]
                    [ 1  2  4 ]
                    [ 1  1  1 ].

Suppose we wish to calculate T(2t^2 + t + 1) by matrix multiplication, using the formula [T(v)]_C = A[v]_B instead of just using the formula for T. Here is what we would do. First find the coordinates of 2t^2 + t + 1:

    [2t^2 + t + 1]_B = (1, 1, 2).

Next, multiply by A:

    [ 1  1  1 ] [ 1 ]   [  4 ]
    [ 1  2  4 ] [ 1 ] = [ 11 ]
    [ 1  2  4 ] [ 2 ]   [ 11 ]
    [ 1  1  1 ]         [  4 ].

This is NOT the answer. Instead, what we have is [T(v)]_C = (4, 11, 11, 4). Finally, converting from coordinates to matrices,

    T(v) = [  4  11 ]
           [ 11   4 ].

To find the kernel and range of T, we may first find the null space and column space of A. This will give us coordinates of the respective bases. We reduce A:

    [ 1  1  1 ]     [ 1  0  -2 ]
    [ 1  2  4 ]  →  [ 0  1   3 ]
    [ 1  2  4 ]     [ 0  0   0 ]
    [ 1  1  1 ]     [ 0  0   0 ].

A basis for the column space of A consists of the first two columns of A. These are the coordinates of the basis vectors for Range(T), so this basis is

    { [ 1  1 ],  [ 1  2 ] }.
      [ 1  1 ]   [ 2  1 ]

A basis

for Null(A) is { (2, -3, 1) }. Converting to P_2, a basis for the kernel of T is {t^2 - 3t + 2}.

Mixing bases

You can safely skip this section. Suppose that T : V → W and we have several bases: B_1, B_2 for V and C_1, C_2 for W. Then we have our choice of bases to use to find the matrix of the transformation. How are the matrices related to each other? They are related by transition matrices among the bases.

Theorem 3. Let T : V → W be a linear transformation and suppose B_1 and B_2 are bases for V and C_1 and C_2 are bases for W. Then

    [T]_{B_2,C_2} = P_{C_2←C_1} [T]_{B_1,C_1} P_{B_1←B_2}.

Proof. In general, if you have two matrices, A and B, and if Av = Bv for all v, then A = B. (Can you prove this?) We use this result on matrices to show that the two matrices [T]_{B_2,C_2} and P_{C_2←C_1}[T]_{B_1,C_1}P_{B_1←B_2} are the same. Given a vector v in V, by the defining property of the matrix of a transformation,

    [T]_{B_2,C_2} [v]_{B_2} = [T(v)]_{C_2}.

On the other hand,

    ( P_{C_2←C_1} [T]_{B_1,C_1} P_{B_1←B_2} ) [v]_{B_2}
        = P_{C_2←C_1} [T]_{B_1,C_1} [v]_{B_1}
        = P_{C_2←C_1} [T(v)]_{C_1}
        = [T(v)]_{C_2}.

That is, for all vectors v,

    [T]_{B_2,C_2} [v]_{B_2} = ( P_{C_2←C_1} [T]_{B_1,C_1} P_{B_1←B_2} ) [v]_{B_2},

so the two matrices must be the same.

I think of this as follows: [T]_{B_1,C_1} expects to see B_1-coordinates, and [T]_{B_2,C_2} expects to see B_2-coordinates. If you want to feed a matrix B_2-coordinates and get C_2-coordinate answers, the direct approach is to use [T]_{B_2,C_2}. However, there is the following indirect route:

use the transition matrix P_{B_1←B_2} to convert B_2-coordinates to B_1-coordinates, then multiply by [T]_{B_1,C_1}, giving an answer in C_1-coordinates. Finally, multiply by another transition matrix, P_{C_2←C_1}, to get the desired C_2-coordinate answer.

Continuing our previous example, suppose that B' = {1, t + 1, t^2 + t + 1},

    C' = { [ 1  0 ],  [ 1  1 ],  [ 1  1 ],  [ 1  1 ] },
           [ 0  0 ]   [ 0  0 ]   [ 1  0 ]   [ 1  1 ]

and we wish to calculate [T]_{B',C'} for the transformation T(p(t)) = [ p(1) p(2); p(2) p(1) ] above. We could do this in two ways. The direct approach is to use

    [T]_{B',C'} = ( [T(1)]_{C'}  [T(t+1)]_{C'}  [T(t^2+t+1)]_{C'} ).

Since T(1) = [ 1 1; 1 1 ], T(t+1) = [ 2 3; 3 2 ], and T(t^2+t+1) = [ 3 7; 7 3 ], computing C'-coordinates gives

    [T]_{B',C'} = [ 0  -1  -4 ]
                  [ 0   0   0 ]
                  [ 0   1   4 ]
                  [ 1   2   3 ].

Alternatively, we may use the formula from Theorem 3:

    [T]_{B',C'} = P_{C'←C} [T]_{B,C} P_{B←B'},

where B is the standard basis for P_2 and C is the standard basis for M_{2×2}. The transition matrices are

    P_{B←B'} = ( [1]_B  [t+1]_B  [t^2+t+1]_B ) = [ 1  1  1 ]
                                                 [ 0  1  1 ]
                                                 [ 0  0  1 ]

and

    P_{C'←C} = [ 1  -1   0   0 ]
               [ 0   1  -1   0 ]
               [ 0   0   1  -1 ]
               [ 0   0   0   1 ].

By the theorem,

    [T]_{B',C'} = [ 1  -1   0   0 ] [ 1  1  1 ] [ 1  1  1 ]
                  [ 0   1  -1   0 ] [ 1  2  4 ] [ 0  1  1 ]
                  [ 0   0   1  -1 ] [ 1  2  4 ] [ 0  0  1 ]
                  [ 0   0   0   1 ] [ 1  1  1 ]

                = [ 1  -1   0   0 ] [ 1  2  3 ]   [ 0  -1  -4 ]
                  [ 0   1  -1   0 ] [ 1  3  7 ] = [ 0   0   0 ]
                  [ 0   0   1  -1 ] [ 1  3  7 ]   [ 0   1   4 ]
                  [ 0   0   0   1 ] [ 1  2  3 ]   [ 1   2   3 ].

This second matrix also contains all the information about T. For example, if we were asked to find the kernel of T, we could use this second matrix. Reducing,

    [ 0  -1  -4 ]     [ 1  0  -5 ]
    [ 0   0   0 ]  →  [ 0  1   4 ]
    [ 0   1   4 ]     [ 0  0   0 ]
    [ 1   2   3 ]     [ 0  0   0 ],

and the null space of this matrix is spanned by (5, -4, 1). This is different from before, but remember: this is not the kernel, it is a coordinate vector for a kernel vector. Using the basis B', the kernel is spanned by the vector

    5(1) - 4(t + 1) + 1(t^2 + t + 1) = t^2 - 3t + 2.

That is, once we convert from coordinates to a vector in P_2, we get exactly the same answer.

Linear operators, eigenvectors, and diagonalization

This section is important, at least for the examples. A linear operator is a linear transformation from a space to itself. That is, rather than T : V → W, we have T : V → V; V and W are the same space. In this case, we only need one basis B of V. If B = {v_1, v_2, ..., v_n}, then we write

    [T(v)]_B = A[v]_B,  where  A = ( [T(v_1)]_B  [T(v_2)]_B  ...  [T(v_n)]_B ).

We use the notation A = [T]_B rather than [T]_{B,B}. In the case of a linear operator, with two bases B and C for V, Theorem 3 becomes

    [T]_C = P_{C←B} [T]_B P_{B←C}.
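All the computations in the example above, from the matrix A = [T]_{B,C} through the Theorem 3 product, can be checked numerically. A sketch with numpy, under the assumption that we encode p = a + bt + ct^2 as the triple (a, b, c) and a 2×2 matrix by its entries read row by row:

```python
import numpy as np

# T : P_2 -> M_{2x2}, T(p) = [[p(1), p(2)], [p(2), p(1)]], in coordinates.
def T(a, b, c):
    p1, p2 = a + b + c, a + 2 * b + 4 * c     # p(1) and p(2)
    return np.array([p1, p2, p2, p1])         # [[p1, p2], [p2, p1]] flattened

# A = [T]_{B,C}: columns are T applied to the standard basis 1, t, t^2.
A = np.column_stack([T(1, 0, 0), T(0, 1, 0), T(0, 0, 1)])

# Check the image and kernel computations from the example.
assert np.array_equal(A @ [1, 1, 2], [4, 11, 11, 4])   # T(2t^2 + t + 1)
assert np.array_equal(A @ [2, -3, 1], [0, 0, 0, 0])    # t^2 - 3t + 2 in Ker(T)

# Theorem 3: [T]_{B',C'} = P_{C'<-C} A P_{B<-B'}.
P = np.array([[1, 1, 1], [0, 1, 1], [0, 0, 1]])        # P_{B<-B'}
Q = np.array([[1, -1, 0, 0], [0, 1, -1, 0],
              [0, 0, 1, -1], [0, 0, 0, 1]])            # P_{C'<-C}
direct = np.array([[0, -1, -4], [0, 0, 0], [0, 1, 4], [1, 2, 3]])
assert np.array_equal(Q @ A @ P, direct)
```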

Since P_{C←B} = (P_{B←C})^{-1}, we can write this as [T]_C = P^{-1} [T]_B P, where P = P_{B←C}. That is, all matrices that represent T are similar to each other. In fact, this is what similarity actually means! That is, if A and B are similar to each other, then there is a linear transformation T for which A and B are the matrices representing T with respect to some bases.

Here are three examples of these formulas. First, consider T : P_3 → P_3 defined by T(p(t)) = (t + 2)p'(t). Let S be the standard basis for P_3, {1, t, t^2, t^3}, and let C = {1, t + 2, (t + 2)^2, (t + 2)^3} be a second basis. We have

    A = [T]_S = ( [T(1)]_S  [T(t)]_S  [T(t^2)]_S  [T(t^3)]_S )
              = ( [0]_S  [t + 2]_S  [2t^2 + 4t]_S  [3t^3 + 6t^2]_S )

              = [ 0  2  0  0 ]
                [ 0  1  4  0 ]
                [ 0  0  2  6 ]
                [ 0  0  0  3 ].

Letting B = [T]_C, we could calculate B directly:

    B = [T]_C = ( [T(1)]_C  [T(t + 2)]_C  [T((t + 2)^2)]_C  [T((t + 2)^3)]_C )
              = ( [0]_C  [t + 2]_C  [2(t + 2)^2]_C  [3(t + 2)^3]_C )

              = [ 0  0  0  0 ]
                [ 0  1  0  0 ]
                [ 0  0  2  0 ]
                [ 0  0  0  3 ].

To calculate B using the formula, we have B = P^{-1}AP, where P is the transition matrix from C to S. First we get the transition matrix:

    P = P_{S←C} = ( [1]_S  [t + 2]_S  [(t + 2)^2]_S  [(t + 2)^3]_S ) = [ 1  2  4   8 ]
                                                                      [ 0  1  4  12 ]
                                                                      [ 0  0  1   6 ]
                                                                      [ 0  0  0   1 ].
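Before inverting P by hand, the claimed similarity B = P^{-1}AP can be checked numerically; a sketch with numpy, using the matrices A and P just computed:

```python
import numpy as np

# [T]_S and the transition matrix P = P_{S<-C} for T(p(t)) = (t+2)p'(t) on P_3.
A = np.array([[0, 2, 0, 0],
              [0, 1, 4, 0],
              [0, 0, 2, 6],
              [0, 0, 0, 3]], dtype=float)

P = np.array([[1, 2, 4, 8],
              [0, 1, 4, 12],
              [0, 0, 1, 6],
              [0, 0, 0, 1]], dtype=float)

# Change of basis: B = P^{-1} A P should be diagonal with 0, 1, 2, 3.
B = np.linalg.inv(P) @ A @ P
assert np.allclose(B, np.diag([0.0, 1.0, 2.0, 3.0]))

# Each column of P is an eigenvector of A (the coordinate vector of an
# eigenvector of T).
for j, lam in enumerate([0.0, 1.0, 2.0, 3.0]):
    assert np.allclose(A @ P[:, j], lam * P[:, j])
```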

I will let you check that

    P^{-1} = [ 1  -2   4   -8 ]
             [ 0   1  -4   12 ]
             [ 0   0   1   -6 ]
             [ 0   0   0    1 ].

Given this, we should have

    B = P^{-1}AP = [ 1  -2   4   -8 ] [ 0  2  0  0 ] [ 1  2  4   8 ]
                   [ 0   1  -4   12 ] [ 0  1  4  0 ] [ 0  1  4  12 ]
                   [ 0   0   1   -6 ] [ 0  0  2  6 ] [ 0  0  1   6 ]
                   [ 0   0   0    1 ] [ 0  0  0  3 ] [ 0  0  0   1 ]

                 = [ 1  -2   4   -8 ] [ 0  2   8  24 ]   [ 0  0  0  0 ]
                   [ 0   1  -4   12 ] [ 0  1   8  36 ] = [ 0  1  0  0 ]
                   [ 0   0   1   -6 ] [ 0  0   2  18 ]   [ 0  0  2  0 ]
                   [ 0   0   0    1 ] [ 0  0   0   3 ]   [ 0  0  0  3 ].

As you should notice from this example, the columns of P are eigenvectors of A, and P is a diagonalizing matrix for A.

One of the aspects of linear operators is that they have eigenvectors and eigenvalues, just like matrices. We say c is an eigenvalue for a linear operator T if there is a nonzero vector v in V with T(v) = cv. Similarly, we call v an eigenvector for T. How does one go about finding eigenvectors and eigenvalues for a linear operator? In differential equations, you studied differential operators, which were linear operators on vector spaces of functions, with differentiation being part of the definition of the operator. Part of what happened in that class involved finding eigenvectors and eigenvalues for differential operators. In this class, we relate everything back to matrices.

So for the example above, we calculated the matrix of the transformation. It turns out that the eigenvalues of a transformation are the same as the eigenvalues of any matrix representing that transformation. Thus, T had 0, 1, 2, 3 as eigenvalues. What about eigenvectors? They are NOT the columns of the diagonalizing matrix P. However, the columns of P are the coordinates of the eigenvectors. So, for this example, (2, 1, 0, 0) is an eigenvector for the matrix representing T, and it gives the coordinates of an eigenvector for T. Going from coordinates to the vector,

    2(1) + 1(t) + 0(t^2) + 0(t^3) = t + 2.

We define a linear operator T to be diagonalizable if there is some basis B for which [T]_B is a diagonal matrix. So our example above, T(p(t)) = (t + 2)p'(t), is diagonalizable, since [T]_B is a diagonal matrix for the basis {1, t + 2, (t + 2)^2, (t + 2)^3}. Eigenvalue/eigenvector/diagonalizability properties for linear operators are very similar to those for matrices. For example, here is a good Final Exam question: Prove that if c is an eigenvalue for T, then the set of all v for which T(v) = cv is a subspace of V, the domain space. That is, prove that the eigenspace is a subspace.

The following is the main theorem on diagonalizability of linear transformations.

Theorem 4. A linear operator T on a finite-dimensional vector space V is diagonalizable if and only if V has a basis consisting of eigenvectors of T.

Here is half of the proof. Suppose that B = {v_1, v_2, ..., v_n} is a basis for V and each v_i is an eigenvector for T; that is, suppose that T(v_i) = c_i v_i for each i. The matrix of T with respect to B is

    ( [T(v_1)]_B  [T(v_2)]_B  ...  [T(v_n)]_B ).

That is, we apply T to each basis vector, get the coordinates of the answer, and make that the appropriate column of the matrix. Since each basis vector is an eigenvector, we have

    ( [T(v_1)]_B  [T(v_2)]_B  ...  [T(v_n)]_B ) = ( [c_1 v_1]_B  [c_2 v_2]_B  ...  [c_n v_n]_B ).

But the B-coordinate vector of v_i is e_i, the i-th standard basis vector. That is,

    [T]_B = ( c_1 e_1  c_2 e_2  ...  c_n e_n ),

but this is just the diagonal matrix with the c_i's on the diagonal.

For a second example, let's use matrix spaces. Define T : M_{2×2} → M_{2×2} by

    T(A) = [ 1  1 ] A [ 2  1 ]
           [ 1  1 ]   [ 1  2 ].

Problem: Find the matrix of T with respect to the standard basis for M_{2×2}, find all eigenvalues of T and a basis for each eigenspace, and finally, find the matrix of T with respect to a basis of eigenvectors.

Solution: As an aid to the calculations, let's first calculate a formula for T(A) in terms of the entries of the matrix A. We have

    T( [ a  b ] ) = [ 1  1 ] [ a  b ] [ 2  1 ] = [ 1  1 ] [ 2a+b  a+2b ]
       [ c  d ]     [ 1  1 ] [ c  d ] [ 1  2 ]   [ 1  1 ] [ 2c+d  c+2d ]

                  = [ 2a+b+2c+d  a+2b+c+2d ]
                    [ 2a+b+2c+d  a+2b+c+2d ].

We now apply T to the standard basis vectors:

    T([ 1 0; 0 0 ]) = [ 2 1; 2 1 ],   T([ 0 1; 0 0 ]) = [ 1 2; 1 2 ],
    T([ 0 0; 1 0 ]) = [ 2 1; 2 1 ],   T([ 0 0; 0 1 ]) = [ 1 2; 1 2 ].

Getting the standard coordinates of each answer gives

    [T]_S = [ 2  1  2  1 ]
            [ 1  2  1  2 ]
            [ 2  1  2  1 ]
            [ 1  2  1  2 ].
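The matrix [T]_S just computed, and its eigenvalues, can be confirmed numerically; a sketch with numpy, where reshape encodes a 2×2 matrix by its entries read row by row:

```python
import numpy as np

# T(A) = [[1,1],[1,1]] A [[2,1],[1,2]] on M_{2x2}; [T]_S uses the standard
# basis E11, E12, E21, E22 with row-by-row coordinates.
M1 = np.array([[1, 1], [1, 1]])
M2 = np.array([[2, 1], [1, 2]])
basis = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]]),
         np.array([[0, 0], [1, 0]]), np.array([[0, 0], [0, 1]])]
TS = np.column_stack([(M1 @ E @ M2).reshape(4) for E in basis])

assert np.array_equal(TS, np.array([[2, 1, 2, 1],
                                    [1, 2, 1, 2],
                                    [2, 1, 2, 1],
                                    [1, 2, 1, 2]]))

# The eigenvalues are 0 (twice), 2, and 6 (TS is symmetric, so eigvalsh
# applies and returns them in ascending order).
evals = np.linalg.eigvalsh(TS.astype(float))
assert np.allclose(evals, [0, 0, 2, 6])
```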

Next, the eigenvalues of T are the same as those for [T]_S. We get the characteristic polynomial of this matrix:

    det(xI - [T]_S) = | x-2   -1   -2   -1 |     |  x    0   -x    0 |
                      |  -1  x-2   -1   -2 |  =  |  0    x    0   -x |
                      |  -2   -1  x-2   -1 |     | -2   -1  x-2   -1 |
                      |  -1   -2   -1  x-2 |     | -1   -2   -1  x-2 |

(I subtracted the third row from the first and the fourth row from the second.) Adding the first column to the third and the second column to the fourth then gives

    |  x    0    0    0 |
    |  0    x    0    0 |  =  x^2 | x-4   -2 |  =  x^2 [ (x-4)^2 - 4 ]  =  x^2 (x-2)(x-6).
    | -2   -1  x-4   -2 |         |  -2  x-4 |
    | -1   -2   -2  x-4 |

Thus, the eigenvalues are 0, 2, 6. Next, we find bases for the eigenspaces of [T]_S. These will NOT be bases for the eigenspaces of T. I will let you go through the details, but the matrix [T]_S has eigenspace bases

    0: { (1, 0, -1, 0), (0, 1, 0, -1) },   2: { (-1, 1, -1, 1) },   6: { (1, 1, 1, 1) }.

These give COORDINATES for the bases of the actual eigenspaces of T. That is, if we convert from coordinate vectors to matrices, we get the actual eigenvectors. These give us

    0: { [ 1 0; -1 0 ], [ 0 1; 0 -1 ] },   2: { [ -1 1; -1 1 ] },   6: { [ 1 1; 1 1 ] }.

A basis for M_{2×2} consisting of eigenvectors of T is

    B = { [ 1 0; -1 0 ],  [ 0 1; 0 -1 ],  [ -1 1; -1 1 ],  [ 1 1; 1 1 ] }.

With respect to this basis, the matrix of T is

    [T]_B = [ 0  0  0  0 ]
            [ 0  0  0  0 ]
            [ 0  0  2  0 ]
            [ 0  0  0  6 ].

You should check that [T]_B = P^{-1} [T]_S P for an appropriate P.

As a check on the calculations above, I will use this second matrix for T to calculate T([ 1 2; 3 -2 ]), via the formula [T(v)]_B = [T]_B [v]_B. We need to write [ 1 2; 3 -2 ] as a linear combination of the vectors in B. Calling these vectors b_1, b_2, b_3, b_4, we have

    [ 1  2 ]  =  -b_1 + 2b_2 - b_3 + b_4,
    [ 3 -2 ]

so

    [ T([ 1 2; 3 -2 ]) ]_B = [ 0  0  0  0 ] [ -1 ]   [  0 ]
                             [ 0  0  0  0 ] [  2 ] = [  0 ]
                             [ 0  0  2  0 ] [ -1 ]   [ -2 ]
                             [ 0  0  0  6 ] [  1 ]   [  6 ].

This tells us the coordinates of the answer: we need -2(third basis vector) + 6(fourth basis vector). Thus,

    T([ 1 2; 3 -2 ]) = -2 [ -1  1 ] + 6 [ 1  1 ] = [ 8  4 ]
                          [ -1  1 ]     [ 1  1 ]   [ 8  4 ].

The direct calculation would have been

    T([ 1 2; 3 -2 ]) = [ 1  1 ] [ 1  2 ] [ 2  1 ] = [ 1  1 ] [ 4   5 ] = [ 8  4 ]
                       [ 1  1 ] [ 3 -2 ] [ 1  2 ]   [ 1  1 ] [ 4  -1 ]   [ 8  4 ].

One last example. Let V = {(x, y, z) : x + y + z = 0}; that is, let V be a plane in R^3. A basis for V is

    B = {u, v} = { (1, 0, -1), (0, 1, -1) }.

Let T : V → V be defined by T(x, y, z) = (x + 2y, x + 2z, 0). I will leave it to you to show that T really is a linear operator on V. (What this means is that if w is any vector in V, then T(w) should also be in V. For example, (5, -3, -2) is in V, and T(5, -3, -2) = (-1, 1, 0) is also in V.) The problem: Find the matrix of T with respect to B, find all eigenvalues and a basis for each eigenspace, and find the matrix of T with respect to a basis of eigenvectors. Implicit in this problem is that T IS diagonalizable.

Note that even though T is defined in terms of vectors in R^3, the matrix of T will be a 2×2 matrix. That is, it is the size of the basis (the dimension of V) that determines the size of the matrix. To get the matrix of T, we apply T to each basis vector and write the answer as a combination of basis vectors. We have

    T(u) = (1, -1, 0) = u - v,    T(v) = (2, -2, 0) = 2u - 2v.

Getting the coordinates for T(u) and T(v), we put these in as columns in the matrix:

    [T]_B = A = [  1   2 ]
                [ -1  -2 ].
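A numerical check of this 2×2 matrix and its diagonalization; a sketch with numpy, where the B-coordinates of T(u) and T(v) are recovered by least squares (the 3×2 systems are consistent, so the residuals are zero):

```python
import numpy as np

# The plane V = {x + y + z = 0} with basis u = (1,0,-1), v = (0,1,-1),
# and T(x, y, z) = (x + 2y, x + 2z, 0).
u = np.array([1.0, 0.0, -1.0])
v = np.array([0.0, 1.0, -1.0])
Bmat = np.column_stack([u, v])            # 3x2; columns span V

def T(w):
    x, y, z = w
    return np.array([x + 2 * y, x + 2 * z, 0.0])

# B-coordinates of T(u) and T(v): solve the consistent systems Bmat c = T(.).
TB = np.column_stack([np.linalg.lstsq(Bmat, T(u), rcond=None)[0],
                      np.linalg.lstsq(Bmat, T(v), rcond=None)[0]])
assert np.allclose(TB, [[1, 2], [-1, -2]])

# Eigenvector coordinates (2,-1) and (1,-1) diagonalize [T]_B.
P = np.array([[2.0, 1.0], [-1.0, -1.0]])
assert np.allclose(np.linalg.inv(P) @ TB @ P, np.diag([0.0, -1.0]))
```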

The eigenvalues of T are the same as the eigenvalues of [T]_B, so

    det(xI - [T]_B) = | x-1   -2 | = (x - 1)(x + 2) + 2 = x^2 + x.
                      |  1   x+2 |

The eigenvalues of T are 0 and -1. For 0, we reduce A = [ 1 2; -1 -2 ] to [ 1 2; 0 0 ], giving a basis {(2, -1)}. For -1, A + I = [ 2 2; -1 -1 ], giving a basis {(1, -1)}. These are not eigenvectors for T, but they are the coordinate vectors for the eigenvectors. The corresponding eigenvectors for T are 2u - v = (2, -1, -1) and u - v = (1, -1, 0). Since V is 2-dimensional, and these come from different eigenspaces, they must be a basis for V. So we have found our basis of eigenvectors: C = {w_1, w_2} = {(2, -1, -1), (1, -1, 0)}. Finally, we calculate the matrix of T with respect to C: T(w_1) = 0 and T(w_2) = -w_2, so

    [T]_C = [ 0   0 ]
            [ 0  -1 ].

Based on all we've done, this matrix should be similar to [ 1 2; -1 -2 ]. It is, of course, using

    P = [  2   1 ],  which has inverse  P^{-1} = [  1   1 ].
        [ -1  -1 ]                               [ -1  -2 ]

That is, [T]_C = P^{-1} [T]_B P, and we can interpret this either as (1): P is the matrix with eigenvectors of A as its columns, or (2): P is the transition matrix P_{B←C}.

One final comment. Not all linear operators are diagonalizable. For example, the linear operator T : P_2 → P_2 defined by T(p(t)) = p(t + 1) is not diagonalizable. This T is sometimes called a shift operator. As an example,

    T(t^2 + 2t + 3) = (t + 1)^2 + 2(t + 1) + 3 = t^2 + 2t + 1 + 2t + 2 + 3 = t^2 + 4t + 6.

You should be able to verify the following: the matrix of T with respect to the standard basis of P_2 is

    [T]_S = [ 1  1  1 ]
            [ 0  1  2 ]
            [ 0  0  1 ],

and this matrix has only one eigenvalue (λ = 1), with only a 1-dimensional eigenspace. Since the matrix is not diagonalizable, neither is T.
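The failure of diagonalizability for the shift operator can also be seen numerically: its matrix has the single eigenvalue 1, but the rank of A - I shows the eigenspace is only 1-dimensional. A sketch with numpy:

```python
import numpy as np

# [T]_S for the shift operator T(p(t)) = p(t+1) on P_2, basis {1, t, t^2}:
# T(1) = 1, T(t) = t + 1, T(t^2) = t^2 + 2t + 1.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])

# The only eigenvalue is 1 (A is triangular with 1's on the diagonal) ...
assert np.allclose(np.linalg.eigvals(A), np.ones(3))

# ... but A - I has rank 2, so its null space (the eigenspace) is only
# 1-dimensional: too few independent eigenvectors to diagonalize.
assert np.linalg.matrix_rank(A - np.eye(3)) == 2
```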