(VI.C) Rational Canonical Form
Let's agree to call a transformation $T : F^n \to F^n$ *semisimple* if there is a basis $\mathcal B = \{\vec v_1, \ldots, \vec v_n\}$ such that $T\vec v_1 = \lambda_1 \vec v_1$, $T\vec v_2 = \lambda_2 \vec v_2$, ..., $T\vec v_n = \lambda_n \vec v_n$ for some scalars $\lambda_i \in F$. $T$ is completely transparent when looked at relative to this basis. In terms of matrices, if $A = [T]_{\hat e}$ and $D = [T]_{\mathcal B} = \mathrm{diag}\{\lambda_1, \ldots, \lambda_n\}$, then $A = S D S^{-1}$, and $A$ is similar to a diagonal matrix.

What if $A$ is not diagonalizable ($T$ is not semisimple)? Is there some basis in terms of which the action of $T$ is still transparent? An equivalent question: is there a canonical [= standard] sort of matrix to which any $A$ is similar? In the next three sections, we present two distinct solutions to this problem: the rational and Jordan canonical forms of $A$. That is, any $A$ is similar to (essentially) unique matrices $R$ and $J$ obeying certain rules. One warning: in case $A$ is diagonalizable, i.e. $A \sim D$, the rational form $R$ does not reduce to that (or any) diagonal matrix. Later we will find that, by contrast, $J$ does $= D$ in this case.

Another basic question: when is $A$ diagonalizable? Momentarily assuming $F = \mathbb C$: Answer 1: when the geometric multiplicities (of the eigenvalues of $A$) equal the algebraic multiplicities. Answer 2: when $m_A$ (not $f_A$) has no repeated roots (see Exercise VI.B.3). Although this avoids the computation involved in Answer 1 (with which you are by now familiar), finding $nf(\lambda I - A)$ ain't easy
either. Answer 3: the Spectral Theorem, to be discussed later in these notes.

Now we go over a couple of basic building blocks we'll need for our discussion of rational canonical form.

Direct-Sum Decomposition. The notation $V = V_1 \oplus \cdots \oplus V_m$ means that (i) $V = V_1 + \cdots + V_m$ and (ii) $\sum_i \dim V_i = \dim V$, or equivalently (ii$'$) there is only one way of writing a given $\vec v \in V$ as $\vec v_1 + \cdots + \vec v_m$ (where $\vec v_i \in V_i$). We say that $V$ is the direct sum of the $V_i$. If this holds then there exists a basis $\mathcal B = \{\mathcal B_1, \ldots, \mathcal B_m\}$ for $V$ such that the $\mathcal B_i$ are (respectively) bases for the $V_i$.

A linear transformation $T : V \to V$ respects the direct-sum decomposition if $T(V_i) \subseteq V_i$ (for each $i$); that is, if each $V_i$ is closed under the action of $T$. In that case
$$[T]_{\mathcal B} = \begin{pmatrix} [T|_{V_1}]_{\mathcal B_1} & & 0 \\ & \ddots & \\ 0 & & [T|_{V_m}]_{\mathcal B_m} \end{pmatrix};$$
that is, the matrix of $T$ (with respect to $\mathcal B$) consists of smaller matrices (blocks) along the diagonal, representing its behavior on each subspace $V_i$; evidently the dimensions of the $i$th block are $\dim V_i \times \dim V_i$.

Cyclic Subspaces and Vectors. Let $T : V \to V$ be a transformation, and look at the successive images (under $T$) of $\vec v \in V$: $\vec v, T\vec v, T^2\vec v, \ldots$. Now $V$ is finite-dimensional, so the collections $\{\vec v, T\vec v, \ldots, T^i\vec v\}$ have to stop being independent at some point: i.e., there is some $d$ such that $\vec v, T\vec v, \ldots, T^{d-1}\vec v$ are independent,
but $\vec v, T\vec v, \ldots, T^{d-1}\vec v, T^d\vec v$ are dependent, and so
$$0 = \alpha_d T^d\vec v + \alpha_{d-1}T^{d-1}\vec v + \cdots + \alpha_1 T\vec v + \alpha_0 \vec v,$$
where not all $\alpha_i = 0$. In fact $\alpha_d \neq 0$, because otherwise we would contradict independence of $\vec v, \ldots, T^{d-1}\vec v$. Dividing by $\alpha_d$, we see that
$$0 = p(T)\vec v,$$
where $p$ is the monic polynomial
$$p(\lambda) = \lambda^d + \frac{\alpha_{d-1}}{\alpha_d}\lambda^{d-1} + \cdots + \frac{\alpha_0}{\alpha_d} =: \lambda^d + a_{d-1}\lambda^{d-1} + \cdots + a_0.$$
Let $W = \mathrm{span}\{\vec v, T\vec v, \ldots, T^{d-1}\vec v\}$; it is called a ($T$-)cyclic subspace of $V$ with cyclic vector $\vec v$, and we say that $\vec v$ generates $W$ (under the action of $T$).

REMARK 1. Clearly $T^d\vec v \in W$, since
$$p(T)\vec v = 0 \implies T^d\vec v = -a_{d-1}T^{d-1}\vec v - \cdots - a_1 T\vec v - a_0\vec v.$$
In fact $W$ contains $T^{d+1}\vec v, T^{d+2}\vec v, \ldots$, simply by applying $T$ to both sides of this equation repeatedly. So if we cut off at the degree ($= d$) of the first polynomial relation amongst the $T^k\vec v$ and consider the span ($= W$) of the first $d$ of these, we have already the space spanned by all the $T^k\vec v$.

Let's look at the matrix of $T|_W$ with respect to the basis $\mathcal B = \{\vec v, T\vec v, \ldots, T^{d-1}\vec v\} =: \{\vec v_1, \ldots, \vec v_d\}$. Clearly $T\vec v_1 = \vec v_2$, $T\vec v_2 = \vec v_3$, ..., $T\vec v_{d-1} = \vec v_d$, while
$$T\vec v_d = T(T^{d-1}\vec v) = T^d\vec v = -a_{d-1}T^{d-1}\vec v - a_{d-2}T^{d-2}\vec v - \cdots - a_0\vec v = -a_{d-1}\vec v_d - a_{d-2}\vec v_{d-1} - \cdots - a_0\vec v_1.$$
Therefore
$$[T|_W]_{\mathcal B} = \begin{pmatrix} 0 & \cdots & \cdots & 0 & -a_0 \\ 1 & 0 & & & -a_1 \\ & 1 & \ddots & & \vdots \\ & & \ddots & 0 & -a_{d-2} \\ & & & 1 & -a_{d-1} \end{pmatrix} =: M(p);$$
this is called the companion matrix of the (monic) polynomial $p(\lambda) = \lambda^d + a_{d-1}\lambda^{d-1} + \cdots + a_0$.

CLAIM 2. $p(\lambda)$ is both the minimal and characteristic polynomial of its companion matrix.

We shall prove this by computing $nf(\lambda I - M(p))$: starting with
$$\lambda I - M(p) = \begin{pmatrix} \lambda & & & & a_0 \\ -1 & \lambda & & & a_1 \\ & \ddots & \ddots & & \vdots \\ & & -1 & \lambda & a_{d-2} \\ & & & -1 & \lambda + a_{d-1} \end{pmatrix},$$
multiply the bottom row by $\lambda$ and add it to the 2nd row from the bottom, to obtain
$$\begin{pmatrix} \lambda & & & & a_0 \\ -1 & \lambda & & & a_1 \\ & \ddots & \ddots & & \vdots \\ & & -1 & 0 & \lambda^2 + a_{d-1}\lambda + a_{d-2} \\ & & & -1 & \lambda + a_{d-1} \end{pmatrix};$$
repeat all the way up (multiply the $(i+1)$st row by $\lambda$, add to the $i$th) to get
$$\begin{pmatrix} 0 & & & & \lambda^d + a_{d-1}\lambda^{d-1} + \cdots + a_0 \\ -1 & 0 & & & \lambda^{d-1} + a_{d-1}\lambda^{d-2} + \cdots + a_1 \\ & \ddots & \ddots & & \vdots \\ & & -1 & 0 & \lambda^2 + a_{d-1}\lambda + a_{d-2} \\ & & & -1 & \lambda + a_{d-1} \end{pmatrix}.$$
Noting that the top-right entry is $p(\lambda)$, add multiples of the 1st thru $(d-1)$st columns to the right-hand column to kill all the other entries (in that column), obtaining
$$\begin{pmatrix} 0 & & & p(\lambda) \\ -1 & 0 & & \\ & \ddots & \ddots & \\ & & -1 & 0 \end{pmatrix}.$$
Finally, multiplying the first $(d-1)$ columns by $(-1)$ and performing row swaps, we arrive at
$$\begin{pmatrix} 1 & & & \\ & \ddots & & \\ & & 1 & \\ & & & p(\lambda) \end{pmatrix} = nf(\lambda I - M(p)).$$
Thus $m_{M(p)}$ [= lower-right-hand entry] $= p(\lambda)$ and $f_{M(p)}(\lambda)$ [= product of diagonal entries] $= p(\lambda)$, proving the above Claim.

REMARK 3. Since $p(\lambda)$ is the minimal polynomial of $T|_W$, $p(T|_W) = 0$; in other words, $p(T)$ annihilates $W$, which is also clear from the fact that it annihilated $W$'s cyclic vector $\vec v$ above.

Rational Canonical Form. For simplicity let $V = F^n$, let $A \in M_n(F)$ be any matrix, and let $T : V \to V$ be the corresponding transformation (with $A = [T]_{\hat e}$ as usual). Recall that the diagonal entries of $nf(\lambda I - A)$ give the invariant factors $\Delta_k(\lambda)$ of $\lambda I - A$, and let $\{\Delta_r, \ldots, \Delta_n\}$ be the nontrivial ($\neq 1$) ones among them. We now
show that to each (nontrivial) $\Delta_k$ there is associated a $T$-cyclic subspace $W_k \subseteq V$ having $\Delta_k(\lambda)$ as both characteristic and minimal polynomial, and moreover that $V = W_r \oplus \cdots \oplus W_n$. The rational canonical form is then just the reflection of this decomposition of $V$ (into cyclic subspaces) in matrix form.

In (VI.B) we showed how to transform [for any $A \in M_n(F)$] $(\lambda I - A) \in M_n(F[\lambda])$ by row/column operations into a matrix in normal form:
$$(1) \qquad P(\lambda I - A)Q = nf(\lambda I - A) = \mathrm{diag}\{1, \ldots, 1, \Delta_r(\lambda), \ldots, \Delta_n(\lambda)\}$$
where $P, Q \in M_n(F[\lambda])$ are invertible, and $\sum \deg \Delta_k(\lambda) = n$. Set $d_k := \deg \Delta_k(\lambda)$. By swapping rows and columns one may change the right-hand side of (1) to
$$\mathrm{diag}\{\underbrace{1, \ldots, 1, \Delta_r(\lambda)}_{d_r}; \ \ldots \ ; \underbrace{1, \ldots, 1, \Delta_n(\lambda)}_{d_n}\}, \qquad \textstyle\sum d_k = n.$$
Reversing the row/column operations we did on the companion matrices, we may change this to a matrix
$$\mathrm{diag}\{\lambda I_{d_r} - M(\Delta_r), \ldots, \lambda I_{d_n} - M(\Delta_n)\}$$
consisting of blocks down the diagonal (of dimensions $d_r \times d_r, \ldots, d_n \times d_n$). So we have
$$P'(\lambda I - A)Q' = \lambda I_n - \mathrm{diag}\{M(\Delta_r), \ldots, M(\Delta_n)\},$$
where $P', Q' \in M_n(F[\lambda])$ are still invertible.

Now suppose we could replace the left-hand side by $S^{-1}(\lambda I - A)S$ for $S \in M_n(F)$ with scalar entries. Here is what that would tell us. Let $\mathcal B = \{\vec v_1, \ldots, \vec v_n\}$ be the basis given by the columns of $S$. Then since $S^{-1}(\lambda I - A)S = \lambda I - S^{-1}AS$,
$$\lambda I_n - S^{-1}AS = \lambda I_n - \mathrm{diag}\{M(\Delta_r), \ldots, M(\Delta_n)\} \implies S^{-1}AS = \mathrm{diag}\{M(\Delta_r), \ldots, M(\Delta_n)\},$$
or
$$[T]_{\mathcal B} = \begin{pmatrix} M(\Delta_r) & & \\ & \ddots & \\ & & M(\Delta_n) \end{pmatrix}.$$
Combined with what we have seen about cyclic subspaces and direct sums, this tells us that $V$ decomposes into a direct sum of $T$-cyclic subspaces as described. Namely, if the columns of our proposed $S$ yield
$$\mathcal B = \{\vec v_1, \ldots, \vec v_{d_r}; \ \vec v_{d_r+1}, \ldots, \vec v_{d_r+d_{r+1}}; \ \ldots; \ \vec v_{n-d_n+1}, \ldots, \vec v_n\} =: \{\mathcal B_r, \mathcal B_{r+1}, \ldots, \mathcal B_n\},$$
then define $W_k = \mathrm{span}\{\mathcal B_k\}$, and $V = W_r \oplus \cdots \oplus W_n$. From the supposed form of $[T]_{\mathcal B}$, it is clear that: $\Delta_k(T)$ annihilates the subspace $W_k$; $\vec v_1, \vec v_{d_r+1}, \ldots, \vec v_{n-d_n+1}$ are cyclic vectors for $W_r, W_{r+1}, \ldots, W_n$; and $\mathcal B_r = \{\vec v_1, \ldots, \vec v_{d_r}\} = \{\vec v_1, T\vec v_1, \ldots, T^{d_r-1}\vec v_1\}$ (etc. for the other $\mathcal B_k$). This is all not very important, since we shall soon present the basis differently. After all, we still have to prove it exists.

Finding the Basis. We now embark on the quest to find the basis $\mathcal B$ so that $[T]_{\mathcal B}$ looks as above. We'll need to work with a space of vectors with polynomial coefficients, namely $(F[\lambda])^n$. (This is really not a vector space, but a module over the ring $F[\lambda]$.) There is a map $(F[\lambda])^n \xrightarrow{\ \eta\ } F^n$ given by writing out a vector in components and formally replacing $\lambda$ by $T$:[2]
$$\vec c = g_1(\lambda)\hat e_1 + \cdots + g_n(\lambda)\hat e_n \ \longmapsto \ \eta(\vec c) := g_1(T)\hat e_1 + \cdots + g_n(T)\hat e_n.$$

[2] Here $g_i(T)\hat e_i$ is just a polynomial in the transformation $T$ [or matrix $A$] acting on the $i$th standard basis vector in $F^n$.
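The map $\eta$ is easy to realize concretely: represent each polynomial entry $g_i(\lambda)$ by its coefficient list and let powers of $A$ act on the standard basis vectors. A minimal numpy sketch (the function name `eta` and the coefficient-list encoding are our own choices, not from the notes); as an illustration it sends the polynomial vector $(\lambda - 1)\hat e_1 - \hat e_2 - \hat e_3$ to zero when $A$ is the $3\times 3$ all-ones matrix:

```python
import numpy as np

def eta(c_polys, A):
    """eta: (F[lambda])^n -> F^n.  c_polys[i] is the coefficient tuple
    (a_0, a_1, ...) of the polynomial entry g_i(lambda); returns
    sum_i g_i(A) e_i."""
    n = A.shape[0]
    out = np.zeros(n)
    for i, g in enumerate(c_polys):
        e_i = np.zeros(n)
        e_i[i] = 1.0
        Ak_e = e_i.copy()          # A^k e_i, starting with k = 0
        for a in g:                # coefficients in increasing degree
            out += a * Ak_e
            Ak_e = A @ Ak_e
    return out

A = np.ones((3, 3))
# c = (lambda - 1) e_1 - e_2 - e_3  =>  eta(c) = (A - I) e_1 - e_2 - e_3 = 0
c = [(-1.0, 1.0), (-1.0,), (-1.0,)]
assert np.allclose(eta(c, A), 0)
```

Exact (rational or symbolic) arithmetic would be preferable in general; floating point is used here only for illustration.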
Recall that the row operations[3] $P = P_M \cdots P_1$ used to put $(\lambda I - A)$ in normal form were invertible, so we may consider[4] $P^{-1} = P_1^{-1} \cdots P_M^{-1}$. Let $\mathcal C, \Gamma$ be the bases $\{\vec c_1, \ldots, \vec c_n\}$, $\{\vec\gamma_1, \ldots, \vec\gamma_n\}$ for $(F[\lambda])^n$ consisting of the column vectors of $P^{-1}$ and $Q$, respectively. We write
$$S_{\mathcal C} = {}_{\hat e}[I]_{\mathcal C} = P^{-1}, \qquad S_\Gamma = {}_{\hat e}[I]_\Gamma = Q,$$
and also $[\lambda I - T]_{\hat e}$ for $\lambda I_n - A$; now (1) becomes
$$(2) \qquad \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & \Delta_r(\lambda) & & \\ & & & \ddots & \\ & & & & \Delta_n(\lambda) \end{pmatrix} = S_{\mathcal C}^{-1}\left([\lambda I - T]_{\hat e}\right)S_\Gamma = {}_{\mathcal C}[\lambda I - T]_\Gamma.$$
We are going to use (2) to show that the vectors $\eta(\vec c_r), \ldots, \eta(\vec c_n)$ generate $V (= F^n)$ under the action of $T$. To this end let $W_k$ = the space generated by $\eta(\vec c_k)$ under the action of $T$, and $d_k = \deg \Delta_k(\lambda)$ (do not jump to the conclusion that $d_k = \dim W_k$). Noting that the $\vec c_k$ (resp. $\vec\gamma_k$) are the columns of $S_{\mathcal C}$ (resp. $S_\Gamma$), denote the entries of $S_{\mathcal C}, S_\Gamma$ by $C_{ij}, \Gamma_{ij} \in F[\lambda]$, so that
$$\vec\gamma_j = \Gamma_{1j}\hat e_1 + \cdots + \Gamma_{nj}\hat e_n, \qquad \vec c_j = C_{1j}\hat e_1 + \cdots + C_{nj}\hat e_n.$$
Now observe that according to (2), $\lambda I - T$ takes
$$\vec\gamma_1 \mapsto \vec c_1, \ \ldots, \ \vec\gamma_{r-1} \mapsto \vec c_{r-1}; \qquad \vec\gamma_r \mapsto \Delta_r(\lambda)\vec c_r, \ \ldots, \ \vec\gamma_n \mapsto \Delta_n(\lambda)\vec c_n.$$
We expand this as follows for all $j = 1, \ldots, r-1$ and $k = r, \ldots, n$:
$$(\lambda I - T)\left(\Gamma_{1j}(\lambda)\hat e_1 + \cdots + \Gamma_{nj}(\lambda)\hat e_n\right) = C_{1j}(\lambda)\hat e_1 + \cdots + C_{nj}(\lambda)\hat e_n,$$

[3] We momentarily ignore the column operations $Q$.
[4] Finding $P^{-1}$ means taking the product of the elementary matrices corresponding to the reverse of the operations you did (only the row operations!).
$$(\lambda I - T)\left(\Gamma_{1k}(\lambda)\hat e_1 + \cdots + \Gamma_{nk}(\lambda)\hat e_n\right) = \Delta_k(\lambda)C_{1k}(\lambda)\hat e_1 + \cdots + \Delta_k(\lambda)C_{nk}(\lambda)\hat e_n.$$
One can employ a formal argument[5] like that in our proof of Cayley-Hamilton, to show it is valid to substitute in $T$ for $\lambda$ to get (since $TI - T = 0$)
$$0 = C_{1j}(T)\hat e_1 + \cdots + C_{nj}(T)\hat e_n = \eta(\vec c_j), \qquad j = 1, \ldots, r-1,$$
$$0 = \Delta_k(T)\left(C_{1k}(T)\hat e_1 + \cdots + C_{nk}(T)\hat e_n\right) = \Delta_k(T)\,\eta(\vec c_k), \qquad k = r, \ldots, n.$$
(Basically, this amounts to taking $\eta$ of both sides of both equations.) So we may discard the first $r-1$ columns of $S_{\mathcal C} = P^{-1}$, since their images under $\eta$ are zero. For the remaining $\vec c_k$, $\Delta_k(T)$ annihilates $\eta(\vec c_k)$,

[5] Unfortunately fairly nasty, unless you use the theory of modules (which makes such things automatic). The formal argument (without module theory) for $\eta(\vec c_j) = 0$, $j \le r-1$, is: since $T\hat e_l = \sum_i A_{il}\hat e_i$,
$$(\lambda I - T)\Big(\sum_i \Gamma_{ij}(\lambda)\hat e_i\Big) = \sum_i C_{ij}(\lambda)\hat e_i \implies \sum_i \big(\lambda\Gamma_{ij}(\lambda) - C_{ij}(\lambda)\big)\hat e_i = \sum_l \Gamma_{lj}(\lambda)T\hat e_l = \sum_{i,l} \Gamma_{lj}(\lambda)A_{il}\hat e_i,$$
so that for each $i, j$ (with $j \le r-1$) the polynomial equation
$$\lambda\Gamma_{ij}(\lambda) - C_{ij}(\lambda) = \sum_l A_{il}\Gamma_{lj}(\lambda)$$
holds. That is, the two sides are the same polynomial. It now makes perfect sense to substitute in $T$ for $\lambda$, to obtain
$$T\Gamma_{ij}(T) - C_{ij}(T) = \sum_l A_{il}\Gamma_{lj}(T) \implies C_{ij}(T) = \sum_l (T\delta_{il} - A_{il})\Gamma_{lj}(T).$$
Therefore
$$\eta(\vec c_j) := \sum_i C_{ij}(T)\hat e_i = \sum_{i,l} (T\delta_{il} - A_{il})\Gamma_{lj}(T)\hat e_i = \sum_l \Gamma_{lj}(T)\Big[\sum_i (T\delta_{il} - A_{il})\hat e_i\Big] = 0,$$
because $\sum_i (T\delta_{il} - A_{il})\hat e_i = T\hat e_l - \sum_i A_{il}\hat e_i = 0$.
so that $\left\{\eta(\vec c_k), T\eta(\vec c_k), \ldots, T^{d_k}\eta(\vec c_k)\right\}$ are dependent. We do not yet know that $\left\{\eta(\vec c_k), T\eta(\vec c_k), \ldots, T^{d_k-1}\eta(\vec c_k)\right\}$ are independent, but we do know that $W_k$ = their span, and so
$$\dim W_k \le d_k = \deg \Delta_k(\lambda).$$
Since $\sum \deg \Delta_k = n$,
$$\sum \dim W_k \le n.$$
Moreover, $\Delta_k(T)$ annihilates $W_k$ (since it annihilates $\eta(\vec c_k)$).

Now let $R_{ij}$ be the (polynomial) entries of $P = S_{\mathcal C}^{-1} = {}_{\mathcal C}[I]_{\hat e}$, so that for all $i = 1, \ldots, n$,
$$\hat e_i = R_{1i}(\lambda)\vec c_1 + \cdots + R_{ni}(\lambda)\vec c_n.$$
Applying $\eta$ to both sides, we have
$$\hat e_i = R_{1i}(T)\,\eta(\vec c_1) + \cdots + R_{ni}(T)\,\eta(\vec c_n) = R_{ri}(T)\,\eta(\vec c_r) + \cdots + R_{ni}(T)\,\eta(\vec c_n),$$
since the first $(r-1)$ of the $\{\eta(\vec c_j)\}$ are zero. Therefore (sums of) powers of $T$ acting on the $\{\eta(\vec c_r), \ldots, \eta(\vec c_n)\}$ give all the $\{\hat e_1, \ldots, \hat e_n\}$, and thereby span all of $V = F^n$! In other words,
$$W_r + \cdots + W_n = V,$$
and so
$$\sum \dim W_k \ge \dim V = n.$$
Combining our inequalities,
$$(3) \qquad \sum \dim W_k = n,$$
and so the sum is direct: $V = W_r \oplus \cdots \oplus W_n$.
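The quantity controlling all of this, the degree $d$ of the first linear dependence among $\vec v, T\vec v, T^2\vec v, \ldots$, can be found numerically with a rank test. A sketch (the function name, the rank-based dependence test, and the tolerance are our own; exact arithmetic would be needed in general), run here on the $3\times 3$ all-ones matrix:

```python
import numpy as np

def cyclic_data(A, v, tol=1e-8):
    """Iterate v, Av, A^2 v, ... until the first dependence; return d and
    the coefficients (a_0, ..., a_{d-1}) of the monic relation
    T^d v + a_{d-1} T^{d-1} v + ... + a_0 v = 0."""
    vecs = [v]
    while True:
        w = A @ vecs[-1]
        K = np.column_stack(vecs + [w])
        if np.linalg.matrix_rank(K, tol=tol) < K.shape[1]:
            # solve [v ... A^{d-1}v] a = -A^d v (least squares; exact here)
            a, *_ = np.linalg.lstsq(np.column_stack(vecs), -w, rcond=None)
            return len(vecs), a
        vecs.append(w)

A = np.ones((3, 3))
d, a = cyclic_data(A, np.array([0.0, 0.0, 1.0]))
# the relation is p(lambda) = lambda^2 - 3*lambda, i.e. (a_0, a_1) = (0, -3)
assert d == 2 and np.allclose(a, [0.0, -3.0])
```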
Also (3) forces $\dim W_k = \deg \Delta_k(\lambda)$, so that
$$\mathcal B_k = \left\{\eta(\vec c_k), T\eta(\vec c_k), \ldots, T^{d_k-1}\eta(\vec c_k)\right\}$$
is a basis for $W_k$ (it was an independent set after all). Thus
$$[T|_{W_k}]_{\mathcal B_k} = M(\Delta_k)$$
as promised, and $\mathcal B = \{\mathcal B_r, \ldots, \mathcal B_n\}$ is the basis for $V = F^n$.

Let's sum up, in terms of a theorem about transformations, and an algorithm for the corresponding matrices:

THEOREM 4. For $T : V \to V$ ($\dim V = n$) there is a decomposition $V = W_r \oplus \cdots \oplus W_n$ into $T$-cyclic subspaces, such that the minimal and characteristic polynomials of $T|_{W_k}$ are both just the $k$th invariant factor $\Delta_k$ of $\lambda I - T$.

Algorithm. Given $A \in M_n(F)$, apply the normal form procedure to $\lambda I - A$ to obtain
$$P(\lambda I - A)Q = \begin{pmatrix} 1 & & & & 0 \\ & \ddots & & & \\ & & \Delta_r(\lambda) & & \\ & & & \ddots & \\ 0 & & & & \Delta_n(\lambda) \end{pmatrix}.$$
Find the inverse $P^{-1}$ of the row operations, call its entries $C_{ij}$ (some will be polynomials in $\lambda$) and its columns $\vec c_k$. Throw out $\vec c_1, \ldots, \vec c_{r-1}$; write each of the remaining people component-by-component and compute
$$\vec c_k = C_{1k}(\lambda)\hat e_1 + \cdots + C_{nk}(\lambda)\hat e_n, \qquad \eta(\vec c_k) = C_{1k}(T)\hat e_1 + \cdots + C_{nk}(T)\hat e_n:$$
after applying the $T$'s to the $\hat e_i$'s, rewrite the result as a column vector. (This sounds hard, but in actual practice is not: e.g., if $\vec c_k$ happens only to have scalar entries, then $\eta(\vec c_k) = \vec c_k$.) Let $d_k = \deg \Delta_k$, and write (for $k = r, \ldots, n$)
$$\mathcal B_k = \left\{\eta(\vec c_k), T\eta(\vec c_k), \ldots, T^{d_k-1}\eta(\vec c_k)\right\};$$
put these together to form a basis
$$\mathcal B = \{\mathcal B_r, \ldots, \mathcal B_n\} \quad \text{for } F^n.$$
Set
$$S := S_{\mathcal B} = \left\{\text{matrix whose columns are the vectors of } \mathcal B\right\},$$
and write
$$A = S R S^{-1}, \qquad \text{where} \quad R = \mathrm{diag}\{M(\Delta_r), \ldots, M(\Delta_n)\}$$
consists of companion-matrix blocks along the diagonal. (See the boxed formula earlier in the lecture.)

REMARK 5. You should of course think of this as a change of basis: $(A =)\ [T]_{\hat e} = {}_{\hat e}[I]_{\mathcal B}\,[T]_{\mathcal B}\,{}_{\mathcal B}[I]_{\hat e}$, where $T$ is transparent relative to $\mathcal B$.

EXAMPLE 6. We now put our favorite example
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}$$
into rational canonical form. At the end of the last lecture we put (for this $A$) $\lambda I - A$ into normal form,
$$nf(\lambda I - A) = \begin{pmatrix} 1 & & \\ & \lambda & \\ & & \lambda^2 - 3\lambda \end{pmatrix},$$
in 4 (somewhat condensed) steps. We list the row operations involved in each of these steps, numbering them in the order in which they occur:

Step I: $R_1$ = swapping rows 1 and 2.
Step II (involves three row operations): $R_2$ = multiplying row 1 by $(-1)$, $R_3$ = adding $(1-\lambda)\cdot\{\text{row 1}\}$ to row 2, $R_4$ = adding row 1 to row 3.
Step III: $R_5$ = adding row 2 to row 3.
Step IV: $R_6$ = multiplying row 2 by $(-1)$.

EXAMPLE. What are the inverses of these row operations? This is not a hard question! For example, $R_5^{-1}$ = subtracting row 2 from row 3, while $R_3^{-1}$ = adding $(\lambda - 1)\cdot\{\text{row 1}\}$ to row 2. Things like $R_1$ and $R_6$ are the same in reverse. To find $C = P^{-1}$ one simply writes out the corresponding product of matrices:
$$C = R_1^{-1}R_2^{-1}R_3^{-1}R_4^{-1}R_5^{-1}R_6^{-1} = \begin{pmatrix} \lambda-1 & -1 & 0 \\ -1 & 0 & 0 \\ -1 & 1 & 1 \end{pmatrix} \implies \vec c_1 = \begin{pmatrix} \lambda-1 \\ -1 \\ -1 \end{pmatrix}, \ \vec c_2 = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}, \ \vec c_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$
Now according to the entries of $nf(\lambda I - A)$, the invariant factors of $A$ are $\Delta_1 = 1$, $\Delta_2 = \lambda$, $\Delta_3 = \lambda^2 - 3\lambda$. This means that we keep $\vec c_2 = \eta(\vec c_2)$ and $\vec c_3 = \eta(\vec c_3)$, which will generate $\mathbb R^3$ under the action of $A$ (recall that $\eta(\vec c) = \vec c$ if $\vec c$ has only scalar entries); and we throw out $\vec c_1$ because $\Delta_1 = 1$ and $\eta(\vec c_1) = 0$.

Let's check this, in order to see a computation of $\eta(\vec c)$ where $\vec c$ has actual polynomial entries! First break $\vec c_1$ into components with respect to the standard basis:
$$\vec c_1 = (\lambda - 1)\hat e_1 + (-1)\hat e_2 + (-1)\hat e_3.$$
Now you substitute $T$ for $\lambda$:
$$\eta(\vec c_1) = (T - 1)\hat e_1 + (-1)\hat e_2 + (-1)\hat e_3$$
$$= T\hat e_1 - (\hat e_1 + \hat e_2 + \hat e_3).$$
Finally, rewrite everything in matrix-vector notation (with respect to $\hat e$) to compute the result:
$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} - \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} - \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
Written in this way it looks kind of like magic.

Now the {dimension of $W_k$} = {number of vectors in $\mathcal B_k$} should be the degree $d_k$ of $\Delta_k$, in the remaining cases $k = 2, 3$. If $d_2$ were 5 (which by the way would never happen) then we would write down $\mathcal B_2 = \{\eta(\vec c_2), T\eta(\vec c_2), \ldots, T^4\eta(\vec c_2)\}$. In the case at hand we have $d_2 = 1$, $d_3 = 2$. So
$$\mathcal B_2 = \{\eta(\vec c_2)\} = \left\{\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}\right\}, \qquad \mathcal B_3 = \{\eta(\vec c_3), T\eta(\vec c_3)\} = \left\{\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}\right\}.$$
These three vectors together give our basis, and (row-reducing $[\,S \mid I_3\,]$ to rref $[\,I_3 \mid S^{-1}\,]$)
$$S = \begin{pmatrix} -1 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \qquad S^{-1} = \begin{pmatrix} -1 & 1 & 0 \\ 1 & -2 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$
Now for the companion matrices. For arbitrary monic polynomials $\lambda + a_0$ and $\lambda^2 + a_1\lambda + a_0$ of degrees 1 and 2, these are
$$\begin{pmatrix} -a_0 \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} 0 & -a_0 \\ 1 & -a_1 \end{pmatrix}.$$
Therefore in our case $M(\Delta_2) = (0)$ and $M(\Delta_3) = \begin{pmatrix} 0 & 0 \\ 1 & 3 \end{pmatrix}$. The decomposition is therefore (in the form $A = SRS^{-1}$)
$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} = \begin{pmatrix} -1 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix}\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 3 \end{pmatrix}\begin{pmatrix} -1 & 1 & 0 \\ 1 & -2 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$
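A decomposition like this is cheap to verify numerically. The sketch below rebuilds $R$ from companion blocks and checks $A = SRS^{-1}$; the helper `companion` is our own, and we take the example matrix to be the $3\times 3$ all-ones matrix (consistent with the invariant factors $1, \lambda, \lambda^2 - 3\lambda$ and with the computation of $\eta(\vec c_1)$ above):

```python
import numpy as np

def companion(a):
    """M(p) for monic p(lambda) = lambda^d + a[d-1] lambda^{d-1} + ... + a[0]."""
    d = len(a)
    M = np.zeros((d, d))
    if d > 1:
        M[1:, :-1] = np.eye(d - 1)     # subdiagonal of 1s
    M[:, -1] = -np.asarray(a)          # last column: -a_0, ..., -a_{d-1}
    return M

A = np.ones((3, 3))                    # the example matrix
R = np.zeros((3, 3))                   # R = diag{ M(D_2), M(D_3) }
R[0:1, 0:1] = companion([0.0])         # D_2(lambda) = lambda
R[1:3, 1:3] = companion([0.0, -3.0])   # D_3(lambda) = lambda^2 - 3 lambda
S = np.array([[-1.0, 0.0, 1.0],
              [ 0.0, 0.0, 1.0],
              [ 1.0, 1.0, 1.0]])       # columns: eta(c2), eta(c3), T eta(c3)
assert np.allclose(S @ R @ np.linalg.inv(S), A)
# f_A = product of the invariant factors = lambda^3 - 3 lambda^2
assert np.allclose(np.poly(A), [1.0, -3.0, 0.0, 0.0])
```

The second assertion also checks Claim 2 in passing: the characteristic polynomial of $A$ is the product of the characteristic polynomials of the companion blocks.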
Exercises

(1) Compute the rational canonical form for $A$. That is, present it as $SRS^{-1}$, where $R$ consists of companion-matrix blocks along the diagonal and $S$ is a change-of-basis matrix. Interpret the result in terms of a direct-sum decomposition of $\mathbb R^3$ under the action of the transformation $T$ with matrix $[T]_{\hat e} = A$.

(2) Same problem with the matrix $A$ of Exercise VI.B.2, and $\mathbb R^4$ instead of $\mathbb R^3$. [Note: you already know the normal form for $\lambda I_4 - A$.]

(3) Once more, same problem with
$$A = \begin{pmatrix} c & 0 & 0 \\ 0 & c & 0 \\ 0 & 0 & c \end{pmatrix}.$$
How does it depend on $c$?

(4) Let $T : V \to V$ (with $V = \mathbb R^n$) be semisimple. (a) If $T$ has a cyclic vector, show that $T$ has $n$ distinct eigenvalues. (b) In this case, if $\{\vec v_1, \ldots, \vec v_n\}$ is an eigenbasis, show that $\vec v = \sum_{i=1}^n \vec v_i$ is a cyclic vector for $T$.

(5) Let $A \in M_n(\mathbb R)$ be such that $A^2 = -I_n$. (a) Prove that $n$ is even. (b) Writing $n = 2k$, show that $A$ is similar (over $\mathbb R$) to
$$J_n := \begin{pmatrix} 0 & -I_k \\ I_k & 0 \end{pmatrix}.$$
(6) Let $T : P_3 \to P_3$ be $\frac{d}{dx}$. Write its matrix $A$ in the standard basis $\mathcal B = \{x^3, x^2, x, 1\}$, and put it in rational canonical form (i.e. $A = SRS^{-1}$), without computation! [Hint: what is $T^4$?]

(7) Suppose I tell you the characteristic polynomial of a $5 \times 5$ matrix $A$, without giving you the matrix: $f_A(\lambda) = (\lambda - 3)^3(\lambda - 2)^2$. (a) What are the possible rational canonical forms $R$? (Start by listing the possibilities for $nf(\lambda I - A)$.) There should be six. (b) This says that there are six distinct (= nonsimilar) transformations of $\mathbb R^5$ with eigenvalue list 3, 3, 3, 2, 2 (repeated according to multiplicity). Which ones are diagonalizable? [Hint: look at Answer 2 at the beginning of the section!]
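An aside for the computationally inclined (this is a sanity check on Exercise (5), not a solution): the block matrix $J_n$ really does square to $-I_n$. A quick numpy check for $k = 2$, with `np.block` assembling the $2 \times 2$ block layout:

```python
import numpy as np

k = 2
Z, I = np.zeros((k, k)), np.eye(k)
J = np.block([[Z, -I],
              [I, Z]])                       # J_n with n = 2k
assert np.allclose(J @ J, -np.eye(2 * k))    # J_n^2 = -I_n
```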
More informationAN ITERATION. In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b
AN ITERATION In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b In this, A is an n n matrix and b R n.systemsof this form arise
More informationREVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION
REVIEW FOR EXAM III The exam covers sections 4.4, the portions of 4. on systems of differential equations and on Markov chains, and..4. SIMILARITY AND DIAGONALIZATION. Two matrices A and B are similar
More informationREVIEW FOR EXAM II. The exam covers sections , the part of 3.7 on Markov chains, and
REVIEW FOR EXAM II The exam covers sections 3.4 3.6, the part of 3.7 on Markov chains, and 4.1 4.3. 1. The LU factorization: An n n matrix A has an LU factorization if A = LU, where L is lower triangular
More informationFirst we introduce the sets that are going to serve as the generalizations of the scalars.
Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................
More informationFirst of all, the notion of linearity does not depend on which coordinates are used. Recall that a map T : R n R m is linear if
5 Matrices in Different Coordinates In this section we discuss finding matrices of linear maps in different coordinates Earlier in the class was the matrix that multiplied by x to give ( x) in standard
More informationALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA
ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND
More informationSpectral Theorem for Self-adjoint Linear Operators
Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;
More informationRemark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called
More informationMath 554 Qualifying Exam. You may use any theorems from the textbook. Any other claims must be proved in details.
Math 554 Qualifying Exam January, 2019 You may use any theorems from the textbook. Any other claims must be proved in details. 1. Let F be a field and m and n be positive integers. Prove the following.
More information1 Invariant subspaces
MATH 2040 Linear Algebra II Lecture Notes by Martin Li Lecture 8 Eigenvalues, eigenvectors and invariant subspaces 1 In previous lectures we have studied linear maps T : V W from a vector space V to another
More informationChapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors
Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there
More informationEigenvalues and Eigenvectors
LECTURE 3 Eigenvalues and Eigenvectors Definition 3.. Let A be an n n matrix. The eigenvalue-eigenvector problem for A is the problem of finding numbers λ and vectors v R 3 such that Av = λv. If λ, v are
More informationGENERALIZED EIGENVECTORS, MINIMAL POLYNOMIALS AND THEOREM OF CAYLEY-HAMILTION
GENERALIZED EIGENVECTORS, MINIMAL POLYNOMIALS AND THEOREM OF CAYLEY-HAMILTION FRANZ LUEF Abstract. Our exposition is inspired by S. Axler s approach to linear algebra and follows largely his exposition
More informationOctober 4, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS
October 4, 207 EIGENVALUES AND EIGENVECTORS. APPLICATIONS RODICA D. COSTIN Contents 4. Eigenvalues and Eigenvectors 3 4.. Motivation 3 4.2. Diagonal matrices 3 4.3. Example: solving linear differential
More informationUnit 2, Section 3: Linear Combinations, Spanning, and Linear Independence Linear Combinations, Spanning, and Linear Independence
Linear Combinations Spanning and Linear Independence We have seen that there are two operations defined on a given vector space V :. vector addition of two vectors and. scalar multiplication of a vector
More informationNotes on the matrix exponential
Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not
More informationMTH 464: Computational Linear Algebra
MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)
More information5. Diagonalization. plan given T : V V Does there exist a basis β of V such that [T] β is diagonal if so, how can it be found
5. Diagonalization plan given T : V V Does there exist a basis β of V such that [T] β is diagonal if so, how can it be found eigenvalues EV, eigenvectors, eigenspaces 5.. Eigenvalues and eigenvectors.
More informationNONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction
NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques
More informationMath 217: Eigenspaces and Characteristic Polynomials Professor Karen Smith
Math 217: Eigenspaces and Characteristic Polynomials Professor Karen Smith (c)2015 UM Math Dept licensed under a Creative Commons By-NC-SA 4.0 International License. Definition: Let V T V be a linear transformation.
More informationLINEAR ALGEBRA REVIEW
LINEAR ALGEBRA REVIEW SPENCER BECKER-KAHN Basic Definitions Domain and Codomain. Let f : X Y be any function. This notation means that X is the domain of f and Y is the codomain of f. This means that for
More informationThe Jordan canonical form
The Jordan canonical form Francisco Javier Sayas University of Delaware November 22, 213 The contents of these notes have been translated and slightly modified from a previous version in Spanish. Part
More informationRemark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial
More informationMultiplying matrices by diagonal matrices is faster than usual matrix multiplication.
7-6 Multiplying matrices by diagonal matrices is faster than usual matrix multiplication. The following equations generalize to matrices of any size. Multiplying a matrix from the left by a diagonal matrix
More informationMinimal Polynomials and Jordan Normal Forms
Minimal Polynomials and Jordan Normal Forms 1. Minimal Polynomials Let A be an n n real matrix. M1. There is a polynomial p such that p(a) =. Proof. The space M n n (R) of n n real matrices is an n 2 -dimensional
More informationCalculating determinants for larger matrices
Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det
More informationA = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is,
65 Diagonalizable Matrices It is useful to introduce few more concepts, that are common in the literature Definition 65 The characteristic polynomial of an n n matrix A is the function p(λ) det(a λi) Example
More informationA classification of sharp tridiagonal pairs. Tatsuro Ito, Kazumasa Nomura, Paul Terwilliger
Tatsuro Ito Kazumasa Nomura Paul Terwilliger Overview This talk concerns a linear algebraic object called a tridiagonal pair. We will describe its features such as the eigenvalues, dual eigenvalues, shape,
More information1.4 Solvable Lie algebras
1.4. SOLVABLE LIE ALGEBRAS 17 1.4 Solvable Lie algebras 1.4.1 Derived series and solvable Lie algebras The derived series of a Lie algebra L is given by: L (0) = L, L (1) = [L, L],, L (2) = [L (1), L (1)
More informationWhat is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A =
STUDENT S COMPANIONS IN BASIC MATH: THE ELEVENTH Matrix Reloaded by Block Buster Presumably you know the first part of matrix story, including its basic operations (addition and multiplication) and row
More information= W z1 + W z2 and W z1 z 2
Math 44 Fall 06 homework page Math 44 Fall 06 Darij Grinberg: homework set 8 due: Wed, 4 Dec 06 [Thanks to Hannah Brand for parts of the solutions] Exercise Recall that we defined the multiplication of
More informationStudy Guide for Linear Algebra Exam 2
Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real
More informationMath 121 Practice Final Solutions
Math Practice Final Solutions December 9, 04 Email me at odorney@college.harvard.edu with any typos.. True or False. (a) If B is a 6 6 matrix with characteristic polynomial λ (λ ) (λ + ), then rank(b)
More informationCANONICAL FORMS FOR LINEAR TRANSFORMATIONS AND MATRICES. D. Katz
CANONICAL FORMS FOR LINEAR TRANSFORMATIONS AND MATRICES D. Katz The purpose of this note is to present the rational canonical form and Jordan canonical form theorems for my M790 class. Throughout, we fix
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More informationTheorem 5.3. Let E/F, E = F (u), be a simple field extension. Then u is algebraic if and only if E/F is finite. In this case, [E : F ] = deg f u.
5. Fields 5.1. Field extensions. Let F E be a subfield of the field E. We also describe this situation by saying that E is an extension field of F, and we write E/F to express this fact. If E/F is a field
More informationUNDERSTANDING THE DIAGONALIZATION PROBLEM. Roy Skjelnes. 1.- Linear Maps 1.1. Linear maps. A map T : R n R m is a linear map if
UNDERSTANDING THE DIAGONALIZATION PROBLEM Roy Skjelnes Abstract These notes are additional material to the course B107, given fall 200 The style may appear a bit coarse and consequently the student is
More informationEigenvalue and Eigenvector Homework
Eigenvalue and Eigenvector Homework Olena Bormashenko November 4, 2 For each of the matrices A below, do the following:. Find the characteristic polynomial of A, and use it to find all the eigenvalues
More informationSystems of Linear Equations
LECTURE 6 Systems of Linear Equations You may recall that in Math 303, matrices were first introduced as a means of encapsulating the essential data underlying a system of linear equations; that is to
More information1111: Linear Algebra I
1111: Linear Algebra I Dr. Vladimir Dotsenko (Vlad) Lecture 7 Dr. Vladimir Dotsenko (Vlad) 1111: Linear Algebra I Lecture 7 1 / 8 Properties of the matrix product Let us show that the matrix product we
More information
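To make the cyclic-subspace construction concrete, here is a small numerical sketch (the matrix A and vector v are hypothetical examples, not from the text) that finds the degree d at which v, T v, T² v, … first become dependent, by checking the rank of the matrix whose columns are the successive images:

```python
import numpy as np

# Hypothetical example: T is multiplication by A on F^3 (computed in floats).
# A is the companion matrix of t^3 + 3t^2 + 3t + 1 = (t + 1)^3.
A = np.array([[0.0, 0.0, -1.0],
              [1.0, 0.0, -3.0],
              [0.0, 1.0, -3.0]])
v = np.array([1.0, 0.0, 0.0])

# Build v, Av, A^2 v, ... until the collection first becomes dependent.
vectors = [v]
while True:
    candidate = vectors + [A @ vectors[-1]]
    M = np.column_stack(candidate)
    if np.linalg.matrix_rank(M) < len(candidate):
        break  # the newest image lies in the span of the earlier ones
    vectors = candidate

d = len(vectors)  # v, Av, ..., A^{d-1} v independent; A^d v depends on them
print(d)  # here d = 3: v is a cyclic vector for all of F^3
```

Since v = e₁ here generates e₁, Ae₁ = e₂, A²e₁ = e₃, the loop stops at d = 3, so this v is a cyclic vector for the whole space; in general d can be smaller than n, and the span of v, Av, …, A^{d-1}v is the cyclic subspace generated by v.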