NOTES II FOR 130A JACOB STERBENZ


Abstract. Here are some notes on the Jordan canonical form as it was covered in class.

Contents
1. Polynomials
2. The Minimal Polynomial and the Primary Decomposition
3. When the Characteristic Polynomial Factors
4. The Structure of Nilpotent Operators
5. The Real and Complex Jordan Forms

1. Polynomials

As will become apparent shortly, the theory of invariant subspaces of a linear transformation T ∈ L(V) turns on the issue of factoring polynomials into prime powers. Accordingly we begin here with a basic review of polynomial arithmetic.

First some notation. Let F denote a field of numbers. For our purposes one can take this to be either the real numbers R or the complex numbers C. However, everything we say here is equally valid for any field (e.g. the rational numbers Q). We denote by:

    F[x] = { p(x) | p(x) = sum_{i=0}^d a_i x^i, where a_i ∈ F },

for various values of d. For any particular p ∈ F[x] we define:

    deg(p) = min{ d | p(x) = sum_{i=0}^d a_i x^i and a_d ≠ 0 }.

Note that this is just the usual notion of the degree of a polynomial (the highest power of x you see in p(x)). We also say that p ∈ F[x] is monic if deg(p) = d and a_d = 1. This is simply a normalization condition that is useful in the statement of several results to follow.

Perhaps the most basic result about polynomials in one variable is the following:

Theorem 1.1 (Division with remainder). Let p, q ∈ F[x] be any two polynomials. Then there exist g, r ∈ F[x] such that q = pg + r where deg(r) < deg(p).

Proof. This follows from long division of polynomials just as one learns in high school algebra. For starters we may assume deg(p) ≤ deg(q), because otherwise we can simply choose g = 0 and r = q. Then we have:

    q(x) = sum_{i=0}^{d_1} a_i x^i,    p(x) = sum_{i=0}^{d_2} b_i x^i,

with d_1 ≥ d_2, and a_{d_1} ≠ 0 and b_{d_2} ≠ 0. Let g_1 = (a_{d_1}/b_{d_2}) x^{d_1 - d_2}, and then set q_1 = q - g_1 p. We have deg(q_1) < deg(q). If deg(q_1) < deg(p) we are done by setting r = q_1. Otherwise repeat this process to produce q_2 = q_1 - g_2 p with deg(q_2) < deg(q_1). Again check if deg(q_2) < deg(p) or not, and divide again if this condition has not yet been achieved. Eventually this process will produce some q_k with deg(q_k) < deg(p) and q_k = q - (g_1 + g_2 + ... + g_k)p. Finally set g = sum_{i=1}^k g_i and r = q_k, and we are done.
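The long-division procedure in the proof above can be carried out mechanically. Below is a minimal Python sketch; the coefficient-list representation (lowest degree first) and exact rational arithmetic are conventions chosen for this illustration, not part of the notes.

```python
# Division with remainder in F[x] (Theorem 1.1), sketched over the rationals.
# Polynomials are coefficient lists [a_0, a_1, ..., a_d].
from fractions import Fraction

def poly_divmod(q, p):
    """Return (g, r) with q = p*g + r and deg(r) < deg(p)."""
    q = [Fraction(c) for c in q]
    g = [Fraction(0)] * max(len(q) - len(p) + 1, 1)
    d2 = len(p) - 1                       # deg(p); assumes p[-1] != 0
    while len(q) - 1 >= d2 and any(q):
        d1 = len(q) - 1                   # current degree of the remainder
        c = q[-1] / Fraction(p[-1])       # leading coefficient a_{d1}/b_{d2}
        g[d1 - d2] += c
        # subtract c * x^(d1-d2) * p, which kills the leading term
        for i, b in enumerate(p):
            q[i + d1 - d2] -= c * b
        while len(q) > 1 and q[-1] == 0:  # drop the (now zero) leading terms
            q.pop()
    return g, q

# q = x^3 + 2x + 1 divided by p = x^2 + 1 gives g = x, r = x + 1
g, r = poly_divmod([1, 2, 0, 1], [1, 0, 1])
```

Each pass through the loop is exactly one step q_{j+1} = q_j - g_{j+1} p of the proof, so termination is guaranteed because the degree strictly drops.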

For collections of polynomials we have the following important notion:

Definition 1.2. A collection of polynomials I ⊆ F[x] is called an ideal if:
i) For any p, q ∈ I we have that p + q ∈ I.
ii) For any p ∈ I and any other q ∈ F[x] we have pq ∈ I.

An example of an ideal would be the set I = { p ∈ F[x] | a_0 = 0 }. In other words this ideal consists of all polynomials of the form x g(x) for some (not fixed) g ∈ F[x]. Division with remainder shows that in some sense this is the general picture for polynomial ideals:

Theorem 1.3 (Generation of ideals). Let I ⊆ F[x] be any nonzero ideal. Then there exists a unique monic p ∈ F[x] such that I = { q ∈ F[x] | q = pg for some g ∈ F[x] }. In this case we write I = (p) and say I is the ideal generated by p.

Proof. Let d be the minimum degree over all nonzero polynomials in I, and let p ∈ I be a monic polynomial with deg(p) = d. Let q ∈ I be anything else. Then we have q = gp + r for some polynomial r with deg(r) < deg(p). But q - gp ∈ I by rules i) and ii) for ideals, so r ∈ I as well. If r ≠ 0 then it would be a nonzero polynomial in I with degree less than that of p, a contradiction. Thus r = 0 and we have q = pg as desired. Note that uniqueness follows because if deg(p) = deg(q) and q = gp for some g ∈ F[x], we must have g = a for some a ∈ F (this is simply because deg(gp) = deg(g) + deg(p)). Thus p is the only monic polynomial that can generate I.

Definition 1.4. Next, given two polynomials p, q ∈ F[x] we say p divides q if there exists a g ∈ F[x] with q = pg. In this case we write p | q. We say a polynomial p is prime if deg(p) ≥ 1 and if q | p for some q ∈ F[x] we must have either p = aq for some a ∈ F or deg(q) = 0 (in the latter case q = a_0 for some a_0 ∈ F). Given a (finite) collection of polynomials p_i ∈ F[x] we say the p_i are relatively prime if q | p_i for all i implies that deg(q) = 0. In other words the only common factors of all the p_i are constants.

It is important to understand that the notion of being prime really depends on F. The polynomial p = x^2 + 1 is prime when F = R but not when F = C.
The fundamental theorem of algebra states that the only prime polynomials in C[x] are those of the form p = a(x - λ). On the other hand the only prime polynomials in R[x] are of the form p = a(x - λ) or p(x) = ax^2 + bx + c where b^2 - 4ac < 0. This is because if a polynomial has real coefficients then its roots are either real or come in complex conjugate pairs. Also, note that a collection of non-constant polynomials p_i ∈ C[x] is relatively prime iff they have no root in common. The same is true for p_i ∈ R[x] as long as one takes into account complex roots (this is necessary because p_1 = x^2 + 1 and p_2 = x^4 + 2x^2 + 1 have no real roots in common but they are also not relatively prime in R[x]).

From the previous result on generation of ideals we now have the following theorem, which will be our main tool in the next section:

Theorem 1.5. Let q_i ∈ F[x] be a finite collection of nonzero relatively prime polynomials. Then there exist g_i ∈ F[x] such that sum_i q_i g_i = 1.

Proof. First define the ideal generated by the q_i:

    I = { q ∈ F[x] | q = sum_i g_i q_i for some g_i ∈ F[x] }.

One can check immediately that this collection of polynomials satisfies i) and ii) above, and since the q_i are nonzero I itself is also nonzero. Thus there exists a unique monic p ∈ F[x] with I = (p). In particular p | q_i for all i, and because the q_i are relatively prime this means deg(p) = 0. The only monic polynomial of degree 0 is p = 1. Thus 1 ∈ I, and so by the construction of I there must exist g_i ∈ F[x] with sum_i g_i q_i = 1.

We end with a prime factorization theorem for polynomials which mirrors what one knows about integers:

Theorem 1.6. Let q ∈ F[x]. Then one can write uniquely q = a prod_{i=1}^k p_i^{r_i}, where a ∈ F, where the p_i ∈ F[x] are distinct prime monic polynomials, and r_i ∈ N. If F = C then each deg(p_i) = 1, and if F = R then each 1 ≤ deg(p_i) ≤ 2.
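For two polynomials, the cofactors g_i promised by Theorem 1.5 can be produced by the extended Euclidean algorithm. A short sketch using SymPy's gcdex; the sample polynomials q_1, q_2 are chosen here purely for illustration.

```python
# Bezout cofactors for relatively prime polynomials (Theorem 1.5),
# via the extended Euclidean algorithm in SymPy.
from sympy import symbols, gcdex, expand

x = symbols('x')
q1, q2 = x**2 + 1, x - 1           # relatively prime over Q (no common root)

s, t, h = gcdex(q1, q2, x)         # returns s, t, h with s*q1 + t*q2 == h == gcd(q1, q2)
assert h == 1                      # relatively prime: the ideal they generate is all of Q[x]
assert expand(s*q1 + t*q2) == 1    # the identity of Theorem 1.5 with g1 = s, g2 = t
```

For more than two polynomials one can iterate: combine q_1, q_2 into their gcd, then run the algorithm against q_3, and so on, exactly as the ideal-theoretic proof suggests.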

Proof. The existence of some prime factorization of q ∈ F[x] follows by inductively splitting q into smaller factors. For example either q is already prime or q = q_1 q_2 where each 1 ≤ deg(q_i) < deg(q). Then if at least one of q_1, q_2 is not prime we split further, and continue this process until all factors are prime.

The main issue then is the uniqueness of the prime factorization. This is a consequence of the following claim: If p is prime and p | q_1 q_2, then either p | q_1 or p | q_2. The claim follows because if p, q_1 are relatively prime then one can find f, g ∈ F[x] with fp + g q_1 = 1. Then f q_2 p + g q_1 q_2 = q_2. But p | q_1 q_2 implies q_1 q_2 = g' p for some g' ∈ F[x], so rearranging things we have (f q_2 + g g')p = q_2. In other words p | q_2.

By repeatedly applying the previous fact to both sides of the following equation we see that if:

    q = a prod_{i=1}^k p_i^{r_i} = a' prod_{i=1}^{k'} (p'_i)^{r'_i},

then we must have a = a', k = k', r_i = r'_i, and p_i = p'_i (after possibly reordering the factors).

2. The Minimal Polynomial and the Primary Decomposition

Now let V be a vector space over the real or complex numbers F. Let T ∈ L(V). The primary purpose of this section is to associate a certain polynomial m_T ∈ F[x] with T in such a way that the prime factorization of m_T gives a great deal of information about the invariant subspaces of T. Recall that finding the invariant subspaces of T is the key step to finding all of the solutions to ẋ = Tx.

To set things up we first define an ideal in F[x] associated with T. This is:

    I(T) = { q ∈ F[x] | q(T) = 0 }.

Here we mean that if q(x) = sum_{i=0}^d a_i x^i then we define q(T) = sum_{i=0}^d a_i T^i, with the proviso that T^0 = I for any T. Note that q(T) = 0 simply means q(T)x = 0 for all x ∈ V. It is not hard to check that I(T) is indeed an ideal (basically zero plus zero is zero, and zero times anything is zero). A little bit less obvious is that I(T) is not the zero ideal. To see this note that if x ∈ V then the collection of vectors {x, Tx, T^2 x, ..., T^n x}, where n = dim(V), must be linearly dependent.
Thus there exists q ∈ F[x] such that q(T)x = 0. Constructing such q_i for a basis B = {x_1, ..., x_n} and then setting q = prod_i q_i gives a nonzero polynomial with q(T)x = 0 for all x ∈ V.

Definition 2.1. From Theorem 1.3 in the previous section we must have I(T) = (m_T) for some unique monic m_T ∈ F[x]. In other words there exists a polynomial m_T with m_T | q for all other q ∈ F[x] with q(T) = 0. We call m_T the minimal polynomial of T.

Using the prime factorization of m_T we have the fundamental result:

Theorem 2.2 (Primary Decomposition Theorem). Let T ∈ L(V) and m_T its minimal polynomial. Then if m_T = prod_{i=1}^k p_i^{r_i} is the prime factorization of m_T, there exists a T-invariant direct sum decomposition V = ⊕_{i=1}^k V_i where for each i = 1, ..., k one has m_{T|V_i} = p_i^{r_i}, and moreover each V_i = ker(p_i^{r_i}(T)).

Proof. We'll break the proof down into a number of steps.

Step 1: (Construction of the V_i.) This is the key part of the proof. Let m_T = prod_{i=1}^k p_i^{r_i} be the factorization of m_T into distinct prime powers, and for each i = 1, ..., k define:

    q_i = prod_{j ≠ i} p_j^{r_j} = m_T / p_i^{r_i}.

Then by construction the q_i are relatively prime, so by Theorem 1.5 there exist g_i ∈ F[x] with sum_i g_i q_i = 1. Using these polynomials define E_i ∈ L(V) by the formulas E_i = g_i(T) q_i(T). The first thing that is immediate is that [E_i, T] = E_i T - T E_i = 0, and also sum_i E_i = I. Now define:

    V_i = ran(E_i) = { y ∈ V | y = E_i x for some x ∈ V }.

Note that span(V_1, ..., V_k) = V because x = sum_i E_i x for each x ∈ V. Therefore to show V = ⊕_{i=1}^k V_i we only need V_i ∩ V_j = {0} for all i ≠ j. This will follow if E_i E_j = 0 for all i ≠ j. But m_T | g_i q_i g_j q_j whenever i ≠ j, because q_i q_j for i ≠ j contains every prime factor p_l^{r_l} for all l = 1, ..., k. Thus E_i E_j = g_i(T) q_i(T) g_j(T) q_j(T) = 0.

To complete this part of the proof we just need to show the V_i are T-invariant. By E_i E_j = 0 we have x = E_i x for all x ∈ V_i. Thus if x ∈ V_i we have Tx = T E_i x = E_i Tx ∈ ran(E_i) = V_i.

Step 2: (Show that V_i = ker(p_i^{r_i}(T)).) First let x ∈ V_i. Then x = g_i(T) q_i(T) x, so multiplying both sides by p_i^{r_i}(T) and using p_i^{r_i}(T) q_i(T) = m_T(T) = 0 we have p_i^{r_i}(T) x = 0. Thus V_i ⊆ ker(p_i^{r_i}(T)). On the other hand suppose p_i^{r_i}(T) x = 0. Since x = sum_j x_j where each x_j ∈ V_j, and since T(V_j) ⊆ V_j, we must also have p_i^{r_i}(T) x_j ∈ V_j. But the V_j are linearly independent, so this implies p_i^{r_i}(T) x_j = 0 for each j. Now for j ≠ i we have p_i^{r_i} | g_j q_j, so there exists g ∈ F[x] with g p_i^{r_i} = g_j q_j. Multiplying the previous equation by g(T) gives E_j x_j = 0 for all j ≠ i. In other words x_j = 0 for all j ≠ i. Thus x = x_i, so we have shown ker(p_i^{r_i}(T)) ⊆ V_i.

Step 3: (Show that the minimal polynomial of T|V_i is p_i^{r_i}.) Since V_i = ker(p_i^{r_i}(T)) we have p_i^{r_i}(T|V_i) = 0. Thus m_{T|V_i} | p_i^{r_i}, so we must have m_{T|V_i} = p_i^{r'_i} for some r'_i ≤ r_i. Now define m = p_i^{r'_i} q_i = p_i^{r'_i} prod_{j ≠ i} p_j^{r_j}. Since each p_j^{r_j}(T) V_j = {0} and p_i^{r'_i}(T) V_i = {0}, we have m(T) = 0 on all of V. Thus m_T | m, which implies r_i ≤ r'_i and we are done.

3. When the Characteristic Polynomial Factors

We now bring in the connection between the minimal polynomial m_T and the characteristic polynomial p_T. Recall that the latter is defined by the equation p_T(x) = det(xI - T), and λ ∈ F is an eigenvalue of T iff p_T(λ) = 0. First we give an indication that m_T is in fact closely related to p_T:

Lemma 3.1. Given T ∈ L(V) the polynomials m_T and p_T have exactly the same roots. Moreover, m_T factors over F iff p_T factors over F.

Proof. First of all let p ∈ I(T) be any polynomial such that p(T) = 0. Then if Tx = λx for some x ≠ 0, a direct computation shows p(T)x = p(λ)x. Thus p(λ) = 0. Therefore the roots of p_T are roots of m_T as well.
The other direction is a bit more work. Let λ ∈ F be a root of m_T. Then by prime factorization m_T = (x - λ)^r q(x) where (x - λ) and q are relatively prime. Let V = V_λ ⊕ V_q be the corresponding invariant decomposition of V according to Theorem 2.2. Then the minimal polynomial of T|V_λ is exactly (x - λ)^r. In other words (T|V_λ - λI)^r = 0 but (T|V_λ - λI)^{r-1} ≠ 0. Let y = (T|V_λ - λI)^{r-1} x for some x ∈ V_λ with y ≠ 0. Then (T|V_λ - λI)y = 0, so y ∈ V_λ ⊆ V is an eigenvector for T (on all of V) with eigenvalue λ, and hence λ is a root of p_T.

Finally, the part about factorization is trivial if F = C (in this case every polynomial factors). On the other hand if V is a real vector space, by extending T to a real operator on the complexification V_C, we just need to see that the minimal polynomial of the extension is the same as that of the original. Since the conjugate of p(T) is p̄(T) (conjugating coefficients) for any p ∈ C[x], we see that if p(T) = 0 then Re(p)(T) = Im(p)(T) = 0, where Re(p) and Im(p) are the real polynomials with p = Re(p) + i Im(p). This implies that the minimal polynomial of a real operator is real, and so the minimal polynomials of T on both V and V_C are the same.

Before stating our main result we need one more idea. First a definition:

Definition 3.2. An operator N ∈ L(V) is called nilpotent if N^k = 0 for some k ≥ 1. If N is nilpotent we call k_0 = min{ k | N^k = 0 } the order of N.

Now:

Lemma 3.3. Let N ∈ L(V) be nilpotent. Then the order of N is at most dim(V). Moreover, for any λ ∈ F with λ ≠ 0, the operator λI + N is invertible.

Proof. Let k_0 be the order of N, and choose some x ∈ V with y = N^{k_0 - 1} x ≠ 0. Consider the collection of vectors {x, Nx, N^2 x, ..., N^{k_0 - 1} x}, and suppose sum_{i=0}^{k_0 - 1} a_i N^i x = 0 for some a_i ∈ F. Applying N^{k_0 - 1} to both sides gives a_0 y = 0. Hence a_0 = 0. Applying N^{k_0 - 2} gives a_1 y = 0, so a_1 = 0. Continuing in this way, applying N^{k_0 - j} for j = 3, ..., k_0 (in order) gives a_{j-1} = 0. This shows that {x, Nx, N^2 x, ..., N^{k_0 - 1} x} is a linearly independent collection of k_0 vectors, and so we must have k_0 ≤ dim(V).
Finally, the part about the inverse follows from the direct calculation:

    (λI + N)^{-1} = (1/λ) sum_{i=0}^{k_0 - 1} (-(1/λ)N)^i.

We can now state the Main Theorem of these notes:

Theorem 3.4 (Abstract Jordan Form). Let V be a vector space over F and let T ∈ L(V). Suppose that the characteristic polynomial of T factors over F, that is p_T(x) = prod_{i=1}^k (x - λ_i)^{d_i} for some λ_i ∈ F. Then there is a T-invariant direct sum decomposition V = ⊕_{i=1}^k V_i where V_i = ker((T - λ_i I)^{d_i}). Moreover the V_i = ker((T - λ_i I)^{r_i}) are exactly the subspaces associated with the factorization of the minimal polynomial m_T(x) = prod_{i=1}^k (x - λ_i)^{r_i}, and we have the relation r_i ≤ d_i = dim(V_i). In particular m_T | p_T. As a consequence we can write T = S + N where S is diagonalizable with eigenvalues λ_i, and N is nilpotent with [S, N] = 0. Moreover there is only one such pair S, N which yields this decomposition. Therefore, we have that T itself is diagonalizable iff N = 0, which is equivalent to r_i = 1 for every prime factor of the minimal polynomial.

Proof. Assuming that p_T(x) = prod_{i=1}^k (x - λ_i)^{d_i} for some λ_i ∈ F and d_i ∈ N implies by Lemma 3.1 that m_T(x) = prod_{i=1}^k (x - λ_i)^{r_i} for the same λ_i but possibly different r_i. Now let V = ⊕_{i=1}^k V_i be the direct sum decomposition according to Theorem 2.2, so that by construction V_i = ker((T - λ_i I)^{r_i}). Since the V_i are T-invariant subspaces we must have p_T = prod_{i=1}^k p_{T|V_i}. On the other hand since m_{T|V_i} = (x - λ_i)^{r_i} and p_{T|V_i} have the same roots, then p_{T|V_i} = (x - λ_i)^{d_i} where d_i = dim(V_i), and d_i is the same as in the factorization of p_T. Also, because T|V_i - λ_i I is nilpotent of order r_i, by Lemma 3.3 we have r_i ≤ d_i.

Next we show that V_i = ker((T - λ_i I)^{d_i}). Since we already know that V_i = ker((T - λ_i I)^{r_i}) and r_i ≤ d_i, the containment V_i ⊆ ker((T - λ_i I)^{d_i}) is immediate. On the other hand if x = sum_j x_j, where x_j ∈ V_j, is such that x ∈ ker((T - λ_i I)^{d_i}), then we know from invariance and independence that individually x_j ∈ ker((T - λ_i I)^{d_i}). For j ≠ i we can write T|V_j - λ_i I = λI + N where λ = λ_j - λ_i ≠ 0 and N = T|V_j - λ_j I is nilpotent.
Thus by Lemma 3.3, T|V_j - λ_i I is invertible when j ≠ i, so ker((T|V_j - λ_i I)^{d_i}) = {0}. Thus x_j = 0 for all j ≠ i, which shows x ∈ V_i.

Finally, to get the T = S + N decomposition we just need to construct S and N in terms of blocks for each V_i. Let S|V_i = λ_i I, and let N|V_i = T|V_i - λ_i I. Then it is clear that S is semisimple and N is nilpotent with [S, N] = 0. Now let T = S' + N' be any other such decomposition. By the condition [S', T] = 0 we see that [S', E_i] = 0 for each of the projections constructed in Theorem 2.2, so S'(V_i) ⊆ V_i for each i. Therefore V = ⊕_{i=1}^k V_i is also an invariant direct sum decomposition for S', and hence for N' as well. Now if S'x = λx, where x = sum_i x_i and x_i ∈ V_i, we must have S'x_i = λ x_i by independence. Thus, if B = {e_1, ..., e_n} is a basis for V consisting of eigenvectors of S', then S_i = {E_i e_1, ..., E_i e_n} must be a collection of eigenvectors of S'|V_i which spans V_i. In other words S'|V_i is also diagonalizable for each i. Now let x ∈ V_i be any nonzero vector with S'x = λx, and let k_x ≥ 1 be the smallest power such that (N')^{k_x} x = 0. Then if we set y = (N')^{k_x - 1} x we have 0 ≠ y ∈ V_i and Ty = (S' + N')y = λy. Thus we have shown that any eigenvalue of S'|V_i must also be an eigenvalue of T|V_i, so S'|V_i = λ_i I. Now N'|V_i = T|V_i - λ_i I and we are done.

We'll end this section with a few additional definitions regarding the subspaces V_i constructed in the previous theorem.

Definition 3.5. Let T ∈ L(V) and assume that its characteristic polynomial factors as p_T(x) = (x - λ)^{d_λ} q(x) where (x - λ), q(x) are relatively prime. Then we call V_λ = ker((T - λI)^{d_λ}) the generalized eigenspace associated to λ. Note that d_λ = dim(V_λ). We call d_λ the algebraic multiplicity of the eigenvalue λ, and we call e_λ = dim(ker(T - λI)) the geometric multiplicity of the eigenvalue λ. Note that we always have 1 ≤ e_λ ≤ d_λ, and that e_λ is exactly the number of linearly independent eigenvectors of T in V_λ (which all must have eigenvalue λ).
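The distinction between algebraic and geometric multiplicity is easy to check numerically. Here is a small NumPy sketch; the 3 × 3 matrix A is a hypothetical example chosen for this illustration, not one from the notes.

```python
# Algebraic vs. geometric multiplicity (Definition 3.5) on a small example.
import numpy as np

# A hypothetical example: a 2x2 Jordan block for lambda = 2, plus the eigenvalue 3.
A = np.array([[2., 1., 0.],
              [0., 2., 0.],
              [0., 0., 3.]])

# Algebraic multiplicity d_2: multiplicity of 2 as a root of p_T(x) = det(xI - A).
eigvals = np.linalg.eigvals(A)
d_2 = int(np.sum(np.isclose(eigvals, 2.0)))

# Geometric multiplicity e_2: dim ker(A - 2I) = n - rank(A - 2I).
e_2 = 3 - np.linalg.matrix_rank(A - 2.0*np.eye(3))

# Here e_2 = 1 < d_2 = 2, so A is not diagonalizable (Theorem 3.4: N != 0).
```

Since A is upper triangular its eigenvalues sit on the diagonal, and the superdiagonal 1 forces the 2-eigenspace to be one-dimensional, illustrating e_λ < d_λ.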
In particular T is diagonalizable iff p_T(x) factors and e_λ = d_λ for all eigenvalues λ. Finally, note it is possible for e_λ to be much smaller than d_λ. For example if r_λ = d_λ, where r_λ is the power of (x - λ) in the minimal polynomial m_T, then we must have e_λ = 1.

4. The Structure of Nilpotent Operators

In light of Theorem 3.4 we see that to get the simplest form for a general T ∈ L(V) it suffices to study a general nilpotent operator N ∈ L(V). It turns out that the key notion in this regard is the following:

Definition 4.1. Let T ∈ L(V) be any linear operator, and let x ∈ V be a fixed vector. Then we define:

    Z(x, T) = span{x, Tx, T^2 x, ..., T^{dim(V) - 1} x}.

We call Z(x, T) the T-cyclic subspace of V generated by x. We call dim(Z(x, T)) = O(x, T) the T-order of x. Note that by the properties of the characteristic polynomial we have T^{dim(V)+j} x ∈ Z(x, T) for all j ≥ 0. Our main theorem now is the following:

Theorem 4.2. Let N ∈ L(V) be a nilpotent operator of order k_0. Then there exists a collection of vectors x_1, ..., x_k ∈ V with O(x_1, N) ≤ O(x_2, N) ≤ ... ≤ O(x_k, N) = k_0 and V = ⊕_{i=1}^k Z(x_i, N). This decomposition is unique in the sense that if V = ⊕_{i=1}^{k'} Z(y_i, N) is any other such decomposition we must have k = k' and O(x_i, N) = O(y_i, N) for all i = 1, ..., k. In particular there is a basis B consisting of vectors of the form N^i x_j such that:

(1)    [N]_B = diag( N_{l_1}, N_{l_2}, ..., N_{l_k} ),

where each l_i = O(x_i, N), and N_l ∈ M(l × l) is the elementary nilpotent matrix of order l:

(2)    N_l = [ 0 1         ]
             [   0 1       ]
             [     .   .   ]
             [         0 1 ]
             [           0 ]

that is, the l × l matrix with 1's directly above the diagonal and 0's elsewhere. Any such form of N is unique with the proviso that l_1 ≤ l_2 ≤ ... ≤ l_k.

Proof. The proof of the cyclic decomposition V = ⊕_{i=1}^k Z(x_i, N) we give here is the same as in Appendix III of the text. The proof is by induction on the dimension of V. If dim(V) = 1 then the only nilpotent operator is N = 0, and so V = Z(x, N) for any x ≠ 0. Now let N, V be arbitrary, and set W = N(V) ⊆ V. Since N is nilpotent we must have ker(N) ≠ {0}, so W ≠ V. Now N(W) ⊆ W, and dim(W) < dim(V), so by induction we can decompose W = ⊕_{i=1}^k Z(y_i, N|W) for some y_i ∈ W. Choose any x_i ∈ V with y_i = N x_i.

First suppose that there exist p_i ∈ F[x] with sum_i p_i(N) x_i = 0 and each deg(p_i) < O(x_i, N) = O(y_i, N) + 1. Applying N to this and using the independence of the Z(y_i, N|W) we see that for each i either p_i = 0 or p_i ≠ 0 (as a polynomial) and p_i(N) y_i = 0. In other words p_i = 0 or else p_i(x) = a_i x^{O(y_i, N)}. But then the original sum reads sum_i a_i N^{O(y_i, N)} x_i = sum_i a_i N^{O(y_i, N) - 1} y_i = 0, so by independence again a_i = 0 for each term in that sum. This shows that the Z(x_i, N) themselves are independent. It remains to show V = ⊕_{i=1}^k Z(x_i, N) ⊕ L, where L = ⊕_{j=k+1}^{k̃} Z(x_j, N) for some further collection of vectors x_j.
Since N(⊕_{i=1}^k Z(x_i, N)) = N(V), we must have V = span( ⊕_{i=1}^k Z(x_i, N), ker(N) ). Let ker(N) = L' ⊕ L where L' = ker(N) ∩ ⊕_{i=1}^k Z(x_i, N) and L is any other complementary subspace. Then N(L) = {0} ⊆ L, so V = ⊕_{i=1}^k Z(x_i, N) ⊕ L is an invariant direct sum decomposition. If {x_{k+1}, ..., x_{k̃}} is any basis for L we have L = ⊕_{j=k+1}^{k̃} span(x_j), and automatically span(x_j) = Z(x_j, N), so we are done.

Notice that this inductive procedure also establishes uniqueness in terms of the orders O(x_i, N), because if the orders O(y_i, N|W) are uniquely determined for i = 1, ..., k, then so are the orders O(x_i, N) = O(y_i, N|W) + 1. And the number of remaining vectors, which all have order 1, is also determined by the number dim(V) - sum_{i=1}^k O(x_i, N).

Finally, to see the canonical form (1) is valid, suppose that V = span(x, Nx, ..., N^{l-1} x) where l = dim(V) = O(x, N). Then in the basis B = { e_1 = N^{l-1} x, e_2 = N^{l-2} x, ..., e_l = x } we have N e_1 = 0 and N e_i = e_{i-1} for all i = 2, ..., l. Thus [N]_B = N_l where N_l is defined by (2) above.
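The elementary nilpotent block N_l of line (2) is simple to realize and test numerically. A short NumPy sketch (the size l = 4 is an arbitrary choice for illustration) verifying the relations N e_1 = 0 and N e_i = e_{i-1} from the proof, with e_i the standard basis columns (0-indexed in the code):

```python
# The elementary nilpotent matrix N_l of Theorem 4.2: 1's on the superdiagonal.
import numpy as np

def elementary_nilpotent(l):
    """l x l matrix with 1's directly above the diagonal, 0's elsewhere."""
    return np.eye(l, k=1)

N = elementary_nilpotent(4)

# The order of N is exactly l = 4: N^3 != 0 but N^4 = 0.
assert np.any(np.linalg.matrix_power(N, 3))
assert not np.any(np.linalg.matrix_power(N, 4))

# N shifts the cyclic basis: N e_i = e_{i-1} and N e_1 = 0
# (columns 0..3 of the identity play the role of e_1..e_4).
e = np.eye(4)
assert np.array_equal(N @ e[:, 1], e[:, 0])
assert not np.any(N @ e[:, 0])
```

Powers of N_l push the 1's further above the diagonal one step at a time, which is exactly why the order of the block equals its size l.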

Remark 4.3. Note that if V = span(x, Nx, ..., N^{l-1} x) where l = dim(V) = O(x, N), and instead we use the basis B = { e_1 = x, e_2 = Nx, ..., e_l = N^{l-1} x }, we will get the transposed matrix [N]_B = N_l^T, with the 1's directly below the diagonal. This is the basic form for nilpotent matrices used in the text. I have chosen to employ the upper triangular form here, which is a little more common by modern standards.

5. The Real and Complex Jordan Forms

Combining what we have done so far yields the so-called Jordan canonical form of a matrix:

Theorem 5.1 (Jordan form). Let V be a vector space over F and let T ∈ L(V) be such that its characteristic polynomial p_T(x) factors over F as p_T(x) = prod_{i=1}^k (x - λ_i)^{d_i}. In addition suppose the minimal polynomial of T factors as m_T(x) = prod_{i=1}^k (x - λ_i)^{r_i}. Then there exists a basis B for V such that:

    [T]_B = diag( J_{l_{11}}(λ_1), ..., J_{l_{1 n_1}}(λ_1), ..., J_{l_{k 1}}(λ_k), ..., J_{l_{k n_k}}(λ_k) ),

where l_{ij} ≤ l_{i j+1} for each i, j and l_{i n_i} = r_i, and sum_{j=1}^{n_i} l_{ij} = d_i, and each elementary block J_{l_{ij}}(λ_i) is an l_{ij} × l_{ij} Jordan matrix of the form:

    J_l(λ) = λI + N_l = [ λ 1         ]
                        [   λ 1       ]
                        [     .   .   ]
                        [         λ 1 ]
                        [           λ ]

Proof. This follows more or less directly by combining Theorems 3.4 and 4.2. First note that T takes a block form along the primary direct sum decomposition V = ⊕_{i=1}^k V_i, and on each V_i we have T|V_i = λ_i I + N_i for some nilpotent operator N_i ∈ L(V_i) of order r_i. By applying Theorem 4.2 to each N_i we are done.

The previous Theorem is not quite satisfactory when we want to compute solutions to ẋ = Tx on a real vector space V when the characteristic polynomial of T ∈ L(V) does not factor over R. However, the only obstruction here is complex conjugate pairs of roots in the factorization of p_T(x). Using the material from the previous notes on complexifications and the previous Theorem we have:

Theorem 5.2 (Real Jordan form). Let V be a real vector space and let T ∈ L(V) be such that its characteristic polynomial p_T(x) factors over C as p_T(x) = prod_{i=1}^{k_r} (x - λ_i)^{d_i} · prod_{j=1}^{k_c} (x - µ_j)^{e_j} (x - µ̄_j)^{e_j}, where λ_i ∈ R and µ_j = a_j + i b_j with b_j ≠ 0.
In addition suppose the minimal polynomial of T factors as m_T(x) = prod_{i=1}^{k_r} (x - λ_i)^{r_i} · prod_{j=1}^{k_c} (x - µ_j)^{t_j} (x - µ̄_j)^{t_j}. Then there exists a (real) basis B for V such that:

    [T]_B = diag( J_{l_{11}}(λ_1), ..., J_{l_{k_r n_{k_r}}}(λ_{k_r}), K_{s_{11}}(µ_1), ..., K_{s_{k_c m_{k_c}}}(µ_{k_c}) ),

where l_{ij} ≤ l_{i j+1} for each i, j and l_{i n_i} = r_i, and sum_{j=1}^{n_i} l_{ij} = d_i, and each real elementary Jordan block J_l(λ) is the same as before. Furthermore s_{ij} ≤ s_{i j+1} for each i, j and s_{i m_i} = t_i, while sum_{j=1}^{m_i} s_{ij} = e_i, and each matrix K_{s_{ij}}(µ_i), where µ_i = a_i + i b_i, is a 2s_{ij} × 2s_{ij} block matrix of the form:

(3)    K_s(µ) = [ C I         ]
                [   C I       ]
                [     .   .   ]
                [         C I ]
                [           C ]

built from the 2 × 2 blocks C = [ a -b ; b a ] with I the 2 × 2 identity matrix.

Proof. Using Theorem 3.4 we can reduce to the invariant factor associated to a single pair of conjugate complex eigenvalues. In other words we may assume p_T(x) = (x - µ)^e (x - µ̄)^e and m_T(x) = (x - µ)^t (x - µ̄)^t where µ = a + ib with b ≠ 0. Then using the complex Jordan normal form we know that V_C = V_µ ⊕ V_µ̄, where each factor is the invariant subspace associated to the eigenvalues µ and µ̄ respectively. Let σ be the complex conjugation on V_C. The key observation now is that σ(V_µ) = V_µ̄. To see this recall that V_µ = ker((T_C - µI)^e), and an easy calculation shows that σ(ker((T_C - µI)^e)) = ker((T_C - µ̄I)^e), thanks to σ T_C = T_C σ. Because of this we further have T_C = S + N where S = µ E_µ + µ̄ σ E_µ σ, where σ E_µ σ = E_µ̄ and the E's are the projections onto the generalized eigenspaces. This implies that both σS = Sσ and σN = Nσ, so both S and N in the semisimple/nilpotent decomposition of T_C are real operators. Now suppose V_µ = ⊕_{j=1}^k Z(z_j, N). Then using the mirror symmetry σ(V_µ) = V_µ̄ and the fact that N is real, we must have V_µ̄ = ⊕_{j=1}^k Z(σ(z_j), N). In particular we can pair cyclic spaces to get the real decomposition:

    V = ⊕_{j=1}^k ( Z(z_j, N) ⊕ Z(σ(z_j), N) )_R.

For each fixed value of j let s_j = O(z_j, N) denote the (complex) dimension of the cyclic factor Z(z_j, N), and choose the real basis of (Z(z_j, N) ⊕ Z(σ(z_j), N))_R to be B_j = {y_1, x_1, ..., y_{s_j}, x_{s_j}}, where x_l + i y_l = N^{s_j - l} z_j.
Then one can readily check that in this basis the matrix of T restricted to the factor (Z(z_j, N) ⊕ Z(σ(z_j), N))_R is exactly K_{s_j}(µ) from line (3).

Department of Mathematics, University of California, San Diego (UCSD), 9500 Gilman Drive, Dept 0112, La Jolla, CA USA
E-mail address: jsterben@math.ucsd.edu


Lecture 22: Jordan canonical form of upper-triangular matrices (1) Lecture 22: Jordan canonical form of upper-triangular matrices (1) Travis Schedler Tue, Dec 6, 2011 (version: Tue, Dec 6, 1:00 PM) Goals (2) Definition, existence, and uniqueness of Jordan canonical form

More information

Topics in linear algebra

Topics in linear algebra Chapter 6 Topics in linear algebra 6.1 Change of basis I want to remind you of one of the basic ideas in linear algebra: change of basis. Let F be a field, V and W be finite dimensional vector spaces over

More information

EXERCISES AND SOLUTIONS IN LINEAR ALGEBRA

EXERCISES AND SOLUTIONS IN LINEAR ALGEBRA EXERCISES AND SOLUTIONS IN LINEAR ALGEBRA Mahmut Kuzucuoğlu Middle East Technical University matmah@metu.edu.tr Ankara, TURKEY March 14, 015 ii TABLE OF CONTENTS CHAPTERS 0. PREFACE..................................................

More information

MATH 205 HOMEWORK #3 OFFICIAL SOLUTION. Problem 1: Find all eigenvalues and eigenvectors of the following linear transformations. (a) F = R, V = R 3,

MATH 205 HOMEWORK #3 OFFICIAL SOLUTION. Problem 1: Find all eigenvalues and eigenvectors of the following linear transformations. (a) F = R, V = R 3, MATH 205 HOMEWORK #3 OFFICIAL SOLUTION Problem 1: Find all eigenvalues and eigenvectors of the following linear transformations. a F = R, V = R 3, b F = R or C, V = F 2, T = T = 9 4 4 8 3 4 16 8 7 0 1

More information

Final A. Problem Points Score Total 100. Math115A Nadja Hempel 03/23/2017

Final A. Problem Points Score Total 100. Math115A Nadja Hempel 03/23/2017 Final A Math115A Nadja Hempel 03/23/2017 nadja@math.ucla.edu Name: UID: Problem Points Score 1 10 2 20 3 5 4 5 5 9 6 5 7 7 8 13 9 16 10 10 Total 100 1 2 Exercise 1. (10pt) Let T : V V be a linear transformation.

More information

The Jordan Normal Form and its Applications

The Jordan Normal Form and its Applications The and its Applications Jeremy IMPACT Brigham Young University A square matrix A is a linear operator on {R, C} n. A is diagonalizable if and only if it has n linearly independent eigenvectors. What happens

More information

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p. LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

More information

Notes on the matrix exponential

Notes on the matrix exponential Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not

More information

The converse is clear, since

The converse is clear, since 14. The minimal polynomial For an example of a matrix which cannot be diagonalised, consider the matrix ( ) 0 1 A =. 0 0 The characteristic polynomial is λ 2 = 0 so that the only eigenvalue is λ = 0. The

More information

MATH 115A: SAMPLE FINAL SOLUTIONS

MATH 115A: SAMPLE FINAL SOLUTIONS MATH A: SAMPLE FINAL SOLUTIONS JOE HUGHES. Let V be the set of all functions f : R R such that f( x) = f(x) for all x R. Show that V is a vector space over R under the usual addition and scalar multiplication

More information

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts (in

More information

The Jordan Canonical Form

The Jordan Canonical Form The Jordan Canonical Form The Jordan canonical form describes the structure of an arbitrary linear transformation on a finite-dimensional vector space over an algebraically closed field. Here we develop

More information

1.4 Solvable Lie algebras

1.4 Solvable Lie algebras 1.4. SOLVABLE LIE ALGEBRAS 17 1.4 Solvable Lie algebras 1.4.1 Derived series and solvable Lie algebras The derived series of a Lie algebra L is given by: L (0) = L, L (1) = [L, L],, L (2) = [L (1), L (1)

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

(f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S

(f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S 1 Vector spaces 1.1 Definition (Vector space) Let V be a set with a binary operation +, F a field, and (c, v) cv be a mapping from F V into V. Then V is called a vector space over F (or a linear space

More information

(VI.C) Rational Canonical Form

(VI.C) Rational Canonical Form (VI.C) Rational Canonical Form Let s agree to call a transformation T : F n F n semisimple if there is a basis B = { v,..., v n } such that T v = λ v, T v 2 = λ 2 v 2,..., T v n = λ n for some scalars

More information

Math 110 Linear Algebra Midterm 2 Review October 28, 2017

Math 110 Linear Algebra Midterm 2 Review October 28, 2017 Math 11 Linear Algebra Midterm Review October 8, 17 Material Material covered on the midterm includes: All lectures from Thursday, Sept. 1st to Tuesday, Oct. 4th Homeworks 9 to 17 Quizzes 5 to 9 Sections

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #2 Solutions

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #2 Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 3. M Test # Solutions. (8 pts) For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For this

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1)

Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1) Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1) Travis Schedler Thurs, Nov 17, 2011 (version: Thurs, Nov 17, 1:00 PM) Goals (2) Polar decomposition

More information

A connection between number theory and linear algebra

A connection between number theory and linear algebra A connection between number theory and linear algebra Mark Steinberger Contents 1. Some basics 1 2. Rational canonical form 2 3. Prime factorization in F[x] 4 4. Units and order 5 5. Finite fields 7 6.

More information

Group Theory. 1. Show that Φ maps a conjugacy class of G into a conjugacy class of G.

Group Theory. 1. Show that Φ maps a conjugacy class of G into a conjugacy class of G. Group Theory Jan 2012 #6 Prove that if G is a nonabelian group, then G/Z(G) is not cyclic. Aug 2011 #9 (Jan 2010 #5) Prove that any group of order p 2 is an abelian group. Jan 2012 #7 G is nonabelian nite

More information

THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS

THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS KEITH CONRAD. Introduction The easiest matrices to compute with are the diagonal ones. The sum and product of diagonal matrices can be computed componentwise

More information

Algebra Exam Syllabus

Algebra Exam Syllabus Algebra Exam Syllabus The Algebra comprehensive exam covers four broad areas of algebra: (1) Groups; (2) Rings; (3) Modules; and (4) Linear Algebra. These topics are all covered in the first semester graduate

More information

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,

More information

Algebra Qualifying Exam August 2001 Do all 5 problems. 1. Let G be afinite group of order 504 = 23 32 7. a. Show that G cannot be isomorphic to a subgroup of the alternating group Alt 7. (5 points) b.

More information

QUALIFYING EXAM IN ALGEBRA August 2011

QUALIFYING EXAM IN ALGEBRA August 2011 QUALIFYING EXAM IN ALGEBRA August 2011 1. There are 18 problems on the exam. Work and turn in 10 problems, in the following categories. I. Linear Algebra 1 problem II. Group Theory 3 problems III. Ring

More information

Abstract Vector Spaces and Concrete Examples

Abstract Vector Spaces and Concrete Examples LECTURE 18 Abstract Vector Spaces and Concrete Examples Our discussion of linear algebra so far has been devoted to discussing the relations between systems of linear equations, matrices, and vectors.

More information

(VI.D) Generalized Eigenspaces

(VI.D) Generalized Eigenspaces (VI.D) Generalized Eigenspaces Let T : C n C n be a f ixed linear transformation. For this section and the next, all vector spaces are assumed to be over C ; in particular, we will often write V for C

More information

Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1)

Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1) Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1) Travis Schedler Thurs, Nov 17, 2011 (version: Thurs, Nov 17, 1:00 PM) Goals (2) Polar decomposition

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

1 Linear transformations; the basics

1 Linear transformations; the basics Linear Algebra Fall 2013 Linear Transformations 1 Linear transformations; the basics Definition 1 Let V, W be vector spaces over the same field F. A linear transformation (also known as linear map, or

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

JORDAN NORMAL FORM NOTES

JORDAN NORMAL FORM NOTES 18.700 JORDAN NORMAL FORM NOTES These are some supplementary notes on how to find the Jordan normal form of a small matrix. First we recall some of the facts from lecture, next we give the general algorithm

More information

Lecture 7: Polynomial rings

Lecture 7: Polynomial rings Lecture 7: Polynomial rings Rajat Mittal IIT Kanpur You have seen polynomials many a times till now. The purpose of this lecture is to give a formal treatment to constructing polynomials and the rules

More information

The Cayley-Hamilton Theorem and the Jordan Decomposition

The Cayley-Hamilton Theorem and the Jordan Decomposition LECTURE 19 The Cayley-Hamilton Theorem and the Jordan Decomposition Let me begin by summarizing the main results of the last lecture Suppose T is a endomorphism of a vector space V Then T has a minimal

More information

Math 113 Homework 5 Solutions (Starred problems) Solutions by Guanyang Wang, with edits by Tom Church.

Math 113 Homework 5 Solutions (Starred problems) Solutions by Guanyang Wang, with edits by Tom Church. Math 113 Homework 5 Solutions (Starred problems) Solutions by Guanyang Wang, with edits by Tom Church. Exercise 5.C.1 Suppose T L(V ) is diagonalizable. Prove that V = null T range T. Proof. Let v 1,...,

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Eigenvectors. Prop-Defn

Eigenvectors. Prop-Defn Eigenvectors Aim lecture: The simplest T -invariant subspaces are 1-dim & these give rise to the theory of eigenvectors. To compute these we introduce the similarity invariant, the characteristic polynomial.

More information

August 2015 Qualifying Examination Solutions

August 2015 Qualifying Examination Solutions August 2015 Qualifying Examination Solutions If you have any difficulty with the wording of the following problems please contact the supervisor immediately. All persons responsible for these problems,

More information

Algebra qual study guide James C. Hateley

Algebra qual study guide James C. Hateley Algebra qual study guide James C Hateley Linear Algebra Exercise It is known that real symmetric matrices are always diagonalizable You may assume this fact (a) What special properties do the eigenspaces

More information

Further linear algebra. Chapter II. Polynomials.

Further linear algebra. Chapter II. Polynomials. Further linear algebra. Chapter II. Polynomials. Andrei Yafaev 1 Definitions. In this chapter we consider a field k. Recall that examples of felds include Q, R, C, F p where p is prime. A polynomial is

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Linear Algebra Final Exam Solutions, December 13, 2008

Linear Algebra Final Exam Solutions, December 13, 2008 Linear Algebra Final Exam Solutions, December 13, 2008 Write clearly, with complete sentences, explaining your work. You will be graded on clarity, style, and brevity. If you add false statements to a

More information

SPECTRAL THEORY EVAN JENKINS

SPECTRAL THEORY EVAN JENKINS SPECTRAL THEORY EVAN JENKINS Abstract. These are notes from two lectures given in MATH 27200, Basic Functional Analysis, at the University of Chicago in March 2010. The proof of the spectral theorem for

More information

0.1 Rational Canonical Forms

0.1 Rational Canonical Forms We have already seen that it is useful and simpler to study linear systems using matrices. But matrices are themselves cumbersome, as they are stuffed with many entries, and it turns out that it s best

More information

LA-3 Lecture Notes. Karl-Heinz Fieseler. Uppsala 2014

LA-3 Lecture Notes. Karl-Heinz Fieseler. Uppsala 2014 LA-3 Lecture Notes Karl-Heinz Fieseler Uppsala 2014 1 Contents 1 Dual space 2 2 Direct sums and quotient spaces 4 3 Bilinear forms 7 4 Jordan normal form 8 5 Minimal and characteristic polynomial 17 6

More information

M.6. Rational canonical form

M.6. Rational canonical form book 2005/3/26 16:06 page 383 #397 M.6. RATIONAL CANONICAL FORM 383 M.6. Rational canonical form In this section we apply the theory of finitely generated modules of a principal ideal domain to study the

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

Linear and Bilinear Algebra (2WF04) Jan Draisma

Linear and Bilinear Algebra (2WF04) Jan Draisma Linear and Bilinear Algebra (2WF04) Jan Draisma CHAPTER 3 The minimal polynomial and nilpotent maps 3.1. Minimal polynomial Throughout this chapter, V is a finite-dimensional vector space of dimension

More information

Math 25a Practice Final #1 Solutions

Math 25a Practice Final #1 Solutions Math 25a Practice Final #1 Solutions Problem 1. Suppose U and W are subspaces of V such that V = U W. Suppose also that u 1,..., u m is a basis of U and w 1,..., w n is a basis of W. Prove that is a basis

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Announcements Wednesday, November 01

Announcements Wednesday, November 01 Announcements Wednesday, November 01 WeBWorK 3.1, 3.2 are due today at 11:59pm. The quiz on Friday covers 3.1, 3.2. My office is Skiles 244. Rabinoffice hours are Monday, 1 3pm and Tuesday, 9 11am. Section

More information

Theorem 5.3. Let E/F, E = F (u), be a simple field extension. Then u is algebraic if and only if E/F is finite. In this case, [E : F ] = deg f u.

Theorem 5.3. Let E/F, E = F (u), be a simple field extension. Then u is algebraic if and only if E/F is finite. In this case, [E : F ] = deg f u. 5. Fields 5.1. Field extensions. Let F E be a subfield of the field E. We also describe this situation by saying that E is an extension field of F, and we write E/F to express this fact. If E/F is a field

More information

8 Appendix: Polynomial Rings

8 Appendix: Polynomial Rings 8 Appendix: Polynomial Rings Throughout we suppose, unless otherwise specified, that R is a commutative ring. 8.1 (Largely) a reminder about polynomials A polynomial in the indeterminate X with coefficients

More information

Homework 6 Solutions. Solution. Note {e t, te t, t 2 e t, e 2t } is linearly independent. If β = {e t, te t, t 2 e t, e 2t }, then

Homework 6 Solutions. Solution. Note {e t, te t, t 2 e t, e 2t } is linearly independent. If β = {e t, te t, t 2 e t, e 2t }, then Homework 6 Solutions 1 Let V be the real vector space spanned by the functions e t, te t, t 2 e t, e 2t Find a Jordan canonical basis and a Jordan canonical form of T on V dened by T (f) = f Solution Note

More information

SUPPLEMENT TO CHAPTERS VII/VIII

SUPPLEMENT TO CHAPTERS VII/VIII SUPPLEMENT TO CHAPTERS VII/VIII The characteristic polynomial of an operator Let A M n,n (F ) be an n n-matrix Then the characteristic polynomial of A is defined by: C A (x) = det(xi A) where I denotes

More information

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 45 Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems Peter J. Hammond latest revision 2017 September

More information

LINEAR ALGEBRA MICHAEL PENKAVA

LINEAR ALGEBRA MICHAEL PENKAVA LINEAR ALGEBRA MICHAEL PENKAVA 1. Linear Maps Definition 1.1. If V and W are vector spaces over the same field K, then a map λ : V W is called a linear map if it satisfies the two conditions below: (1)

More information

CHAPTER 5 REVIEW. c 1. c 2 can be considered as the coordinates of v

CHAPTER 5 REVIEW. c 1. c 2 can be considered as the coordinates of v CHAPTER 5 REVIEW Throughout this note, we assume that V and W are two vector spaces with dimv = n and dimw = m. T : V W is a linear transformation.. A map T : V W is a linear transformation if and only

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

Math 113 Final Exam: Solutions

Math 113 Final Exam: Solutions Math 113 Final Exam: Solutions Thursday, June 11, 2013, 3.30-6.30pm. 1. (25 points total) Let P 2 (R) denote the real vector space of polynomials of degree 2. Consider the following inner product on P

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

Moreover this binary operation satisfies the following properties

Moreover this binary operation satisfies the following properties Contents 1 Algebraic structures 1 1.1 Group........................................... 1 1.1.1 Definitions and examples............................. 1 1.1.2 Subgroup.....................................

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

Benjamin McKay. Abstract Linear Algebra

Benjamin McKay. Abstract Linear Algebra Benjamin McKay Abstract Linear Algebra October 19, 2016 This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Contents I Basic Definitions 1 1 Vector Spaces 3 2 Fields

More information

Linear Algebra 2 More on determinants and Evalues Exercises and Thanksgiving Activities

Linear Algebra 2 More on determinants and Evalues Exercises and Thanksgiving Activities Linear Algebra 2 More on determinants and Evalues Exercises and Thanksgiving Activities 2. Determinant of a linear transformation, change of basis. In the solution set of Homework 1, New Series, I included

More information

Homework Set 5 Solutions

Homework Set 5 Solutions MATH 672-010 Vector Spaces Prof. D. A. Edwards Due: Oct. 7, 2014 Homework Set 5 Solutions 1. Let S, T L(V, V finite-dimensional. (a (5 points Prove that ST and T S have the same eigenvalues. Solution.

More information

MTH 464: Computational Linear Algebra

MTH 464: Computational Linear Algebra MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

A proof of the Jordan normal form theorem

A proof of the Jordan normal form theorem A proof of the Jordan normal form theorem Jordan normal form theorem states that any matrix is similar to a blockdiagonal matrix with Jordan blocks on the diagonal. To prove it, we first reformulate it

More information

Infinite-Dimensional Triangularization

Infinite-Dimensional Triangularization Infinite-Dimensional Triangularization Zachary Mesyan March 11, 2018 Abstract The goal of this paper is to generalize the theory of triangularizing matrices to linear transformations of an arbitrary vector

More information

Lecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1)

Lecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1) Lecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1) Travis Schedler Tue, Oct 18, 2011 (version: Tue, Oct 18, 6:00 PM) Goals (2) Solving systems of equations

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

Fundamental theorem of modules over a PID and applications

Fundamental theorem of modules over a PID and applications Fundamental theorem of modules over a PID and applications Travis Schedler, WOMP 2007 September 11, 2007 01 The fundamental theorem of modules over PIDs A PID (Principal Ideal Domain) is an integral domain

More information