CANONICAL FORMS FOR LINEAR TRANSFORMATIONS AND MATRICES. D. Katz

The purpose of this note is to present the rational canonical form and Jordan canonical form theorems for my M790 class. Throughout, we fix an $n$-dimensional vector space $V$ over the field $F$. Write $L(V,V)$ for the $n^2$-dimensional vector space of linear operators from $V$ to itself and recall that $L(V,V)$ is isomorphic as a vector space (and as a ring) to $M_n(F)$, the set of $n \times n$ matrices with entries in $F$. The impetus for our work is provided by the following question. Given $T \in L(V,V)$, can we represent $T$ by a matrix having a canonical (i.e., "natural") form? It turns out that the form of the matrix we obtain depends almost entirely on the factorization of the minimal polynomial of $T$ over $F$. For example, writing $q(X)$ for the minimal polynomial of $T$, if $q(X)$ has all of its roots in $F$, it turns out that $T$ is triangularizable. In other words, there exists a basis $\mathcal{V}$ such that the matrix for $T$ with respect to $\mathcal{V}$ is lower triangular. In fact, we will be able to do much better, in that we will be able to put $T$ into its so-called Jordan canonical form whenever $q(X)$ splits over $F$. The Jordan form will still be lower triangular, but most of its off-diagonal entries will be zero. When $q(X)$ does not necessarily have its roots in $F$, the situation is more complicated. Nevertheless, the so-called rational canonical form will still be a desirable form in that, while not triangular, it will be zero-heavy.

Before embarking on our journey, we recall a few definitions from class. Let $T$ be a linear operator on $V$. A subspace $W \subseteq V$ is $T$-invariant if $T(w) \in W$ for all $w \in W$. $T$-invariant subspaces play an important role in what follows. They provide the proper vehicle for proofs by induction and are an integral part of the decomposition of $V$ leading to the canonical forms. Here's an easy way to get a $T$-invariant subspace. Fix $v \in V$ and let $W$ denote the subspace spanned by the vectors $\{v, T(v), T^2(v), \dots\}$. Then $W$ is $T$-invariant, since $T$ takes a typical vector $\alpha_0 v + \cdots + \alpha_r T^r(v)$ to $\alpha_0 T(v) + \cdots + \alpha_r T^{r+1}(v)$, which is again in $W$.
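This construction is easy to carry out by machine. The following sketch (mine, not part of the original notes) takes an operator $T$, given as a matrix over $\mathbb{Q}$, together with a vector $v$, and keeps appending iterates $T^j(v)$ until a rank computation shows they have become dependent; the independent iterates form a basis of the cyclic subspace. The $3 \times 3$ matrix at the end is a hypothetical example.

# Sketch (mine, not from the notes): a basis of the cyclic subspace W
# generated by v, i.e. the span of {v, T(v), T^2(v), ...}, for an operator T
# given as a matrix over Q.  Uses sympy for exact rational arithmetic.
from sympy import Matrix

def cyclic_basis(T, v):
    """Return [v, T v, ..., T^(d-1) v], a basis of the cyclic subspace
    generated by v (d = its dimension)."""
    basis = []
    w = v
    while True:
        candidate = basis + [w]
        # Stack the candidates as columns and test independence by rank.
        if Matrix.hstack(*candidate).rank() < len(candidate):
            return basis
        basis.append(w)
        w = T * w

T = Matrix([[0, -1, 0],
            [1,  0, 0],
            [0,  0, 2]])
v = Matrix([1, 0, 1])
B = cyclic_basis(T, v)
print(len(B))                    # dimension of the cyclic subspace (here 3)
print([list(b) for b in B])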

Note that $W = \{f(T)(v) : f(X) \in F[X]\}$. We call $W$ the cyclic subspace generated by $v$. Note that if the minimal polynomial $q(X)$ of $T$ has degree $r$, then $W$ is spanned by the vectors $\{v, T(v), \dots, T^{r-1}(v)\}$. Indeed, for any $j \ge r$, write $X^j = f(X)q(X) + s(X)$, with the degree of $s(X)$ less than $r$. Substituting $T$ gives $T^j = s(T)$, so $T^j(v) = s(T)(v)$ lies in the span of $\{v, T(v), \dots, T^{r-1}(v)\}$. Moreover, given $f(X) \in F[X]$, if $f(T)(v) = 0$, then $f(T)(w) = 0$ for all $w \in W$. We will rely heavily upon these observations.

Now suppose that $W$ is an arbitrary $T$-invariant subspace. Then $T$ induces a linear transformation $T|_W : W \to W$. The basic strategy for decomposing $T$ is roughly the following. Proceed by induction on the dimension of $V$. Find $T$-invariant subspaces $W, U \subseteq V$ satisfying $V = W \oplus U$. By induction there is a basis for $W$ bringing $T|_W$ to the desired form and a basis for $U$ bringing $T|_U$ to the desired form. Putting these bases together brings $T$ to the desired form. We begin by recalling a definition.

Definition A. Let $W_1, \dots, W_k$ be subspaces of $V$. For each $1 \le i \le k$, let $\{w_{i1}, \dots, w_{in_i}\}$ be a basis for $W_i$. Assume $n_1 + \cdots + n_k = n$. If $\mathcal{V} := \{w_{11}, \dots, w_{1n_1}, \dots, w_{k1}, \dots, w_{kn_k}\}$ is a basis for $V$, then $V$ is said to be the (internal) direct sum of $W_1, \dots, W_k$. Note that this condition is independent of the bases chosen from the $W_i$. When $V$ is the direct sum of $W_1, \dots, W_k$, we denote this by writing $V = W_1 \oplus \cdots \oplus W_k$. For example, if $V = \mathbb{R}^3$, $W_1$ is any line through the origin and $W_2$ is any plane through the origin not containing $W_1$, then $V = W_1 \oplus W_2$.

The following proposition elucidates the nature of direct sums. We'll simply sketch its proof.

Proposition B. The following are equivalent for subspaces $W_1, \dots, W_k \subseteq V$:
(i) $V = W_1 \oplus \cdots \oplus W_k$.
(ii) $V = W_1 + \cdots + W_k$ and for all $1 \le i \le k$, $W_i \cap (W_1 + \cdots + W_{i-1} + W_{i+1} + \cdots + W_k) = 0$.
(iii) Given $v \in V$, there exist unique $w_i \in W_i$ with $v = w_1 + \cdots + w_k$.

Sketch of Proof. Keep the notation established in Definition A. If (i) holds, then expressing any $v \in V$ in terms of the basis $\mathcal{V}$ shows that $V = W_1 + \cdots + W_k$. Now suppose that $w_i = w_1 + \cdots + w_{i-1} + w_{i+1} + \cdots + w_k$ ($w_j \in W_j$, $j \ne i$) represents an element in $W_i \cap (W_1 + \cdots + W_{i-1} + W_{i+1} + \cdots + W_k)$. Bringing $w_i$ to the right hand side yields a sum of elements from $W_1, \dots, W_k$ being zero. Expressing each term in the sum in terms of the given basis for the corresponding $W_j$ shows that each term must be zero (by linear independence of the collection $\mathcal{V}$); hence $w_i = 0$.

Thus, $W_i \cap (W_1 + \cdots + W_{i-1} + W_{i+1} + \cdots + W_k) = 0$, so (i) implies (ii). For (ii) implies (iii), use the second condition in (ii) to get the uniqueness of the required sum in (iii). (i) follows from (iii) in similar fashion.

The next proposition shows that if $V$ is the direct sum of $T$-invariant subspaces, then there exists a basis for $V$ for which the corresponding matrix of $T$ takes a block diagonal form. This is the first significant step along our lengthy path.

Proposition C. Let $W_1, \dots, W_k$ be $T$-invariant subspaces such that $V = W_1 \oplus \cdots \oplus W_k$. For each $1 \le i \le k$, let $\mathcal{W}_i := \{w_{i1}, \dots, w_{in_i}\}$ be a basis for $W_i$ and set $\mathcal{V} := \mathcal{W}_1 \cup \cdots \cup \mathcal{W}_k$. Let $A$ denote the matrix of $T$ with respect to $\mathcal{V}$ and $A_i$ denote the matrix of $T|_{W_i}$ with respect to $\mathcal{W}_i$. Furthermore, write $q(X)$ for the minimal polynomial of $T$. Finally, for $1 \le i \le k$, let $q_i(X)$ be the minimal polynomial of $T|_{W_i}$. Then:
(i) The matrix for $T$ with respect to $\mathcal{V}$ has the (block diagonal) form
$$A = \begin{pmatrix} A_1 & & & 0\\ & A_2 & & \\ & & \ddots & \\ 0 & & & A_k \end{pmatrix}.$$
(ii) $q(X) = \operatorname{LCM}(q_1(X), \dots, q_k(X))$.

Proof. Part (i) is basically clear. When we apply $T$ to the basis vectors in $\mathcal{W}_i$, the resulting vectors belong to $W_i$ and are expressible in terms of the vectors in $\mathcal{W}_i$. It follows that the column of $A$ corresponding to $T(w_{ij})$ is zero in all entries save possibly the entries belonging to rows $n_1 + \cdots + n_{i-1} + 1$ through $n_1 + \cdots + n_i$ (for all $1 \le j \le n_i$). Thus $A$ takes the required form. For part (ii), set $h(X) := \operatorname{LCM}(q_1(X), \dots, q_k(X))$. Let $1 \le i \le k$, take $w_i \in W_i$ and write $h(X) = a_i(X)q_i(X)$. Then $h(T)(w_i) = a_i(T)(q_i(T)(w_i)) = 0$. Since $V = W_1 + \cdots + W_k$, it follows that $h(T)(v) = 0$ for all $v \in V$. Thus $h(T) = 0$, so $q(X)$ divides $h(X)$. On the other hand, $q(T) = 0$, so $q(T|_{W_i}) = 0$ for each $i$. Thus $q_i(X)$ divides $q(X)$ for all $i$, so $h(X)$ divides $q(X)$. Since $h(X)$ and $q(X)$ are monic polynomials, $h(X) = q(X)$.
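The equality $q(X) = \operatorname{LCM}(q_1(X), \dots, q_k(X))$ in part (ii) is easy to check by machine. Below is a small sketch (mine, not from the notes) that computes minimal polynomials by brute force, by looking for the first linear dependence among $I, A, A^2, \dots$, and compares the minimal polynomial of a block diagonal matrix with the least common multiple of the minimal polynomials of its blocks. The two $2 \times 2$ blocks are hypothetical examples.

# Sketch (mine, not from the notes): Proposition C(ii) on a small example.
# minimal_poly finds the least k with I, A, ..., A^k linearly dependent and
# returns the monic dependence relation as a polynomial in x.
from sympy import Matrix, symbols, eye, diag, lcm, factor

x = symbols('x')

def minimal_poly(A):
    n = A.shape[0]
    powers = [eye(n)]
    while True:
        # Flatten each power of A into a column and look for a dependence
        # among I, A, ..., A^k.
        M = Matrix.hstack(*[P.reshape(n * n, 1) for P in powers])
        ns = M.nullspace()
        if ns:
            coeffs = ns[0] / ns[0][-1]     # normalize: top coefficient 1
            return sum(coeffs[j] * x**j for j in range(len(powers)))
        powers.append(powers[-1] * A)

A1 = Matrix([[2, 0], [1, 2]])              # minimal polynomial (x - 2)^2
A2 = Matrix([[0, -1], [1, 0]])             # minimal polynomial x^2 + 1
A  = diag(A1, A2)                          # block diagonal, as in Proposition C
print(factor(minimal_poly(A1)), factor(minimal_poly(A2)))
print(factor(minimal_poly(A)))             # (x - 2)^2 * (x^2 + 1)
print(factor(lcm(minimal_poly(A1), minimal_poly(A2))))   # the same LCM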

Now that we know that a matrix decomposition follows from a direct sum decomposition, we need to know when we can achieve such a decomposition. The next proposition shows that we can decompose $V$ according to how we factor $q(X)$, the minimal polynomial of $T$.

Proposition D. Let $q(X)$ denote the minimal polynomial of $T$ and suppose we factor $q(X) = q_1(X)^{e_1} \cdots q_k(X)^{e_k}$, with each $q_i(X)$ irreducible over $F$. Then there exist subspaces $W_1, \dots, W_k \subseteq V$ satisfying the following conditions.
(i) $W_1, \dots, W_k$ are $T$-invariant.
(ii) $V = W_1 \oplus \cdots \oplus W_k$.
(iii) $q_i(X)^{e_i}$ is the minimal polynomial of $T|_{W_i}$, for $1 \le i \le k$.

Proof. For each $1 \le i \le k$, set $W_i := \ker(q_i(T)^{e_i})$. Then each $W_i$ is $T$-invariant. Indeed, $w_i \in W_i$ implies $q_i(T)^{e_i}(w_i) = 0$, so $0 = T(q_i(T)^{e_i}(w_i)) = q_i(T)^{e_i}(T(w_i))$, so $T(w_i) \in W_i$. This gives (i). For part (ii), set $f_i(X) := \prod_{j \ne i} q_j(X)^{e_j}$. Then $f_1(X), \dots, f_k(X)$ have no common divisor, so there exist $a_1(X), \dots, a_k(X) \in F[X]$ such that $a_1(X)f_1(X) + \cdots + a_k(X)f_k(X) = 1$. Thus, for all $v \in V$, $v = a_1(T)f_1(T)(v) + \cdots + a_k(T)f_k(T)(v)$. However, for each $1 \le i \le k$, $q_i(T)^{e_i}(a_i(T)f_i(T)(v)) = a_i(T)(q(T)(v)) = 0$. Thus, each $a_i(T)f_i(T)(v) \in W_i$ and $V = W_1 + \cdots + W_k$. To show that the sum is direct, fix an $i$ between 1 and $k$ and take $c_i(X), d_i(X) \in F[X]$ satisfying $c_i(X)q_i(X)^{e_i} + d_i(X)f_i(X) = 1$. Suppose that $w_i = w_1 + \cdots + w_{i-1} + w_{i+1} + \cdots + w_k$, with $w_j \in W_j$ for all $j$. Then
$$w_i = (1 - c_i(T)q_i(T)^{e_i})(w_i) = d_i(T)f_i(T)(w_1) + \cdots + d_i(T)f_i(T)(w_{i-1}) + d_i(T)f_i(T)(w_{i+1}) + \cdots + d_i(T)f_i(T)(w_k) = 0,$$
since $f_i(T)(w_j) = 0$ for all $j \ne i$. Thus, $W_i \cap (W_1 + \cdots + W_{i-1} + W_{i+1} + \cdots + W_k) = 0$, and the sum is direct. This gives part (ii). To prove part (iii), let $g_i(X)$ denote the minimal polynomial of $T|_{W_i}$, for $1 \le i \le k$. Then $g_i(X)$ divides $q_i(X)^{e_i}$ for all $i$, since $q_i(T|_{W_i})^{e_i} = 0$ by definition of $W_i$. Thus $g_i(X) = q_i(X)^{e_i'}$ for some $e_i' \le e_i$, since $q_i(X)$ is irreducible.
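Proposition D is also the computational heart of the primary decomposition: each $W_i$ is literally the kernel of $q_i(T)^{e_i}$, which a computer finds by Gaussian elimination. The following sketch (mine, not from the notes) does this with sympy's nullspace for a hypothetical operator whose minimal polynomial is $(X-2)^2(X^2+1)$, and checks that the two kernels together give a direct sum decomposition of the whole space.

# Sketch (mine, not from the notes): primary decomposition as in the proof of
# Proposition D, W_i = ker(q_i(T)^{e_i}), for a concrete operator over Q.
from sympy import Matrix, eye, diag

# T chosen (hypothetically) with minimal polynomial (X - 2)^2 * (X^2 + 1).
T = diag(Matrix([[2, 0], [1, 2]]), Matrix([[0, -1], [1, 0]]), Matrix([[2]]))
n = T.shape[0]
I = eye(n)

W1 = ((T - 2 * I) ** 2).nullspace()      # basis of ker(q_1(T)^2), q_1 = X - 2
W2 = (T ** 2 + I).nullspace()            # basis of ker(q_2(T)),   q_2 = X^2 + 1

print(len(W1), len(W2))                  # 3 + 2 = 5 = dim V
# The union of the two bases is a basis of V, i.e. V = W_1 (+) W_2:
print(Matrix.hstack(*(W1 + W2)).rank())  # 5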

On the other hand, by Proposition C,
$$q_1(X)^{e_1} \cdots q_k(X)^{e_k} = q(X) = \operatorname{LCM}(g_1(X), \dots, g_k(X)) = q_1(X)^{e_1'} \cdots q_k(X)^{e_k'}.$$
It follows at once that $g_i(X) = q_i(X)^{e_i}$ for all $i$, which is what we want.

Now, let's summarize what we've done by putting together Propositions C and D. Write $q(X)$ for the minimal polynomial of $T$ and factor $q(X) = q_1(X)^{e_1} \cdots q_k(X)^{e_k}$, with each $q_i(X)$ irreducible over $F$. By Proposition D, we can decompose $V$ as a direct sum of the $T$-invariant subspaces $W_i := \ker(q_i(T)^{e_i})$. We will refer to the $W_i$ as the primary components of $V$. By Proposition C, if we take a basis $\mathcal{V}$ for $V$ comprised of bases from $W_1, \dots, W_k$, the matrix of $T$ with respect to $\mathcal{V}$ is block diagonal, where the $i$th block diagonal entry is the matrix of $T|_{W_i}$ with respect to the selected basis from $W_i$. We will refer to these blocks as primary blocks associated to $T$.

As things now stand, the primary blocks have no particular form. The next and most difficult phase of our journey requires an analysis at the primary block level. In other words, we assume that $q(X)$ is a power of an irreducible polynomial and push on from there. Before doing so, we'll look at a special case.

Remark E. Let $W$ be a vector space and $T$ a linear operator on $W$. Suppose there is an element $w \in W$ such that $\mathcal{W} := \{w, T(w), \dots, T^{r-1}(w)\}$ is a basis for $W$. Then we can write
$$T^r(w) = -\lambda_0 w - \lambda_1 T(w) - \cdots - \lambda_{r-1} T^{r-1}(w),$$
for some $\lambda_j \in F$. A straightforward calculation now shows that the matrix of $T$ with respect to $\mathcal{W}$ is
$$\begin{pmatrix}
0 & 0 & \cdots & 0 & -\lambda_0\\
1 & 0 & \cdots & 0 & -\lambda_1\\
0 & 1 & \cdots & 0 & -\lambda_2\\
\vdots & & \ddots & & \vdots\\
0 & 0 & \cdots & 1 & -\lambda_{r-1}
\end{pmatrix}.$$
Writing $f(X) := X^r + \lambda_{r-1}X^{r-1} + \cdots + \lambda_1 X + \lambda_0$ for the corresponding polynomial, we say that the matrix above is the companion matrix for $f(X)$, and denote it $C(f(X))$. In the best of worlds, each primary component $W_i$ in our general discussion would admit a basis of the type above, so that the matrix of $T|_{W_i}$ with respect to that basis was a companion matrix.
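A sketch (mine, not from the notes) of the companion matrix construction in Remark E, using the convention adopted here: $1$'s on the subdiagonal and the negatives of the coefficients of $f(X)$ in the last column. The polynomial $(X-2)^3$ at the end is a hypothetical example; the characteristic polynomial of the resulting matrix recovers $f(X)$.

# Sketch (mine, not from the notes): the companion matrix C(f(X)) of a monic
# polynomial f(X) = X^r + lam_{r-1} X^{r-1} + ... + lam_0.
from sympy import Matrix, Poly, symbols, zeros, expand

X = symbols('X')

def companion(f):
    coeffs = Poly(f, X).all_coeffs()       # [1, lam_{r-1}, ..., lam_0]
    assert coeffs[0] == 1, "f must be monic"
    r = len(coeffs) - 1
    C = zeros(r, r)
    for i in range(1, r):
        C[i, i - 1] = 1                    # subdiagonal of 1's
    for i in range(r):
        C[i, r - 1] = -coeffs[r - i]       # last column: -lam_0, ..., -lam_{r-1}
    return C

f = (X - 2) ** 3
C = companion(expand(f))
print(C)                                   # 3 x 3 companion matrix of (X-2)^3
print(expand(C.charpoly(X).as_expr()))     # X^3 - 6 X^2 + 12 X - 8, i.e. f(X)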

Alas, this is not so; but we will see that each primary component can be decomposed into a direct sum of such spaces, so that the matrix of $T|_{W_i}$ will be block diagonal, with companion matrices as blocks. The following crucial proposition is our next step towards this end.

Proposition F. Let $W$ be a finite dimensional vector space and $T$ a linear operator on $W$. Suppose that the minimal polynomial of $T$ is $q(X)^e$, with $q(X)$ an irreducible polynomial. Let $r$ denote the degree of $q(X)^e$. Then there exists $w \in W$ such that $w, T(w), \dots, T^{r-1}(w)$ are linearly independent.

Proof. Before starting the proof, we note that for any $u \in W$, the set $\{u, T(u), \dots, T^r(u)\}$ is linearly dependent, since $q(X)^e$ has degree $r$. Thus the $w$ we seek gives rise to the largest possible linearly independent set of the form $\{w, T(w), \dots\}$. Now, suppose the proposition is false. Take any $w \in W$ and let $W'$ be the cyclic subspace generated by $w$. Let $f(X)$ denote the minimal polynomial of $T|_{W'}$. Since, by assumption, $w, T(w), \dots, T^{r-1}(w)$ are linearly dependent, any dependence relation gives rise to a nonzero polynomial $h(X)$ of degree at most $r-1$ such that $h(T)(w) = 0$. As mentioned above, this means $h(T|_{W'}) = 0$, so $f(X)$ divides $h(X)$. In particular, $f(X)$ has degree less than or equal to $r-1$. On the other hand, since $q(T)^e = 0$, $q(T|_{W'})^e = 0$, so $f(X)$ divides $q(X)^e$. Since $q(X)$ is irreducible, this forces $f(X)$ to be a power of $q(X)$, and since $f(X)$ has degree less than $r$, $f(X) = q(X)^c$ for some $c \le e-1$. Thus, $f(X)$ divides $q(X)^{e-1}$. Writing $q(X)^{e-1} = p(X)f(X)$, it follows that $q(T)^{e-1}(w) = p(T)f(T)(w) = 0$. This now holds for all $w \in W$, so $q(T)^{e-1} = 0$. But this contradicts the fact that $q(X)^e$ is the minimal polynomial of $T$. Thus, there must be a $w \in W$ such that $w, T(w), \dots, T^{r-1}(w)$ are linearly independent.

Remark G. Let $w \in W$ satisfy the conclusion of Proposition F and write $W_1$ for the cyclic subspace generated by $w$. Then:
(i) $\{w, T(w), \dots, T^{r-1}(w)\}$ is a basis for $W_1$. Indeed, this set of vectors is linearly independent by design, and by our comments before Definition A, the vectors must also span $W_1$.
(ii) $q(X)^e$ is the minimal polynomial for $T|_{W_1}$. After all, $w, T(w), \dots, T^{r-1}(w)$ are independent, so the minimal polynomial for $T|_{W_1}$ must have degree at least $r$, and since $q(T|_{W_1})^e = 0$, $q(X)^e$ must be the minimal polynomial of $T|_{W_1}$.
(iii) The matrix of $T|_{W_1}$ with respect to the basis $\{w, T(w), \dots, T^{r-1}(w)\}$ is $C(q(X)^e)$.

Now if we think of this $W$ as one of the primary components of $V$, then we have the start of the decomposition alluded to in Remark E. Suppose we could find a $T$-invariant subspace $U$ such that $W = W_1 \oplus U$. Then we could either repeat the construction of Proposition F on $U$, or simply apply induction to write $U$ as a direct sum of $T$-invariant cyclic subspaces. Thus $W$ is then decomposed into a sum of spaces that admit bases such that the matrix of the restriction of $T$ with respect to any of these bases is a companion matrix, and $T$ restricted to $W$ can then be represented as a block diagonal matrix with companion matrices for blocks. Therefore, we must find $U$. This is the most difficult part of the proof of our canonical form theorems.

The conductor. Let $B$ be a finite dimensional vector space, $T$ a linear operator on $B$ and $A \subseteq B$ a $T$-invariant subspace. Fix a vector $v \in B \setminus A$. Then there is a unique monic polynomial $g(X) \in F[X]$, called the conductor of $v$ into $A$, such that: (i) $g(T)(v) \in A$, and (ii) whenever $f(X)$ is a polynomial with $f(T)(v) \in A$, then $g(X)$ divides $f(X)$. The existence of $g(X)$ should be standard by now. Take $g(X)$ to be a monic polynomial of least degree such that $g(T)(v) \in A$. If $f(X)$ is a polynomial such that $f(T)(v) \in A$, then write $f(X) = h(X)g(X) + r(X)$ with the degree of $r(X)$ less than the degree of $g(X)$. Then $r(T)(v) = f(T)(v) - h(T)(g(T)(v)) \in A$, so by definition of $g(X)$, $r(X) = 0$, i.e., $g(X)$ divides $f(X)$. Uniqueness of $g(X)$ follows along similar lines. An easy comment: if the minimal polynomial of $T$ is $q(X)^e$, with $q(X)$ irreducible, then the conductor of $v$ into $A$ is $q(X)^c$ for some $c \le e$, and in fact, $c$ is the least positive integer such that $q(T)^c(v) \in A$.

We will need the following general fact, which is interesting in its own right.

General Fact. Let $T$ be a linear operator on the finite dimensional vector space $L$ and write $f(X)$ for the minimal polynomial of $T$. If $h(X)$ is any polynomial relatively prime to $f(X)$, then $h(T) : L \to L$ is an isomorphism. To see this, take $a(X), b(X) \in F[X]$ such that $a(X)h(X) + b(X)f(X) = 1$. Substituting $T$ gives $a(T)h(T) = I_L = h(T)a(T)$, so $h(T)$ is an isomorphism.

Lemma H. Let $W$ be a finite dimensional vector space and $T$ a linear operator on $W$. Suppose that the minimal polynomial of $T$ is $q(X)^e$, with $q(X)$ an irreducible polynomial. For $w_1 \in W$, let $W_1$ be the cyclic subspace generated by $w_1$ and suppose that $q(X)^e$ is also the minimal polynomial of $T|_{W_1}$. Let $U'$ be a $T$-invariant subspace satisfying $W_1 \cap U' = 0$ and $W_1 + U' \ne W$. Then there exists $v \in W \setminus (W_1 + U')$ such that the conductor of $v$ into $U'$ equals the conductor of $v$ into $W_1 + U'$.

Proof. Let $v \in W$ be any vector not in $W_1 + U'$. Let $t(X)$ be the conductor of $v$ into $W_1 + U'$ and $g(X)$ be the conductor of $v$ into $U'$. Then by our comments above, $t(X) = q(X)^d$, $g(X) = q(X)^c$ and $d \le c \le e$. We would like to have $d = c$. This may not hold for the $v$ we've chosen, but we can adjust $v$ so that it does hold. To elaborate, write $q(T)^d(v) = u + f(T)(w_1)$, with $u \in U'$ and $f(T)(w_1) \in W_1$, for some $f(X)$ in $F[X]$. Write $f(X) = h(X)q(X)^s$, with $h(X)$ not divisible by $q(X)$. We want to see that $s \ge d$. Now, $q(T)^d(v) = u + h(T)q(T)^s(w_1)$. Applying $q(T)^{c-d}$ to both sides of this equation, we obtain $q(T)^c(v) = q(T)^{c-d}(u) + h(T)q(T)^{c-d+s}(w_1)$. Subtracting, we obtain $q(T)^c(v) - q(T)^{c-d}(u) = h(T)q(T)^{c-d+s}(w_1) = 0$, since the left hand side belongs to $U'$, the right hand side belongs to $W_1$, and $W_1 \cap U' = 0$. By the general fact above, $h(T)$ is an isomorphism. Then $q(T)^{c-d+s}(w_1) = 0$ and hence $q(T|_{W_1})^{c-d+s} = 0$. Therefore, $c - d + s \ge e$, so $s - d \ge e - c \ge 0$. Now set $\tilde v := v - h(T)q(T)^{s-d}(w_1)$. First, note that $\tilde v \notin W_1 + U'$. Second, note that for any $i \ge 1$, $q(T)^i(h(T)q(T)^{s-d}(w_1)) \in W_1$, from which it follows that $q(T)^i(\tilde v) \in W_1 + U'$ if and only if $q(T)^i(v) \in W_1 + U'$. Thus, $q(X)^d$ is the conductor of $\tilde v$ into $W_1 + U'$ (since this is the conductor of $v$ into $W_1 + U'$). But now, $q(T)^d(\tilde v) = q(T)^d(v) - h(T)q(T)^s(w_1) = u$, so $q(T)^d(\tilde v) \in U'$. Thus $q(X)^d$ is divisible by the conductor of $\tilde v$ into $U'$. But this latter conductor is always divisible by the conductor of $\tilde v$ into $W_1 + U'$, since any $f(T)(\tilde v)$ landing in $U'$ automatically lands in $W_1 + U'$. Thus, $q(X)^d$ must also be the conductor of $\tilde v$ into $U'$, which gives what we want.

The previous lemma is technically the most difficult part of the proof. However, it provides the crucial tool we need in finding the complementary $T$-invariant subspace $U$ referred to above.

Proposition I. Let $T$ be a linear operator on a finite dimensional vector space $W$. Assume the minimal polynomial for $T$ is $q(X)^e$, with $q(X)$ irreducible. Let $W_1 \subseteq W$ be the cyclic subspace generated by $w \in W$ and assume that the minimal polynomial for $T|_{W_1}$ is also $q(X)^e$. Then there exists a $T$-invariant subspace $U \subseteq W$ such that $W = W_1 \oplus U$.

Proof. If $W = W_1$, there is nothing to prove. Assume $W_1 \ne W$. Suppose we could prove the following statement: if $U'$ is a $T$-invariant subspace satisfying $W_1 \cap U' = 0$ and $W_1 + U' \ne W$, then there exists a $T$-invariant subspace $U'' \supseteq U'$ satisfying $W_1 \cap U'' = 0$ and $W_1 + U' \subsetneq W_1 + U''$.

Then, starting with $U' = 0$, we may apply the process, building up $U'$ until $U'$ satisfies $W = W_1 + U'$. By design, $W_1 \cap U' = 0$, so $W = W_1 \oplus U'$, as desired.

To finish the proof, suppose that $U'$ is $T$-invariant, $W_1 \cap U' = 0$ and $W_1 + U' \ne W$. By Lemma H, there exists $v \notin W_1 + U'$ such that the conductor of $v$ into $U'$ equals the conductor of $v$ into $W_1 + U'$. Let $q(X)^c$ denote this conductor and suppose $a$ equals the degree of $q(X)^c$. Now set $U'' := U' + \langle v, T(v), \dots, T^{a-1}(v)\rangle$. Of course, $W_1 + U' \subsetneq W_1 + U''$. We need to see that $U''$ is $T$-invariant and $W_1 \cap U'' = 0$. To see that $U''$ is $T$-invariant, it suffices to show $T(T^{a-1}(v)) = T^a(v) \in U''$. Write $X^a = 1 \cdot q(X)^c + r(X)$, where $r(X)$ has degree less than $a$. Then $T^a(v) = q(T)^c(v) + r(T)(v)$. But $q(T)^c(v) \in U'$ and $r(T)(v) \in \langle v, T(v), \dots, T^{a-1}(v)\rangle$, so $T^a(v) \in U''$, as required. To see $W_1 \cap U'' = 0$, suppose there is $u \in U'$ such that $u' := u + \beta_0 v + \cdots + \beta_{a-1}T^{a-1}(v) \in W_1$. Set $g(X) := \beta_{a-1}X^{a-1} + \cdots + \beta_0$. Then $g(T)(v) = u' - u \in U' + W_1$. Thus, $g(X)$ is divisible by the conductor of $v$ into $U' + W_1$. By choice of $v$, this means $g(X)$ is divisible by the conductor of $v$ into $U'$. Thus, $g(T)(v) \in U'$. Therefore, $u' = u + g(T)(v) \in U' \cap W_1 = 0$, so $W_1 \cap U'' = 0$. This completes the proof.

Our first theorem provides the crucial step in the canonical form theorems that follow.

Theorem J. Let $W$ be a finite dimensional vector space and $T$ a linear operator on $W$. Suppose that the minimal polynomial of $T$ is $q(X)^e$, with $q(X)$ an irreducible polynomial. Let $r$ denote the degree of $q(X)^e$. Then there exist $T$-invariant subspaces $W_1, \dots, W_t$ of $W$ satisfying the following conditions.
(i) $W = W_1 \oplus \cdots \oplus W_t$.
(ii) For each $1 \le i \le t$, $W_i$ has a basis of the form $\{w_i, T(w_i), \dots, T^{r_i-1}(w_i)\}$.
(iii) For each $1 \le i \le t$, the minimal polynomial of $T|_{W_i}$ is $q(X)^{e_i}$, for some $e_i \le e$, and $\deg(q(X)^{e_i}) = r_i$.
Moreover, we can arrange the ordering so that $e = e_1 \ge e_2 \ge \cdots \ge e_t$.

Remark K. Retain the notation of Theorem J. It follows from conditions (ii) and (iii) that each $W_i$ is the $T$-invariant cyclic subspace generated by $w_i$ and that the dimension of $W_i$ equals the degree of the minimal polynomial of $T|_{W_i}$.

Thus, just as in Remark E, the matrix of $T|_{W_i}$ with respect to the given basis is the companion matrix $C(q(X)^{e_i})$. It therefore follows that our primary blocks can be decomposed into block diagonal matrices with companion matrices down the diagonal. This is the rational canonical form. The polynomials $q(X)^{e_i}$ in Theorem J are called the elementary divisors of $T$ and are uniquely determined by conditions (i)-(iii). The definition of elementary divisor extends to arbitrary $T$ by applying Theorem J to its primary blocks. We do this below.

Proof of Theorem J. Proceed by induction on the dimension $s$ of $W$. If $s = 1$, there is nothing to prove. Suppose that $s > 1$. By Proposition F, there exists $w \in W$ such that $w, T(w), \dots, T^{r-1}(w)$ are linearly independent. By Remark G, these vectors form a basis for the cyclic subspace $W_1$ generated by $w$, and the minimal polynomial for $T|_{W_1}$ is $q(X)^e$. Now we set $e_1 := e$. By Proposition I, there exists a $T$-invariant subspace $U \subseteq W$ such that $W = W_1 \oplus U$. Then $q(T|_U)^e = 0$, so the minimal polynomial of $T|_U$ has the form $q(X)^{e_2}$, with $e_2 \le e_1 = e$. By induction on $s$, there exist $T$-invariant subspaces $W_2, \dots, W_t \subseteq U$ satisfying the conclusions of the theorem. Clearly then, $W_1, \dots, W_t$ (and the minimal polynomials of $T|_{W_1}, \dots, T|_{W_t}$) satisfy the conclusions of the theorem.

We are now ready for the main result of these notes.

Theorem L (Rational Canonical Form). Let $V$ be a finite dimensional vector space and $T$ a linear operator on $V$. Write $q(X)$ for the minimal polynomial of $T$ and factor $q(X) = q_1(X)^{e_1} \cdots q_k(X)^{e_k}$, with each $q_i(X)$ irreducible over $F$. Then there exists a basis $\mathcal{V}$ for $V$ such that $A$, the matrix of $T$ with respect to $\mathcal{V}$, has the block diagonal form
$$A = \begin{pmatrix} A_1 & & & 0\\ & A_2 & & \\ & & \ddots & \\ 0 & & & A_k \end{pmatrix},$$
where $A_1, \dots, A_k$ are primary blocks corresponding to $q_1(X)^{e_1}, \dots, q_k(X)^{e_k}$, and for each $1 \le i \le k$,
$$A_i = \begin{pmatrix} C(q_i(X)^{e_{i,1}}) & & & 0\\ & C(q_i(X)^{e_{i,2}}) & & \\ & & \ddots & \\ 0 & & & C(q_i(X)^{e_{i,t_i}}) \end{pmatrix}$$
for integers $e_i = e_{i,1} \ge e_{i,2} \ge \cdots \ge e_{i,t_i}$.

Proof. By Proposition D, we may decompose $V = W_1 \oplus \cdots \oplus W_k$, where for each $1 \le i \le k$, $W_i$ is $T$-invariant and the minimal polynomial of $T|_{W_i}$ is $q_i(X)^{e_i}$. By Theorem J, we may decompose each $W_i = W_{i,1} \oplus \cdots \oplus W_{i,t_i}$ so that for each $1 \le j \le t_i$, $W_{i,j}$ has a basis of the form $\mathcal{W}_{i,j} := \{w_{i,j}, T(w_{i,j}), \dots, T^{r_{i,j}-1}(w_{i,j})\}$ and the minimal polynomial of $T|_{W_{i,j}}$ is $q_i(X)^{e_{i,j}}$, with $e_i = e_{i,1} \ge \cdots \ge e_{i,t_i}$ and $\deg(q_i(X)^{e_{i,j}}) = r_{i,j}$. Thus, as in Remark E, the matrix of $T|_{W_{i,j}}$ with respect to the given basis is $C(q_i(X)^{e_{i,j}})$. Since $V = W_{1,1} \oplus \cdots \oplus W_{1,t_1} \oplus \cdots \oplus W_{k,1} \oplus \cdots \oplus W_{k,t_k}$, if we take $\mathcal{V} := \mathcal{W}_{1,1} \cup \cdots \cup \mathcal{W}_{1,t_1} \cup \cdots \cup \mathcal{W}_{k,1} \cup \cdots \cup \mathcal{W}_{k,t_k}$, then the matrix of $T$ with respect to $\mathcal{V}$ has the required form (by Proposition C).

Remarks M. (i) The process leading to the rational canonical form is algorithmic, assuming we can factor the minimal polynomial of $T$. In other words, starting with $T$, we can follow the path just traversed to construct in finite time the basis $\mathcal{V}$ in Theorem L. Of course, the process needn't be the most efficient or even doable in any reasonable amount of time with or without the aid of a computer. To summarize, first find the minimal polynomial $q(X)$. This can be calculated since it divides the characteristic polynomial of $T$ and we can calculate the characteristic polynomial using determinants. Then factor $q(X) = q_1(X)^{e_1} \cdots q_k(X)^{e_k}$, with each $q_i(X)$ irreducible over $F$. (In general, this factorization may not be readily computable.) Next calculate $\ker(q_i(T)^{e_i})$ for each $1 \le i \le k$. This is equivalent to solving a homogeneous system of equations, so Gaussian elimination applies. As in the proof of Theorem L, restricting $T$ to each $\ker(q_i(T)^{e_i})$ then reduces the problem to the case where the minimal polynomial of $T$ is $q(X)^e$, with $q(X)$ irreducible. Now, as in the proof of Theorem J, set $r := \deg(q(X)^e)$ and consider the set $\{v, T(v), \dots, T^{r-1}(v)\}$ as $v$ ranges through the standard basis (say) for $V$. One of these collections must be linearly independent, a condition that can be checked with Gaussian elimination, as in the sketch below.
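Here is what that search looks like in code (a sketch of mine, not from the notes), under the standing assumption that the minimal polynomial is $q(X)^e$ with $q(X)$ irreducible and $r = \deg(q(X)^e)$: for each basis vector $v$ we stack $v, T(v), \dots, T^{r-1}(v)$ as columns and test independence by a rank computation. The nilpotent $4 \times 4$ matrix at the end is a hypothetical example with $q(X) = X$ and $e = 3$.

# Sketch (mine, not from the notes): searching a basis for a vector v whose
# first r iterates under T are linearly independent, as in Remark M(i).
from sympy import Matrix, eye

def find_cyclic_vector(T, r):
    n = T.shape[0]
    for j in range(n):
        v = eye(n)[:, j]                   # j-th standard basis vector
        iterates = [v]
        for _ in range(r - 1):
            iterates.append(T * iterates[-1])
        if Matrix.hstack(*iterates).rank() == r:
            return v, iterates             # this v works
    return None

# Hypothetical example: T nilpotent with minimal polynomial X^3 (so r = 3).
T = Matrix([[0, 0, 0, 0],
            [1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 0, 0]])
v, its = find_cyclic_vector(T, 3)
print(list(v))                             # e_1 works here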

Calling the $v$ that works $w_1$, and using $W_1$ to denote the corresponding $T$-invariant subspace generated by $w_1$, use the proof of Proposition I to construct a $T$-invariant subspace $U$ satisfying $V = W_1 \oplus U$. (It's important to note that $U$ can be explicitly constructed, thanks to Lemma H, which is also constructive.) Finally, repeat the process leading to $W_1$ and $U$, this time on $T|_U$. This leads to $U = W_2 \oplus U_2$, with $W_2$ meeting the required conditions. The process will terminate in finitely many steps.

(ii) Let the collection of integers $\{e_{i,j}\}$ be given as in Theorem L. The polynomials $\{q_i(X)^{e_{i,j}}\}$ are called the elementary divisors of $T$ and uniquely determine the rational canonical form of $T$. We'll see below that the elementary divisors of $T$ are unique. In other words, up to a possible permutation of primary blocks along the diagonal, the rational canonical form of $T$ is unique. If we just want the rational canonical form of $T$ and not the basis giving it, there is a constructive procedure leading to the calculation of the elementary divisors of $T$. The procedure is easy to describe, but the proof that the procedure works is a bit beyond the scope of this course.

(iii) Of course, the rational canonical form theorem has a form for matrices. Take $B$ in $M_n(F)$. Let $T$ be the linear transformation taking the $j$th standard basis vector of $V$ to the $j$th column of $B$. If $\mathcal{V}$ is the basis for which the matrix of $T$ with respect to $\mathcal{V}$ has rational canonical form $A$, for $A$ as in Theorem L, then $P^{-1}BP = A$, for $P$ the change of basis matrix corresponding to $\mathcal{V}$. Likewise, we call $A$ the rational canonical form of $B$.

(iv) While the rational canonical form for a transformation or matrix is unique, different rational canonical forms could have the same minimal polynomial. Thus, the minimal polynomial alone does not determine the rational canonical form. However, for a fixed $n$ and a fixed $q(X)$, there are only finitely many $n \times n$ matrices in rational canonical form having minimal polynomial $q(X)$. We illustrate this in the Examples below.

We now consider the case where our linear transformation $T$ has the property that its minimal polynomial has all of its roots in $F$. We'll see below that the Jordan canonical form for $T$ is lower triangular. In fact, aside from its diagonal entries and possibly some 1's immediately below the main diagonal, all of its entries will be zero.

The first step towards the Jordan canonical form is a consideration of the case where all of the roots of $q(X)$ are zero.

In this case $q(X) = X^e$, for some $e$, so $T^e = 0$. Such a transformation is said to be nilpotent. Similarly, a matrix $B$ is said to be nilpotent if $B^e = 0$ for some $e$. The archetypal nilpotent matrix is a lower (or upper) triangular matrix with zeros down the diagonal. More importantly, for each $s \ge 1$ let $M_s$ denote the following $s \times s$ matrix:
$$M_s := \begin{pmatrix}
0 & 0 & \cdots & 0 & 0\\
1 & 0 & \cdots & 0 & 0\\
0 & 1 & \ddots & & \vdots\\
\vdots & & \ddots & 0 & 0\\
0 & 0 & \cdots & 1 & 0
\end{pmatrix}.$$
Then $M_s$ satisfies $M_s^s = 0$ and $M_s^{s-1} \ne 0$. In other words, the minimal polynomial for $M_s$ is $X^s$. We now show that any nilpotent transformation (or nilpotent matrix) can be brought to block diagonal form with matrices $M_s$ down the diagonal.

Theorem N (Nilpotent Form). Assume that $T$ is a nilpotent linear transformation with minimal polynomial $X^e$. Then there exists a basis $\mathcal{V}$ for $V$ and integers $e = e_1 \ge \cdots \ge e_t$ such that $A$, the matrix of $T$ with respect to $\mathcal{V}$, has the block diagonal form
$$A = \begin{pmatrix} M_{e_1} & & & 0\\ & M_{e_2} & & \\ & & \ddots & \\ 0 & & & M_{e_t} \end{pmatrix}.$$

Proof. We just use Theorem L to find a basis $\mathcal{V}$ such that $A$, the matrix of $T$ with respect to $\mathcal{V}$, is in rational canonical form. Since the minimal polynomial of $T$ is $X^e$, $A$ has just one primary block, $A$ itself. The primary block in turn has the block diagonal form
$$A = \begin{pmatrix} C(X^{e_1}) & & & 0\\ & C(X^{e_2}) & & \\ & & \ddots & \\ 0 & & & C(X^{e_t}) \end{pmatrix},$$
with $e = e_1 \ge \cdots \ge e_t$. But a quick check reveals that $C(X^{e_i}) = M_{e_i}$, for all $i$, which gives what we want.
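A quick machine check of these assertions about $M_s$ (a sketch of mine, not from the notes): build $M_s$ and verify $M_s^s = 0$ while $M_s^{s-1} \ne 0$. Since the companion matrix of $X^s$ has all coefficients zero in its last column, it is exactly $M_s$, as claimed in the proof above.

# Sketch (mine, not from the notes): the nilpotent block M_s and its powers.
from sympy import zeros

def M(s):
    B = zeros(s, s)
    for i in range(1, s):
        B[i, i - 1] = 1          # 1's on the subdiagonal, zeros elsewhere
    return B

M4 = M(4)
print(M4)
print(M4 ** 4 == zeros(4, 4))    # True:  M_4^4 = 0
print(M4 ** 3 == zeros(4, 4))    # False: M_4^3 != 0, so the minimal polynomial is X^4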

We now use the nilpotent form theorem to give us the Jordan canonical form theorem. The proof is based upon the following observation. If the minimal polynomial of $T$ has the form $(X - \lambda)^e$, then the transformation $S := T - \lambda I$ is nilpotent and can therefore be brought to nilpotent form. Using the same basis for $T$ yields a matrix obtained from the one for $S$ by placing $\lambda$'s on the diagonal, directly above the 1's appearing in the nilpotent form. In fact, for $s \ge 1$ and $\lambda \in F$, we define the $s \times s$ Jordan block associated to $\lambda$ to be the $s \times s$ matrix
$$J(s, \lambda) := \begin{pmatrix}
\lambda & 0 & \cdots & 0 & 0\\
1 & \lambda & \cdots & 0 & 0\\
0 & 1 & \ddots & & \vdots\\
\vdots & & \ddots & \lambda & 0\\
0 & 0 & \cdots & 1 & \lambda
\end{pmatrix}.$$

Theorem O (Jordan Canonical Form). Let $q(X)$ be the minimal polynomial of $T$ and suppose that $q(X)$ has all of its roots in $F$. Write $q(X) = (X - \lambda_1)^{e_1} \cdots (X - \lambda_k)^{e_k}$. Then there exists a basis $\mathcal{V}$ for $V$ such that $A$, the matrix of $T$ with respect to $\mathcal{V}$, has the block diagonal form
$$A = \begin{pmatrix} A_1 & & & 0\\ & A_2 & & \\ & & \ddots & \\ 0 & & & A_k \end{pmatrix},$$
where $A_1, \dots, A_k$ are primary blocks corresponding to $(X - \lambda_1)^{e_1}, \dots, (X - \lambda_k)^{e_k}$, and for each $1 \le i \le k$,
$$A_i = \begin{pmatrix} J(e_{i,1}, \lambda_i) & & & 0\\ & J(e_{i,2}, \lambda_i) & & \\ & & \ddots & \\ 0 & & & J(e_{i,t_i}, \lambda_i) \end{pmatrix}$$
for integers $e_i = e_{i,1} \ge e_{i,2} \ge \cdots \ge e_{i,t_i}$. We call $\{(X - \lambda_i)^{e_{i,j}}\}$ the Jordan elementary divisors of $T$.

Proof. By Proposition D, we may find $T$-invariant subspaces $W_1, \dots, W_k \subseteq V$ such that for each $1 \le i \le k$, the minimal polynomial of $T|_{W_i}$ is $(X - \lambda_i)^{e_i}$. Set $S_i := (T - \lambda_i I)|_{W_i}$.

Then $S_i$ is nilpotent, so there exists a basis $\mathcal{W}_i$ of $W_i$ such that the matrix of $S_i$ with respect to $\mathcal{W}_i$ equals
$$\begin{pmatrix} M_{e_{i,1}} & & & 0\\ & M_{e_{i,2}} & & \\ & & \ddots & \\ 0 & & & M_{e_{i,t_i}} \end{pmatrix}$$
for $e_i = e_{i,1} \ge \cdots \ge e_{i,t_i}$. Therefore, the matrix $A_i$ of $T|_{W_i}$ with respect to $\mathcal{W}_i$ satisfies
$$A_i = \begin{pmatrix} J(e_{i,1}, \lambda_i) & & & 0\\ & J(e_{i,2}, \lambda_i) & & \\ & & \ddots & \\ 0 & & & J(e_{i,t_i}, \lambda_i) \end{pmatrix},$$
as required. Putting together the bases $\mathcal{W}_i$ yields the required basis $\mathcal{V}$.

Examples P. (i) Let $F = \mathbb{Q}$ and set $q(X) = (X - 2)^3(X^2 + 1)^2$. We want to find rational canonical forms for all matrices in $M_{11}(\mathbb{Q})$ having minimal polynomial $q(X)$. Since $q(X)$ has two irreducible factors, the matrices we seek have two primary blocks, each of which is to be decomposed into blocks of companion matrices associated either to a power of $X - 2$ less than or equal to 3 or a power of $X^2 + 1$ less than or equal to 2. Each primary block is strictly determined by its elementary divisors, so in fact it suffices to list all possible sets of elementary divisors for matrices having minimal polynomial $q(X)$. It turns out that there are eight possible rational canonical forms. Call these $B_1, \dots, B_8$. We assign each of these the following elementary divisors:
(a) $B_1$: $(X - 2)^3$, $(X - 2)^3$, $(X - 2)$, $(X^2 + 1)^2$.
(b) $B_2$: $(X - 2)^3$, $(X - 2)^2$, $(X - 2)^2$, $(X^2 + 1)^2$.
(c) $B_3$: $(X - 2)^3$, $(X - 2)^2$, $(X - 2)$, $(X - 2)$, $(X^2 + 1)^2$.
(d) $B_4$: $(X - 2)^3$, $(X - 2)$, $(X - 2)$, $(X - 2)$, $(X - 2)$, $(X^2 + 1)^2$.
(e) $B_5$: $(X - 2)^3$, $(X - 2)^2$, $(X^2 + 1)^2$, $(X^2 + 1)$.
(f) $B_6$: $(X - 2)^3$, $(X - 2)$, $(X - 2)$, $(X^2 + 1)^2$, $(X^2 + 1)$.
(g) $B_7$: $(X - 2)^3$, $(X^2 + 1)^2$, $(X^2 + 1)^2$.
(h) $B_8$: $(X - 2)^3$, $(X^2 + 1)^2$, $(X^2 + 1)$, $(X^2 + 1)$.
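To make the example concrete by machine, the following sketch (mine, not from the notes) assembles $B_1$ directly from its list of elementary divisors as a block diagonal matrix of companion matrices; the companion constructor from the earlier sketch is repeated here so the snippet is self-contained. The printed characteristic polynomial is the product of the listed elementary divisors.

# Sketch (mine, not from the notes): assembling B_1 from its elementary
# divisors as a block diagonal matrix of companion matrices.
from sympy import Poly, symbols, zeros, diag, expand

X = symbols('X')

def companion(f):
    coeffs = Poly(expand(f), X).all_coeffs()   # monic: [1, lam_{r-1}, ..., lam_0]
    r = len(coeffs) - 1
    C = zeros(r, r)
    for i in range(1, r):
        C[i, i - 1] = 1
    for i in range(r):
        C[i, r - 1] = -coeffs[r - i]
    return C

elementary_divisors = [(X - 2)**3, (X - 2)**3, (X - 2), (X**2 + 1)**2]   # B_1
B1 = diag(*[companion(f) for f in elementary_divisors])
print(B1.shape)                            # (11, 11)
print(expand(B1.charpoly(X).as_expr()))    # product of the elementary divisors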

Each $B_i$ therefore is block diagonal, with companion matrices corresponding to the listed elementary divisors down the diagonal. For example,
$$B_1 = \begin{pmatrix} C((X-2)^3) & & & 0\\ & C((X-2)^3) & & \\ & & C(X-2) & \\ 0 & & & C((X^2+1)^2) \end{pmatrix},$$
where the block diagonal entries are $C((X-2)^3)$, $C((X-2)^3)$, $C(X-2)$ and $C((X^2+1)^2)$.

(ii) Set $F = \mathbb{Q}(i)$ and $q(X) = (X - 2)^3(X^2 + 1)^2 = (X - 2)^3(X + i)^2(X - i)^2$. We seek Jordan canonical forms for all $9 \times 9$ matrices having minimal polynomial $q(X)$. As before, the primary blocks are determined by their elementary divisors, though the Jordan block associated to a Jordan elementary divisor is not the companion matrix of the elementary divisor. For example, if the elementary divisors are $(X - 2)^3$, $(X - 2)^2$, $(X + i)^2$, $(X - i)^2$, the corresponding Jordan form has block diagonal entries $J(3, 2)$, $J(2, 2)$, $J(2, -i)$, $J(2, i)$. Here, the first two Jordan blocks make up the primary block corresponding to $(X - 2)^3$, and the third and fourth Jordan blocks are the primary blocks associated to $(X + i)^2$ and $(X - i)^2$. The complete list of possible Jordan elementary divisors for this example is:
(a) $C_1$: $(X - 2)^3$, $(X - 2)^2$, $(X + i)^2$, $(X - i)^2$.
(b) $C_2$: $(X - 2)^3$, $(X - 2)$, $(X - 2)$, $(X + i)^2$, $(X - i)^2$.
(c) $C_3$: $(X - 2)^3$, $(X - 2)$, $(X + i)^2$, $(X + i)$, $(X - i)^2$.
(d) $C_4$: $(X - 2)^3$, $(X - 2)$, $(X + i)^2$, $(X - i)^2$, $(X - i)$.
(e) $C_5$: $(X - 2)^3$, $(X + i)^2$, $(X + i)^2$, $(X - i)^2$.
(f) $C_6$: $(X - 2)^3$, $(X + i)^2$, $(X + i)$, $(X + i)$, $(X - i)^2$.
(g) $C_7$: $(X - 2)^3$, $(X + i)^2$, $(X + i)$, $(X - i)^2$, $(X - i)$.
(h) $C_8$: $(X - 2)^3$, $(X + i)^2$, $(X - i)^2$, $(X - i)^2$.
(i) $C_9$: $(X - 2)^3$, $(X + i)^2$, $(X - i)^2$, $(X - i)$, $(X - i)$.

Thus, for example,
$$C_5 = \begin{pmatrix} J(3, 2) & & & 0\\ & J(2, -i) & & \\ & & J(2, -i) & \\ 0 & & & J(2, i) \end{pmatrix},$$
a $9 \times 9$ matrix whose block diagonal entries are the Jordan blocks associated to the listed Jordan elementary divisors.

The last issue I want to discuss concerns uniqueness of the canonical forms. As before, let $q(X) = q_1(X)^{e_1} \cdots q_k(X)^{e_k}$, $q_i(X)$ irreducible, be the minimal polynomial of $T$. Suppose that $\mathcal{V}$ and $\mathcal{V}'$ are bases such that $A$ and $A'$, the matrices of $T$ with respect to $\mathcal{V}$ and $\mathcal{V}'$, are in the rational canonical form given by Theorem L. We wish to show that as long as we fix the order of the irreducible factors of $q(X)$ (equivalently, fix the order of the primary blocks associated to $T$), $A = A'$. Otherwise, $A'$ can be obtained from $A$ by permuting its primary blocks. After all, the primary blocks are defined to be matrices of $T$ restricted to $\ker(q_i(T)^{e_i})$, for all $1 \le i \le k$, and are therefore uniquely determined by the factors $q_i(X)^{e_i}$. So, we fix the given order of factors in $q(X)$. Clearly, within any primary block, the structure of $A$ and $A'$ is uniquely determined by the corresponding elementary divisors. Thus, $A = A'$ if we prove the following theorem.

Theorem Q. Let $W$ be a finite dimensional vector space with linear operator $T$. Let $q(X)^e$ be the minimal polynomial of $T$. Assume that $q(X)$ is irreducible and $\deg(q(X)^e) = r$. Suppose that $W_1, \dots, W_t$ are $T$-invariant subspaces satisfying the conclusions of Theorem J. For each $1 \le i \le t$, let $\mathcal{W}_i$ denote the corresponding basis and $q(X)^{e_i}$ the corresponding elementary divisor (so $e = e_1 \ge \cdots \ge e_t$). Suppose, in addition, that $U_1, \dots, U_h$ are $T$-invariant subspaces satisfying the conclusions of Theorem J and that for each $1 \le j \le h$, $\mathcal{U}_j := \{u_j, T(u_j), \dots, T^{s_j - 1}(u_j)\}$ is the corresponding basis and $q(X)^{f_j}$ is the corresponding elementary divisor (so $e = f_1 \ge \cdots \ge f_h$). Then $t = h$ and $e_1 = f_1, \dots, e_t = f_t$.

Proof. Let $m$ be the smallest positive integer such that $e_m \ne f_m$. Without loss of generality, suppose $e_m > f_m$. We now make the following claim: for all $i < m$, $\dim(q(T)^{f_m}(U_i)) = s_i - s_m$, and for all $i \le m$, $\dim(q(T)^{f_m}(W_i)) = r_i - s_m$.

Suppose the claim holds. Since $q(T)^{f_m}(U_j) = 0$ for all $j \ge m$, it follows that $q(T)^{f_m}(W) = q(T)^{f_m}(U_1) \oplus \cdots \oplus q(T)^{f_m}(U_{m-1})$, so $\dim(q(T)^{f_m}(W)) = (s_1 - s_m) + \cdots + (s_{m-1} - s_m)$. Similarly, $q(T)^{f_m}(W) \supseteq q(T)^{f_m}(W_1) \oplus \cdots \oplus q(T)^{f_m}(W_{m-1}) \oplus q(T)^{f_m}(W_m)$, so $\dim(q(T)^{f_m}(W)) \ge (r_1 - s_m) + \cdots + (r_{m-1} - s_m) + (r_m - s_m)$. Therefore,
$$(s_1 - s_m) + \cdots + (s_{m-1} - s_m) \ge (r_1 - s_m) + \cdots + (r_{m-1} - s_m) + (r_m - s_m).$$
But $s_j = f_j \deg(q(X))$ and $r_i = e_i \deg(q(X))$, for all $i$ and $j$, so by our choice of $m$, $r_i = s_i$ for all $i < m$. Cancelling, we get $r_m - s_m \le 0$, i.e., $e_m \le f_m$, a contradiction. Thus, $e_i = f_i$ for all $i$ and $t = h$.

To verify the claim, it clearly suffices to prove the following statement: let $W'$ be a $T$-invariant subspace with basis $\{w, T(w), \dots, T^{r'-1}(w)\}$; suppose that $q(X)^a$ ($q(X)$ irreducible) is the minimal polynomial of $T|_{W'}$ and $\deg(q(X)^a) = r'$; suppose $b \le a$ and $\deg(q(X)^b) = d$; then $\dim(q(T)^b(W')) = r' - d$. To see this, set $W'' := q(T)^b(W')$ and $w' := q(T)^b(w)$. We will show that $\mathcal{W}'' := \{w', T(w'), \dots, T^{r'-d-1}(w')\}$ is a basis for $W''$. Now, $q(T)^{a-b}(w') = q(T)^a(w) = 0$. Since $\deg(q(X)^{a-b}) = r' - d$, $T^j(w')$ can be expressed in terms of the vectors in $\mathcal{W}''$ for all $j \ge r' - d$, so $\mathcal{W}''$ spans $W''$. This also shows that $W''$ is $T$-invariant, so the minimal polynomial of $T|_{W''}$ must divide $q(X)^{a-b}$. Let $f(X)$ denote the minimal polynomial of $T|_{W''}$. Then $0 = f(T)(w') = f(T)q(T)^b(w)$, so $f(T|_{W'})q(T|_{W'})^b = 0$. Thus, $q(X)^a$ divides $f(X)q(X)^b$, so $q(X)^{a-b}$ divides $f(X)$. Therefore, $f(X) = q(X)^{a-b}$, and it follows that $\mathcal{W}''$ is linearly independent. Hence the claim has been verified and the proof of Theorem Q is now complete.
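The uniqueness statements can be seen in action with a small experiment (mine, not from the notes): build a matrix in Jordan form, conjugate it by an invertible change of basis, and ask sympy for the Jordan form of the result. Up to the order of the blocks along the diagonal, and up to sympy's convention of writing Jordan blocks upper triangular rather than lower triangular as in these notes, the same blocks come back. The eigenvalues 2 and 5 and the unitriangular change of basis matrix are hypothetical choices.

# Sketch (mine, not from the notes): uniqueness in practice.  A matrix in
# Jordan form is scrambled by an invertible change of basis; sympy recovers
# the same Jordan blocks.
from sympy import Matrix, diag, eye

def J(s, lam):
    B = lam * eye(s)
    for i in range(1, s):
        B[i, i - 1] = 1                    # 1's below the diagonal, as in the notes
    return B

A = diag(J(3, 2), J(2, 5), J(2, 5))        # Jordan form: blocks J(3,2), J(2,5), J(2,5)

P = eye(7)                                 # a lower unitriangular matrix: det = 1,
P[1, 0] = 3                                # so P is certainly invertible
P[3, 2] = -2
P[5, 1] = 4
P[6, 4] = 1
B = P * A * P.inv()

_, JB = B.jordan_form()                    # sympy returns (transform, Jordan matrix)
print(JB)                                  # one 3x3 block for 2, two 2x2 blocks for 5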

It follows from the previous theorem that if the minimal polynomial of $T$ has its roots in $F$, then the Jordan canonical form for $T$ is unique. As before, it suffices to show that the form taken by any primary block is unique. But the $i$th primary block is determined by its Jordan elementary divisors, which in turn are determined by the elementary divisors in the rational canonical form of the nilpotent transformation $(T - \lambda_i)|_{W_i}$. These latter elementary divisors are unique, by Theorem Q.

Department of Mathematics, The University of Kansas, Lawrence, KS, 66046


EXERCISES AND SOLUTIONS IN LINEAR ALGEBRA EXERCISES AND SOLUTIONS IN LINEAR ALGEBRA Mahmut Kuzucuoğlu Middle East Technical University matmah@metu.edu.tr Ankara, TURKEY March 14, 015 ii TABLE OF CONTENTS CHAPTERS 0. PREFACE..................................................

More information

The Jordan Canonical Form

The Jordan Canonical Form The Jordan Canonical Form The Jordan canonical form describes the structure of an arbitrary linear transformation on a finite-dimensional vector space over an algebraically closed field. Here we develop

More information

Solutions of exercise sheet 8

Solutions of exercise sheet 8 D-MATH Algebra I HS 14 Prof. Emmanuel Kowalski Solutions of exercise sheet 8 1. In this exercise, we will give a characterization for solvable groups using commutator subgroups. See last semester s (Algebra

More information

Lecture 6: Finite Fields

Lecture 6: Finite Fields CCS Discrete Math I Professor: Padraic Bartlett Lecture 6: Finite Fields Week 6 UCSB 2014 It ain t what they call you, it s what you answer to. W. C. Fields 1 Fields In the next two weeks, we re going

More information

Math 121 Homework 5: Notes on Selected Problems

Math 121 Homework 5: Notes on Selected Problems Math 121 Homework 5: Notes on Selected Problems 12.1.2. Let M be a module over the integral domain R. (a) Assume that M has rank n and that x 1,..., x n is any maximal set of linearly independent elements

More information

Math 429/581 (Advanced) Group Theory. Summary of Definitions, Examples, and Theorems by Stefan Gille

Math 429/581 (Advanced) Group Theory. Summary of Definitions, Examples, and Theorems by Stefan Gille Math 429/581 (Advanced) Group Theory Summary of Definitions, Examples, and Theorems by Stefan Gille 1 2 0. Group Operations 0.1. Definition. Let G be a group and X a set. A (left) operation of G on X is

More information

Gaussian elimination

Gaussian elimination Gaussian elimination October 14, 2013 Contents 1 Introduction 1 2 Some definitions and examples 2 3 Elementary row operations 7 4 Gaussian elimination 11 5 Rank and row reduction 16 6 Some computational

More information

GALOIS GROUPS AS PERMUTATION GROUPS

GALOIS GROUPS AS PERMUTATION GROUPS GALOIS GROUPS AS PERMUTATION GROUPS KEITH CONRAD 1. Introduction A Galois group is a group of field automorphisms under composition. By looking at the effect of a Galois group on field generators we can

More information

Factorization in Polynomial Rings

Factorization in Polynomial Rings Factorization in Polynomial Rings Throughout these notes, F denotes a field. 1 Long division with remainder We begin with some basic definitions. Definition 1.1. Let f, g F [x]. We say that f divides g,

More information

Finitely Generated Modules over a PID, I

Finitely Generated Modules over a PID, I Finitely Generated Modules over a PID, I A will throughout be a fixed PID. We will develop the structure theory for finitely generated A-modules. Lemma 1 Any submodule M F of a free A-module is itself

More information

7.4: Integration of rational functions

7.4: Integration of rational functions A rational function is a function of the form: f (x) = P(x) Q(x), where P(x) and Q(x) are polynomials in x. P(x) = a n x n + a n 1 x n 1 + + a 0. Q(x) = b m x m + b m 1 x m 1 + + b 0. How to express a

More information

Polynomial Rings. i=0. i=0. n+m. i=0. k=0

Polynomial Rings. i=0. i=0. n+m. i=0. k=0 Polynomial Rings 1. Definitions and Basic Properties For convenience, the ring will always be a commutative ring with identity. Basic Properties The polynomial ring R[x] in the indeterminate x with coefficients

More information

Basic Algebra. Final Version, August, 2006 For Publication by Birkhäuser Boston Along with a Companion Volume Advanced Algebra In the Series

Basic Algebra. Final Version, August, 2006 For Publication by Birkhäuser Boston Along with a Companion Volume Advanced Algebra In the Series Basic Algebra Final Version, August, 2006 For Publication by Birkhäuser Boston Along with a Companion Volume Advanced Algebra In the Series Cornerstones Selected Pages from Chapter I: pp. 1 15 Anthony

More information

Jordan Normal Form. Chapter Minimal Polynomials

Jordan Normal Form. Chapter Minimal Polynomials Chapter 8 Jordan Normal Form 81 Minimal Polynomials Recall p A (x) =det(xi A) is called the characteristic polynomial of the matrix A Theorem 811 Let A M n Then there exists a unique monic polynomial q

More information

Definition 2.3. We define addition and multiplication of matrices as follows.

Definition 2.3. We define addition and multiplication of matrices as follows. 14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row

More information

GALOIS THEORY BRIAN OSSERMAN

GALOIS THEORY BRIAN OSSERMAN GALOIS THEORY BRIAN OSSERMAN Galois theory relates the theory of field extensions to the theory of groups. It provides a powerful tool for studying field extensions, and consequently, solutions to polynomial

More information

Algebra Exam Syllabus

Algebra Exam Syllabus Algebra Exam Syllabus The Algebra comprehensive exam covers four broad areas of algebra: (1) Groups; (2) Rings; (3) Modules; and (4) Linear Algebra. These topics are all covered in the first semester graduate

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

Formal power series rings, inverse limits, and I-adic completions of rings

Formal power series rings, inverse limits, and I-adic completions of rings Formal power series rings, inverse limits, and I-adic completions of rings Formal semigroup rings and formal power series rings We next want to explore the notion of a (formal) power series ring in finitely

More information

Solution to Homework 1

Solution to Homework 1 Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false

More information

3. Coding theory 3.1. Basic concepts

3. Coding theory 3.1. Basic concepts 3. CODING THEORY 1 3. Coding theory 3.1. Basic concepts In this chapter we will discuss briefly some aspects of error correcting codes. The main problem is that if information is sent via a noisy channel,

More information

CSL361 Problem set 4: Basic linear algebra

CSL361 Problem set 4: Basic linear algebra CSL361 Problem set 4: Basic linear algebra February 21, 2017 [Note:] If the numerical matrix computations turn out to be tedious, you may use the function rref in Matlab. 1 Row-reduced echelon matrices

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

Introduction to modules

Introduction to modules Chapter 3 Introduction to modules 3.1 Modules, submodules and homomorphisms The problem of classifying all rings is much too general to ever hope for an answer. But one of the most important tools available

More information

Formal Groups. Niki Myrto Mavraki

Formal Groups. Niki Myrto Mavraki Formal Groups Niki Myrto Mavraki Contents 1. Introduction 1 2. Some preliminaries 2 3. Formal Groups (1 dimensional) 2 4. Groups associated to formal groups 9 5. The Invariant Differential 11 6. The Formal

More information

Algebra Qualifying Exam Solutions. Thomas Goller

Algebra Qualifying Exam Solutions. Thomas Goller Algebra Qualifying Exam Solutions Thomas Goller September 4, 2 Contents Spring 2 2 2 Fall 2 8 3 Spring 2 3 4 Fall 29 7 5 Spring 29 2 6 Fall 28 25 Chapter Spring 2. The claim as stated is false. The identity

More information

1 Linear transformations; the basics

1 Linear transformations; the basics Linear Algebra Fall 2013 Linear Transformations 1 Linear transformations; the basics Definition 1 Let V, W be vector spaces over the same field F. A linear transformation (also known as linear map, or

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Finite Fields. Saravanan Vijayakumaran Department of Electrical Engineering Indian Institute of Technology Bombay

Finite Fields. Saravanan Vijayakumaran Department of Electrical Engineering Indian Institute of Technology Bombay 1 / 25 Finite Fields Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay September 25, 2014 2 / 25 Fields Definition A set F together

More information

Algebra Review 2. 1 Fields. A field is an extension of the concept of a group.

Algebra Review 2. 1 Fields. A field is an extension of the concept of a group. Algebra Review 2 1 Fields A field is an extension of the concept of a group. Definition 1. A field (F, +,, 0 F, 1 F ) is a set F together with two binary operations (+, ) on F such that the following conditions

More information