Linear Algebra II. Course notes, Spring 2007. Michael Stoll.
With some additions by Ronald van Luijk, 2016.
Contents
1. Review of Eigenvalues, Eigenvectors and Characteristic Polynomial
2. Direct Sums of Subspaces
3. The Cayley-Hamilton Theorem and the Minimal Polynomial
4. The Structure of Nilpotent Endomorphisms
5. The Jordan Normal Form Theorem
6. The Dual Vector Space
7. Norms on Real Vector Spaces
8. Bilinear Forms
9. Inner Product Spaces
10. Orthogonal Diagonalization
11. External Direct Sums
12. The Tensor Product
13. Symmetric and Alternating Products
References
Index of notation
1. Review of Eigenvalues, Eigenvectors and Characteristic Polynomial

Recall the topics we finished Linear Algebra I with. We were discussing eigenvalues and eigenvectors of endomorphisms and square matrices, and the question when they are diagonalizable. For your convenience, I will repeat here the most relevant definitions and results.

Let V be a finite-dimensional F-vector space, dim V = n, and let f : V → V be an endomorphism. Then for λ ∈ F, the λ-eigenspace of f was defined to be
    E_λ(f) = {v ∈ V : f(v) = λv} = ker(f − λ·id_V).
λ is an eigenvalue of f if E_λ(f) ≠ {0}, i.e., if there is 0 ≠ v ∈ V such that f(v) = λv. Such a vector v is called an eigenvector of f for the eigenvalue λ. The eigenvalues are exactly the roots (in F) of the characteristic polynomial of f,
    P_f(x) = det(x·id_V − f),
which is a monic polynomial of degree n with coefficients in F.

The geometric multiplicity of λ as an eigenvalue of f is defined to be the dimension of the λ-eigenspace, whereas the algebraic multiplicity of λ as an eigenvalue of f is defined to be its multiplicity as a root of the characteristic polynomial.

The endomorphism f is said to be diagonalizable if there exists a basis of V consisting of eigenvectors of f. The matrix representing f relative to this basis is then a diagonal matrix, with the various eigenvalues appearing on the diagonal.

Since n × n matrices can be identified with endomorphisms F^n → F^n, all notions and results make sense for square matrices, too. A matrix A ∈ Mat(n, F) is diagonalizable if and only if it is similar to a diagonal matrix, i.e., if there is an invertible matrix P ∈ Mat(n, F) such that P⁻¹AP is diagonal.

It is an important fact that the geometric multiplicity of an eigenvalue cannot exceed its algebraic multiplicity. An endomorphism or square matrix is diagonalizable if and only if the sum of the geometric multiplicities of all eigenvalues equals the dimension of the space.
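These facts from the review can be checked by machine on small examples. The sketch below (Python with sympy; the matrices A and B are my own illustrative choices, not from the notes) computes characteristic polynomials and compares geometric with algebraic multiplicities:

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[2, 0], [0, 3]])   # two distinct eigenvalues: diagonalizable
B = sp.Matrix([[2, 1], [0, 2]])   # eigenvalue 2 with geometric < algebraic multiplicity

# P_A(x) = det(x*I - A) is monic of degree n
assert A.charpoly(x).as_expr() == sp.expand((x - 2) * (x - 3))

# eigenvects() returns triples (eigenvalue, algebraic multiplicity, eigenspace basis);
# the geometric multiplicity is the length of the basis and never exceeds the algebraic one
for lam, alg_mult, basis in B.eigenvects():
    assert len(basis) <= alg_mult

assert A.is_diagonalizable()
assert not B.is_diagonalizable()   # sum of geometric multiplicities is 1 < 2
```

Here B is the standard example where the two multiplicities differ: its only eigenvalue 2 has algebraic multiplicity 2 but a one-dimensional eigenspace.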
This in turn is equivalent to the two conditions: (a) the characteristic polynomial is a product of linear factors, and (b) for each eigenvalue, algebraic and geometric multiplicities agree. For example, both conditions are satisfied if P_f is the product of n distinct monic linear factors.

2. Direct Sums of Subspaces

The proof of the Jordan Normal Form Theorem, which is one of our goals, uses the idea to split the vector space V into subspaces on which the endomorphism can be more easily described. In order to make this precise, we introduce the notion of direct sum of linear subspaces of V.

2.1. Definition. Suppose I is an index set and U_i ⊆ V (for i ∈ I) are linear subspaces of a vector space V satisfying

(1)    U_j ∩ ∑_{i ∈ I\{j}} U_i = {0}    for all j ∈ I.

Then we write ⊕_{i ∈ I} U_i for the subspace ∑_{i ∈ I} U_i of V, and we call this sum the direct sum of the subspaces U_i. Whenever we use this notation, the
hypothesis (1) is implied. If I = {1, 2, ..., n}, then we also write U_1 ⊕ U_2 ⊕ ... ⊕ U_n.

2.2. Lemma. Let V be a vector space, and U_i ⊆ V (for i ∈ I) linear subspaces. Then the following statements are equivalent.
(1) Every v ∈ V can be written uniquely as v = ∑_{i ∈ I} u_i with u_i ∈ U_i for all i ∈ I (and only finitely many u_i ≠ 0).
(2) ∑_{i ∈ I} U_i = V, and for all j ∈ I, we have U_j ∩ ∑_{i ∈ I\{j}} U_i = {0}.
(3) If we have any basis B_i of U_i for each i ∈ I, then these bases B_i are pairwise disjoint, and the union ∪_{i ∈ I} B_i forms a basis of V.
(4) There exists a basis B_i of U_i for each i ∈ I such that these bases B_i are pairwise disjoint, and the union ∪_{i ∈ I} B_i forms a basis of V.

By statement (2) of this lemma, if these conditions are satisfied, then V is the direct sum of the subspaces U_i, that is, we have V = ⊕_{i ∈ I} U_i.

Proof. (1) ⇒ (2): Since every v ∈ V can be written as a sum of elements of the U_i, we have V = ∑_{i ∈ I} U_i. Now assume that v ∈ U_j ∩ ∑_{i ≠ j} U_i. This gives two representations of v as v = u_j = ∑_{i ≠ j} u_i. Since there is only one way of writing v as a sum of u_i's, this is only possible when v = 0.

(2) ⇒ (3): Since the elements of any basis are nonzero, and B_i is contained in U_i for all i, it follows from U_j ∩ ∑_{i ∈ I\{j}} U_i = {0} that B_i ∩ B_j = ∅ for all i ≠ j. Let B = ∪_{i ∈ I} B_i. Since B_i generates U_i and ∑_i U_i = V, we find that B generates V. To show that B is linearly independent, consider a linear combination
    ∑_{i ∈ I} ∑_{b ∈ B_i} λ_{i,b} b = 0.
For any fixed j ∈ I, we can write this as
    U_j ∋ u_j = ∑_{b ∈ B_j} λ_{j,b} b = −∑_{i ≠ j} ∑_{b ∈ B_i} λ_{i,b} b ∈ ∑_{i ≠ j} U_i.
By (2), this implies that u_j = 0. Since B_j is a basis of U_j, this is only possible when λ_{j,b} = 0 for all b ∈ B_j. Since j ∈ I was arbitrary, this shows that all coefficients vanish.

(3) ⇒ (4): This follows by choosing any basis B_i for U_i (see Remark 2.3).

(4) ⇒ (1): Take a basis B_i for U_i for each i ∈ I.
Write v ∈ V as a linear combination of the basis elements in ∪_i B_i. Since B_i is a basis of U_i, we may write the part of the linear combination coming from B_i as u_i, which yields v = ∑_i u_i with u_i ∈ U_i. To see that the u_i are unique, we note that the u_i can be written as linear combinations of elements in B_i; the sum v = ∑_i u_i is then a linear combination of elements in ∪_i B_i, which has to be the same as the original linear combination, because ∪_i B_i is a basis for V. It follows that indeed all the u_i are uniquely determined.

2.3. Remark. The proof of the implication (3) ⇒ (4) implicitly assumes the existence of a basis B_i for each U_i. The existence of a basis B_i for U_i is clear when U_i is finite-dimensional, but for infinite-dimensional vector spaces this is more subtle. Using Zorn's Lemma, which is equivalent to the Axiom of Choice of Set Theory, one can prove that all vector spaces do indeed have a basis. See Appendix D of Linear Algebra I, 2015 edition (or later). We will use this more often.
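For concretely given subspaces of F^n, directness can be tested with ranks: we always have dim(∑_i U_i) ≤ ∑_i dim U_i, with equality exactly when the sum is direct (a union of bases spans the sum, and is a basis precisely in the direct case, as in Lemma 2.2). A small sketch (Python with sympy; the spanning vectors are made up for illustration):

```python
import sympy as sp

def is_direct_sum(*spans):
    # Each argument is a matrix whose columns span one subspace U_i of F^n.
    # The sum is direct iff the dimensions add up: rank of the concatenation
    # equals the sum of the individual ranks (Lemma 2.2 in rank form).
    stacked = sp.Matrix.hstack(*spans)
    return stacked.rank() == sum(M.rank() for M in spans)

U1 = sp.Matrix([1, 0, 0])          # span of e_1
U2 = sp.Matrix([0, 1, 0])          # span of e_2
U3 = sp.Matrix([0, 1, 1])

assert is_direct_sum(U1, U2, U3)   # R^3 = U_1 ⊕ U_2 ⊕ U_3
assert not is_direct_sum(U1, U2, sp.Matrix([1, 1, 0]))  # third vector lies in U_1 + U_2
```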
2.4. Remark. If U_1 and U_2 are linear subspaces of the vector space V, then the statement V = U_1 ⊕ U_2 is equivalent to U_1 and U_2 being complementary subspaces.

2.5. Lemma. Suppose V is a vector space with subspaces U and U′ such that V = U ⊕ U′. If U_1, ..., U_r are subspaces of U with U = U_1 ⊕ ... ⊕ U_r and U′_1, ..., U′_s are subspaces of U′ with U′ = U′_1 ⊕ ... ⊕ U′_s, then we have
    V = U_1 ⊕ ... ⊕ U_r ⊕ U′_1 ⊕ ... ⊕ U′_s.

Proof. This follows most easily from part (1) of Lemma 2.2.

The converse of this lemma is trivial in the sense that if we have V = U_1 ⊕ ... ⊕ U_r ⊕ U′_1 ⊕ ... ⊕ U′_s, then apparently the r + s subspaces U_1, ..., U_r, U′_1, ..., U′_s satisfy the hypothesis (1), which implies that also the r subspaces U_1, ..., U_r satisfy this hypothesis, as well as the subspaces U′_1, ..., U′_s; then also the two subspaces U = U_1 ⊕ ... ⊕ U_r and U′ = U′_1 ⊕ ... ⊕ U′_s together satisfy the hypothesis and we have V = U ⊕ U′. In other words, we may write
    (U_1 ⊕ ... ⊕ U_r) ⊕ (U′_1 ⊕ ... ⊕ U′_s) = U_1 ⊕ ... ⊕ U_r ⊕ U′_1 ⊕ ... ⊕ U′_s
in the sense that if all the implied conditions of the form (1) are satisfied for one side of the equality, then the same holds for the other side, and the (direct) sums are then equal. In particular, we have U_1 ⊕ (U_2 ⊕ ... ⊕ U_r) = U_1 ⊕ ... ⊕ U_r.

The following lemma states that if two subspaces intersect each other trivially, then one can be extended to a complementary space of the other. Its proof also suggests how we can do the extension explicitly.

2.6. Lemma. Let U and U′ be subspaces of a finite-dimensional vector space V satisfying U ∩ U′ = {0}. Then there exists a subspace W ⊆ V with U′ ⊆ W that is a complementary subspace of U in V.

Proof. Let (u_1, ..., u_r) be a basis for U and (v_1, ..., v_s) a basis for U′. Then by Lemma 2.2 we have a basis (u_1, ..., u_r, v_1, ..., v_s) for U + U′ = U ⊕ U′. By the Basis Extension Theorem of Linear Algebra I, we may extend this to a basis (u_1, ..., u_r, v_1, ..., v_s, w_1, ..., w_t) for V. We now let W be the subspace generated by v_1, ..., v_s, w_1, ..., w_t. Then (v_1, ..., v_s, w_1, ..., w_t) is a basis for W and clearly W contains U′.
By Lemma 2.2 we conclude that U and W are complementary spaces.

Next, we discuss the relation between endomorphisms of V and endomorphisms of the U_i.

2.7. Lemma and Definition. Let V be a vector space with linear subspaces U_i (i ∈ I) such that V = ⊕_{i ∈ I} U_i. For each i ∈ I, let f_i : U_i → U_i be an endomorphism. Then there is a unique endomorphism f : V → V such that f|_{U_i} = f_i for all i ∈ I. We call f the direct sum of the f_i and write f = ⊕_{i ∈ I} f_i.
Proof. Let v ∈ V. Then we have v = ∑_i u_i as above, therefore the only way to define f is by f(v) = ∑_i f_i(u_i). This proves uniqueness. Since the u_i in the representation of v above are unique, f is a well-defined map, and it is clear that f is linear, so f is an endomorphism of V.

2.8. Remark. If in the situation of Definition 2.7, V is finite-dimensional and we choose a basis B of V that is the concatenation of bases B_i of the U_i, then the matrix representing f relative to B will be a block diagonal matrix, where the diagonal blocks are the matrices representing the f_i relative to the bases B_i of the U_i. In this finite-dimensional case the number of indices i ∈ I for which U_i is nonzero is finite, and it follows that the characteristic polynomial P_f equals
    P_f = ∏_{i ∈ I} P_{f_i}.
In particular, we have det f = ∏_{i ∈ I} det f_i, and Tr f = ∑_{i ∈ I} Tr f_i for the determinant and the trace.

2.9. Remark. An endomorphism f : V → V is diagonalisable if and only if V is the direct sum of the eigenspaces of f.

2.10. Lemma. Let V be a vector space with linear subspaces U_i (i ∈ I) such that V = ⊕_{i ∈ I} U_i. Let f : V → V be an endomorphism. Then there are endomorphisms f_i : U_i → U_i for i ∈ I such that f = ⊕_{i ∈ I} f_i if and only if each U_i is invariant under f (or f-invariant), i.e., f(U_i) ⊆ U_i.

Proof. If f = ⊕_i f_i, then f_i = f|_{U_i}, hence f(U_i) = f|_{U_i}(U_i) = f_i(U_i) ⊆ U_i. Conversely, suppose that f(U_i) ⊆ U_i. Then we can define f_i : U_i → U_i to be the restriction of f to U_i; it is then clear that f_i is an endomorphism of U_i and that f equals ⊕_i f_i, as the two coincide on all the subspaces U_i, which together generate V.

We now come to a relation between splittings of f as a direct sum and the characteristic or minimal polynomial of f. We call two polynomials p_1(x) and p_2(x) coprime if there are polynomials a_1(x) and a_2(x) such that a_1(x)p_1(x) + a_2(x)p_2(x) = 1.

2.11. Lemma. Let V be a vector space and f : V → V an endomorphism.
Let p(x) = p_1(x)p_2(x) be a polynomial such that p(f) = 0 and such that p_1(x) and p_2(x) are coprime. Let U_i = ker(p_i(f)), for i = 1, 2. Then V = U_1 ⊕ U_2 and the U_i are f-invariant. In particular, f = f_1 ⊕ f_2, where f_i = f|_{U_i}. Moreover, we have U_1 = im(p_2(f)) and U_2 = im(p_1(f)).

Proof. Set K_1 = im(p_2(f)) and K_2 = im(p_1(f)). We first show that K_i ⊆ U_i for i = 1, 2. Let v ∈ K_1 = im(p_2(f)), so v = p_2(f)(u) for some u ∈ V. Then
    p_1(f)(v) = p_1(f)(p_2(f)(u)) = (p_1(f)p_2(f))(u) = p(f)(u) = 0,
so K_1 = im(p_2(f)) ⊆ ker(p_1(f)) = U_1. The statement for i = 2 follows by symmetry.

Now we show that U_1 ∩ U_2 = {0}. So let v ∈ U_1 ∩ U_2. Then p_1(f)(v) = p_2(f)(v) = 0. Using
    id_V = 1(f) = (a_1(x)p_1(x) + a_2(x)p_2(x))(f) = a_1(f)p_1(f) + a_2(f)p_2(f),
we see that
    v = a_1(f)(p_1(f)(v)) + a_2(f)(p_2(f)(v)) = a_1(f)(0) + a_2(f)(0) = 0.

Next, we show that K_1 + K_2 = V. Using the same relation above, and the fact that p_i(f) and a_i(f) commute, we find for v ∈ V arbitrary that
    v = p_1(f)(a_1(f)(v)) + p_2(f)(a_2(f)(v)) ∈ im(p_1(f)) + im(p_2(f)).

These statements together imply that K_i = U_i for i = 1, 2, and V = U_1 ⊕ U_2. Indeed, let v ∈ U_1. We can write v = v_1 + v_2 with v_i ∈ K_i. Then U_1 ∋ v − v_1 = v_2 ∈ U_2, but U_1 ∩ U_2 = {0}, so v = v_1 ∈ K_1.

Finally, we have to show that U_1 and U_2 are f-invariant. So let (e.g.) v ∈ U_1. Since f commutes with p_1(f), we have
    p_1(f)(f(v)) = (p_1(f) ∘ f)(v) = (f ∘ p_1(f))(v) = f(p_1(f)(v)) = f(0) = 0
(since v ∈ U_1 = ker(p_1(f))), hence f(v) ∈ U_1 as well.

2.12. Proposition. Let V be a vector space and f : V → V an endomorphism. Let p(x) = p_1(x)p_2(x)···p_k(x) be a polynomial such that p(f) = 0 and such that the factors p_i(x) are coprime in pairs. Let U_i = ker(p_i(f)). Then V = U_1 ⊕ ... ⊕ U_k and the U_i are f-invariant. In particular, f = f_1 ⊕ ... ⊕ f_k, where f_i = f|_{U_i}.

Proof. We proceed by induction on k. The case k = 1 is trivial. So let k ≥ 2, and denote q(x) = p_2(x)···p_k(x). Then I claim that p_1(x) and q(x) are coprime. To see this, note that by assumption, we can write, for i = 2, ..., k,
    a_i(x)p_1(x) + b_i(x)p_i(x) = 1.
Multiplying these equations, we obtain
    A(x)p_1(x) + b_2(x)···b_k(x)q(x) = 1;
note that all the terms that we get when expanding the product of the left hand sides, except b_2(x)···b_k(x)q(x), contain a factor p_1(x). We can then apply Lemma 2.11 to p(x) = p_1(x)q(x) and find that V = U_1 ⊕ U′ and f = f_1 ⊕ f′ with U_1 = ker(p_1(f)), f_1 = f|_{U_1}, and U′ = ker(q(f)), f′ = f|_{U′}. In particular, q(f′) = 0. By induction, we then know that U′ = U_2 ⊕ ... ⊕ U_k with U_j = ker(p_j(f′)) and f′ = f_2 ⊕ ... ⊕ f_k, where f_j = f′|_{U_j}, for j = 2, ..., k. Finally, ker(p_j(f′)) = ker(p_j(f)) (since the latter is contained in U′) and f_j = f′|_{U_j} = f|_{U_j}, so that we obtain the desired conclusion from Lemma 2.5.
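Lemma 2.11 can be watched in action on a small example (Python with sympy; the matrix is an illustrative choice of mine): for A with (A − I)(A − 2I) = 0 and the coprime factors p_1 = x − 1, p_2 = x − 2, the two kernels decompose F² and each kernel is the image of the other factor:

```python
import sympy as sp

A = sp.Matrix([[1, 1], [0, 2]])
I = sp.eye(2)

# p(A) = (A - I)(A - 2I) = 0, with coprime factors p_1 = x - 1, p_2 = x - 2
assert (A - I) * (A - 2 * I) == sp.zeros(2, 2)

K1 = (A - I).nullspace()        # basis of U_1 = ker p_1(A)
K2 = (A - 2 * I).nullspace()    # basis of U_2 = ker p_2(A)

# Lemma 2.11: F^2 = U_1 ⊕ U_2 (here these are the two eigenspaces)
assert sp.Matrix.hstack(*(K1 + K2)).rank() == 2

# and U_1 = im p_2(A): adjoining im(A - 2I) to ker(A - I) does not raise the rank
im_p2 = (A - 2 * I).columnspace()
assert sp.Matrix.hstack(*(im_p2 + K1)).rank() == len(K1)
```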
The following little lemma about polynomials is convenient if we want to apply Proposition 2.12.

2.13. Lemma. If p(x) is a polynomial (over F) and λ ∈ F such that p(λ) ≠ 0, then (x − λ)^m and p(x) are coprime for all m ≥ 1.

Proof. First, consider m = 1. Let
    q(x) = p(x)/p(λ) − 1;
this is a polynomial such that q(λ) = 0. Therefore, we can write q(x) = (x − λ)r(x) with some polynomial r(x). This gives us
    −r(x)(x − λ) + (1/p(λ))·p(x) = 1.
Now, taking the mth power on both sides, we obtain an equation
    (−r(x))^m (x − λ)^m + a(x)p(x) = 1
for some polynomial a(x).

3. The Cayley-Hamilton Theorem and the Minimal Polynomial

Let A ∈ Mat(n, F). We know that Mat(n, F) is an F-vector space of dimension n². Therefore, the elements I, A, A², ..., A^{n²} cannot be linearly independent (because their number exceeds the dimension). If we define p(A) in the obvious way for p a polynomial with coefficients in F (as we already did in the previous chapter), then we can deduce that there is a (non-zero) polynomial p of degree at most n² such that p(A) = 0 (0 here is the zero matrix).

In fact, much more is true. Consider a diagonal matrix D = diag(λ_1, λ_2, ..., λ_n). (This notation is supposed to mean that λ_j is the (j, j) entry of D; the off-diagonal entries are zero, of course.) Its characteristic polynomial is
    P_D(x) = (x − λ_1)(x − λ_2)···(x − λ_n).
Since the diagonal entries are roots of P_D, we also have P_D(D) = 0. More generally, consider a diagonalizable matrix A. Then there is an invertible matrix Q such that D = Q⁻¹AQ is diagonal. Since (Exercise!) p(Q⁻¹AQ) = Q⁻¹p(A)Q for p a polynomial, we find
    0 = P_D(D) = Q⁻¹P_D(A)Q = Q⁻¹P_A(A)Q  ⟹  P_A(A) = 0.
(Recall that P_A = P_D: similar matrices have the same characteristic polynomial.) The following theorem states that this is true for all square matrices (or endomorphisms of finite-dimensional vector spaces).

3.1. Theorem (Cayley-Hamilton). Let A ∈ Mat(n, F). Then P_A(A) = 0.

Proof. Here is a simple, but wrong proof. By definition, P_A(x) = det(xI − A), so, plugging in A for x, we have P_A(A) = det(AI − A) = det(A − A) = det(0) = 0. (Exercise: find the mistake!)

For the correct proof, we need to consider matrices whose entries are polynomials. Since polynomials satisfy the field axioms except for the existence of inverses, we can perform all operations that do not require divisions.
This includes addition, multiplication and determinants; in particular, we can use the adjugate matrix. Let B = xI − A; then det(B) = P_A(x). Let B̃ be the adjugate matrix; then we still have B̃B = det(B)·I. The entries of B̃ come from determinants of (n−1) × (n−1) submatrices of B, therefore they are polynomials of degree at most n−1. We can then write
    B̃ = x^{n−1}B_{n−1} + x^{n−2}B_{n−2} + ... + xB_1 + B_0,
and we have the equality (of matrices with polynomial entries)
    (x^{n−1}B_{n−1} + x^{n−2}B_{n−2} + ... + B_0)(xI − A) = P_A(x)·I = (x^n + b_{n−1}x^{n−1} + ... + b_0)·I,
where we have set P_A(x) = x^n + b_{n−1}x^{n−1} + ... + b_0. Expanding the left hand side and comparing coefficients of like powers of x, we find the relations
    B_{n−1} = I,  B_{n−2} − B_{n−1}A = b_{n−1}I,  ...,  B_0 − B_1A = b_1I,  −B_0A = b_0I.
We multiply these from the right by A^n, A^{n−1}, ..., A, I, respectively, and add:
    B_{n−1}A^n = A^n
    B_{n−2}A^{n−1} − B_{n−1}A^n = b_{n−1}A^{n−1}
    ...
    B_0A − B_1A² = b_1A
    −B_0A = b_0I
    ------------------------------------
    0 = P_A(A)

3.2. Remarks.
(1) The reason why we cannot simply plug in A for x in the identity B̃·(xI − A) = P_A(x)·I is that whereas x (as a scalar) commutes with the matrices occurring as coefficients of powers of x, it is not a priori clear that A does so, too. We will discuss this in more detail in the Introductory Algebra course, where polynomial rings will be studied in some detail.
(2) Another idea of proof (and maybe easier to grasp) is to say that a generic matrix is diagonalizable (if we assume F to be algebraically closed...), hence the statement holds for most matrices. Since it is just a bunch of polynomial relations between the matrix entries, it then must hold for all matrices. This can indeed be turned into a proof, but unfortunately, this requires rather advanced tools from algebra.
(3) Of course, the statement of the theorem remains true for endomorphisms. Let f : V → V be an endomorphism of the finite-dimensional F-vector space V; then P_f(f) = 0 (which is the zero endomorphism in this case). For evaluating the polynomial at f, we have to interpret f^n as the n-fold composition f ∘ f ∘ ... ∘ f, and f⁰ = id_V.

Our next goal is to define the minimal polynomial of a matrix or endomorphism, as the monic polynomial of smallest degree that has the matrix or endomorphism as a root. However, we need to know a few more facts about polynomials in order to see that this definition makes sense.

3.3. Lemma (Polynomial Division). Let f and g be polynomials, with g monic. Then there are unique polynomials q and r such that r = 0 or deg(r) < deg(g) and such that f = qg + r.

Proof.
We first prove existence, by induction on the degree of f. If deg(f) < deg(g), then we take q = 0 and r = f. So we now assume that m = deg(f) ≥ deg(g) = n, f = a_m x^m + ... + a_0. Let f′ = f − a_m x^{m−n} g; then (since g = x^n + ...) deg(f′) < deg(f). By the induction hypothesis, there are q′ and r such that deg(r) < deg(g) or r = 0 and such that f′ = q′g + r. Then f = (q′ + a_m x^{m−n})g + r. (This proof leads to the well-known algorithm for polynomial long division.)
As to uniqueness, suppose we have f = qg + r = q′g + r′, with r and r′ both of degree less than deg(g) or zero. Then
    (q − q′)g = r′ − r.
If q ≠ q′, then the degree of the left hand side is at least deg(g), but the degree of the right hand side is smaller, hence this is not possible. So q = q′, and therefore r = r′, too.

Taking g = x − λ, this provides a different proof for case k = 1 of Example 8.4 of Linear Algebra I, 2015 edition (or later).

3.4. Lemma and Definition. Let A ∈ Mat(n, F). There is a unique monic polynomial M_A of minimal degree such that M_A(A) = 0. If p is any polynomial satisfying p(A) = 0, then p is divisible by M_A (as a polynomial). This polynomial M_A is called the minimal (or minimum) polynomial of A. Similarly, we define the minimal polynomial M_f of an endomorphism f of a finite-dimensional vector space.

Proof. It is clear that monic polynomials p with p(A) = 0 exist (by the Cayley-Hamilton Theorem 3.1, we can take p = P_A). So there will be one such polynomial of minimal degree. Now assume p and p′ were two such monic polynomials of (the same) minimal degree with p(A) = p′(A) = 0. Then we would have (p − p′)(A) = p(A) − p′(A) = 0. If p ≠ p′, then we can divide p − p′ by its leading coefficient, leading to a monic polynomial q of smaller degree than p and p′ with q(A) = 0, contradicting the minimality of the degree.

Now let p be any polynomial such that p(A) = 0. By Lemma 3.3, there are polynomials q and r, with deg(r) < deg(M_A) or r = 0, such that p = qM_A + r. Plugging in A, we find that
    0 = p(A) = q(A)M_A(A) + r(A) = q(A)·0 + r(A) = r(A).
If r ≠ 0, then deg(r) < deg(M_A), but the degree of M_A is the minimal possible degree for a polynomial that vanishes on A, so we have a contradiction. Therefore r = 0 and hence p = qM_A.

3.5. Remark. In Introductory Algebra, you will learn that the set of polynomials as discussed in the lemma forms an ideal and that the polynomial ring is a principal ideal domain, which means that every ideal consists of the multiples of some fixed polynomial.
The proof is exactly the same as for the lemma.

By Lemma 3.4, the minimal polynomial divides the characteristic polynomial. As a simple example, consider the identity matrix I_n. Its characteristic polynomial is (x − 1)^n, whereas its minimal polynomial is x − 1. In some sense, this is typical, as the following result shows.

3.6. Proposition. Let A ∈ Mat(n, F) and λ ∈ F. If λ is a root of the characteristic polynomial of A, then it is also a root of the minimal polynomial of A. In other words, both polynomials have the same linear factors.

Proof. If P_A(λ) = 0, then λ is an eigenvalue of A, so there is 0 ≠ v ∈ F^n such that Av = λv. Setting M_A(x) = a_m x^m + ... + a_0, we find
    0 = M_A(A)v = ∑_{j=0}^{m} a_j A^j v = ∑_{j=0}^{m} a_j λ^j v = M_A(λ)v.
(Note that the terms in this chain of equalities are vectors.) Since v ≠ 0, this implies M_A(λ) = 0.

By Lemma 3.4, we know that each root of M_A is a root of P_A, and we have just shown the converse. So both polynomials have the same linear factors.

3.7. Remark. If F is algebraically closed (i.e., every non-constant polynomial is a product of linear factors), this shows that P_A is a multiple of M_A, and M_A^k is a multiple of P_A when k is large enough. In fact, the latter statement is true for general fields F (and can be interpreted as saying that both polynomials have the same irreducible factors). For the proof, one replaces F by a larger field F′ such that both polynomials split into linear factors over F′. That this can always be done is shown in Introductory Algebra.

One nice property of the minimal polynomial is that it provides another criterion for diagonalizability.

3.8. Proposition. Let A ∈ Mat(n, F). Then A is diagonalizable if and only if its minimal polynomial M_A is a product of distinct monic linear factors.

Proof. First assume that A is diagonalizable. It is easy to see that similar matrices have the same minimal polynomial (Exercise), so we can as well assume that A is already diagonal. But for a diagonal matrix, the minimal polynomial is just the product of factors x − λ, where λ runs through the distinct diagonal entries. (It is the monic polynomial of smallest degree that has all diagonal entries as roots.)

Conversely, assume that M_A(x) = (x − λ_1)···(x − λ_m) with λ_1, ..., λ_m ∈ F distinct. The polynomials q_i = x − λ_i (with 1 ≤ i ≤ m) are pairwise coprime, so by Proposition 2.12 the eigenspaces U_i = E_{λ_i}(A) = ker(A − λ_i I) = ker(q_i(A)) satisfy
    F^n = U_1 ⊕ ... ⊕ U_m.
This implies n = ∑_{i=1}^{m} dim E_{λ_i}(A), which in turn (by a corollary in Linear Algebra I, 2015 edition) implies that A is diagonalizable.

3.9. Example. Consider the matrix A ∈ Mat(3, F). Is it diagonalizable? Its characteristic polynomial is clearly P_A(x) = (x − 1)³, so its minimal polynomial must be (x − 1)^m for some m ≤ 3.
Since A − I ≠ 0, we have m > 1 (in fact, m = 3), hence A is not diagonalizable. On the other hand, the matrix B ∈ Mat(3, R) (for F = R, say) has M_B(x) = P_B(x) = (x − 1)(x − 4)(x − 6); therefore B is diagonalizable. Exercise: what happens for fields F of small characteristic?
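The criterion of Prop. 3.8 can be checked by machine. The sketch below (Python with sympy; the helper min_poly and the sample matrices are my own, not taken from the notes) finds the minimal polynomial by searching the monic divisors of the characteristic polynomial that keep every irreducible factor (as Prop. 3.6 requires), then compares with sympy's diagonalizability test:

```python
from itertools import product
import sympy as sp

x = sp.symbols('x')

def eval_at_matrix(p, A):
    # Horner evaluation of the polynomial p at the square matrix A.
    result = sp.zeros(*A.shape)
    for c in sp.Poly(p, x).all_coeffs():
        result = result * A + c * sp.eye(A.shape[0])
    return result

def min_poly(A):
    # M_A divides P_A and has the same irreducible factors (Prop. 3.6),
    # so try every admissible multiplicity vector and keep the lowest degree.
    _, factors = sp.factor_list(A.charpoly(x).as_expr())
    best = None
    for mults in product(*[range(1, m + 1) for _, m in factors]):
        cand = sp.expand(sp.prod([f**k for (f, _), k in zip(factors, mults)]))
        if eval_at_matrix(cand, A) == sp.zeros(*A.shape):
            if best is None or sp.degree(cand, x) < sp.degree(best, x):
                best = cand
    return best

# A Jordan-block-like matrix: M_A = (x - 1)^3 is not squarefree, so not diagonalizable.
A = sp.Matrix([[1, 1, 0], [0, 1, 1], [0, 0, 1]])
assert min_poly(A) == sp.expand((x - 1)**3)
assert not A.is_diagonalizable()

# Distinct eigenvalues 1, 4, 6: M_B = P_B is a product of distinct linear factors.
B = sp.diag(1, 4, 6)
assert min_poly(B) == sp.expand((x - 1) * (x - 4) * (x - 6))
assert B.is_diagonalizable()
```

The brute-force search is fine for small matrices; it is a sketch of the divisibility argument, not an efficient algorithm.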
4. The Structure of Nilpotent Endomorphisms

4.1. Definition. A matrix A ∈ Mat(n, F) is said to be nilpotent if A^m = 0 for some m ≥ 1. Similarly, if V is a finite-dimensional vector space and f : V → V is an endomorphism, then f is said to be nilpotent if f^m = f ∘ f ∘ ... ∘ f (m times) = 0 for some m ≥ 1.

It follows that the minimal polynomial of A or f is of the form x^m, where m is the smallest number that has the property required in the definition.

4.2. Proposition. A nilpotent matrix or endomorphism is diagonalizable if and only if it is zero.

Proof. The minimal polynomial is x^m. Prop. 3.8 then implies that the matrix or endomorphism is diagonalizable if and only if m = 1. But then the minimal polynomial is x, which means that the matrix or endomorphism is zero.

The following result tells us more about the structure of nilpotent endomorphisms. It is the main ingredient in proving the Jordan Normal Form Theorem.

4.3. Theorem. Let V be an F-vector space, dim V = n, and let f : V → V be a nilpotent endomorphism. Then V has a basis (v_1, v_2, ..., v_n) such that f(v_j) is either zero or v_{j+1}.

Proof. Let m be an integer such that f^m = 0. Note that we have a chain of inclusions
    {0} = ker f⁰ ⊆ ker f¹ ⊆ ker f² ⊆ ... ⊆ ker f^{m−1} ⊆ ker f^m = V.
We prove by descending induction that for all j ∈ {0, 1, ..., m} there are elements w_1, ..., w_s ∈ V and non-negative integers e_1, ..., e_s, such that the sequence

(2)    w_1, f(w_1), ..., f^{e_1}(w_1), w_2, f(w_2), ..., f^{e_2}(w_2), ..., w_s, f(w_s), ..., f^{e_s}(w_s)

is a basis of a complementary space U_j of ker f^j inside V and the elements f^{e_1+1}(w_1), ..., f^{e_s+1}(w_s) are contained in ker f^j.

For j = m we have ker f^m = V, so we may take s = 0 and U_m = {0}. Suppose 0 ≤ j < m and suppose that we have elements w_1, ..., w_s ∈ V and integers e_1, ..., e_s, such that the sequence analogous to (2) is a basis for a complementary subspace U_{j+1} of ker f^{j+1} inside V, and the elements f^{e_1+1}(w_1), ..., f^{e_s+1}(w_s) are contained in ker f^{j+1}.
We claim that if any scalars λ_1, ..., λ_s satisfy ∑_{i=1}^{s} λ_i f^{e_i+1}(w_i) ∈ ker f^j, then we have λ_1 = ... = λ_s = 0. Indeed, set z = ∑_{i=1}^{s} λ_i f^{e_i}(w_i). Then the assumption states that f(z) ∈ ker f^j, so we have z ∈ ker f^{j+1}. But z is also a linear combination of elements in a basis for U_{j+1}, so we have z ∈ U_{j+1} ∩ ker f^{j+1} = {0} and thus z = 0, which in turn implies λ_1 = ... = λ_s = 0.

The claim implies in particular that the elements f^{e_i+1}(w_i) for i = 1, ..., s are linearly independent, so they form a basis for the subspace W_j ⊆ ker f^{j+1} that they generate. The claim also implies W_j ∩ ker f^j = {0}, so W_j can be extended to a complementary space X_j of ker f^j inside ker f^{j+1}, and the basis for W_j can be extended to a basis (f^{e_1+1}(w_1), ..., f^{e_s+1}(w_s), w_{s+1}, ..., w_t) for X_j. Set U_j = U_{j+1} ⊕ X_j. Then we have
    V = U_{j+1} ⊕ ker f^{j+1} = U_{j+1} ⊕ (X_j ⊕ ker f^j) = (U_{j+1} ⊕ X_j) ⊕ ker f^j = U_j ⊕ ker f^j,
so U_j is a complementary space of ker f^j in V. The union of the two bases for U_{j+1} and X_j gives a basis for U_j, which we can rearrange to
    w_1, f(w_1), ..., f^{e_1+1}(w_1), w_2, f(w_2), ..., f^{e_2+1}(w_2), ..., w_s, f(w_s), ..., f^{e_s+1}(w_s), w_{s+1}, ..., w_t.
This finishes the induction step, as it is of the same form as (2), with t substituted for s, with e_i + 1 substituted for e_i (for 1 ≤ i ≤ s), and with 0 substituted for e_i (for s < i ≤ t).

The statement of the Theorem now follows from the case j = 0, as the only complementary subspace of ker f⁰ = {0} is V, and we may denote the elements of (2) for j = 0 by v_1, v_2, ..., v_n.

4.4. Remark. The matrix A = (a_{ij}) representing f with respect to v_n, ..., v_2, v_1, where (v_1, ..., v_n) is a basis as in Thm. 4.3 above, has all entries zero except that a_{j,j+1} = 1 if f(v_{n−j}) = v_{n+1−j}. Therefore A is a block diagonal matrix with diagonal blocks B_1, ..., B_k, where each block B_j is a square matrix whose only nonzero entries are 1s directly above the diagonal. Note that we reversed the order of the basis elements!

4.5. Corollary. A nilpotent matrix is similar to a matrix of the form just described.

Proof. This is clear from our discussion.

4.6. Corollary. A matrix A ∈ Mat(n, F) is nilpotent if and only if P_A(x) = x^n.

Proof. If P_A(x) = x^n, then A^n = 0 by the Cayley-Hamilton Theorem 3.1, hence A is nilpotent. Conversely, if A is nilpotent, then it is similar to a matrix of the form above, which visibly has characteristic polynomial x^n.

4.7. Remark. The statement of Cor. 4.6 would also follow from the fact that P_A(x) divides some power of M_A(x) = x^m, see Remark 3.7. However, we have proved this only in the case that P_A(x) splits into linear factors (which we know is true, but only after the fact).
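Both directions of Cor. 4.6 are easy to check on an example (Python with sympy; the matrix is an illustrative choice of mine): a strictly upper triangular matrix has characteristic polynomial x^n, and Cayley-Hamilton then forces A^n = 0.

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[0, 2, 5],
               [0, 0, 7],
               [0, 0, 0]])   # strictly upper triangular

assert A.charpoly(x).as_expr() == x**3   # P_A(x) = x^n
assert A**3 == sp.zeros(3, 3)            # so A is nilpotent, by Cayley-Hamilton
assert A**2 != sp.zeros(3, 3)            # here the minimal polynomial is x^3
```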
4.8. Remark. The proof of Theorem 4.3 gives an efficient algorithm to construct such a special basis.
(1) Compute an m with f^m = 0.
(2) Start with j = m and U_m = {0} and s = 0 and the empty basis for U_m.
(3) Decrease j by 1. We now have a basis
    w_1, f(w_1), ..., f^{e_1}(w_1), w_2, f(w_2), ..., f^{e_2}(w_2), ..., w_s, f(w_s), ..., f^{e_s}(w_s)
for a complement U_{j+1} of ker(f^{j+1}) in V such that f^{e_i+1}(w_i) ∈ ker(f^{j+1}) for 1 ≤ i ≤ s.
(4) Extend f^{e_1+1}(w_1), ..., f^{e_s+1}(w_s) to a basis
    f^{e_1+1}(w_1), ..., f^{e_s+1}(w_s), w_{s+1}, ..., w_t
of a complementary space of ker f^j in ker f^{j+1} (see Lemma 2.6).
(5a) For 1 ≤ i ≤ s, increase e_i by 1.
(5b) For s < i ≤ t, set e_i = 0.
(5c) Set s equal to t.
(6) If j = 0, then we are done. If j > 0, go back to (3).
The proof of Theorem 4.3 shows that step (4) is possible, that the condition after step (3) always holds, and that in the end we end up with a basis of the desired form.

4.9. Example. Consider the matrix A ∈ Mat(3, R). We find A³ = 0, so A is nilpotent. Let us find a basis as given in Thm. 4.3. The first step in the process comes down to finding a complementary subspace of ker(A²) = L((2, 0, 1)ᵀ, (−1, 1, 0)ᵀ). We can take (1, 0, 0)ᵀ, for example, as the basis of a complement. This will be w_1 in the notation of the proof above. We then have Aw_1 = (3, 1, 2)ᵀ and A²w_1 = (−1, 1, 1)ᵀ, and these three already form a basis. Reversing the order, we get the basis (A²w_1, Aw_1, w_1), with respect to which the matrix of A has the block form of Remark 4.4.

5. The Jordan Normal Form Theorem

In this section, we will formulate and prove the Jordan Normal Form Theorem, which will tell us that any matrix whose characteristic polynomial is a product of linear factors is similar to a matrix of a very special near-diagonal form.
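As a stand-in for the computation in Example 4.9 (Python with sympy; the matrices J and P below are my own illustrative choices, not the example's entries): build a nilpotent A by conjugating the shift block J, pick a w outside ker(A²) as in step (4) of Remark 4.8, and verify that (A²w, Aw, w) is a basis in which A takes the block form of Remark 4.4.

```python
import sympy as sp

# Shift block: 1s directly above the diagonal, as in Remark 4.4.
J = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]])
P = sp.Matrix([[1, 2, 0],
               [0, 1, 1],
               [1, 0, 1]])     # any invertible matrix works here
A = P * J * P.inv()            # nilpotent, with A^3 = 0 and A^2 != 0
assert A**3 == sp.zeros(3, 3) and A**2 != sp.zeros(3, 3)

# Any w outside ker(A^2) starts a chain w, Aw, A^2 w of length 3.
w = next(v for v in (sp.eye(3).col(i) for i in range(3))
         if (A**2) * v != sp.zeros(3, 1))

B = sp.Matrix.hstack(A**2 * w, A * w, w)   # chain in reversed order, as in Example 4.9
assert B.rank() == 3                       # the chain is a basis of R^3
assert B.inv() * A * B == J                # matrix of A w.r.t. this basis is the block
```

The last assertion is exactly the content of Remark 4.4: relative to the reversed chain basis, the nilpotent map is the shift block.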
More informationA linear algebra proof of the fundamental theorem of algebra
A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional
More informationand let s calculate the image of some vectors under the transformation T.
Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =
More information9.1 Eigenvectors and Eigenvalues of a Linear Map
Chapter 9 Eigenvectors and Eigenvalues 9.1 Eigenvectors and Eigenvalues of a Linear Map Given a finite-dimensional vector space E, letf : E! E be any linear map. If, by luck, there is a basis (e 1,...,e
More informationTHE MINIMAL POLYNOMIAL AND SOME APPLICATIONS
THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS KEITH CONRAD. Introduction The easiest matrices to compute with are the diagonal ones. The sum and product of diagonal matrices can be computed componentwise
More informationJORDAN NORMAL FORM. Contents Introduction 1 Jordan Normal Form 1 Conclusion 5 References 5
JORDAN NORMAL FORM KATAYUN KAMDIN Abstract. This paper outlines a proof of the Jordan Normal Form Theorem. First we show that a complex, finite dimensional vector space can be decomposed into a direct
More informationNONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction
NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques
More informationGENERALIZED EIGENVECTORS, MINIMAL POLYNOMIALS AND THEOREM OF CAYLEY-HAMILTION
GENERALIZED EIGENVECTORS, MINIMAL POLYNOMIALS AND THEOREM OF CAYLEY-HAMILTION FRANZ LUEF Abstract. Our exposition is inspired by S. Axler s approach to linear algebra and follows largely his exposition
More informationMATHEMATICS 217 NOTES
MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable
More informationA connection between number theory and linear algebra
A connection between number theory and linear algebra Mark Steinberger Contents 1. Some basics 1 2. Rational canonical form 2 3. Prime factorization in F[x] 4 4. Units and order 5 5. Finite fields 7 6.
More informationLinear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ.
Linear Algebra 1 M.T.Nair Department of Mathematics, IIT Madras 1 Eigenvalues and Eigenvectors 1.1 Definition and Examples Definition 1.1. Let V be a vector space (over a field F) and T : V V be a linear
More informationFirst we introduce the sets that are going to serve as the generalizations of the scalars.
Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................
More informationREU 2007 Apprentice Class Lecture 8
REU 2007 Apprentice Class Lecture 8 Instructor: László Babai Scribe: Ian Shipman July 5, 2007 Revised by instructor Last updated July 5, 5:15 pm A81 The Cayley-Hamilton Theorem Recall that for a square
More informationTheorem 5.3. Let E/F, E = F (u), be a simple field extension. Then u is algebraic if and only if E/F is finite. In this case, [E : F ] = deg f u.
5. Fields 5.1. Field extensions. Let F E be a subfield of the field E. We also describe this situation by saying that E is an extension field of F, and we write E/F to express this fact. If E/F is a field
More informationDIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix
DIAGONALIZATION Definition We say that a matrix A of size n n is diagonalizable if there is a basis of R n consisting of eigenvectors of A ie if there are n linearly independent vectors v v n such that
More informationA Field Extension as a Vector Space
Chapter 8 A Field Extension as a Vector Space In this chapter, we take a closer look at a finite extension from the point of view that is a vector space over. It is clear, for instance, that any is a linear
More informationINTRODUCTION TO LIE ALGEBRAS. LECTURE 10.
INTRODUCTION TO LIE ALGEBRAS. LECTURE 10. 10. Jordan decomposition: theme with variations 10.1. Recall that f End(V ) is semisimple if f is diagonalizable (over the algebraic closure of the base field).
More informationRemark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial
More information0.1 Rational Canonical Forms
We have already seen that it is useful and simpler to study linear systems using matrices. But matrices are themselves cumbersome, as they are stuffed with many entries, and it turns out that it s best
More informationMATH JORDAN FORM
MATH 53 JORDAN FORM Let A,, A k be square matrices of size n,, n k, respectively with entries in a field F We define the matrix A A k of size n = n + + n k as the block matrix A 0 0 0 0 A 0 0 0 0 A k It
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More informationLINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS
LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts (in
More informationAlgebra Qualifying Exam August 2001 Do all 5 problems. 1. Let G be afinite group of order 504 = 23 32 7. a. Show that G cannot be isomorphic to a subgroup of the alternating group Alt 7. (5 points) b.
More informationQ N id β. 2. Let I and J be ideals in a commutative ring A. Give a simple description of
Additional Problems 1. Let A be a commutative ring and let 0 M α N β P 0 be a short exact sequence of A-modules. Let Q be an A-module. i) Show that the naturally induced sequence is exact, but that 0 Hom(P,
More informationChapter 2: Linear Independence and Bases
MATH20300: Linear Algebra 2 (2016 Chapter 2: Linear Independence and Bases 1 Linear Combinations and Spans Example 11 Consider the vector v (1, 1 R 2 What is the smallest subspace of (the real vector space
More information1.4 Solvable Lie algebras
1.4. SOLVABLE LIE ALGEBRAS 17 1.4 Solvable Lie algebras 1.4.1 Derived series and solvable Lie algebras The derived series of a Lie algebra L is given by: L (0) = L, L (1) = [L, L],, L (2) = [L (1), L (1)
More informationa 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12
24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2
More informationAlgebra Exam Syllabus
Algebra Exam Syllabus The Algebra comprehensive exam covers four broad areas of algebra: (1) Groups; (2) Rings; (3) Modules; and (4) Linear Algebra. These topics are all covered in the first semester graduate
More information2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.
Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.
More informationContents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2
Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition
More information1. General Vector Spaces
1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule
More informationBASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x
BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,
More informationMAT 445/ INTRODUCTION TO REPRESENTATION THEORY
MAT 445/1196 - INTRODUCTION TO REPRESENTATION THEORY CHAPTER 1 Representation Theory of Groups - Algebraic Foundations 1.1 Basic definitions, Schur s Lemma 1.2 Tensor products 1.3 Unitary representations
More informationSchur s Triangularization Theorem. Math 422
Schur s Triangularization Theorem Math 4 The characteristic polynomial p (t) of a square complex matrix A splits as a product of linear factors of the form (t λ) m Of course, finding these factors is a
More informationThe converse is clear, since
14. The minimal polynomial For an example of a matrix which cannot be diagonalised, consider the matrix ( ) 0 1 A =. 0 0 The characteristic polynomial is λ 2 = 0 so that the only eigenvalue is λ = 0. The
More informationSpectral Theorem for Self-adjoint Linear Operators
Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;
More informationLinear Algebra II Lecture 13
Linear Algebra II Lecture 13 Xi Chen 1 1 University of Alberta November 14, 2014 Outline 1 2 If v is an eigenvector of T : V V corresponding to λ, then v is an eigenvector of T m corresponding to λ m since
More informationA PRIMER ON SESQUILINEAR FORMS
A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form
More informationFinal Exam Practice Problems Answers Math 24 Winter 2012
Final Exam Practice Problems Answers Math 4 Winter 0 () The Jordan product of two n n matrices is defined as A B = (AB + BA), where the products inside the parentheses are standard matrix product. Is the
More informationMath 113 Homework 5. Bowei Liu, Chao Li. Fall 2013
Math 113 Homework 5 Bowei Liu, Chao Li Fall 2013 This homework is due Thursday November 7th at the start of class. Remember to write clearly, and justify your solutions. Please make sure to put your name
More informationMTH 5102 Linear Algebra Practice Final Exam April 26, 2016
Name (Last name, First name): MTH 5 Linear Algebra Practice Final Exam April 6, 6 Exam Instructions: You have hours to complete the exam. There are a total of 9 problems. You must show your work and write
More information4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial
Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 27, v ) Contents 4 Eigenvalues, Diagonalization, and the Jordan Canonical Form 4 Eigenvalues, Eigenvectors, and
More informationMath 413/513 Chapter 6 (from Friedberg, Insel, & Spence)
Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) David Glickenstein December 7, 2015 1 Inner product spaces In this chapter, we will only consider the elds R and C. De nition 1 Let V be a vector
More information(a + b)c = ac + bc and a(b + c) = ab + ac.
2. R I N G S A N D P O LY N O M I A L S The study of vector spaces and linear maps between them naturally leads us to the study of rings, in particular the ring of polynomials F[x] and the ring of (n n)-matrices
More informationMAS4107 Linear Algebra 2
General Prerequisites MAS4107 Linear Algebra 2 Peter Sin University of Florida email: sin@math.ufl.edu Familiarity with the notion of mathematical proof and some experience in reading and writing proofs.
More informationProblems in Linear Algebra and Representation Theory
Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific
More information(f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S
1 Vector spaces 1.1 Definition (Vector space) Let V be a set with a binary operation +, F a field, and (c, v) cv be a mapping from F V into V. Then V is called a vector space over F (or a linear space
More information(VI.D) Generalized Eigenspaces
(VI.D) Generalized Eigenspaces Let T : C n C n be a f ixed linear transformation. For this section and the next, all vector spaces are assumed to be over C ; in particular, we will often write V for C
More informationLinear Algebra. Min Yan
Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................
More informationCalculating determinants for larger matrices
Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det
More informationEigenvectors. Prop-Defn
Eigenvectors Aim lecture: The simplest T -invariant subspaces are 1-dim & these give rise to the theory of eigenvectors. To compute these we introduce the similarity invariant, the characteristic polynomial.
More informationMath 3191 Applied Linear Algebra
Math 9 Applied Linear Algebra Lecture 9: Diagonalization Stephen Billups University of Colorado at Denver Math 9Applied Linear Algebra p./9 Section. Diagonalization The goal here is to develop a useful
More information12. Perturbed Matrices
MAT334 : Applied Linear Algebra Mike Newman, winter 208 2. Perturbed Matrices motivation We want to solve a system Ax = b in a context where A and b are not known exactly. There might be experimental errors,
More informationMath 113 Winter 2013 Prof. Church Midterm Solutions
Math 113 Winter 2013 Prof. Church Midterm Solutions Name: Student ID: Signature: Question 1 (20 points). Let V be a finite-dimensional vector space, and let T L(V, W ). Assume that v 1,..., v n is a basis
More informationMath 554 Qualifying Exam. You may use any theorems from the textbook. Any other claims must be proved in details.
Math 554 Qualifying Exam January, 2019 You may use any theorems from the textbook. Any other claims must be proved in details. 1. Let F be a field and m and n be positive integers. Prove the following.
More informationThe Jordan Canonical Form
The Jordan Canonical Form The Jordan canonical form describes the structure of an arbitrary linear transformation on a finite-dimensional vector space over an algebraically closed field. Here we develop
More informationA = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is,
65 Diagonalizable Matrices It is useful to introduce few more concepts, that are common in the literature Definition 65 The characteristic polynomial of an n n matrix A is the function p(λ) det(a λi) Example
More informationMATH SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER Given vector spaces V and W, V W is the vector space given by
MATH 110 - SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER 2009 GSI: SANTIAGO CAÑEZ 1. Given vector spaces V and W, V W is the vector space given by V W = {(v, w) v V and w W }, with addition and scalar
More informationEigenvalues and Eigenvectors
November 3, 2016 1 Definition () The (complex) number λ is called an eigenvalue of the n n matrix A provided there exists a nonzero (complex) vector v such that Av = λv, in which case the vector v is called
More informationRemark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called
More informationGiven a finite-dimensional vector space V over a field K, recall that a linear
Jordan normal form Sebastian Ørsted December 16, 217 Abstract In these notes, we expand upon the coverage of linear algebra as presented in Thomsen (216). Namely, we introduce some concepts and results
More information2 Eigenvectors and Eigenvalues in abstract spaces.
MA322 Sathaye Notes on Eigenvalues Spring 27 Introduction In these notes, we start with the definition of eigenvectors in abstract vector spaces and follow with the more common definition of eigenvectors
More informationReal representations
Real representations 1 Definition of a real representation Definition 1.1. Let V R be a finite dimensional real vector space. A real representation of a group G is a homomorphism ρ VR : G Aut V R, where
More informationMath 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.
Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses
More informationarxiv: v1 [math.rt] 7 Oct 2014
A direct approach to the rational normal form arxiv:1410.1683v1 [math.rt] 7 Oct 2014 Klaus Bongartz 8. Oktober 2014 In courses on linear algebra the rational normal form of a matrix is usually derived
More informationLinear and Bilinear Algebra (2WF04) Jan Draisma
Linear and Bilinear Algebra (2WF04) Jan Draisma CHAPTER 3 The minimal polynomial and nilpotent maps 3.1. Minimal polynomial Throughout this chapter, V is a finite-dimensional vector space of dimension
More informationUNDERSTANDING THE DIAGONALIZATION PROBLEM. Roy Skjelnes. 1.- Linear Maps 1.1. Linear maps. A map T : R n R m is a linear map if
UNDERSTANDING THE DIAGONALIZATION PROBLEM Roy Skjelnes Abstract These notes are additional material to the course B107, given fall 200 The style may appear a bit coarse and consequently the student is
More information(Can) Canonical Forms Math 683L (Summer 2003) M n (F) C((x λ) ) =
(Can) Canonical Forms Math 683L (Summer 2003) Following the brief interlude to study diagonalisable transformations and matrices, we must now get back to the serious business of the general case. In this
More informationTherefore, A and B have the same characteristic polynomial and hence, the same eigenvalues.
Similar Matrices and Diagonalization Page 1 Theorem If A and B are n n matrices, which are similar, then they have the same characteristic equation and hence the same eigenvalues. Proof Let A and B be
More information1 Last time: least-squares problems
MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that
More information5 Quiver Representations
5 Quiver Representations 5. Problems Problem 5.. Field embeddings. Recall that k(y,..., y m ) denotes the field of rational functions of y,..., y m over a field k. Let f : k[x,..., x n ] k(y,..., y m )
More informationJORDAN NORMAL FORM NOTES
18.700 JORDAN NORMAL FORM NOTES These are some supplementary notes on how to find the Jordan normal form of a small matrix. First we recall some of the facts from lecture, next we give the general algorithm
More informationSolutions to Final Exam
Solutions to Final Exam. Let A be a 3 5 matrix. Let b be a nonzero 5-vector. Assume that the nullity of A is. (a) What is the rank of A? 3 (b) Are the rows of A linearly independent? (c) Are the columns
More informationMath 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.
Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,
More informationAlgebra Homework, Edition 2 9 September 2010
Algebra Homework, Edition 2 9 September 2010 Problem 6. (1) Let I and J be ideals of a commutative ring R with I + J = R. Prove that IJ = I J. (2) Let I, J, and K be ideals of a principal ideal domain.
More informationChapter Two Elements of Linear Algebra
Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to
More informationLECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT FALL 2006 PRINCETON UNIVERSITY
LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT 204 - FALL 2006 PRINCETON UNIVERSITY ALFONSO SORRENTINO 1 Adjoint of a linear operator Note: In these notes, V will denote a n-dimensional euclidean vector
More information12x + 18y = 30? ax + by = m
Math 2201, Further Linear Algebra: a practical summary. February, 2009 There are just a few themes that were covered in the course. I. Algebra of integers and polynomials. II. Structure theory of one endomorphism.
More informationNotes on nilpotent orbits Computational Theory of Real Reductive Groups Workshop. Eric Sommers
Notes on nilpotent orbits Computational Theory of Real Reductive Groups Workshop Eric Sommers 17 July 2009 2 Contents 1 Background 5 1.1 Linear algebra......................................... 5 1.1.1
More informationNOTES II FOR 130A JACOB STERBENZ
NOTES II FOR 130A JACOB STERBENZ Abstract. Here are some notes on the Jordan canonical form as it was covered in class. Contents 1. Polynomials 1 2. The Minimal Polynomial and the Primary Decomposition
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationft-uiowa-math2550 Assignment NOTRequiredJustHWformatOfQuizReviewForExam3part2 due 12/31/2014 at 07:10pm CST
me me ft-uiowa-math2550 Assignment NOTRequiredJustHWformatOfQuizReviewForExam3part2 due 12/31/2014 at 07:10pm CST 1. (1 pt) local/library/ui/eigentf.pg A is n n an matrices.. There are an infinite number
More informationNumerical Linear Algebra Homework Assignment - Week 2
Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.
More informationLinear Algebra. Workbook
Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx
More information