Topics in linear algebra

Size: px
Start display at page:

Transcription

1 Chapter 6 Topics in linear algebra 6.1 Change of basis I want to remind you of one of the basic ideas in linear algebra: change of basis. Let F be a field, V and W be finite dimensional vector spaces over F and f : V W be a linear transformation. Fix bases v 1,..., v n for V and w 1,..., w m for W. Then, the linear transformation f is uniquely determined by knowledge of the vectors f(v 1 ),..., f(v n ). So if we write m f(v j ) = A i,j w i for an m n matrix A = (A i,j ), the original linear transformation f is determined uniquely by the matrix A (providing of course you know what the bases v i and w j are to start with!). We call this matrix A the matrix of f with respect to the bases (v j ), (w i ). The above formula is the Golden rule of linear algebra: The jth column of the matrix A of f in the given bases is f(v j ) expanded in terms of the w i s. Remark. Hungerford uses a different definition of the matrix of a linear transformation!!! His definition (made after Theorem 1.2 in section VII.1) is the transpose of our definition. In other words, Hungerford thinks of vectors as row vectors and writes matrices for linear maps on the right. I ALWAYS think of vectors as column vectors and write matrices (like all maps always) on the left. Now we need to consider what happens to this matrix A of f in one set of bases if we switch to another set of bases. So first, let v 1,..., v n be another basis for V. Then, we obtain the change of basis matrix P = (P i,j ) by writing the v j in terms of the v i: v j = P i,j v i. On the other hand, we can write the v j in terms of the v i giving us the inverse change of basis matrix P = (P i,j ): v j = P i,jv i. The relationship between P and P comes from the equation: v j = P h,ip i,j v h. P i,j v i = i,h 89

2 90 CHAPTER 6. TOPICS IN LINEAR ALGEBRA This tells us that P h,ip i,j = δ h,j, where δ h,j = 1 if h = j or 0 otherwise. In other words, P P = I n. Similarly one gets that P P = I n. Hence, the change of basis matrix P is invertible, and the inverse change of basis matrix P really is its inverse! Now also let w 1,..., w m be another basis for W and let Q be the change of basis matrix in this case, defined from m w j = Q i,j w i. Now let B be the matrix of our original linear transformation f with respect to the bases (v j ) and (w i ), so m f(v j) = Q h,i B i,j w h. Also, f(v j) = f( B i,j w i = i,h P j,i v j ) = i,h A h,j P j,i w h. Equating w h coefficients gives us that QB = AP. We have shown: Change of basis theorem. Given bases (v j ), (v j ) for V with change of basis matrix P and bases (w i ), (w i ) for W with change of basis matrix Q, the relationship between the matrix A of a linear transformation f : V W in the unprimed bases to its matrix B in primed bases is given by B = Q 1 AP, A = QBP 1. Note that the matrices A and B in the theorem are thus equivalent matrices in the sense defined in section 3.4. The result proved there shows (since we are working now over a field) that bases for V and W can be chosen with respect to which the matrix of f looks like an r r identity matrix in the top left corner with zeros elsewhere. Of course, this integer r is the rank of the linear transformation f, and is simply equal to the dimension of the image of f. I m sure you remember the basic equation in linear algebra which is nothing more than the first isomorphism theorem for vector spaces: The rank-nullity theorem. dim V = dim ker f + dim im f. In this chapter we are really interested in a slightly different situation, namely, when f : V V is an endomorphism of a vector space V. Then it makes more sense to record the matrix of the linear transformation f with respect to just one basis v 1,..., v n for V (i.e. take w i = v i in the above notation). 
We obtain the n n matrix A of f with respect to the basis v 1,..., v n so that f(v j ) = A i,j v i. Now take another basis v 1,..., v n and let P be the change of basis matrix as before. Then the change of basis theorem tells us in the special case that: Change of basis theorem. If (v i ) and (v i ) are two bases for V related by change of basis matrix P, and A is the matrix of an endomorphism f : V V with respect to the first basis, B is its matrix with respect to the second basis, then B = P 1 AP, A = P AP 1.

3 6.2. JORDAN NORMAL FORM 91 So: the matrix of f in the (v i ) basis is obtained from the matrix of f in the (v j) basis by conjugating by the change of basis matrix. Now define two n n matrices to be similar, written A B if there exists an invertible n n matrix P such that B = P 1 AP. One easily checks that similarity is an equivalence relation on n n matrices. The equivalence classes are the similarity classes (or conjugacy classes) of matrices over F. We will be interested in finding a set of representatives for the similarity classes of matrices having a particularly nice form: a normal form. Equivalently, given a linear transformation f : V V, we are interested in finding a nice basis for V with respect to which the matrix of f looks especially nice. 6.2 Jordan normal form Suppose in this section that the field F is algebraically closed. This means that the irreducible polynomials in the polynomial ring F [X] are of the form (X λ) for λ F. Fix once and for all a linear transformation f : V V of some finite dimensional vector space V over F. The goal is to pick a nice basis for V... We make V into an F [X]-module, so that X acts on V by the linear transformation f. In other words, a polynomial a n X n + + a 1 X + a 0 F [X] acts on V by the linear transformation a n f n + +a 1 f+a 0. Note the way V is viewed as an F [X]-module encodes the linear transformation f into the module structure: we can recover f simply as the linear transformation of V determined by multiplication X. Now, V is a finitely generated F [X]-module indeed, it is even finitely generated as an F - module as its finite dimensional. Also, F [X] is a PID. So we can apply the primary decomposition theorem for modules over PIDs to get that V decomposes as V = V 1 V N where each V i is a cyclic F [X]-module of prime power order. 
Moreover, none of these primes are the zero element of F [X] (the only prime that s not irreducible) since V is finite dimensional over F so can t have the infinite dimensional regular module F [X] as a summand. So since we re working over an algebraically closed field, we get scalars λ i F and d i N such that V i = F [X]/((X λi ) di ) as F [X]-modules. Now to find a nice basis for V, we focus on a particular summand V i and want a nice basis for that V i. In other words, using the above isomorphism, we should study the F [X]-module F [X]/((X λ) d ) and find a basis in which the matrix of the linear transformation determined by multiplication by X has a nice shape Lemma. The images of the elements (X λ) d 1, (X λ) d 2,..., (X λ), 1 in the F [X]- module F [X]/((X λ) d ) form an F -basis, Moreover, the linear transformation determined by multiplication by X has the following matrix in this basis: λ λ λ λ (We call the above matrix the Jordan block of type (X λ) d.) Proof. We re considering F [X]/((X λ) d ). As an F -vector space, F [X] has basis given by all the monomials 1, X, X 2,... (its infinite dimensional of course). But (X λ) d is a monic polynomial of

4 92 CHAPTER 6. TOPICS IN LINEAR ALGEBRA degree d so in the quotient F [X]/((X λ) d ), we can rewrite X d as a linear combination of lower degree monomials. In other words, we just need the monomials 1, X, X 2,..., X d 1 to span the quotient F [X]/((X λ) d ). Moreover, they re linearly independent, else there d be an element in the ideal (X λ) d of degree less than d. Hence, the images of 1, X,..., X d 1 give a basis for the quotient. It follows easily that the images of (X λ) d 1,..., (X λ), 1 also form a basis, since they re related to the preceeding basis by a unitriangular change of basis matrix. It just remains to write down the linear transformation multiply by X in this basis. We have that X(X λ) i = (X λ) i+1 + λ(x λ) i. Now just write down the matrix, remembering: the jth column of the matrix of the linear transformation is given by expanding its effect on the jth basis vector in terms of the basis. Now recall that we ve decomposed V = V 1 V N with each V i = F [X]/((X λi ) di ). The lemma gives us a choice of a nice basis for each F [X]/((X λ i ) di ). Lifting it through the isomorphism we get a nice basis for V i, hence putting them all together, we get a nice basis for V. We ve proved the first half of: Jordan normal form. There exists a basis for V in which the matrix of the linear transformation f has the Jordan normal form: B B B N where B i is the Jordan block of type (X λ i ) di. Moreover, this normal form for the linear transformation f is unique up to permuting the Jordan blocks. Proof. It just remains to prove the uniqueness statement. This will follow from uniqueness of the orders of the indecomposable modules appearing in the primary decomposition of the F [X]-module V. We just need show that these orders are exactly the (X λ i ) di : in other words, the types of the Jordan blocks are determined exactly by the orders of the primary components of the module. 
Since these orders are uniquely determined up to reordering, that gives that the Jordan blocks are uniquely determined up to reordering. Everything now reduces to considering the special case that f has just one Jordan block of type (X λ) d, and we need to prove that this is also the order of V viewed as an F [X]-module. Let the basis in which f has the Jordan block form be v 1,..., v d. Then, V is a cyclic F [X]-module generated by v d. So we just need to calculate the order of v d. We are given that (X λ)v i = v i 1 (where v 0 = 0 by convention). Hence, (X λ) d v d = 0, while the (X λ) i v d for i = 0,..., d 1 are linearly independent. This shows that no non-zero polynomial in X of degree less than d can annihilate the generator v d, but (X λ) d does. Hence, the order of v d is (X λ) d, as required. I now wish to give a rather silly consequence of the Jordan normal form. First, recall that the characteristic polynomial of the linear transformation f is the polynomial in X obtained by calculating det(a XI) where A is the matrix representing f in any basis of V. Write χ f (X) F [X] for the characteristic polynomial of f. Note its definition is independent of the choice of basis used to calculate it! In other words, we may as well pick the basis in which A has Jordan normal form then in the above notation, N χ f (X) = (λ i X) di.

5 6.3. RATIONAL NORMAL FORM 93 So the characteristic polynomial tells us the eigenvalues ie the diagonal entries in the Jordan normal form together with their multiplicities. But it doesn t give enough information (unless all eigenvalues turn out to be distinct) to work out the precise sizes of the Jordan blocks in the JNF. Cayley-Hamilton theorem. χ f (f) = 0. Proof. Explicit calculation writing f as a matrix in Jordan normal form and using the above formula for χ f (X). Remark. The Cayley-Hamilton theorem is true more generally. Let R be an arbitrary integral domain and A be an n n matrix with entries in R. Then, we can define its characteristic polynomial in exactly the same way as above, namely, χ A (X) = det(a XI n ) R[X]. Then, we always have that the matrix χ A (A) is the zero matrix. Proof: Embed R into its field of fractions (section 2.5) and its field of fractions into its algebraic closure (section 7.3). Then the conclusion is immediate from the Cayley-Hamilton theorem over an algebraically closed field that we just proved. There is one other useful polynomial associated to the linear transformation f which can help in computing the Jordan normal form: the minimal polynomial of the linear transformation f. This is defined as the unique monic polynomial m f (X) F [X] of minimal degree such that m f (f) = 0. In other words (this is how you see uniqueness), m f (X) is the monic polynomial that generates the ideal of F [X] consisting of all polynomials that act as zero on the F [X]-module V Lemma. In the standing notation for the Jordan normal form of f, we have that m f (X) = LCM{(X λ i ) di i = 1,..., N}. Proof. We showed in the proof of the Jordan normal form that the minimal polynomial of a single Jordan block of type (X λ) d was exactly (X λ) d. This polynomial also gives zero when any Jordan block of type (X λ) e for e < d is substituted for X, but gives an invertible matrix on any other Jordan block of type (X µ) c for µ λ. 
Hence, m f (X) must be exactly divisible by (X λ) d for each eigenvalue λ with d being the maximum of all the d i with λ i = λ. Of course, the lemma shows in particular that m f (X) divides χ f (X), giving the Cayley- Hamilton theorem again! Knowing the minimal polynomial gives a little extra information: it tells you the biggest Jordan block associated to each eigenvalue λ. For small matrices, knowing both the characteristic and the minimal polynomial is often enough to write down the Jordan normal form right away. 6.3 Rational normal form I wish to briefly mention one normal form which (unlike the JNF) makes sense over an arbitrary field F : the rational normal form. So now let F be any field, V a finite dimensional vector space over F and f : V V be a linear transformation. Always, we view V as an F [X]-module so that X acts on V as the linear transformation f. We can decompose V using the structure theorem for finitely generated modules over PIDs, giving that V decomposes as an F [X]-modules as V = V 1 V N where each V i is cyclic of order g i, where g i F [X] is monic and g 1 g 2... g N. Hence, V i = F [X]/(gi ) as an F [X]-module. So if we re looking for a nice basis for V, we should consider finding a nice basis for the cyclic F [X]-module of order g.

6 94 CHAPTER 6. TOPICS IN LINEAR ALGEBRA Lemma. Let g(x) F [X] be the monic polynomial g(x) = X m a m 1 X m 1 a 1 X a 0. Then, the F [X]-module F [X]/(g) has F -basis the images of 1, X,..., X m 1. Moreover, in this basis, the matrix of the linear transformation given by multiplication by X has the form: 0... a a a m a m 1 (This matrix is called the companion matrix of the monic polynomial g(x).) Proof. That the given elements form a basis follows in exactly the same way as in the proof of Lemma You just have to calculate the matrix of the linear transformation given by multiplication by X: its jth column is what X does to the jth basis vector! Remark. In Hungerford, the definition of companion matrix is the transpose of ours, because he uses a different convention for the matrix of a linear transformation Now take the decomposition of V above. So each V i is isomorphic to F [X]/(g i ) for a monic polyomial g i (X). Applying the lemma, we get a basis for V i and putting all the bases together, we get a basis for V. We deduce from the lemma that: Rational normal form. There exists a basis for V in which the matrix of the linear transformation f has the following form: C C C N where C i is the companion matrix of a monic polynomial g i F [X] and g 1 g 2... g N. Remark. One can also show using the uniqueness in the structure theorem for modules over PIDs that the rational normal form for f is unique. In other words, if f and g are two linear transformations having the same rational normal form, then f and g are similar. Now consider the minimal polynomial of the linear transformation f again. Recall m f (X) is the monic polynomial that generates the ideal of F [X] consisting of all polynomials that act as zero on the F [X]-module V. Let V = V 1 V N be its decomposition according to the structure theorem, so V i is of degree g i and g 1... g N. Then, it is obvious that m f (X) = g N (X). 
So the minimal polynomial precisely tells you the largest block in the rational normal form of f. For example, if the minimal polynomial has the same degree as the dimension of V (equivalently, if V is already a cyclic F [X]-module) you get lucky and the rational normal form is simply the companion matrix of the minimal polynomial, which coincides with the characteristic polynomial by the Cayley-Hamilton theorem. 6.4 *Simultaneous diagonalization* Fix a linear transformation f : V V throughout the section. You should recall that for λ F, the λ-eigenspace of V is the subspace V λ = ker(f λ id), i.e. the span of all eigenvectors with eigenvalue λ. We record:

7 6.4. *SIMULTANEOUS DIAGONALIZATION* Lemma. λ F V λ = λ F V λ. Proof. Suppose not. Then we can find distinct eigenvalues λ 1,..., λ n and 0 v i V λi such that v v n = 0. Take such vectors v 1,..., v n with n minimal. Applying f λ 1 id, which annihilates v 1 and maps v i to (λ i λ 1 )v i for i > 1, we get that (λ 2 λ 1 )v (λ n λ 1 )v n = 0. This contradicts the minimality of the choice of n. We call the linear transformation f diagonalizable over F if there exists a basis for V with respect to which the matrix of f is a diagonalizable matrix Lemma. The following are equivalent: (1) f is diagonalizable over F ; (2) V = λ F V λ (i.e. V is spanned by eigenvectors); (3) V has a basis of eigenvectors. Proof. The equivalence of (1) and (3) is immediate from the definition. Clearly, (3) implies (2), while (2) implies (3) by Lemma Now we prove the important: Criterion for diagonalizibility. The linear transformation f is diagonalizable over F if and only if the minimal polynomial m f (X) splits as a product of distinct linear factors in F [X]. Proof. View V as an F [X]-module so X acts on V via f. Let V = V 1 V N be the structure theorem decomposition, so V i is cyclic of order g i F [X] and g 1... g N. We have observed before that m f (X) = g N (X). Now by Lemma 6.4.2, f is diagonalizable over F if and only if V has a basis v 1,..., v n of eigenvectors which is if and only if V decomposes according to the primary decomposition theorem as V = V 1 V n with each V i being a one dimensional cyclic F [X]-module. This last statement is equivalent to g N (X), the largest order polynomial appearing in the structure theorem decomposition, being a product of linear factors. Now we can prove the important: Criterion for simultaneous diagonalizability. Let f i (i I) be a family of linear transformations of V. Then, they are simultaneously diagonalizable (i.e. 
there exists a basis for V in which the matrices of all f i are diagonal) if and only if each f i is diagonalizable and f i f j = f j f i for all i, j I (i.e. the f i commute pairwise). Proof. Obviously, if the f i are simultaneously diagonalizable, they are each diagonalizable and they commute pairwise since diagonal matrices commute. Conversely, suppose each f i is diagonalizable and that they commute pairwise. We proceed by induction on dim V, there being nothing to prove if dim V = 0. So suppose dim V > 0. If all f i act as scalars on V, then any basis for V simultaneously diagonalizes all f i. Else, choose some f i which does not act as a scalar on V and consider the f i -eigenspace decomposition V = V λ1 V λm of V where 1 dim V λk < dim V for each k = 1,..., m. We claim that for each k = 1,..., m and each j I, f j (V λk ) V λk and moreover that the restriction of f j to the subspace V λk is diagonalizable. Since clearly the restrictions of the f j

8 96 CHAPTER 6. TOPICS IN LINEAR ALGEBRA to V λk pairwise commute, the claim will then give by the induction hypothesis that the f j are simultaneously diagonalizable on each V λk, hence on all of V proving the theorem. For the claim, note that for v V λk, f i f j (v) = f j f i (v) = f j λ k v = λ k f j v hence f j (v) V λk. Now f j is diagonalizable on V, so by the criterion for diagonalizability, the minimal polynomial m fj (X) of f j splits as a product of distinct linear factors. But clearly the minimal polynomial of the restriction of f j to V λk divides m fj (X), hence also splits as a product of distinct linear factors. Hence by the criterion for diagonalizability again, the restriction of f j to V λk is also diagonalizable. 6.5 Tensor and symmetric algebras Let F be a field (actually, everything here could be done more generally over an arbitrary commutative ring, e.g. Z but let s stick to the case of a field for simplicity). Given a vector space V over F, we have defined V V (meaning V F V ). Hence, we have (V V ) V and V (V V ). But these two are canonically isomorphic, the isomorphism mapping a generator (u v) w of the first to u (v w) in the second. From now on we ll identify the two, and write simply V V V for either. More generally, we ll write T n (V ) for V V (n times), where the tensor is put together in any order you like, all different ways giving canonically isomorphic vector spaces. Set T (V ) = n 0 T n (V ) and call it the tensor algebra on V. Note T 0 (V ) = F by convention, just a copy of the ground field, and T 1 (V ) = V. Then, T (V ) is an F -vector space given, but we can define a multiplication on it making it into an F -algebra as follows: define a map T m (V ) T n (V ) T m+n (V ) = T m (V ) T n (V ) by (x, y) x y. Then extend linearly to give a map from T (V ) T (V ) T (V ). Since is associative, the resulting multiplication is associative, and T (V ) is an F -algebra. To be quite explicit, suppose that e i (i I) is a basis for V. 
Then, {e i1 e i2 e in n 0, i 1,..., i n I} gives a basis of monomials for T (V ). Multiplication of the basis elements is just by joining tensors together. In other words, T (V ) looks like non-commutative polynomials in the e i (i I) an arbitrary element being a linear combination of finitely many non-commuting monomials. The importance of the tensor algebra is as follows: Universal property of the tensor algebra. Given any F -linear map f : V A to an F - algebra A, there is a unique F -linear ring homomorphism f : T (V ) A such that f = f i, where i : V T (V ) is the inclusion of V into T 1 (V ) T (V ). (Note: this universal property could be taken as the definition of T (V )!) Proof. The vector space T n (V ) is defined by the following universal property: given a multilinear map f n : V V (n copies) to a vector space W, there exists a unique F -linear map f n : T n (V ) W such that f n = f n i n, where i n is the obvious map V V T n (V ). Now define the map f n so that f n (v 1, v 2,..., v n ) = f(v 1 )f(v 2 )... f(v n ) (multiplication in the ring A). Check that it is multilinear in the v i, since A is an F -algebra. Hence it induces a unique f n : T n (V ) A. Now define f : T (V ) A by glueing all the f n together. In other words, using the basis notation above, we have that f(e i1 e in ) = f(e i1 )f(e i2 )... f(e in ).

9 6.5. TENSOR AND SYMMETRIC ALGEBRAS 97 This is an F -linear ring homomorphism, and is clearly the only option to extend f to such a thing. We can restate the theorem as follows: Let X be a basis of V. Then, given any set map f from X to an F -algebra A, there exists a unique F -linear homomorphism f : T (V ) A extending f (proof: extend the map X A to a linear map V A in the unique way then apply the theorem). In other words, T (V ) plays the role of the free F -algebra on the set X. Then you can define algebras by generators and relations but we won t go into that... Next, we come to the symmetric algebra on V. Continue with V being a vector space. We ll be working with V V (n times). Call a map f : V V W a symmetric multilinear map if its multilinear, i.e. f(v 1,..., cv i + c v i,..., v n ) = cf(v 1,..., v i,..., v n ) + c f(v 1,..., v i,..., v n ) for each i, and moreover f(v 1,..., v i, v i+1,..., v n ) = f(v 1,..., v i+1, v i,..., v n ) for each i (so its symmetric). The symmetric power S n (V ), together with a distinguished map i : V V S n (V ), is characterized by the following universal property: given any symmetric multilinear map f : V V W to a vector space W, there exists a unique linear map f : S n (V ) W such that f = f i. Note this is exactly the same universal property as defined T n (V ) in the proof of the universal property of the tensor algebra, but we ve added the word symmetric too. As usual with universal properties, S n (V ), if it exists, is unique up to canonical isomorphism (so we call it the symmetric algebra). Still, we need to prove existence with some sort of construction. So now define S n (V ) = T n (V )/I n where I n = v 1 v i v i+1 v n v 1 v i+1 v i v n is the subspace spanned by all such terms for all v 1,..., v n V. The distinguished map i : V V S n (V ) is the obvious map V V T n (V ) followed by the quotient map T n (V ) S n (V ). Now take a symmetric multilinear map f : V V W. 
By the universal property defining T n (V ), there is a unique F -linear map f 1 : T n (V ) W extending f. Now since f is symmetric, we have that f 1 (v 1 v i v i+1 v n v 1 v i+1 v i v n ) = 0. Hence, f 1 annihilates all generators of I n, hence all of I n. So f 1 factors through the quotient S n (V ) of T n (V ) to induce a unique map f : S n (V ) W with f = f i. Note we have really just used the univeral property of T n followed by the universal property of quotients! So now we ve defined the nth symmetric power of a vector space. Note its customary to denote the image in S n (V ) of (v 1,..., v n ) V V under the map i by v 1 v n. So v 1 v n is the image of v 1 v n under the quotient map T n (V ) S n (V ). We can glue all S n (V ) together to define the symmetric algebra on V : S(V ) = n 0 S n (V ) where again S 0 (V ) = F, S 1 (V ) = V. Since each S n (V ) is by construction a quotient of T n (V ), we see that S(V ) is a quotient of T (V ), i.e. S(V ) = T (V )/I where I = n 0 I n. I claim in fact that I is an ideal of T (V ), so that S(V ) is actually a quotient algebra of T (V ). Actually, I claim even more, namely:

10 98 CHAPTER 6. TOPICS IN LINEAR ALGEBRA Lemma. I is the two-sided ideal of T (V ) generated by the elements v w w v for all v, w V. Proof. Let J be the two-sided ideal generated by the given elements. By definition, I is spanned as an F -vector space by terms like v 1 v i v i+1 v n v 1 v i+1 v i v n = v 1 v i 1 (v i v i+1 v i+1 v i ) v i+2 v n. This clearly lies in J, hence I J. On the other hand, the generators of the ideal J lie in I, so it just remains to show that I is a two-sided ideal of T (V ). Since T (V ) is spanned by monomials of the form u = u 1 u m, it suffices to check that for any generator v = v 1 v i v i+1 v n v 1 v i+1 v i v n of I, we have that uv and vu both lie in I. But that s obvious! The lemma shows that S(V ) really is an F -algebra, as a quotient of T (V ) by an ideal. Multiplication of two monomials v 1 v m and w 1 w n in S(V ) is just by concatenation, giving v 1 v m w 1 w n. Moreover, since is symmetric, we can reorder this as we like in S(V ) to see that v 1 v m w 1 w n = w 1 w n v 1 v n. Hence, since these pure elements (the images of the pure tensors which generate T (V )) span S(V ) as an F -vector space, we see that S(V ) is a commutative F -algebra. Note not all elements of S(V ) can be written as v 1 v m, just as not all tensors are pure tensors. The symmetric algebra S(V ), together with the inclusion map i : V S(V ), is characterized by the following universal property (compare with the universal property of the tensor algebra): Universal property of the symmetric algebra. Given an F -linear map f : V A, where A is a commutative F -algebra, there exists a unique F -linear homomorphism f : S(V ) A such that f = f i. Proof. Since A is commutative, the map f 1 : T (V ) A given by the universal property of the tensor algebra annihilates all elements v w w v T 2 (V ). These generate the ideal I by the preceeding lemma. Hence, f 1 factors through the quotient S(V ) of T (V ). 
Now suppose that V is finite dimensional on basis e 1,..., e m. Then, we ve seen S(V ) before! Indeed, let F [x 1,..., x m ] be the polynomial ring over F in indeterminates x 1,..., x m. The map e i x i extends to a unique F -linear map V F [x 1,..., x m ], hence since the polynomial ring is commutative, the universal property of symmetric algebras gives us a unique F -linear homomorphism S(V ) F [x 1,..., x m ]. Now, S(V ) is spanned by the images of pure tensors of the form e i1 e in. Moreover, any such can be reordered in S(V ) using the symmetric property to assume that i 1 i n. Hence, S(V ) is spanned by the ordered monomials of the form e i1... e in for all i 1 i n and all n 0. Clearly, such a monomial maps to x i1... x in in the polynomial ring F [x 1,..., x m ]. But we know (by definition) that the ordered monomials give a basis for the F -vector space F [x 1,..., x m ]. Hence, they must in fact be linearly independent in S(V ) too, and we ve constructed an isomorphism: Basis theorem for symmetric powers. Let V be an F -vector space of dimension m. Then, S(V ) is isomorphic to F [x 1,..., x m ], the isomorphism mapping a basis element e i of V to the indeterminate x i in the polynomial ring. In particular, S n (V ) has basis given by all ordered monomials in the basis of the form e i1 e in with 1 i 1 i n m.

11 6.6. DETERMINANTS AND THE EXTERIOR ALGEBRA 99 In this language, the universal property of symmetric algebras gives that the polynomial algebra F [x 1,..., x m ] is the free commutative F -algebra on the generators x 1,..., x m. Note in particular that if V is finite dimensional, say of dimenion m, the basis theorem implies ( ) m + n 1 dim S n (V ) = n (exercise!). 6.6 Determinants and the exterior algebra The last important topic in multilinear algebra I want to cover is the exterior algebra. The construction goes in much the same way as the symmetric algebra, however unlike there we do not have a known object like the polynomial algebra to compare it with so we have to work harder to get the analogous basis theorem to the basis theorem for symmetric powers. Start with V being any vector space. Define K to be the two-sided ideal of the tensor algebra T (V ) generated by the elements {x x x V }. Note that (v + w) (v + w) = v v + w v + v w + w w. So K also contains all the elements {v w + w v v, w V } automatically. Define the exterior (or Grassmann) algebra (V ) to be the quotient algebra (V ) = T (V )/K. We ll write v 1 v n for the image in (V ) of the pure tensor v 1 v n T (V ). So, now we have anti-symmetric properties like: and v 1 v i v i... v n = 0 v 1 v i v i+1... v n = v 1 v i+1 v i... v n. Since the ideal K is generated by homogeneous elements, K = n 0 K n where K n = K T n (V ). It follows that n (V ) = (V ) n 0 where n (V ) = T n (V )/K n is the subspace spanned by the images of all pure tensors of degree n. We ll call n (V ) the nth exterior power. The exterior power n (V ), together with the distinguished map i : V V n (V ), (v 1,..., v n ) v 1 v n, is characterized by the following universal property. First, call a multlinear map f : V V W to a vector space W alternating if we have that f(v 1,..., v j, v j+1,..., v n ) = 0 whenever v j = v j+1 for some j. The map i : V V n V is multilinear and alternating. 
Moreover, given any other multilinear, alternating map f : V V W, there exists a unique linear map f : n V W such that f = f i. The proof is the same as for the symmetric powers, you should compare the two constructions! Using this all important universal property, we can now prove:

12 100 CHAPTER 6. TOPICS IN LINEAR ALGEBRA Basis theorem for exterior powers. Suppose that V is finite dimensional, with basis e 1,..., e m. Then, n (V ) has basis given by all monomials e i1 e in for all sequences 1 i 1 < < i n m. In particular, ( ) dim n m (V ) =, n and is zero for n > m. Proof. Since n (V ) is a quotient of T n (V ), we certainly have that n (V ) is spanned by all monomials e i1 e in. Now using the antisymmetric properties of, we get that it is spanned by the given strictly ordered monomials. The problem is to prove that the given elements are linearly independent. For this, we proceed by induction on n, the case n = 1 being immediate since 1 (V ) = V. Now consider n > 1. Let f 1,..., f n be the basis for V dual to the given basis e 1,..., e n for V. For each i = 1,..., n, we wish to define a map F i : n (V ) n 1 (V ). Start with the multilinear, alternating map f i : V V n 1 (V ) defined by f i (v 1,..., v n ) = ( 1) j f i (v j )v 1 v j 1 v j+1 v n. j=1 Its clearly multilinear, and to check its alternating, we just need to see that if v j = v j+1 then: f i (v 1,..., v j, v j+1,..., v n ) = ( 1) j f i (v j )v 1 v j+1 v n + ( 1) j+1 f i (v j+1 )v 1 v j v n which is zero. Now the univeral property of gives that f i induces a unique linear map F i : n V n 1 V. Now we show that the given elements are linearly independent. Take a linear relation i 1< <i n a i1,...,i n e i1 e in = 0. Apply the linear map F 1, which annihilates all e j except for e 1 by definition. We get: 1<i 2< <i n a 1,i2,...,i n e i2 e in = 0. By the induction hypothesis, all such monomials are linearly independent, hence all a 1,i2,...,i n are zero. Now apply F 2 in the same way to get that all a 2,i2,...,i n are zero, etc... Now let s focus on a special case. Suppose that dim V = n and consider n V. Its dimension, by the theorem, is ( n n) = 1, and it has basis just the element e1 e n. Let f : V V be a linear map. Define f : V V n (V ) by f(v 1,..., v n ) = f(v 1 ) f(v n ). 
One easily checks that this is multilinear and alternating. Hence, it induces a unique linear map, which we'll denote Λⁿf : Λⁿ(V) → Λⁿ(V). But Λⁿ(V) is one dimensional, so such a linear map is just multiplication by a scalar. In other words, there is a unique scalar, which we'll call det f, determined by the equation

    f(e_1) ∧ ⋯ ∧ f(e_n) = (det f) e_1 ∧ ⋯ ∧ e_n.
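As a quick sanity check (a worked example added here, not part of the original notes), take n = 2 and suppose f(e_1) = a e_1 + c e_2 and f(e_2) = b e_1 + d e_2. Expanding with e_i ∧ e_i = 0 and e_2 ∧ e_1 = −e_1 ∧ e_2:

```latex
\begin{aligned}
f(e_1) \wedge f(e_2)
  &= (a e_1 + c e_2) \wedge (b e_1 + d e_2) \\
  &= ab\,(e_1 \wedge e_1) + ad\,(e_1 \wedge e_2)
     + cb\,(e_2 \wedge e_1) + cd\,(e_2 \wedge e_2) \\
  &= (ad - bc)\, e_1 \wedge e_2,
\end{aligned}
```

recovering the familiar formula det f = ad − bc for a 2 × 2 matrix.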

Of course, det f is exactly the determinant of the linear transformation f. Note that our definition of the determinant is basis free, always a good thing. You can see that what we're calling det f is the same as usual, as follows. Suppose the matrix of f in our fixed basis is A, defined from f(e_j) = Σ_i A_{i,j} e_i. Then

    f(e_1) ∧ ⋯ ∧ f(e_n) = Σ_{i_1,…,i_n} A_{i_1,1} A_{i_2,2} ⋯ A_{i_n,n} e_{i_1} ∧ ⋯ ∧ e_{i_n}.

Now terms on the right hand side are zero unless all of i_1, …, i_n are distinct, i.e. (i_1, …, i_n) = (g(1), g(2), …, g(n)) for some permutation g ∈ S_n. So

    f(e_1) ∧ ⋯ ∧ f(e_n) = Σ_{g ∈ S_n} A_{g(1),1} A_{g(2),2} ⋯ A_{g(n),n} e_{g(1)} ∧ ⋯ ∧ e_{g(n)}.

Finally, e_{g(1)} ∧ ⋯ ∧ e_{g(n)} = sgn(g) e_1 ∧ ⋯ ∧ e_n, where sgn denotes the sign of a permutation. So

    f(e_1) ∧ ⋯ ∧ f(e_n) = ( Σ_{g ∈ S_n} sgn(g) A_{g(1),1} A_{g(2),2} ⋯ A_{g(n),n} ) e_1 ∧ ⋯ ∧ e_n.

This shows that

    det f = Σ_{g ∈ S_n} sgn(g) A_{g(1),1} A_{g(2),2} ⋯ A_{g(n),n},

which is exactly the usual Leibniz (sum over permutations) formula for the determinant. Here's a final result to illustrate how nicely the universal property definition of the determinant gives its properties:

Multiplicativity of determinant. For linear transformations f, g : V → V, we have that det(f ∘ g) = det f · det g.

Proof. By definition, Λⁿ(f ∘ g) is the map uniquely determined by

    (Λⁿ(f ∘ g))(v_1 ∧ ⋯ ∧ v_n) = f(g(v_1)) ∧ ⋯ ∧ f(g(v_n)).

But Λⁿf ∘ Λⁿg also satisfies this equation. So Λⁿ(f ∘ g) = Λⁿf ∘ Λⁿg. The left hand side is scalar multiplication by det(f ∘ g); the right hand side is scalar multiplication by det f · det g.
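The permutation-sum formula and the multiplicativity theorem are easy to test numerically. Here is a short Python sketch (an editorial illustration with hypothetical helper names, not code from the notes) that computes det via the sum over S_n and checks det(f ∘ g) = det f · det g on a small example:

```python
from itertools import permutations
from math import prod

def sign(g):
    """Sign of the permutation g of (0, ..., n-1), computed by counting inversions."""
    inv = sum(1 for i in range(len(g)) for j in range(i + 1, len(g)) if g[i] > g[j])
    return -1 if inv % 2 else 1

def det(A):
    """Leibniz formula: sum over permutations g of sgn(g) * A_{g(1),1} * ... * A_{g(n),n}."""
    n = len(A)
    return sum(sign(g) * prod(A[g[j]][j] for j in range(n))
               for g in permutations(range(n)))

def matmul(A, B):
    """Matrix product, corresponding to composition of the linear maps."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]
assert det(A) == -2 and det(B) == -5
# Multiplicativity: det(f o g) = det f * det g.
assert det(matmul(A, B)) == det(A) * det(B) == 10
```

Since the sum has n! terms, this is a specification rather than a practical algorithm; real implementations use Gaussian elimination.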
