
Jordan normal form

Sebastian Ørsted
December 16, 2017

Abstract. In these notes, we expand upon the coverage of linear algebra as presented in Thomsen (2016). Namely, we introduce some concepts and results of fundamental importance in the field, but which were not available to us without the greater level of abstraction developed in the current course. Specifically, we introduce concepts like the Jordan normal form and generalized eigenspaces.

Given a finite-dimensional vector space $V$ over a field $K$, recall that a linear operator $\theta \colon V \to V$ is called diagonalizable if $V$ admits a basis consisting of eigenvectors of $\theta$. Thus each element of this basis lies in some eigenspace
$$E(\lambda) = \{ v \in V \mid \theta(v) = \lambda v \}$$
corresponding to an eigenvalue $\lambda$ of $\theta$ (but all basis elements need not belong to the same $E(\lambda)$!). We note the following alternative formulation of being diagonalizable. It is considered standard knowledge from linear algebra, and we omit the proof.

Proposition J.1. If $\lambda_1, \lambda_2, \dots, \lambda_r$ are the distinct eigenvalues of $\theta$, then $\theta$ is diagonalizable if and only if $V$ can be written as the direct sum
$$V = E(\lambda_1) \oplus E(\lambda_2) \oplus \dots \oplus E(\lambda_r)$$
of the eigenspaces of $\theta$. Furthermore, we may choose a basis for each space $E(\lambda_i)$ and combine them to a basis for $V$ in which $\theta$ has the block diagonal form
$$\theta = \begin{pmatrix} D_{n_1}(\lambda_1) & & & \\ & D_{n_2}(\lambda_2) & & \\ & & \ddots & \\ & & & D_{n_r}(\lambda_r) \end{pmatrix},$$
where $D_{n_i}(\lambda_i) = \lambda_i \operatorname{id}_{n_i}$ is the $n_i \times n_i$ diagonal matrix with all diagonal entries equal to $\lambda_i$.

Recall that $V$ being the direct sum of subspaces $V_1, V_2, \dots, V_k$ means that any $v \in V$ can be written uniquely as a sum $v = v_1 + v_2 + \dots + v_k$ of vectors with $v_i \in V_i$. In this case we write
$$V = V_1 \oplus V_2 \oplus \dots \oplus V_k = \bigoplus_{i=1}^{k} V_i.$$
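Before continuing, here is a small computational illustration of Proposition J.1. It is only a sketch, it assumes the Python library sympy is available, and the matrix is chosen here purely for illustration (it does not come from these notes): a matrix is diagonalizable exactly when the dimensions of its eigenspaces add up to the dimension of the space.

```python
from sympy import Matrix, eye

# An illustrative matrix (not taken from the notes).
A = Matrix([[2, 1, 0],
            [0, 3, 0],
            [0, 0, 3]])
n = A.shape[0]

# dim E(lambda) for each distinct eigenvalue lambda.
eigenspace_dims = {lam: len((A - lam * eye(n)).nullspace())
                   for lam in A.eigenvals()}
print(eigenspace_dims)                     # eigenvalue 2 -> dim 1, eigenvalue 3 -> dim 2
print(sum(eigenspace_dims.values()) == n)  # True: the eigenspace dimensions add up to dim V
print(A.is_diagonalizable())               # True, as Proposition J.1 predicts
```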

In general, not all operators are diagonalizable; indeed, many matrices, like
$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
over $K = \mathbf{R}$, do not have eigenvalues at all, since the characteristic polynomial $x^2 + 1 \in \mathbf{R}[x]$ has no roots in $\mathbf{R}$. Nevertheless, one of the many consequences of the theory we develop here is that when the field $K$ is, for instance, the complex numbers, all operators $\theta$ satisfy a slightly weaker form of Proposition J.1: In the definition of $E(\lambda)$, we replace the condition that $\theta(v) = \lambda v$ (or, equivalently, $(\theta - \lambda \operatorname{id})v = 0$) by the requirement that $(\theta - \lambda \operatorname{id})^n v = 0$ for some $n \geq 1$ (which is allowed to depend on $v$). In other words, define
$$\widehat{E}(\lambda) = \{ v \in V \mid (\theta - \lambda \operatorname{id})^n v = 0 \text{ for some } n \geq 1 \},$$
the generalized eigenspace of $\theta$ with respect to $\lambda$. We note that $\widehat{E}(\lambda)$ is a vector subspace of $V$ containing $E(\lambda)$ (see Exercise J.1), and that this inclusion can be strict. Note also that we have $\theta(\widehat{E}(\lambda)) \subseteq \widehat{E}(\lambda)$ (see Exercise J.2). It would seem natural to extend the well-known terminology further and refer to $\lambda$ as a generalized eigenvalue if $\widehat{E}(\lambda) \neq 0$; however, this terminology turns out to be redundant, because of the following.

Proposition J.2. A generalized eigenvalue is automatically an eigenvalue, hence we simply refer to $\lambda$ as the corresponding eigenvalue.

Proof. If $(\theta - \lambda \operatorname{id})^n v = 0$ for some $v \neq 0$, then $0 = \det((\theta - \lambda \operatorname{id})^n) = (\det(\theta - \lambda \operatorname{id}))^n$, so $\det(\theta - \lambda \operatorname{id}) = 0$, hence $\lambda$ is an eigenvalue.

Example J.3. The matrix
$$\theta = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$
is not diagonalizable over $\mathbf{C}$; the only root of its characteristic polynomial $(1 - x)^2$ is $x = 1$. If it were diagonalizable, we would get from Proposition J.1 that $\mathbf{C}^2 = E(1)$, which would imply $\theta v = v$ for all $v \in \mathbf{C}^2$; this is absurd since $\theta$ is not the identity matrix. However, the difference
$$\theta - 1 \cdot \operatorname{id} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$
satisfies $(\theta - 1 \cdot \operatorname{id})^2 = 0$, hence $(\theta - 1 \cdot \operatorname{id})^2 v = 0$ for all $v \in \mathbf{C}^2$. This implies that $\widehat{E}(1) = \mathbf{C}^2$.
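The computations in Example J.3 are easy to reproduce mechanically. The following is a small sketch (it assumes the Python library sympy is available) comparing the eigenspace $E(1)$ with the generalized eigenspace $\widehat{E}(1) = \operatorname{Ker}((\theta - \operatorname{id})^2)$.

```python
from sympy import Matrix, eye

theta = Matrix([[1, 1],
                [0, 1]])        # the matrix from Example J.3
N = theta - eye(2)              # theta - 1*id

print(N.nullspace())            # E(1) is spanned by (1, 0) alone, so it has dimension 1
print(N**2)                     # the zero matrix: (theta - id)^2 = 0
print(len((N**2).nullspace()))  # 2: the generalized eigenspace is all of C^2
```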

Example J.4. We have only defined generalized eigenspaces for vector spaces of finite dimension and will only deal with those outside of this example. However, the definition obviously makes sense in the infinite-dimensional setting as well, and we shall give a particularly pretty example. Let $V = C^\infty(\mathbf{R})$ be the real vector space of smooth (i.e., infinitely often differentiable) functions $\mathbf{R} \to \mathbf{R}$, and let $\theta = d/dx \colon V \to V$ be the differential operator. The generalized eigenspace $\widehat{E}(0)$ consists of all smooth functions $f \colon \mathbf{R} \to \mathbf{R}$ such that $d^n f/dx^n = 0$ for some $n$. Calculating antiderivatives one by one, this means that $d^{n-1}f/dx^{n-1}$ is a constant, hence $d^{n-2}f/dx^{n-2}$ is a polynomial of degree at most $1$, and in general, $d^{n-k}f/dx^{n-k}$ is a polynomial of degree at most $k - 1$. In particular, $f = d^0 f/dx^0$ is a polynomial of degree at most $n - 1$. We conclude that $\widehat{E}(0) = \mathbf{R}[x]$ consists exactly of all polynomial functions on $\mathbf{R}$. This is probably one of the many things that make polynomials interesting for an analyst as well as an algebraist.

In order to state our main result, we need one more definition. Recall the result known by the misleading name "the Fundamental Theorem of Algebra"; it is not fundamental to modern abstract algebra, but received its name from an earlier discipline known simply as algebra, which was mainly concerned with finding roots of real and complex polynomials. The "abstract" in "abstract algebra" is there for a reason.

Theorem J.5. Every non-constant polynomial in $\mathbf{C}[x]$ has a root in $\mathbf{C}$.

Proof. See, for instance, Exercise 1.54 in Stetkær (2012), Theorem 4.23 in Berg (2013), or Theorem 1.8 in Hatcher (2002).

This motivates the following definition: The field $K$ is called algebraically closed if it satisfies the same theorem, that is, if every non-constant polynomial in $K[x]$ has a root in $K$. Thus $\mathbf{C}$ is algebraically closed, while $\mathbf{R}$ and $\mathbf{Q}$ are not. No finite field is algebraically closed (see Exercise J.3). There are many fields satisfying this definition, but very few can be constructed concretely, and most of our results here will be developed with $\mathbf{C}$ in mind.

Our main theorem states that for algebraically closed fields, a basis for the vector space $V$ can be chosen in which $\theta$ has a representation given in terms of blocks of the form
$$J_n(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix},$$
the $n \times n$ square matrix with $\lambda$s along the diagonal and ones immediately above it. This block is known as the Jordan block of order $n$ with respect to $\lambda$.
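For experimenting with Jordan blocks on a computer, the following sketch builds $J_n(\lambda)$ as a sympy matrix (sympy is assumed to be available, and the helper name jordan_block is my own choice, not notation from these notes).

```python
from sympy import Matrix, Symbol

def jordan_block(lam, n):
    """Return the n-by-n Jordan block J_n(lam): lam on the diagonal,
    ones immediately above the diagonal, zeros elsewhere."""
    return Matrix(n, n, lambda i, j: lam if i == j else (1 if j == i + 1 else 0))

lam = Symbol('lambda')
print(jordan_block(lam, 3))
# Matrix([[lambda, 1, 0], [0, lambda, 1], [0, 0, lambda]])
```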

Theorem J.6. If the field $K$ is algebraically closed and $\theta \colon V \to V$ is a linear operator on a finite-dimensional $K$-vector space, and if $\lambda_1, \lambda_2, \dots, \lambda_r$ are its distinct eigenvalues, then
$$V = \widehat{E}(\lambda_1) \oplus \widehat{E}(\lambda_2) \oplus \dots \oplus \widehat{E}(\lambda_r)$$
is the direct sum of the generalized eigenspaces. Restricting $\theta$ to an operator $\theta \colon \widehat{E}(\lambda_i) \to \widehat{E}(\lambda_i)$ on one of these spaces, we may choose a basis for $\widehat{E}(\lambda_i)$ in which $\theta$ has the block diagonal form
$$B(\lambda_i) = \begin{pmatrix} J_{n_1}(\lambda_i) & & \\ & \ddots & \\ & & J_{n_k}(\lambda_i) \end{pmatrix}$$
consisting of Jordan blocks, where $n_1 \geq n_2 \geq \dots \geq n_k \geq 1$ are integers (depending on $i$). Combining these bases to a basis for all of $V$, $\theta$ thus has the matrix representation
$$\theta = \begin{pmatrix} B(\lambda_1) & & \\ & \ddots & \\ & & B(\lambda_r) \end{pmatrix}.$$
This representation, called the Jordan normal form of $\theta$, is unique up to reordering of the blocks.

The Jordan normal form is named after the French mathematician Camille Jordan (1838-1922). The proof of this theorem will occupy the next two sections.

Since we have $E(\lambda_i) \subseteq \widehat{E}(\lambda_i)$, we see that $\theta$ is diagonalizable if and only if equality holds for all $i$. Also recall from linear algebra that two different matrix representations $A$ and $B$ of the same linear map are similar, meaning that $S^{-1}AS = B$ for some invertible matrix $S$. In other words, the theorem shows that any square matrix over an algebraically closed field is similar to some matrix on Jordan normal form.

Looking at the definition of $\widehat{E}(\lambda)$, we see it is not immediately obvious how to calculate it; it is equal to the union of the kernels $\operatorname{Ker}((\theta - \lambda \operatorname{id})^k)$ for all $k$, but do we have to calculate infinitely many powers of $\theta - \lambda \operatorname{id}$ and their kernels? Fortunately, one is enough:

Corollary J.7. Let $N$ be the algebraic multiplicity of $\lambda$ as an eigenvalue of $\theta$. Then
$$\widehat{E}(\lambda) = \{ v \in V \mid (\theta - \lambda \operatorname{id})^N v = 0 \} = \operatorname{Ker}((\theta - \lambda \operatorname{id})^N).$$

Proof. Exercise J.8(iii).
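Corollary J.7 turns the computation of a generalized eigenspace into a single kernel computation. Here is a sketch of that computation (sympy assumed; the matrix is an arbitrary illustration and does not come from the notes).

```python
from sympy import Matrix, eye

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

# eigenvals() returns each eigenvalue with its algebraic multiplicity N,
# so Ker((A - lambda*id)^N) is the generalized eigenspace of Corollary J.7.
for lam, N in A.eigenvals().items():        # eigenvalue 2 has N = 2, eigenvalue 3 has N = 1
    basis = ((A - lam * eye(3)) ** N).nullspace()
    print(lam, [list(v) for v in basis])
# lambda = 2: spanned by (1,0,0) and (0,1,0); lambda = 3: spanned by (0,0,1)
```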

The next example provides the general algorithm for computing the Jordan normal form.

Example J.8. Let us consider a complex $4 \times 4$ matrix $\theta$ whose characteristic polynomial is given by
$$\chi(x) = x^4 - 2x^3 + x^2 = x^2(x - 1)^2.$$
The eigenvalues are then $\lambda = 0$ and $\lambda = 1$, both with algebraic multiplicity $2$. According to Corollary J.7, the generalized eigenspaces are given by $\widehat{E}(0) = \operatorname{Ker}(\theta^2)$ and $\widehat{E}(1) = \operatorname{Ker}((\theta - \operatorname{id})^2)$; suppose that these kernels, computed with our existing knowledge of linear algebra, come out to be
$$\widehat{E}(0) = \mathbf{C}(1,0,0,0) + \mathbf{C}(0,1,0,0) \quad \text{and} \quad \widehat{E}(1) = \mathbf{C}(1,0,1,0) + \mathbf{C}(0,1,0,1).$$
These four vectors are definitely a basis for $\mathbf{C}^4$. The matrix representation of $\theta$ in this basis is given by $S^{-1}\theta S$, where
$$S = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
This yields a block diagonal matrix $S^{-1}\theta S$, which has Jordan normal form
$$\begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$
consisting of the two Jordan blocks $J_2(0)$ and $J_2(1)$.
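The whole procedure of Example J.8 is automated in sympy's jordan_form method. The sketch below runs it on a sample matrix chosen here for illustration (it is not claimed to be the matrix of Example J.8, but it has the same characteristic polynomial and the same generalized eigenspaces), assuming sympy is available.

```python
from sympy import Matrix, symbols

x = symbols('x')

# Illustrative matrix with characteristic polynomial x^2 (x - 1)^2.
theta = Matrix([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 1],
                [0, 0, 0, 1]])

print(theta.charpoly(x).as_expr().factor())   # x**2*(x - 1)**2
P, J = theta.jordan_form()                    # theta == P * J * P^(-1)
print(J)                                      # block diagonal: Jordan blocks J_2(0) and J_2(1)
print(P * J * P.inv() == theta)               # True
```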

Proof of existence of the Jordan normal form

For any principal ideal domain $R$ (like $K[x]$!), recall that if $d$ is a greatest common divisor of $x, y \in R$, then there exist $\lambda, \mu \in R$ with $d = \lambda x + \mu y$. This statement can be generalized to any finite collection of elements. Given $x_1, x_2, \dots, x_n$ in any commutative ring $R$ (not necessarily a principal ideal domain), an element $d \in R$ is called a greatest common divisor if it is a common divisor and if any other common divisor divides $d$. In a unique factorization domain, any finite collection of elements has a greatest common divisor (see Exercise J.4). A finite collection of elements is called coprime if $1$ is a greatest common divisor.

Lemma J.9. Given elements $x_1, x_2, \dots, x_n$ in a principal ideal domain $R$ with greatest common divisor $d$, there exist $\mu_1, \mu_2, \dots, \mu_n \in R$ such that
$$d = \mu_1 x_1 + \mu_2 x_2 + \dots + \mu_n x_n.$$

Proof. Exercise J.5.

Warning J.10. We should warn the reader that we use the convention that the characteristic polynomial of a matrix $\theta$ is given by $\chi(x) = \det(x \operatorname{id} - \theta)$, in contrast to Thomsen (2016), where it is given by $\det(\theta - x \operatorname{id})$. The difference is a factor of $(-1)^n$, where $n$ is the dimension of the vector space. Our convention has the advantage that $\chi$ becomes a monic polynomial.

Proof that $V$ is the direct sum of the generalized eigenspaces. Let $\chi(x) = \det(x \operatorname{id} - \theta)$ denote the characteristic polynomial of $\theta$. The Cayley-Hamilton theorem (see, for instance, ibid., Sætning 15.11) tells us that $\chi(\theta) = 0$. Because the base field $K$ is algebraically closed, we may factorize $\chi$ in the form
$$\chi(x) = (x - \lambda_1)^{n_1} (x - \lambda_2)^{n_2} \dotsm (x - \lambda_r)^{n_r},$$
where the $\lambda_i$ are the distinct eigenvalues of $\theta$ and all $n_i \geq 1$. We first claim that in fact $\widehat{E}(\lambda_i) = V_i$, where
$$V_i = \{ v \in V \mid (\theta - \lambda_i \operatorname{id})^{n_i} v = 0 \},$$
so that in the definition of $\widehat{E}(\lambda_i)$, the multiplicity $n_i$ always works in the place of $n$ (note that we cannot appeal to Corollary J.7, since this result relies on the existence of the Jordan normal form). It is clear that $V_i \subseteq \widehat{E}(\lambda_i)$, so let us prove the other inclusion. If $v \in \widehat{E}(\lambda_i)$, then $(\theta - \lambda_i \operatorname{id})^n v = 0$ for some $n$, and we might as well assume that $n \geq n_i$. Now $(x - \lambda_i)^{n_i}$ is a greatest common divisor of $\chi(x)$ and $(x - \lambda_i)^n$, thus we may find $p(x), q(x) \in K[x]$ such that
$$(x - \lambda_i)^{n_i} = p(x)\chi(x) + q(x)(x - \lambda_i)^n.$$
Substituting $\theta$ for $x$ and using $\chi(\theta) = 0$, we find that
$$(\theta - \lambda_i \operatorname{id})^{n_i} = p(\theta)\chi(\theta) + q(\theta)(\theta - \lambda_i \operatorname{id})^n = q(\theta)(\theta - \lambda_i \operatorname{id})^n.$$
Applying this to $v$, we get $(\theta - \lambda_i \operatorname{id})^{n_i} v = 0$, so $v \in V_i$.

For $i = 1, 2, \dots, r$, we now define
$$f_i(x) = \frac{\chi(x)}{(x - \lambda_i)^{n_i}} = (x - \lambda_1)^{n_1} \dotsm (x - \lambda_{i-1})^{n_{i-1}} (x - \lambda_{i+1})^{n_{i+1}} \dotsm (x - \lambda_r)^{n_r}$$
and note that $f_1, f_2, \dots, f_r$ are coprime polynomials (why?). Hence Lemma J.9 implies that there exist $\mu_1, \mu_2, \dots, \mu_r \in K[x]$ such that
$$1 = f_1 \mu_1 + f_2 \mu_2 + \dots + f_r \mu_r.$$
Substituting $\theta$ for $x$, we get
$$\operatorname{id} = f_1(\theta)\mu_1(\theta) + f_2(\theta)\mu_2(\theta) + \dots + f_r(\theta)\mu_r(\theta).$$
Applying this to any $v \in V$, we find that
$$v = f_1(\theta)\mu_1(\theta)v + f_2(\theta)\mu_2(\theta)v + \dots + f_r(\theta)\mu_r(\theta)v.$$
In particular, any $v \in V$ lies in the sum $\operatorname{Im}(f_1(\theta)) + \dots + \operatorname{Im}(f_r(\theta))$ of the images of the $f_i(\theta)$, hence
$$V = \operatorname{Im}(f_1(\theta)) + \operatorname{Im}(f_2(\theta)) + \dots + \operatorname{Im}(f_r(\theta)).$$
Also, note that $0 = \chi(\theta) = (\theta - \lambda_i \operatorname{id})^{n_i} f_i(\theta)$, which implies $\operatorname{Im}(f_i(\theta)) \subseteq V_i$. Thus we also have $V = V_1 + V_2 + \dots + V_r$.

To prove that this sum is direct, it suffices by Exercise J.6 to show that
$$V_i \cap (V_1 + \dots + V_{i-1} + V_{i+1} + \dots + V_r) = 0$$
for all $i = 1, 2, \dots, r$. So let $v$ lie in this intersection. Because $v \in V_i$, we have $(\theta - \lambda_i \operatorname{id})^{n_i} v = 0$, while the fact that $v$ lies in the set in parentheses implies that $f_i(\theta)v = 0$. Now $f_i(x)$ and $(x - \lambda_i)^{n_i}$ are coprime polynomials, so we may find $u(x), w(x) \in K[x]$ such that
$$1 = u(x)f_i(x) + w(x)(x - \lambda_i)^{n_i}.$$
Substituting $\theta$ for $x$, we have $\operatorname{id} = u(\theta)f_i(\theta) + w(\theta)(\theta - \lambda_i \operatorname{id})^{n_i}$. Now applying this to $v$, both terms become zero, and we have $v = 0$.
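The Bézout-type identities used twice in the proof above (once to produce $p$ and $q$, once to produce the $\mu_i$) can be computed explicitly with the extended Euclidean algorithm for polynomials. The following is a sketch for two polynomials, assuming sympy is available; the two polynomials are chosen only as an illustration.

```python
from sympy import symbols, gcdex, expand

x = symbols('x')

# Two coprime polynomials, playing the roles of (x - lambda_i)^(n_i) and f_i(x).
f = (x - 1)**2
g = x**2

u, v, d = gcdex(f, g, x)    # u*f + v*g == d, where d = gcd(f, g)
print(d)                    # 1, since f and g are coprime
print(expand(u*f + v*g))    # 1, confirming the Bezout identity
```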

In order to prove the existence of a basis for $\widehat{E}(\lambda_i)$ in which the matrix has Jordan normal form, we note that $\theta - \lambda_i \operatorname{id}$ is a nilpotent operator on $\widehat{E}(\lambda_i)$, since, according to the above proof, $(\theta - \lambda_i \operatorname{id})^{n_i} = 0$ on $\widehat{E}(\lambda_i)$. Hence to finish the proof, all we need to do is to apply the following proposition to $\theta - \lambda_i \operatorname{id}$ on each subspace $\widehat{E}(\lambda_i)$.

Proposition J.11. Given a nilpotent operator $\theta \colon V \to V$ on a finite-dimensional $K$-vector space $V$, there exists a basis for $V$ in which $\theta$ has block diagonal form
$$\theta = \begin{pmatrix} J_{n_1}(0) & & \\ & \ddots & \\ & & J_{n_k}(0) \end{pmatrix}$$
for suitable $n_1 \geq n_2 \geq \dots \geq n_k \geq 1$.

For the proof, we define a Jordan chain to be a chain of elements of the form
$$v, \ \theta(v), \ \theta^2(v), \ \dots, \ \theta^{p-1}(v),$$
where $\theta^i(v) \neq 0$ for $i = 0, 1, 2, \dots, p - 1$, but $\theta^p(v) = 0$. Certainly, Jordan chains exist in $V$ because $\theta$ is nilpotent (those worried about allowing empty Jordan chains may assume $V \neq 0$). Let us note that a Jordan chain is automatically linearly independent: If $\sum_{i=0}^{p-1} \alpha_i \theta^i(v) = 0$, suppose that there is some $j$ with $\alpha_j \neq 0$, and assume that $j$ is minimal with this property, so that $\alpha_0 = \alpha_1 = \dots = \alpha_{j-1} = 0$. Then we have
$$0 = \theta^{p-1-j}\Bigl( \sum_{i=0}^{p-1} \alpha_i \theta^i(v) \Bigr) = \alpha_0 \theta^{p-1-j}(v) + \alpha_1 \theta^{p-j}(v) + \dots + \alpha_j \theta^{p-1}(v) = \alpha_j \theta^{p-1}(v),$$
showing that $\alpha_j = 0$, a contradiction.

If $U = \operatorname{span}\{\theta^{p-1}(v), \theta^{p-2}(v), \dots, \theta(v), v\}$ is the span of some Jordan chain, then the chain is a basis for $U$, and the matrix representation of $\theta$ on $U$ becomes
$$\begin{pmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{pmatrix},$$
which just happens to be $J_p(0)$.

A Jordan chain as above is called maximal if $v$ does not lie in the image $\theta(V)$ of $\theta$; this means that we cannot extend the chain backwards. Note that any $v \neq 0$ in $V$ is contained in some maximal Jordan chain (why?). For the proof, because of the above matrix representation, it suffices to prove that $V$ is the direct sum $V = U_1 \oplus \dots \oplus U_k$ of subspaces $U_i$ each spanned by some maximal Jordan chain.
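Jordan chains are easy to experiment with numerically: starting from a vector $v$, keep applying the nilpotent operator until the result is zero. A sketch (sympy assumed; both the nilpotent matrix and the starting vector are chosen here purely for illustration):

```python
from sympy import Matrix, zeros

# An illustrative nilpotent matrix and a starting vector v.
N = Matrix([[0, 1, 0],
            [0, 0, 1],
            [0, 0, 0]])
v = Matrix([0, 0, 1])

# Build the Jordan chain v, N v, N^2 v, ... and stop just before reaching zero.
chain = [v]
while N * chain[-1] != zeros(3, 1):
    chain.append(N * chain[-1])

print([list(w) for w in chain])   # [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
print(len(chain))                 # 3: a chain of length p = 3, spanning all of K^3
```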

Proof of Proposition J.11. We shall argue by induction on $\dim V$, the case $\dim V = 0$ being empty. Because $\theta$ is nilpotent, $\theta(V)$ must be a proper subspace of $V$ (why?). By induction, we may write $\theta(V) = W_1 \oplus \dots \oplus W_r$ as a direct sum of subspaces each spanned by some Jordan chain which is maximal in $\theta(V)$. Choose a generator in $W_i$ for each such chain and write it as $\theta(u_i)$, which is possible because it lies in the image $\theta(V)$. Then the chain has the form $\theta(u_i), \theta^2(u_i), \dots, \theta^{p_i - 1}(u_i)$. We let $U_i = W_i \oplus K u_i$ for $i = 1, 2, \dots, r$.

Now $\theta^{p_1 - 1}(u_1), \dots, \theta^{p_r - 1}(u_r)$ form a basis for $\theta(V) \cap \operatorname{Ker}\theta$ (why?), and we may extend this to a basis for all of $\operatorname{Ker}\theta$ by adding elements which we denote $u_{r+1}, u_{r+2}, \dots, u_k$. Each of these forms a maximal Jordan chain consisting of one element. We let $U_i = K u_i$ for $i = r + 1, \dots, k$. We claim that $V = U_1 \oplus \dots \oplus U_k$, or, in other words, that
$$u_1, \theta(u_1), \dots, \theta^{p_1 - 1}(u_1), \quad \dots, \quad u_k, \theta(u_k), \dots, \theta^{p_k - 1}(u_k)$$
is a basis for $V$, where we define $p_{r+1} = p_{r+2} = \dots = p_k = 1$.

To check that $V = U_1 + \dots + U_k$, let $v \in V$ be arbitrary. Then $\theta(v) \in \theta(V) = W_1 \oplus \dots \oplus W_r$, and because of how the $u_i$ were chosen, we can find a $u \in U_1 + \dots + U_r$ such that $\theta(u) = \theta(v)$. Thus $\theta(v - u) = 0$, so that $v - u \in \operatorname{Ker}\theta \subseteq U_1 + \dots + U_k$. Therefore, $v = u + (v - u)$ lies in $U_1 + \dots + U_k$ as well.

Now to check that the above vectors constitute a basis, it is enough to count that their number equals the dimension of $V$. From linear algebra, $\dim V = \dim \theta(V) + \dim \operatorname{Ker}\theta$. But $\theta(V)$ had a basis consisting of $\theta(u_i), \theta^2(u_i), \dots, \theta^{p_i - 1}(u_i)$ for $i = 1, 2, \dots, r$, so
$$\dim \theta(V) = (p_1 - 1) + (p_2 - 1) + \dots + (p_r - 1).$$
By choice, $\theta^{p_1 - 1}(u_1), \dots, \theta^{p_r - 1}(u_r)$ together with $u_{r+1}, \dots, u_k$ are a basis for $\operatorname{Ker}\theta$, hence $\dim \operatorname{Ker}\theta = k$. Thus
$$\dim V = (p_1 - 1) + (p_2 - 1) + \dots + (p_r - 1) + k = p_1 + p_2 + \dots + p_r + (k - r) = p_1 + p_2 + \dots + p_r + p_{r+1} + \dots + p_k.$$
This is exactly the number of elements in the claimed basis, which proves that $V = U_1 \oplus \dots \oplus U_k$. Reorganizing the $U_i$ according to dimension $n_i = \dim U_i$, we can get the desired form of $\theta$ where $n_1 \geq \dots \geq n_k \geq 1$.

Remark J.12. Notice that the assumption that $K$ was algebraically closed was only used to argue that the characteristic polynomial of $\theta$ could be factorized as the product
$$\chi(x) = (x - \lambda_1)^{n_1} (x - \lambda_2)^{n_2} \dotsm (x - \lambda_r)^{n_r}$$
of linear polynomials. So in other words, the conclusion of the theorem holds for operators over any field $K$ as long as the characteristic polynomial can be factorized this way. In particular, this is the case for $K = \mathbf{R}$ if $\chi$ has no non-real roots. Note that the uniqueness proof of the next section does not rely on the assumption of algebraic closure, either.
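As Remark J.12 points out, algebraic closure only enters through the factorization of the characteristic polynomial. A quick illustration over $K = \mathbf{R}$ (sympy assumed; the matrix is an arbitrary real example whose characteristic polynomial happens to have only real roots):

```python
from sympy import Matrix, symbols

x = symbols('x')
A = Matrix([[3, 1, 0],
            [0, 3, 0],
            [1, 0, 2]])

print(A.charpoly(x).as_expr().factor())  # factors as (x - 2)*(x - 3)**2, so it splits over R
P, J = A.jordan_form()
print(J)                                 # a real Jordan normal form: blocks J_1(2) and J_2(3)
```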

Proof of uniqueness of the Jordan normal form

To prove the uniqueness statement of Theorem J.6, note first that if $\theta$ is represented by a matrix of this form, then the size of the block $B(\lambda_i)$ is the dimension of $\widehat{E}(\lambda_i)$. Thus the size of each such block is uniquely determined. It thus suffices to prove that the internal structure of the block $B(\lambda_i)$ is uniquely determined, or, in other words, that the matrix representation of $\theta$ on $\widehat{E}(\lambda_i)$ is uniquely determined for each $i$. Therefore, in proving uniqueness, we may as well assume that $V = \widehat{E}(\lambda)$ for some $\lambda$, so that $\theta - \lambda \operatorname{id}$ is nilpotent on all of $V$. Also, replacing $\theta$ by $\theta - \lambda \operatorname{id}$, we may as well assume that $\lambda = 0$, so that $\theta$ itself is a nilpotent operator on $V$.

With these assumptions, suppose that
$$\theta = \begin{pmatrix} J_{n_1}(0) & & \\ & \ddots & \\ & & J_{n_k}(0) \end{pmatrix},$$
where $n_1 \geq n_2 \geq \dots \geq n_k \geq 1$. All we need is to prove that the integers $n_i$ are uniquely determined. We claim that
$$\dim \operatorname{Ker}\theta^k - \dim \operatorname{Ker}\theta^{k-1} = \#\{ i \mid k \leq n_i \} \tag{1}$$
for all $k$. We leave it to the reader to verify that this uniquely determines the sequence $n_1 \geq n_2 \geq \dots \geq n_k \geq 1$.

To prove the claim, note that
$$\theta^k = \begin{pmatrix} J_{n_1}(0)^k & & \\ & \ddots & \\ & & J_{n_k}(0)^k \end{pmatrix}$$
and that each block $J_{n_i}(0)^k$ is the $n_i \times n_i$ matrix which is everywhere zero except for a diagonal line of ones occupying the entries $(1, k+1), (2, k+2), \dots, (n_i - k, n_i)$ (when $k < n_i$). In particular, $J_{n_i}(0)^k = 0$ if and only if $k \geq n_i$. For each such matrix $J_{n_i}(0)^k$, the kernel has a basis consisting of the basis vectors corresponding to the first $\min(k, n_i)$ columns. So the difference in dimension between $\operatorname{Ker}(J_{n_i}(0)^k)$ and $\operatorname{Ker}(J_{n_i}(0)^{k-1})$ is $1$ if $k \leq n_i$ and $0$ otherwise. Adding up the dimensions of the kernels of each block, we arrive at the claim (1).
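Formula (1) gives a practical way of reading off the Jordan block sizes of a nilpotent matrix without producing any basis: the number of blocks of size at least $k$ is $\dim \operatorname{Ker}\theta^k - \dim \operatorname{Ker}\theta^{k-1}$. A sketch of the computation (sympy assumed; the nilpotent matrix below is illustrative):

```python
from sympy import Matrix

# A nilpotent matrix already written as the blocks J_2(0) and J_1(0),
# so we know the answer in advance: one block of size 2 and one of size 1.
theta = Matrix([[0, 1, 0],
                [0, 0, 0],
                [0, 0, 0]])
n = theta.shape[0]

dims = [0] + [len((theta**k).nullspace()) for k in range(1, n + 1)]
blocks_of_size_at_least_k = [dims[k] - dims[k - 1] for k in range(1, n + 1)]
print(blocks_of_size_at_least_k)   # [2, 1, 0]: two blocks of size >= 1, one of size >= 2
```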

References

Berg, C. (2013). Complex Analysis. Matematisk afdeling, Københavns Universitet.

Hatcher, A. (2002). Algebraic Topology. Cambridge University Press. url: www.math.cornell.edu/~hatcher/at/at.pdf.

Stetkær, H. (2012). Følger og rækker af funktioner. Lecture notes for the course Mathematical analysis 2.

Thomsen, J. F. (2016). Lineær algebra. Lecture notes for Linear Algebra at Aarhus University.

Exercises

J.1. Prove that for any $\lambda \in K$, the generalized eigenspace $\widehat{E}(\lambda)$ is a vector subspace of $V$ containing the eigenspace $E(\lambda)$.

J.2. Prove that $\theta(\widehat{E}(\lambda)) \subseteq \widehat{E}(\lambda)$. We usually formulate this by saying that $\widehat{E}(\lambda)$ is invariant under $\theta$.

J.3. Prove that no finite field $\mathbf{F}_q$ is algebraically closed. (Hint: Recall that $x^q = x$ for all $x \in \mathbf{F}_q$.)

J.4. Prove that in a unique factorization domain, a greatest common divisor exists between any finite number of elements.

J.5. Prove Lemma J.9.

J.6. Prove that a vector space $V$ is the direct sum $V = V_1 \oplus \dots \oplus V_n$ of subspaces if and only if $V = V_1 + \dots + V_n$ and
$$V_i \cap (V_1 + \dots + V_{i-1} + V_{i+1} + \dots + V_n) = 0$$
for all $i = 1, 2, \dots, n$.

J.7. Calculate the Jordan normal form of the following matrices:
(a) …  (b) …  (c) …

J.8. If $\theta$ is given on Jordan normal form as in Theorem J.6, verify the following:
(i) The algebraic multiplicity of $\lambda_i$ is the size of the block $B(\lambda_i)$.
(ii) The geometric multiplicity of $\lambda_i$ is the number of Jordan blocks in $B(\lambda_i)$.
(iii) The generalized eigenspace is given by
$$\widehat{E}(\lambda_i) = \{ v \in V \mid (\theta - \lambda_i \operatorname{id})^{n_i} v = 0 \} = \operatorname{Ker}((\theta - \lambda_i \operatorname{id})^{n_i}),$$
where $n_i$ is the algebraic multiplicity of $\lambda_i$; in other words, in the definition of $\widehat{E}(\lambda_i)$, $n = n_i$ always suffices.

J.9. Generalize the results of Example J.4 by finding $\widehat{E}(\lambda)$ for any $\lambda \in \mathbf{R}$. Also try replacing $\theta = d/dx$ by $\theta = d^2/dx^2$.

J.10. The Cayley-Hamilton theorem states that a linear operator $\theta \colon V \to V$ on a finite-dimensional vector space is annihilated by its characteristic polynomial $\chi$, meaning that $\chi(\theta) = 0$. However, there may very well be nonzero polynomials of smaller degree with the same property.
(i) Verify that $I = \{ f \in K[x] \mid f(\theta) = 0 \}$ is an ideal of $K[x]$. Deduce that $I$ has a unique monic generator $\mu$, called the minimal polynomial of $\theta$. Note that $\mu$ divides $\chi$.
(ii) What are the roots of $\mu$?

(iii) Can you deduce $\mu$ from the Jordan normal form of $\theta$?
(iv) Under what circumstances do we have $\chi = \mu$?
(v) What is the minimal polynomial of a diagonalizable operator?

J.11. Prove that the determinant and trace of a linear map on a finite-dimensional vector space are well-defined in the sense that they do not depend on the choice of matrix representation. (Recall that the trace $\operatorname{tr}(A)$ of a square matrix is the sum of its diagonal entries and that $\operatorname{tr}(AB) = \operatorname{tr}(BA)$ for all $A, B$.)

J.12. Let $\theta \colon V \to V$ be an operator on a finite-dimensional vector space over the algebraically closed field $K$. Let $\lambda_1, \lambda_2, \dots, \lambda_r$ be the distinct eigenvalues of $\theta$ with algebraic multiplicities $n_1, n_2, \dots, n_r$, respectively. Prove that the determinant and trace of $\theta$ are given by (cf. Exercise J.11)
$$\det(\theta) = \lambda_1^{n_1} \lambda_2^{n_2} \dotsm \lambda_r^{n_r} \quad \text{and} \quad \operatorname{tr}(\theta) = n_1 \lambda_1 + n_2 \lambda_2 + \dots + n_r \lambda_r.$$
In other words, the determinant and trace are the product and sum, respectively, of the eigenvalues, counted with multiplicity.

J.13. Let $\theta \colon V \to V$ be a linear operator on a finite-dimensional vector space over the algebraically closed field $K$.
(i) Prove that there exists a unique decomposition $\theta = D + N$ of $\theta$ as a sum of a diagonalizable matrix $D$ and a nilpotent matrix $N$ that commute with each other. This is called the additive Jordan decomposition.
(ii) Prove that if $\theta$ is nonsingular, it can be written uniquely as a product $\theta = DU$ of a diagonalizable matrix $D$ and a unipotent matrix $U$ that commute with each other, where unipotent means that $U$ is the sum of the identity and a nilpotent matrix. This is called the multiplicative Jordan decomposition.

J.14.
(i) Prove that over an algebraically closed field of characteristic $p > 0$, some positive power of any matrix is diagonalizable.
(ii) Prove the same statement over finite fields.

J.15. The exponential of a linear operator $\theta \colon V \to V$ on a finite-dimensional real or complex vector space is defined by
$$\exp(\theta) = \sum_{n=0}^{\infty} \frac{1}{n!} \theta^n.$$
It is not immediately obvious that the sum always converges, and in this exercise, we give a proof using the Jordan normal form. Because of this, we assume that the base field is $\mathbf{C}$.
(i) Argue that the sum converges for diagonalizable and nilpotent matrices.
(ii) Prove that if $\exp(A)$ and $\exp(B)$ converge for two matrices $A$ and $B$ satisfying $AB = BA$, then $\exp(A + B)$ converges to $\exp(A)\exp(B)$.

(iii) Combine the two above statements with the existence of the Jordan normal form to prove that $\exp(\theta)$ exists for all operators $\theta$, and write down an explicit expression for it.

J.16. In this note, we proved the existence of the Jordan normal form as a corollary to Cayley-Hamilton, but it is also possible to give independent proofs. Show how one can then derive Cayley-Hamilton as a corollary to the existence of the Jordan normal form. (The proof becomes universal once we know that any field can be embedded inside an algebraically closed field. The smallest algebraically closed field containing a given field $K$ is called the algebraic closure of $K$ and is written $\overline{K}$. For instance, $\mathbf{C}$ is the algebraic closure of $\mathbf{R}$. Existence and uniqueness (up to isomorphism) of the algebraic closure can be proved using the Axiom of Choice.)
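In the spirit of Exercise J.15, sympy can compute matrix exponentials exactly, and for a single Jordan block the answer has a particularly clean form (a sketch, assuming sympy is available; the block is chosen for illustration).

```python
from sympy import Matrix

# The Jordan block J_2(3), chosen for illustration.
J = Matrix([[3, 1],
            [0, 3]])

print(J.exp())
# Matrix([[exp(3), exp(3)], [0, exp(3)]]), i.e.
# exp(J_2(lambda)) = exp(lambda) * (identity + nilpotent part) for a 2x2 block.
```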
