Chapter Five
Similarity

While studying matrix equivalence, we have shown that for any homomorphism there are bases $B$ and $D$ such that the representation matrix has a block partial-identity form.

$$\operatorname{Rep}_{B,D}(h) = \left(\begin{array}{c|c} \text{Identity} & \text{Zero} \\ \hline \text{Zero} & \text{Zero} \end{array}\right)$$

This representation describes the map as sending $c_1\vec{\beta}_1 + \cdots + c_n\vec{\beta}_n$ to $c_1\vec{\delta}_1 + \cdots + c_k\vec{\delta}_k$, where $n$ is the dimension of the domain and $k$ is the dimension of the range. So, under this representation the action of the map is easy to understand because most of the matrix entries are zero.

This chapter considers the special case where the domain and the codomain are equal, that is, where the homomorphism is a transformation. In this case we naturally ask to find a single basis $B$ so that $\operatorname{Rep}_{B,B}(t)$ is as simple as possible (we will take "simple" to mean that it has many zeroes). A matrix having the above block partial-identity form is not always possible here. But we will develop a form that comes close, a representation that is nearly diagonal.

I Complex Vector Spaces

This chapter requires that we factor polynomials. Of course, many polynomials do not factor over the real numbers; for instance, $x^2 + 1$ does not factor into the product of two linear polynomials with real coefficients. For that reason, we shall from now on take our scalars from the complex numbers. That is, we are shifting from studying vector spaces over the real numbers to vector spaces over the complex numbers; in this chapter vector and matrix entries are complex. Any real number is a complex number, and a glance through this chapter shows that most of the examples use only real numbers. Nonetheless, the critical theorems require that the scalars be complex numbers, so the first section below is a quick review of complex numbers.
In this book we are moving to the more general context of taking scalars to be complex only for the pragmatic reason that we must do so in order to develop the representation. We will not go into using other sets of scalars in more detail because it could distract from our goal. However, the idea of taking scalars from a structure other than the real numbers is an interesting one. Delightful presentations taking this approach are in [Halmos] and [Hoffman & Kunze].

I.1 Factoring and Complex Numbers; A Review

This subsection is a review only and we take the main results as known. For proofs, see [Birkhoff & MacLane] or [Ebbinghaus]. Just as integers have a division operation (e.g., 4 goes 5 times into 21 with remainder 1), so do polynomials.

1.1 Theorem (Division Theorem for Polynomials) Let $c(x)$ be a polynomial. If $m(x)$ is a non-zero polynomial then there are quotient and remainder polynomials $q(x)$ and $r(x)$ such that $c(x) = m(x) \cdot q(x) + r(x)$ where the degree of $r(x)$ is strictly less than the degree of $m(x)$.

In this book constant polynomials, including the zero polynomial, are said to have degree 0. (This is not the standard definition, but it is convenient here.)

The point of the integer division statement "4 goes 5 times into 21 with remainder 1" is that the remainder is less than 4; while 4 goes 5 times, it does not go 6 times. In the same way, the point of the polynomial division statement is its final clause.

1.2 Example If $c(x) = 2x^3 - 3x^2 + 4x$ and $m(x) = x^2 + 1$ then $q(x) = 2x - 3$ and $r(x) = 2x + 3$. Note that $r(x)$ has a lower degree than $m(x)$.

1.3 Corollary The remainder when $c(x)$ is divided by $x - \lambda$ is the constant polynomial $r(x) = c(\lambda)$.

Proof. The remainder must be a constant polynomial because it is of degree less than the degree of the divisor $x - \lambda$. To determine the constant, take $m(x)$ from the theorem to be $x - \lambda$ and substitute $\lambda$ for $x$ to get $c(\lambda) = (\lambda - \lambda) \cdot q(\lambda) + r(x)$.
QED

If a divisor $m(x)$ goes into a dividend $c(x)$ evenly, meaning that $r(x)$ is the zero polynomial, then $m(x)$ is a factor of $c(x)$. Any root of the factor (any $\lambda \in \mathbb{R}$ such that $m(\lambda) = 0$) is a root of $c(x)$ since $c(\lambda) = m(\lambda) \cdot q(\lambda) = 0$. The prior corollary immediately yields the following converse.

1.4 Corollary If $\lambda$ is a root of the polynomial $c(x)$ then $x - \lambda$ divides $c(x)$ evenly, that is, $x - \lambda$ is a factor of $c(x)$.
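Theorem 1.1 and Corollary 1.3 can be checked by machine. A minimal sketch with sympy, using the polynomials of Example 1.2; the root $\lambda = 3$ tried at the end is an illustrative choice, not one from the text:

```python
from sympy import symbols, div

x = symbols('x')

# The polynomials of Example 1.2
c = 2*x**3 - 3*x**2 + 4*x
m = x**2 + 1

# Division Theorem: c = m*q + r with deg(r) < deg(m)
q, r = div(c, m, x)

# Corollary 1.3: the remainder on division by (x - lam) is the constant c(lam)
lam = 3  # illustrative root to divide out
_, r2 = div(c, x - lam, x)
```

Here `q` comes out as $2x - 3$ and `r` as $2x + 3$, matching Example 1.2, and `r2` equals $c(3)$.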
Finding the roots and factors of a high-degree polynomial can be hard. But for second-degree polynomials we have the quadratic formula: the roots of $ax^2 + bx + c$ are

$$\lambda_1 = \frac{-b + \sqrt{b^2 - 4ac}}{2a} \qquad \lambda_2 = \frac{-b - \sqrt{b^2 - 4ac}}{2a}$$

(if the discriminant $b^2 - 4ac$ is negative then the polynomial has no real number roots). A polynomial that cannot be factored into two lower-degree polynomials with real number coefficients is irreducible over the reals.

1.5 Theorem Any constant or linear polynomial is irreducible over the reals. A quadratic polynomial is irreducible over the reals if and only if its discriminant is negative. No cubic or higher-degree polynomial is irreducible over the reals.

1.6 Corollary Any polynomial with real coefficients can be factored into linear and irreducible quadratic polynomials. That factorization is unique; any two factorizations have the same powers of the same factors.

Note the analogy with the prime factorization of integers. In both cases, the uniqueness clause is very useful.

1.7 Example Because of uniqueness we know, without multiplying them out, that $(x + 3)^2(x^2 + 1)^3$ does not equal $(x + 3)^4(x^2 + x + 1)$.

1.8 Example By uniqueness, if $c(x) = m(x) \cdot q(x)$ then where $c(x) = (x - 3)^2(x + 2)^3$ and $m(x) = (x - 3)(x + 2)^2$, we know that $q(x) = (x - 3)(x + 2)$.

While $x^2 + 1$ has no real roots and so doesn't factor over the real numbers, if we imagine a root, traditionally denoted $i$ so that $i^2 + 1 = 0$, then $x^2 + 1$ factors into a product of linears $(x - i)(x + i)$. So we adjoin this root $i$ to the reals and close the new system with respect to addition, multiplication, etc. (i.e., we also add $3 + i$, and $2i$, and $3 + 2i$, etc., putting in all linear combinations of $1$ and $i$). We then get a new structure, the complex numbers, denoted $\mathbb{C}$.

In $\mathbb{C}$ we can factor (obviously, at least some) quadratics that would be irreducible if we were to stick to the real numbers. Surprisingly, in $\mathbb{C}$ we can not only factor $x^2 + 1$ and its close relatives, we can factor any quadratic.
$$ax^2 + bx + c = a \cdot \Bigl(x - \frac{-b + \sqrt{b^2 - 4ac}}{2a}\Bigr) \cdot \Bigl(x - \frac{-b - \sqrt{b^2 - 4ac}}{2a}\Bigr)$$

1.9 Example The second degree polynomial $x^2 + x + 1$ factors over the complex numbers into the product of two first degree polynomials.

$$x^2 + x + 1 = \Bigl(x - \frac{-1 + \sqrt{3}\,i}{2}\Bigr)\Bigl(x - \frac{-1 - \sqrt{3}\,i}{2}\Bigr)$$

1.10 Corollary (Fundamental Theorem of Algebra) Polynomials with complex coefficients factor into linear polynomials with complex coefficients. The factorization is unique.
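The quadratic formula works unchanged over $\mathbb{C}$ once the square root is taken as a complex square root. A small sketch, applied to the polynomial $x^2 + x + 1$ of Example 1.9:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c via the quadratic formula, over C."""
    d = cmath.sqrt(b*b - 4*a*c)  # complex sqrt handles a negative discriminant
    return (-b + d) / (2*a), (-b - d) / (2*a)

# x^2 + x + 1 is irreducible over the reals (discriminant -3 < 0)
# but factors over C into (x - r1)(x - r2), as in Example 1.9
r1, r2 = quadratic_roots(1, 1, 1)
```

The two roots are $(-1 \pm \sqrt{3}\,i)/2$; their sum is $-1$ and their product is $1$, matching the coefficients.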
I.2 Complex Representations

Recall the definitions of complex number addition and multiplication.

$$(a + bi) + (c + di) = (a + c) + (b + d)i$$
$$(a + bi)(c + di) = ac + adi + bci + bd(-1) = (ac - bd) + (ad + bc)i$$

2.1 Example For instance, $(1 - 2i) + (5 + 4i) = 6 + 2i$ and $(2 - 3i)(4 - 0.5i) = 6.5 - 13i$.

Handling scalar operations with those rules, all of the operations that we've covered for real vector spaces carry over unchanged.

2.2 Example Matrix multiplication is the same, although the scalar arithmetic involves more bookkeeping.

$$\begin{pmatrix} 1 + 1i & 2 - 0i \\ i & -2 + 3i \end{pmatrix} \begin{pmatrix} 1 + 0i & 1 - 0i \\ 3i & -i \end{pmatrix} = \begin{pmatrix} (1 + 1i)(1 + 0i) + (2 - 0i)(3i) & (1 + 1i)(1 - 0i) + (2 - 0i)(-i) \\ (i)(1 + 0i) + (-2 + 3i)(3i) & (i)(1 - 0i) + (-2 + 3i)(-i) \end{pmatrix} = \begin{pmatrix} 1 + 7i & 1 - 1i \\ -9 - 5i & 3 + 3i \end{pmatrix}$$

Everything else from prior chapters that we can, we shall also carry over unchanged. For instance, we shall call

$$\langle \begin{pmatrix} 1 + 0i \\ 0 + 0i \\ \vdots \\ 0 + 0i \end{pmatrix}, \dots, \begin{pmatrix} 0 + 0i \\ 0 + 0i \\ \vdots \\ 1 + 0i \end{pmatrix} \rangle$$

the standard basis for $\mathbb{C}^n$ as a vector space over $\mathbb{C}$ and again denote it $\mathcal{E}_n$.
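Example 2.2's bookkeeping can be delegated to a library; complex matrix arithmetic is the same row-times-column rule with complex scalars. A sketch with numpy, using the two matrices of Example 2.2 (entries read off from the expanded products shown there):

```python
import numpy as np

A = np.array([[1 + 1j, 2 + 0j],
              [1j, -2 + 3j]])
B = np.array([[1 + 0j, 1 + 0j],
              [3j, -1j]])

# Same multiplication rule as for real matrices; only the scalars are complex
C = A @ B
```

The product `C` is $\begin{pmatrix} 1+7i & 1-i \\ -9-5i & 3+3i \end{pmatrix}$, matching the example.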
II Similarity

II.1 Definition and Examples

We've defined $H$ and $\hat{H}$ to be matrix-equivalent if there are nonsingular matrices $P$ and $Q$ such that $\hat{H} = PHQ$. That definition is motivated by a diagram in which $h : V \to W$ is represented by $H$ with respect to the bases $B, D$ along the top and by $\hat{H}$ with respect to $\hat{B}, \hat{D}$ along the bottom, with identity maps down the sides; it shows that $H$ and $\hat{H}$ both represent $h$, but with respect to different pairs of bases. We now specialize that setup to the case where the codomain equals the domain, and where the codomain's basis equals the domain's basis, so that $t : V \to V$ is represented with respect to $B, B$ along the top and with respect to $D, D$ along the bottom.

To move from the lower left of that diagram to the lower right we can either go straight over, or up, over, and then down. In matrix terms,

$$\operatorname{Rep}_{D,D}(t) = \operatorname{Rep}_{B,D}(\operatorname{id})\; \operatorname{Rep}_{B,B}(t)\; \bigl(\operatorname{Rep}_{B,D}(\operatorname{id})\bigr)^{-1}$$

(recall that a representation of composition like this one reads right to left).

1.1 Definition The matrices $T$ and $S$ are similar if there is a nonsingular $P$ such that $T = PSP^{-1}$.

Since nonsingular matrices are square, the similar matrices $T$ and $S$ must be square and of the same size.

1.2 Example With these two,

$$P = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \qquad S =$$

calculation gives that $S$ is similar to this matrix.

$$T = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}$$
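Definition 1.1 is easy to explore numerically: pick any $S$ and any nonsingular $P$, and $PSP^{-1}$ is by construction similar to $S$. A sketch with numpy; the matrices below are illustrative choices of our own (the entries of $S$ in Example 1.2 did not survive intact here), not the example's data:

```python
import numpy as np

# Hypothetical S, plus the P of Example 1.2; any nonsingular P would do
S = np.array([[2.0, -3.0],
              [1.0, -1.0]])
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])

T = P @ S @ np.linalg.inv(P)  # T = P S P^{-1} is similar to S by definition
```

Similar matrices share trace, determinant, and eigenvalues (facts proved in the exercises below), which gives quick sanity checks on the computation.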
1.3 Example The only matrix similar to the zero matrix is itself: $PZP^{-1} = PZ = Z$. The only matrix similar to the identity matrix is itself: $PIP^{-1} = PP^{-1} = I$.

Since matrix similarity is a special case of matrix equivalence, if two matrices are similar then they are equivalent. What about the converse: must matrix equivalent square matrices be similar? The answer is no. The prior example shows that the similarity classes are different from the matrix equivalence classes, because the matrix equivalence class of the identity consists of all nonsingular matrices of that size. Thus, for instance, the identity matrix is matrix equivalent to any other nonsingular matrix of the same size, but similar only to itself. So some matrix equivalence classes split into two or more similarity classes; similarity gives a finer partition than does equivalence. This picture shows some matrix equivalence classes subdivided into similarity classes.

To understand the similarity relation we shall study the similarity classes. We approach this question in the same way that we've studied both the row equivalence and matrix equivalence relations, by finding a canonical form for representatives of the similarity classes, called Jordan form. With this canonical form, we can decide if two matrices are similar by checking whether they reduce to the same representative. We've also seen with both row equivalence and matrix equivalence that a canonical form gives us insight into the ways in which members of the same class are alike (e.g., two identically-sized matrices are matrix equivalent if and only if they have the same rank).

Exercises

1.4 For
$$S = \begin{pmatrix} 1 & 3 \\ 2 & 6 \end{pmatrix} \qquad T = \begin{pmatrix} & \\ /2 & 5 \end{pmatrix} \qquad P = \begin{pmatrix} & 4 \\ & \end{pmatrix}$$
check that $T = PSP^{-1}$.

1.5 Example 1.3 shows that the only matrix similar to a zero matrix is itself and that the only matrix similar to the identity is itself.
(a) Show that the $1 \times 1$ matrix $(2)$, also, is similar only to itself.
(b) Is a matrix of the form $cI$ for some scalar $c$ similar only to itself?
(c) Is a diagonal matrix similar only to itself?
1.6 Show that these matrices are not similar. ( ) ( 1 0 ) More information on representatives is in the appendix.
1.7 Consider the transformation $t : \mathcal{P}_2 \to \mathcal{P}_2$ described by $x^2 \mapsto x + 1$, $x \mapsto x^2 - 1$, and $1 \mapsto 3$.
(a) Find $T = \operatorname{Rep}_{B,B}(t)$ where $B = \langle x^2, x, 1 \rangle$.
(b) Find $S = \operatorname{Rep}_{D,D}(t)$ where $D = \langle 1, 1 + x, 1 + x + x^2 \rangle$.
(c) Find the matrix $P$ such that $T = PSP^{-1}$.

1.8 Exhibit a nontrivial similarity relationship in this way: let $t : \mathbb{C}^2 \to \mathbb{C}^2$ act by a map of your choosing, pick two bases $B$ and $D$, and represent $t$ with respect to them: $T = \operatorname{Rep}_{B,B}(t)$ and $S = \operatorname{Rep}_{D,D}(t)$. Then compute the $P$ and $P^{-1}$ to change bases from $B$ to $D$ and back again.

1.9 Explain Example 1.3 in terms of maps.

1.10 Are there two matrices $A$ and $B$ that are similar while $A^2$ and $B^2$ are not similar? [Halmos]

1.11 Prove that if two matrices are similar and one is invertible then so is the other.

1.12 Show that similarity is an equivalence relation.

1.13 Consider a matrix representing, with respect to some $B, B$, reflection across the x-axis in $\mathbb{R}^2$. Consider also a matrix representing, with respect to some $D, D$, reflection across the y-axis. Must they be similar?

1.14 Prove that similarity preserves determinants and rank. Does the converse hold?

1.15 Is there a matrix equivalence class with only one matrix similarity class inside? One with infinitely many similarity classes?

1.16 Can two different diagonal matrices be in the same similarity class?

1.17 Prove that if two matrices are similar then their $k$-th powers are similar when $k > 0$. What if $k \leq 0$?

1.18 Let $p(x)$ be the polynomial $c_n x^n + \cdots + c_1 x + c_0$. Show that if $T$ is similar to $S$ then $p(T) = c_n T^n + \cdots + c_1 T + c_0 I$ is similar to $p(S) = c_n S^n + \cdots + c_1 S + c_0 I$.

1.19 List all of the matrix equivalence classes of $1 \times 1$ matrices. Also list the similarity classes, and describe which similarity classes are contained inside of each matrix equivalence class.

1.20 Does similarity preserve sums?

1.21 Show that if $T - \lambda I$ and $N$ are similar matrices then $T$ and $N + \lambda I$ are also similar.
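Exercise 1.14's fact (similarity preserves determinant and rank) can be illustrated, though of course not proved, numerically. A sketch with numpy, using a hypothetical $S$ and a randomly generated nonsingular $P$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 matrix (illustrative choice, not from the text)
S = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

# Random P made strictly diagonally dominant, hence nonsingular
P = rng.random((3, 3)) + 3 * np.eye(3)

T = P @ S @ np.linalg.inv(P)

det_match = np.isclose(np.linalg.det(T), np.linalg.det(S))
rank_match = np.linalg.matrix_rank(T) == np.linalg.matrix_rank(S)
```

Running this with other choices of $S$ and $P$ behaves the same way; the converse direction asked about in the exercise is a separate question that a numeric check cannot settle.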
II.2 Diagonalizability

The prior subsection defines the relation of similarity and shows that, although similar matrices are necessarily matrix equivalent, the converse does not hold. Some matrix-equivalence classes break into two or more similarity classes (the nonsingular $n \times n$ matrices, for instance). This means that the canonical form for matrix equivalence, a block partial-identity, cannot be used as a canonical form for matrix similarity, because the partial-identities cannot be in more than one similarity class, so there are similarity classes without one. This picture illustrates; as earlier in this book, class representatives are shown with stars.

We are developing a canonical form for representatives of the similarity classes. We naturally try to build on our previous work, meaning first that the partial identity matrices should represent the similarity classes into which they fall, and beyond that, that the representatives should be as simple as possible. The simplest extension of the partial-identity form is a diagonal form.

2.1 Definition A transformation is diagonalizable if it has a diagonal representation with respect to the same basis for the codomain as for the domain. A diagonalizable matrix is one that is similar to a diagonal matrix: $T$ is diagonalizable if there is a nonsingular $P$ such that $PTP^{-1}$ is diagonal.

2.2 Example The matrix is diagonalizable. ( = ) ( ) ( ) ( )

2.3 Example Not every matrix is diagonalizable. The square of

$$N = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$$

is the zero matrix. Thus, for any map $n$ that $N$ represents (with respect to the same basis for the domain as for the codomain), the composition $n \circ n$ is the zero map. This implies that no such map $n$ can be diagonally represented (with respect to any $B, B$) because no power of a nonzero diagonal matrix is zero. That is, there is no diagonal matrix in $N$'s similarity class.

That example shows that a diagonal form will not do for a canonical form; we cannot find a diagonal matrix in each matrix similarity class. However, the canonical form that we are developing has the property that if a matrix can be diagonalized then the diagonal matrix is the canonical representative of the similarity class. The next result characterizes which maps can be diagonalized.

2.4 Corollary A transformation $t$ is diagonalizable if and only if there is a basis $B = \langle \vec{\beta}_1, \dots, \vec{\beta}_n \rangle$ and scalars $\lambda_1, \dots, \lambda_n$ such that $t(\vec{\beta}_i) = \lambda_i \vec{\beta}_i$ for each $i$.
Proof. This follows from the definition by considering a diagonal representation matrix.

$$\operatorname{Rep}_{B,B}(t) = \begin{pmatrix} \vdots & & \vdots \\ \operatorname{Rep}_B(t(\vec{\beta}_1)) & \cdots & \operatorname{Rep}_B(t(\vec{\beta}_n)) \\ \vdots & & \vdots \end{pmatrix} = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}$$

This representation is equivalent to the existence of a basis satisfying the stated conditions simply by the definition of matrix representation. QED

2.5 Example To diagonalize

$$T = \begin{pmatrix} 3 & 2 \\ 0 & 1 \end{pmatrix}$$

we take it as the representation of a transformation with respect to the standard basis, $T = \operatorname{Rep}_{\mathcal{E}_2,\mathcal{E}_2}(t)$, and we look for a basis $B = \langle \vec{\beta}_1, \vec{\beta}_2 \rangle$ such that

$$\operatorname{Rep}_{B,B}(t) = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$$

that is, such that $t(\vec{\beta}_1) = \lambda_1 \vec{\beta}_1$ and $t(\vec{\beta}_2) = \lambda_2 \vec{\beta}_2$.

We are looking for scalars $x$ such that this equation

$$\begin{pmatrix} 3 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = x \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$$

has solutions $b_1$ and $b_2$, which are not both zero. Rewrite that as a linear system.

$$\begin{aligned} (3 - x) \cdot b_1 + 2 \cdot b_2 &= 0 \\ (1 - x) \cdot b_2 &= 0 \end{aligned} \qquad (*)$$

In the bottom equation the two numbers multiply to give zero only if at least one of them is zero, so there are two possibilities, $b_2 = 0$ and $x = 1$. In the $b_2 = 0$ possibility, the first equation gives that either $b_1 = 0$ or $x = 3$. Since the case of both $b_1 = 0$ and $b_2 = 0$ is disallowed, we are left looking at the possibility of $x = 3$. With it, the first equation in $(*)$ is $0 \cdot b_1 + 2 \cdot b_2 = 0$ and so associated with 3 are vectors with a second component of zero and a first component that is free.

$$\begin{pmatrix} 3 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} b_1 \\ 0 \end{pmatrix} = 3 \cdot \begin{pmatrix} b_1 \\ 0 \end{pmatrix}$$

That is, one solution to $(*)$ is $\lambda_1 = 3$, and we have a first basis vector.

$$\vec{\beta}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$
In the $x = 1$ possibility, the first equation in $(*)$ is $2 \cdot b_1 + 2 \cdot b_2 = 0$, and so associated with 1 are vectors whose second component is the negative of their first component.

$$\begin{pmatrix} 3 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} b_1 \\ -b_1 \end{pmatrix} = 1 \cdot \begin{pmatrix} b_1 \\ -b_1 \end{pmatrix}$$

Thus, another solution is $\lambda_2 = 1$ and a second basis vector is this.

$$\vec{\beta}_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$

To finish, drawing the similarity diagram ($\mathbb{R}^2$ with respect to $\mathcal{E}_2$ mapped by $t$, represented by $T$, along the top; $\mathbb{R}^2$ with respect to $B$ mapped by $t$, represented by $D$, along the bottom; identity maps down the sides) and noting that the matrix $\operatorname{Rep}_{B,\mathcal{E}_2}(\operatorname{id})$ is easy leads to this diagonalization.

$$\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & -1 \end{pmatrix}^{-1} \begin{pmatrix} 3 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & -1 \end{pmatrix}$$

In the next subsection, we will expand on that example by considering more closely the property of Corollary 2.4. This includes seeing another way, the way that we will routinely use, to find the $\lambda$'s.

Exercises

2.6 Repeat Example 2.5 for the matrix from Example 2.2.

2.7 Diagonalize these upper triangular matrices. (a) ( ) (b) ( )

2.8 What form do the powers of a diagonal matrix have?

2.9 Give two same-sized diagonal matrices that are not similar. Must any two different diagonal matrices come from different similarity classes?

2.10 Give a nonsingular diagonal matrix. Can a diagonal matrix ever be singular?

2.11 Show that the inverse of a diagonal matrix is the diagonal of the inverses, if no element on that diagonal is zero. What happens when a diagonal entry is zero?

2.12 The equation ending Example 2.5 is a bit jarring because for $P$ we must take the first matrix, which is shown as an inverse, and for $P^{-1}$ we take the inverse of the first matrix, so that the two $-1$ powers cancel and this matrix is shown without a superscript $-1$.
(a) Check that this nicer-appearing equation holds.

$$\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 3 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & -1 \end{pmatrix}^{-1}$$
(b) Is the previous item a coincidence? Or can we always switch the $P$ and the $P^{-1}$?

2.13 Show that the $P$ used to diagonalize in Example 2.5 is not unique.

2.14 Find a formula for the powers of this matrix. Hint: see the exercise above on powers of a diagonal matrix.

2.15 Diagonalize these. (a) ( 1 1 ) (b) ( )

2.16 We can ask how diagonalization interacts with the matrix operations. Assume that $t, s : V \to V$ are each diagonalizable. Is $ct$ diagonalizable for all scalars $c$? What about $t + s$? $t \circ s$?

2.17 Show that matrices of this form are not diagonalizable.

$$\begin{pmatrix} 1 & c \\ 0 & 1 \end{pmatrix} \qquad c \neq 0$$

2.18 Show that each of these is diagonalizable.
(a) $\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$  (b) $\begin{pmatrix} x & y \\ y & z \end{pmatrix}$, $x, y, z$ scalars

II.3 Eigenvalues and Eigenvectors

In this subsection we will focus on the property of Corollary 2.4.

3.1 Definition A transformation $t : V \to V$ has a scalar eigenvalue $\lambda$ if there is a nonzero eigenvector $\vec{\zeta} \in V$ such that $t(\vec{\zeta}) = \lambda \cdot \vec{\zeta}$.

("Eigen" is German for "characteristic of" or "peculiar to"; some authors call these characteristic values and vectors. No authors call them "peculiar".)

3.2 Example The projection map

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} \xrightarrow{\;\pi\;} \begin{pmatrix} x \\ y \\ 0 \end{pmatrix} \qquad x, y, z \in \mathbb{C}$$

has an eigenvalue of 1 associated with any eigenvector of the form

$$\begin{pmatrix} x \\ y \\ 0 \end{pmatrix}$$

where $x$ and $y$ are non-0 scalars. On the other hand, 2 is not an eigenvalue of $\pi$ since no non-$\vec{0}$ vector is doubled.

That example shows why the "non-$\vec{0}$" appears in the definition. Disallowing $\vec{0}$ as an eigenvector eliminates trivial eigenvalues.
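The matrix version of this definition (Definition 3.5 below) is directly checkable by machine: a library eigensolver returns scalars $\lambda$ and vectors $\vec{v}$ satisfying $T\vec{v} = \lambda\vec{v}$. A sketch with numpy, using the matrix that Example 2.5 diagonalized (note that numpy scales its eigenvectors to unit length, so they are particular representatives of the eigenvector families found there):

```python
import numpy as np

T = np.array([[3.0, 2.0],
              [0.0, 1.0]])  # the matrix diagonalized in Example 2.5

# vals[i] and the column vecs[:, i] form an eigenvalue/eigenvector pair
vals, vecs = np.linalg.eig(T)

# each pair satisfies the defining equation T v = lambda v
checks = [np.allclose(T @ vecs[:, i], vals[i] * vecs[:, i]) for i in range(2)]
```

The eigenvalues come out as 3 and 1, the same scalars $\lambda_1 = 3$, $\lambda_2 = 1$ found by hand in Example 2.5.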
3.3 Example The only transformation on the trivial space $\{\vec{0}\}$ is $\vec{0} \mapsto \vec{0}$. This map has no eigenvalues because there are no non-$\vec{0}$ vectors $\vec{v}$ mapped to a scalar multiple $\lambda \cdot \vec{v}$ of themselves.

3.4 Example Consider the homomorphism $t : \mathcal{P}_1 \to \mathcal{P}_1$ given by $c_0 + c_1 x \mapsto (c_0 + c_1) + (c_0 + c_1)x$. The range of $t$ is one-dimensional. Thus an application of $t$ to a vector in the range will simply rescale that vector: $c + cx \mapsto (2c) + (2c)x$. That is, $t$ has an eigenvalue of 2 associated with eigenvectors of the form $c + cx$ where $c \neq 0$. This map also has an eigenvalue of 0 associated with eigenvectors of the form $c - cx$ where $c \neq 0$.

3.5 Definition A square matrix $T$ has a scalar eigenvalue $\lambda$ associated with the non-$\vec{0}$ eigenvector $\vec{\zeta}$ if $T\vec{\zeta} = \lambda \cdot \vec{\zeta}$.

3.6 Remark Although this extension from maps to matrices is obvious, there is a point that must be made. Eigenvalues of a map are also the eigenvalues of matrices representing that map, and so similar matrices have the same eigenvalues. But the eigenvectors are different: similar matrices need not have the same eigenvectors.

For instance, consider again the transformation $t : \mathcal{P}_1 \to \mathcal{P}_1$ given by $c_0 + c_1 x \mapsto (c_0 + c_1) + (c_0 + c_1)x$. It has an eigenvalue of 2 associated with eigenvectors of the form $c + cx$ where $c \neq 0$. If we represent $t$ with respect to $B = \langle 1 + 1x, 1 - 1x \rangle$

$$T = \operatorname{Rep}_{B,B}(t) = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}$$

then 2 is an eigenvalue of $T$, associated with these eigenvectors.

$$\{ \begin{pmatrix} c_0 \\ c_1 \end{pmatrix} \;\Big|\; \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \end{pmatrix} = \begin{pmatrix} 2c_0 \\ 2c_1 \end{pmatrix} \} = \{ \begin{pmatrix} c_0 \\ 0 \end{pmatrix} \;\Big|\; c_0 \in \mathbb{C},\; c_0 \neq 0 \}$$

On the other hand, representing $t$ with respect to $D = \langle 2 + 1x, 1 + 0x \rangle$ gives

$$S = \operatorname{Rep}_{D,D}(t) = \begin{pmatrix} 3 & 1 \\ -3 & -1 \end{pmatrix}$$

and the eigenvectors of $S$ associated with the eigenvalue 2 are these.

$$\{ \begin{pmatrix} c_0 \\ c_1 \end{pmatrix} \;\Big|\; \begin{pmatrix} 3 & 1 \\ -3 & -1 \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \end{pmatrix} = \begin{pmatrix} 2c_0 \\ 2c_1 \end{pmatrix} \} = \{ \begin{pmatrix} -c_1 \\ c_1 \end{pmatrix} \;\Big|\; c_1 \in \mathbb{C},\; c_1 \neq 0 \}$$

Thus similar matrices can have different eigenvectors.

Here is an informal description of what's happening. The underlying transformation doubles the eigenvectors: $\vec{v} \mapsto 2 \cdot \vec{v}$. But when the matrix representing the transformation is $T = \operatorname{Rep}_{B,B}(t)$ then that matrix assumes that column vectors are representations with respect to $B$. In contrast, $S = \operatorname{Rep}_{D,D}(t)$ assumes that column vectors are representations with respect to $D$. So the vectors that get doubled by each matrix look different.
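Remark 3.6's point, same eigenvalues but different eigenvectors, can be seen numerically with the two representations given there. A sketch with numpy; since numpy returns unit-length eigenvectors, we rescale each so its first entry is 1 in order to compare directions:

```python
import numpy as np

T = np.array([[2.0, 0.0],
              [0.0, 0.0]])   # Rep_{B,B}(t) from Remark 3.6
S = np.array([[3.0, 1.0],
              [-3.0, -1.0]]) # Rep_{D,D}(t) from Remark 3.6

t_vals, t_vecs = np.linalg.eig(T)
s_vals, s_vecs = np.linalg.eig(S)

# similar matrices: the same eigenvalues, 2 and 0
same_eigenvalues = np.allclose(np.sort(t_vals), np.sort(s_vals))

# but the eigenvectors for the eigenvalue 2 point in different directions
i = int(np.argmax(t_vals)); j = int(np.argmax(s_vals))
t_vec = t_vecs[:, i] / t_vecs[0, i]  # scale so the first entry is 1
s_vec = s_vecs[:, j] / s_vecs[0, j]
```

Here `t_vec` is a multiple of $(1, 0)$ while `s_vec` is a multiple of $(1, -1)$, matching the two eigenvector families displayed in the remark.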
The next example illustrates the basic tool for finding eigenvectors and eigenvalues.

3.7 Example What are the eigenvalues and eigenvectors of this matrix?

$$T = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 0 & -2 \\ -1 & 2 & 3 \end{pmatrix}$$

To find the scalars $x$ such that $T\vec{\zeta} = x\vec{\zeta}$ for non-$\vec{0}$ eigenvectors $\vec{\zeta}$, bring everything to the left-hand side

$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 0 & -2 \\ -1 & 2 & 3 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} - x \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \vec{0}$$

and factor $(T - xI)\vec{\zeta} = \vec{0}$. (Note that it says $T - xI$; the expression $T - x$ doesn't make sense because $T$ is a matrix while $x$ is a scalar.) This homogeneous linear system

$$\begin{pmatrix} 1 - x & 2 & 1 \\ 2 & 0 - x & -2 \\ -1 & 2 & 3 - x \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

has a non-$\vec{0}$ solution if and only if the matrix is singular. We can determine when that happens.

$$0 = |T - xI| = \begin{vmatrix} 1 - x & 2 & 1 \\ 2 & 0 - x & -2 \\ -1 & 2 & 3 - x \end{vmatrix} = -x^3 + 4x^2 - 4x = -x(x - 2)^2$$

The eigenvalues are $\lambda_1 = 0$ and $\lambda_2 = 2$. To find the associated eigenvectors, plug in each eigenvalue. Plugging in $\lambda_1 = 0$ gives

$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 0 & -2 \\ -1 & 2 & 3 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \quad\Longrightarrow\quad \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} a \\ -a \\ a \end{pmatrix}$$

for a scalar parameter $a \neq 0$ ($a$ is non-0 because eigenvectors must be non-$\vec{0}$). In the same way, plugging in $\lambda_2 = 2$ gives

$$\begin{pmatrix} -1 & 2 & 1 \\ 2 & -2 & -2 \\ -1 & 2 & 1 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \quad\Longrightarrow\quad \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} b \\ 0 \\ b \end{pmatrix}$$

with $b \neq 0$.
3.8 Example If

$$S = \begin{pmatrix} \pi & 1 \\ 0 & 3 \end{pmatrix}$$

(here $\pi$ is not a projection map, it is the number 3.14...) then

$$\begin{vmatrix} \pi - x & 1 \\ 0 & 3 - x \end{vmatrix} = (x - \pi)(x - 3)$$

so $S$ has eigenvalues of $\lambda_1 = \pi$ and $\lambda_2 = 3$. To find associated eigenvectors, first plug in $\lambda_1$ for $x$:

$$\begin{pmatrix} \pi - \pi & 1 \\ 0 & 3 - \pi \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad\Longrightarrow\quad \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} a \\ 0 \end{pmatrix}$$

for a scalar $a \neq 0$, and then plug in $\lambda_2$:

$$\begin{pmatrix} \pi - 3 & 1 \\ 0 & 3 - 3 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad\Longrightarrow\quad \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} b/(3 - \pi) \\ b \end{pmatrix}$$

where $b \neq 0$.

3.9 Definition The characteristic polynomial of a square matrix $T$ is the determinant of the matrix $T - xI$, where $x$ is a variable. The characteristic equation is $|T - xI| = 0$. The characteristic polynomial of a transformation $t$ is the polynomial of any $\operatorname{Rep}_{B,B}(t)$.

Exercise 30 checks that the characteristic polynomial of a transformation is well-defined, that is, any choice of basis yields the same polynomial.

3.10 Lemma A linear transformation on a nontrivial vector space has at least one eigenvalue.

Proof. Any root of the characteristic polynomial is an eigenvalue. Over the complex numbers, any polynomial of degree one or greater has a root. (This is the reason that in this chapter we've gone to scalars that are complex.) QED

Notice the familiar form of the sets of eigenvectors in the above examples.

3.11 Definition The eigenspace of a transformation $t$ associated with the eigenvalue $\lambda$ is $V_\lambda = \{ \vec{\zeta} \mid t(\vec{\zeta}) = \lambda\vec{\zeta} \} \cup \{ \vec{0} \}$. The eigenspace of a matrix is defined analogously.

3.12 Lemma An eigenspace is a subspace.

Proof. An eigenspace must be nonempty (for one thing it contains the zero vector) and so we need only check closure. Take vectors $\vec{\zeta}_1, \dots, \vec{\zeta}_n$ from $V_\lambda$, to show that any linear combination is in $V_\lambda$:

$$t(c_1\vec{\zeta}_1 + c_2\vec{\zeta}_2 + \cdots + c_n\vec{\zeta}_n) = c_1 t(\vec{\zeta}_1) + \cdots + c_n t(\vec{\zeta}_n) = c_1\lambda\vec{\zeta}_1 + \cdots + c_n\lambda\vec{\zeta}_n = \lambda(c_1\vec{\zeta}_1 + \cdots + c_n\vec{\zeta}_n)$$

(the second equality holds even if any $\vec{\zeta}_i$ is $\vec{0}$ since $t(\vec{0}) = \lambda \cdot \vec{0} = \vec{0}$). QED
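Definition 3.9's characteristic polynomial can be computed symbolically. A sketch with sympy, using the $3 \times 3$ matrix whose characteristic equation $x(x-2)^2 = 0$ was worked out in Example 3.7 (the matrix entries here are reconstructed from that example's determinant, so treat them as an illustration of the method rather than a quotation of the text):

```python
from sympy import Matrix, symbols

x = symbols('x')

T = Matrix([[1, 2, 1],
            [2, 0, -2],
            [-1, 2, 3]])

# Definition 3.9: the characteristic polynomial is |T - xI|
char_poly = (T - x * Matrix.eye(3)).det()

# Its roots are the eigenvalues; sympy also reports algebraic multiplicities
ev = T.eigenvals()  # {eigenvalue: multiplicity}
```

The polynomial factors as $-x(x-2)^2$, so the eigenvalues are 0 (once) and 2 (twice), matching Example 3.7.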
3.13 Example In Example 3.8 the eigenspace associated with the eigenvalue $\pi$ and the eigenspace associated with the eigenvalue 3 are these.

$$V_\pi = \{ \begin{pmatrix} a \\ 0 \end{pmatrix} \mid a \in \mathbb{R} \} \qquad V_3 = \{ \begin{pmatrix} b/(3 - \pi) \\ b \end{pmatrix} \mid b \in \mathbb{R} \}$$

3.14 Example In Example 3.7, these are the eigenspaces associated with the eigenvalues 0 and 2.

$$V_0 = \{ \begin{pmatrix} a \\ -a \\ a \end{pmatrix} \mid a \in \mathbb{R} \} \qquad V_2 = \{ \begin{pmatrix} b \\ 0 \\ b \end{pmatrix} \mid b \in \mathbb{R} \}$$

3.15 Remark The characteristic equation is $0 = x(x - 2)^2$ so in some sense 2 is an eigenvalue "twice". However there are not "twice as many" eigenvectors, in that the dimension of the eigenspace is one, not two. The next example shows a case where a number, 1, is a double root of the characteristic equation and the dimension of the associated eigenspace is two.

3.16 Example With respect to the standard bases, this matrix

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

represents projection.

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} \xrightarrow{\;\pi\;} \begin{pmatrix} x \\ y \\ 0 \end{pmatrix} \qquad x, y, z \in \mathbb{C}$$

Its eigenspace associated with the eigenvalue 0 and its eigenspace associated with the eigenvalue 1 are easy to find.

$$V_0 = \{ \begin{pmatrix} 0 \\ 0 \\ c_3 \end{pmatrix} \mid c_3 \in \mathbb{C} \} \qquad V_1 = \{ \begin{pmatrix} c_1 \\ c_2 \\ 0 \end{pmatrix} \mid c_1, c_2 \in \mathbb{C} \}$$

By the lemma, if two eigenvectors $\vec{v}_1$ and $\vec{v}_2$ are associated with the same eigenvalue then any linear combination of those two is also an eigenvector associated with that same eigenvalue. But, if two eigenvectors $\vec{v}_1$ and $\vec{v}_2$ are associated with different eigenvalues then the sum $\vec{v}_1 + \vec{v}_2$ need not be related to the eigenvalue of either one. In fact, just the opposite. If the eigenvalues are different then the eigenvectors are not linearly related.

3.17 Theorem For any set of distinct eigenvalues of a map or matrix, a set of associated eigenvectors, one per eigenvalue, is linearly independent.
Proof. We will use induction on the number of eigenvalues. If there is no eigenvalue or only one eigenvalue then the set of associated eigenvectors is empty or is a singleton set with a non-$\vec{0}$ member, and in either case is linearly independent.

For induction, assume that the theorem is true for any set of $k$ distinct eigenvalues, suppose that $\lambda_1, \dots, \lambda_{k+1}$ are distinct eigenvalues, and let $\vec{v}_1, \dots, \vec{v}_{k+1}$ be associated eigenvectors. If

$$c_1\vec{v}_1 + \cdots + c_k\vec{v}_k + c_{k+1}\vec{v}_{k+1} = \vec{0}$$

then after multiplying both sides of the displayed equation by $\lambda_{k+1}$, applying the map or matrix to both sides of the displayed equation, and subtracting the first result from the second, we have this.

$$c_1(\lambda_{k+1} - \lambda_1)\vec{v}_1 + \cdots + c_k(\lambda_{k+1} - \lambda_k)\vec{v}_k + c_{k+1}(\lambda_{k+1} - \lambda_{k+1})\vec{v}_{k+1} = \vec{0}$$

The induction hypothesis now applies: $c_1(\lambda_{k+1} - \lambda_1) = 0, \dots, c_k(\lambda_{k+1} - \lambda_k) = 0$. Thus, as all the eigenvalues are distinct, $c_1, \dots, c_k$ are all 0. Finally, now $c_{k+1}$ must be 0 because we are left with the equation $c_{k+1}\vec{v}_{k+1} = \vec{0}$ and $\vec{v}_{k+1} \neq \vec{0}$. QED

3.18 Example The eigenvalues of this matrix are distinct: $\lambda_1 = 1$, $\lambda_2 = 2$, and $\lambda_3 = 3$. A set of associated eigenvectors, one per eigenvalue, is linearly independent.

3.19 Corollary An $n \times n$ matrix with $n$ distinct eigenvalues is diagonalizable.

Proof. Form a basis of eigenvectors. Apply Corollary 2.4. QED

Exercises

3.20 For each, find the characteristic polynomial and the eigenvalues. (a) (b) (c) (d) (e)

3.21 For each matrix, find the characteristic equation, and the eigenvalues and associated eigenvectors. (a) (b)

3.22 Find the characteristic equation, and the eigenvalues and associated eigenvectors for this matrix. Hint. The eigenvalues are complex.
3.23 Find the characteristic polynomial, the eigenvalues, and the associated eigenvectors of this matrix. ( 1 1 )

3.24 For each matrix, find the characteristic equation, and the eigenvalues and associated eigenvectors. (a) (b)

3.25 Let $t : \mathcal{P}_2 \to \mathcal{P}_2$ be $a_0 + a_1 x + a_2 x^2 \mapsto (5a_0 + 6a_1 + 2a_2) - (a_1 + 8a_2)x + (a_0 - 2a_2)x^2$. Find its eigenvalues and the associated eigenvectors.

3.26 Find the eigenvalues and eigenvectors of this map $t : \mathcal{M}_2 \to \mathcal{M}_2$.

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto \begin{pmatrix} 2c & a + c \\ b - 2c & d \end{pmatrix}$$

3.27 Find the eigenvalues and associated eigenvectors of the differentiation operator $d/dx : \mathcal{P}_3 \to \mathcal{P}_3$.

3.28 Prove that the eigenvalues of a triangular matrix (upper or lower triangular) are the entries on the diagonal.

3.29 Find the formula for the characteristic polynomial of a $2 \times 2$ matrix.

3.30 Prove that the characteristic polynomial of a transformation is well-defined.

3.31 (a) Can any non-$\vec{0}$ vector in any nontrivial vector space be an eigenvector? That is, given a $\vec{v} \neq \vec{0}$ from a nontrivial $V$, is there a transformation $t : V \to V$ and a scalar $\lambda \in \mathbb{R}$ such that $t(\vec{v}) = \lambda\vec{v}$?
(b) Given a scalar $\lambda$, can any non-$\vec{0}$ vector in any nontrivial vector space be an eigenvector associated with the eigenvalue $\lambda$?

3.32 Suppose that $t : V \to V$ and $T = \operatorname{Rep}_{B,B}(t)$. Prove that the eigenvectors of $T$ associated with $\lambda$ are the non-$\vec{0}$ vectors in the kernel of the map represented (with respect to the same bases) by $T - \lambda I$.

3.33 Prove that if $a, \dots, d$ are all integers and $a + b = c + d$ then

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

has integral eigenvalues, namely $a + b$ and $a - c$.

3.34 Prove that if $T$ is nonsingular and has eigenvalues $\lambda_1, \dots, \lambda_n$ then $T^{-1}$ has eigenvalues $1/\lambda_1, \dots, 1/\lambda_n$. Is the converse true?

3.35 Suppose that $T$ is $n \times n$ and $c, d$ are scalars.
(a) Prove that if $T$ has the eigenvalue $\lambda$ with an associated eigenvector $\vec{v}$ then $\vec{v}$ is an eigenvector of $cT + dI$ associated with eigenvalue $c\lambda + d$.
(b) Prove that if $T$ is diagonalizable then so is $cT + dI$.

3.36 Show that $\lambda$ is an eigenvalue of $T$ if and only if the map represented by $T - \lambda I$ is not an isomorphism.

3.37 [Strang 80]
(a) Show that if $\lambda$ is an eigenvalue of $A$ then $\lambda^k$ is an eigenvalue of $A^k$.
(b) What is wrong with this proof generalizing that? "If $\lambda$ is an eigenvalue of $A$ and $\mu$ is an eigenvalue for $B$, then $\lambda\mu$ is an eigenvalue for $AB$, for, if $A\vec{x} = \lambda\vec{x}$ and $B\vec{x} = \mu\vec{x}$ then $AB\vec{x} = A\mu\vec{x} = \mu A\vec{x} = \mu\lambda\vec{x}$."
3.38 Do matrix-equivalent matrices have the same eigenvalues?

3.39 Show that a square matrix with real entries and an odd number of rows has at least one real eigenvalue.

3.40 Diagonalize.

3.41 Suppose that $P$ is a nonsingular $n \times n$ matrix. Show that the similarity transformation map $t_P : \mathcal{M}_{n \times n} \to \mathcal{M}_{n \times n}$ sending $T \mapsto PTP^{-1}$ is an isomorphism.

? 3.42 Show that if $A$ is an $n$-square matrix and each row (column) sums to $c$ then $c$ is a characteristic root of $A$. [Math. Mag., Nov. 1967]
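The fact in Exercise 3.42 has a one-line witness: if every row of $A$ sums to $c$, then $A$ applied to the all-ones vector collects exactly those row sums, so the all-ones vector is an eigenvector with eigenvalue $c$. A sketch with numpy, on a hypothetical matrix of our own choosing:

```python
import numpy as np

# Hypothetical 3x3 matrix whose every row sums to c = 6 (illustrative choice)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 0.0, 2.0],
              [2.0, 2.0, 2.0]])
c = 6.0

ones = np.ones(3)
# A @ ones computes the row sums, so A @ ones = c * ones:
# the all-ones vector witnesses c as an eigenvalue of A
is_eigen = np.allclose(A @ ones, c * ones)
```

The same vector works for any matrix with constant row sums; the column-sum version follows by applying this to the transpose.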
III Nilpotence

The goal of this chapter is to show that every square matrix is similar to one that is a sum of two kinds of simple matrices. The prior section focused on the first kind, diagonal matrices. We now consider the other kind.

III.1 Self-Composition

This subsection is optional, although it is necessary for later material in this section and in the next one.

A linear transformation $t : V \to V$, because it has the same domain and codomain, can be iterated. That is, compositions of $t$ with itself such as $t^2 = t \circ t$ and $t^3 = t \circ t \circ t$ are defined: $\vec{v} \mapsto t(\vec{v}) \mapsto t^2(\vec{v}) \mapsto \cdots$.

Note that this power notation for the linear transformation functions dovetails with the notation that we've used earlier for their square matrix representations, because if $\operatorname{Rep}_{B,B}(t) = T$ then $\operatorname{Rep}_{B,B}(t^j) = T^j$.

1.1 Example For the derivative map $d/dx : \mathcal{P}_3 \to \mathcal{P}_3$ given by

$$a + bx + cx^2 + dx^3 \xrightarrow{\;d/dx\;} b + 2cx + 3dx^2$$

the second power is the second derivative

$$a + bx + cx^2 + dx^3 \xrightarrow{\;d^2/dx^2\;} 2c + 6dx$$

the third power is the third derivative

$$a + bx + cx^2 + dx^3 \xrightarrow{\;d^3/dx^3\;} 6d$$

and any higher power is the zero map.

1.2 Example This transformation of the space of $2 \times 2$ matrices

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \xrightarrow{\;t\;} \begin{pmatrix} b & a \\ d & 0 \end{pmatrix}$$

(More information on function iteration is in the appendix.)
has this second power

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \xrightarrow{\;t^2\;} \begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix}$$

and this third power.

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \xrightarrow{\;t^3\;} \begin{pmatrix} b & a \\ 0 & 0 \end{pmatrix}$$

After that, $t^4 = t^2$ and $t^5 = t^3$, etc.

These examples suggest that on iteration more and more zeros appear until there is a settling down. The next result makes this precise.

1.3 Lemma For any transformation $t : V \to V$, the rangespaces of the powers form a descending chain

$$V \supseteq \mathcal{R}(t) \supseteq \mathcal{R}(t^2) \supseteq \cdots$$

and the nullspaces form an ascending chain.

$$\{\vec{0}\} \subseteq \mathcal{N}(t) \subseteq \mathcal{N}(t^2) \subseteq \cdots$$

Further, there is a $k$ such that for powers less than $k$ the subsets are proper (if $j < k$ then $\mathcal{R}(t^j) \supset \mathcal{R}(t^{j+1})$ and $\mathcal{N}(t^j) \subset \mathcal{N}(t^{j+1})$), while for powers greater than $k$ the sets are equal (if $j \geq k$ then $\mathcal{R}(t^j) = \mathcal{R}(t^{j+1})$ and $\mathcal{N}(t^j) = \mathcal{N}(t^{j+1})$).

Proof. We will do the rangespace half and leave the rest for Exercise 13. Recall, however, that for any map the dimension of its rangespace plus the dimension of its nullspace equals the dimension of its domain. So if the rangespaces shrink then the nullspaces must grow.

That the rangespaces form chains is clear because if $\vec{w} \in \mathcal{R}(t^{j+1})$, so that $\vec{w} = t^{j+1}(\vec{v})$, then $\vec{w} = t^j(t(\vec{v}))$ and so $\vec{w} \in \mathcal{R}(t^j)$.

To verify the further property, first observe that if any pair of rangespaces in the chain are equal, $\mathcal{R}(t^k) = \mathcal{R}(t^{k+1})$, then all subsequent ones are also equal: $\mathcal{R}(t^{k+1}) = \mathcal{R}(t^{k+2})$, etc. This is because $t : \mathcal{R}(t^{k+1}) \to \mathcal{R}(t^{k+2})$ is the same map, with the same domain, as $t : \mathcal{R}(t^k) \to \mathcal{R}(t^{k+1})$ and it therefore has the same range: $\mathcal{R}(t^{k+1}) = \mathcal{R}(t^{k+2})$ (and induction shows that it holds for all higher powers). So if the chain of rangespaces ever stops being strictly decreasing then it is stable from that point onward.

But the chain must stop decreasing. Each rangespace is a subspace of the one before it. For it to be a proper subspace it must be of strictly lower dimension (see Exercise 11). These spaces are finite-dimensional and so the chain can fall for only finitely-many steps, that is, the power $k$ is at most the dimension of $V$.
QED

1.4 Example The derivative map a + bx + cx² + dx³ ↦ b + 2cx + 3dx² of Example 1.1 has this chain of rangespaces

P₃ ⊃ P₂ ⊃ P₁ ⊃ P₀ ⊃ {0} = {0} = ···
and this chain of nullspaces.

{0} ⊂ P₀ ⊂ P₁ ⊂ P₂ ⊂ P₃ = P₃ = ···

1.5 Example The transformation π: C³ → C³ projecting onto the first two coordinates

( c₁ )        ( c₁ )
( c₂ )   ↦   ( c₂ )
( c₃ )        ( 0 )

has C³ ⊃ R(π) = R(π²) = ··· and {0} ⊂ N(π) = N(π²) = ···.

1.6 Example Let t: P₂ → P₂ be the map c₀ + c₁x + c₂x² ↦ 2c₀ + c₂x. As the lemma describes, on iteration the rangespace shrinks

R(t⁰) = P₂    R(t) = {a + bx | a, b ∈ C}    R(t²) = {a | a ∈ C}

and then stabilizes, R(t²) = R(t³) = ···, while the nullspace grows

N(t⁰) = {0}    N(t) = {cx | c ∈ C}    N(t²) = {cx + dx² | c, d ∈ C}

and then stabilizes, N(t²) = N(t³) = ···.

This graph illustrates Lemma 1.3. The horizontal axis gives the power j of a transformation. The vertical axis gives the dimension of the rangespace of t^j as the distance above zero, and thus also shows the dimension of the nullspace as the distance below the gray horizontal line, because the two add to the dimension n of the domain.

[Graph: rank(t^j) plotted against the power j of the transformation; the rank falls from n and then levels off.]

As sketched, on iteration the rank falls and with it the nullity grows until the two reach a steady state. This state must be reached by the n-th iterate. The steady state's distance above zero is the dimension of the generalized rangespace and its distance below n is the dimension of the generalized nullspace.

1.7 Definition Let t be a transformation on an n-dimensional space. The generalized rangespace (or the closure of the rangespace) is R∞(t) = R(tⁿ). The generalized nullspace (or the closure of the nullspace) is N∞(t) = N(tⁿ).
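The definition can be checked numerically for the map t of Example 1.6: the space is three-dimensional, so R∞(t) = R(t³) and N∞(t) = N(t³). This is a quick sketch with NumPy, where T is the representation of that map with respect to the basis ⟨1, x, x²⟩, written out here only for illustration.

```python
import numpy as np

# c0 + c1*x + c2*x^2  |->  2*c0 + c2*x, with respect to <1, x, x^2>
T = np.array([[2, 0, 0],
              [0, 0, 1],
              [0, 0, 0]])

n = T.shape[0]
Tn = np.linalg.matrix_power(T, n)        # t^3; by Lemma 1.3 the chain has stabilized
gen_rank = np.linalg.matrix_rank(Tn)     # dimension of the generalized rangespace
gen_nullity = n - gen_rank               # dimension of the generalized nullspace
print(gen_rank, gen_nullity)             # 1 2
```

The output matches the example: R∞(t) is the one-dimensional space of constants and N∞(t) is the two-dimensional space spanned by x and x².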
Exercises

1.8 Give the chains of rangespaces and nullspaces for the zero and identity transformations.
1.9 For each map, give the chain of rangespaces and the chain of nullspaces, and the generalized rangespace and the generalized nullspace.
(a) t₀: P₂ → P₂, a + bx + cx² ↦ b + cx²
(b) t₁: R² → R², (a; b) ↦ (0; a)
(c) t₂: P₂ → P₂, a + bx + cx² ↦ b + cx + ax²
(d) t₃: R³ → R³, (a; b; c) ↦ (a; a; b)
1.10 Prove that function composition is associative, (t ∘ t) ∘ t = t ∘ (t ∘ t), and so we can write t³ without specifying a grouping.
1.11 Check that a subspace must be of dimension less than or equal to the dimension of its superspace. Check that if the subspace is proper (the subspace does not equal the superspace) then the dimension is strictly less. (This is used in the proof of Lemma 1.3.)
1.12 Prove that the generalized rangespace R∞(t) is the entire space, and the generalized nullspace N∞(t) is trivial, if the transformation t is nonsingular. Is this "only if" also?
1.13 Verify the nullspace half of Lemma 1.3.
1.14 Give an example of a transformation on a three-dimensional space whose range has dimension two. What is its nullspace? Iterate your example until the rangespace and nullspace stabilize.
1.15 Show that the rangespace and nullspace of a linear transformation need not be disjoint. Are they ever disjoint?

III.2 Strings

This subsection is optional, and requires material from the optional Direct Sum subsection.

The prior subsection shows that as j increases, the dimensions of the R(t^j)'s fall while the dimensions of the N(t^j)'s rise, in such a way that this rank and nullity split the dimension of V. Can we say more; do the two split a basis, that is, is V = R(t^j) ⊕ N(t^j)?

The answer is yes for the smallest power j = 0, since V = R(t⁰) ⊕ N(t⁰) = V ⊕ {0}. The answer is also yes at the other extreme.

2.1 Lemma Where t: V → V is a linear transformation, the space is the direct sum V = R∞(t) ⊕ N∞(t). That is, both dim(V) = dim(R∞(t)) + dim(N∞(t)) and R∞(t) ∩ N∞(t) = {0}.
Proof. We will verify the second sentence, which is equivalent to the first. The first clause, that the dimension n of the domain of tⁿ equals the rank of tⁿ plus the nullity of tⁿ, holds for any transformation and so we need only verify the second clause.

Assume that v ∈ R∞(t) ∩ N∞(t) = R(tⁿ) ∩ N(tⁿ), to prove that v is 0. Because v is in the nullspace, tⁿ(v) = 0. On the other hand, because R(tⁿ) = R(tⁿ⁺¹), the map t: R∞(t) → R∞(t) is a dimension-preserving homomorphism and therefore is one-to-one. A composition of one-to-one maps is one-to-one, and so tⁿ: R∞(t) → R∞(t) is one-to-one. But now, because only 0 is sent by a one-to-one linear map to 0, the fact that tⁿ(v) = 0 implies that v = 0. QED

2.2 Note Technically we should distinguish the map t: V → V from the map t: R∞(t) → R∞(t) because the domains or codomains might differ. The second one is said to be the restriction of t to R(t^k). We shall use later a point from that proof about the restriction map, namely that it is nonsingular.

In contrast to the j = 0 and j = n cases, for intermediate powers the space V might not be the direct sum of R(t^j) and N(t^j). The next example shows that the two can have a nontrivial intersection.

2.3 Example Consider the transformation of C² defined by this action on the elements of the standard basis.

( 1 )   n   ( 0 )      ( 0 )   n   ( 0 )        N = Rep_E₂,E₂(n) = ( 0 0 )
( 0 )  ↦   ( 1 )      ( 1 )  ↦   ( 0 )                                  ( 1 0 )

The vector e₂ = (0; 1) is in both the rangespace and nullspace. Another way to depict this map's action is with a string.

e₁ ↦ e₂ ↦ 0

2.4 Example A map n̂: C⁴ → C⁴ whose action on E₄ is given by the string

e₁ ↦ e₂ ↦ e₃ ↦ e₄ ↦ 0

has R(n̂) ∩ N(n̂) equal to the span [{e₄}], has R(n̂²) ∩ N(n̂²) = [{e₃, e₄}], and has R(n̂³) ∩ N(n̂³) = [{e₄}]. The matrix representation is all zeros except for some subdiagonal ones.

                        ( 0 0 0 0 )
N̂ = Rep_E₄,E₄(n̂) = ( 1 0 0 0 )
                        ( 0 1 0 0 )
                        ( 0 0 1 0 )

(More information on map restrictions is in the appendix.)
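The matrix of Example 2.4 can be checked directly: the subdiagonal-ones matrix N̂ pushes each standard basis vector one step along the string, its third power is still nonzero, and its fourth power is the zero matrix. A quick sketch with NumPy:

```python
import numpy as np

# Subdiagonal ones: the string e1 -> e2 -> e3 -> e4 -> 0
Nhat = np.zeros((4, 4), dtype=int)
for i in range(3):
    Nhat[i + 1, i] = 1

e1 = np.array([1, 0, 0, 0])
print(Nhat @ e1)                        # e2: one step along the string
print(np.linalg.matrix_power(Nhat, 3))  # sends only e1 anywhere; not yet zero
print(np.linalg.matrix_power(Nhat, 4))  # the zero matrix
```

Each multiplication by N̂ shifts the string by one step, which is why exactly four applications are needed to kill every basis vector.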
2.5 Example Transformations can act via more than one string. A transformation t acting on a basis B = ⟨β₁, ..., β₅⟩ by

β₁ ↦ β₂ ↦ β₃ ↦ 0        β₄ ↦ β₅ ↦ 0

is represented by a matrix that is all zeros except for blocks of subdiagonal ones

                    ( 0 0 0 | 0 0 )
                    ( 1 0 0 | 0 0 )
Rep_B,B(t) =   ( 0 1 0 | 0 0 )
                    ( 0 0 0 | 0 0 )
                    ( 0 0 0 | 1 0 )

(the lines just visually organize the blocks).

In those three examples all vectors are eventually transformed to zero.

2.6 Definition A nilpotent transformation is one with a power that is the zero map. A nilpotent matrix is one with a power that is the zero matrix. In either case, the least such power is the index of nilpotency.

2.7 Example In Example 2.3 the index of nilpotency is two. In Example 2.4 it is four. In Example 2.5 it is three.

2.8 Example The differentiation map d/dx: P₂ → P₂ is nilpotent of index three since the third derivative of any quadratic polynomial is zero. This map's action is described by the string x² ↦ 2x ↦ 2 ↦ 0 and taking the basis B = ⟨x², 2x, 2⟩ gives this representation.

                        ( 0 0 0 )
Rep_B,B(d/dx) =  ( 1 0 0 )
                        ( 0 1 0 )

Not all nilpotent matrices are all zeros except for blocks of subdiagonal ones.

2.9 Example With the matrix N̂ from Example 2.4, a change to another four-vector basis D produces a representation with respect to D, D whose nonzero entries are not confined to subdiagonal ones.
The new matrix is nilpotent; its fourth power is the zero matrix, since (P N̂ P⁻¹)⁴ = P N̂ P⁻¹ · P N̂ P⁻¹ · P N̂ P⁻¹ · P N̂ P⁻¹ = P N̂⁴ P⁻¹ and N̂⁴ is the zero matrix.

The goal of this subsection is Theorem 2.13, which shows that the prior example is prototypical in that every nilpotent matrix is similar to one that is all zeros except for blocks of subdiagonal ones.

2.10 Definition Let t be a nilpotent transformation on V. A t-string generated by v ∈ V is a sequence ⟨v, t(v), ..., t^{k−1}(v)⟩. This sequence has length k. A t-string basis is a basis that is a concatenation of t-strings.

2.11 Example In Example 2.5, the t-strings ⟨β₁, β₂, β₃⟩ and ⟨β₄, β₅⟩, of length three and two, can be concatenated to make a basis for the domain of t.

2.12 Lemma If a space has a t-string basis then the longest string in it has length equal to the index of nilpotency of t.

Proof. Suppose not. Those strings cannot be longer; if the index is k then t^k sends any vector, including those starting the strings, to 0. So suppose instead that there is a transformation t of index k on some space, such that the space has a t-string basis where all of the strings are shorter than length k. Because t has index k, there is a vector v such that t^{k−1}(v) ≠ 0. Represent v as a linear combination of basis elements and apply t^{k−1}. We are supposing that t^{k−1} sends each basis element to 0 but that it does not send v to 0. That is impossible. QED

We shall show that every nilpotent map has an associated string basis. Then our goal theorem, that every nilpotent matrix is similar to one that is all zeros except for blocks of subdiagonal ones, is immediate, as in Example 2.5.

Looking for a counterexample, a nilpotent map without an associated string basis that is disjoint, will suggest the idea for the proof. Consider the map t: C⁵ → C⁵ with this action.
e₁ ↦ e₂ ↦ 0                                     ( 0 0 0 0 0 )
e₃ ↦ e₂ ↦ 0                                     ( 1 0 1 0 0 )
e₄ ↦ e₅ ↦ 0         Rep_E₅,E₅(t) =     ( 0 0 0 0 0 )
                                                      ( 0 0 0 0 0 )
                                                      ( 0 0 0 1 0 )

Even after omitting the zero vector, these three strings aren't disjoint, but that doesn't end hope of finding a t-string basis. It only means that E₅ will not do for the string basis. To find a basis that will do, we first find the number and lengths of its strings. Since t's index of nilpotency is two, Lemma 2.12 says that at least one
string in the basis has length two. Thus the map must act on a string basis in one of these two ways.

β₁ ↦ β₂ ↦ 0        β₁ ↦ β₂ ↦ 0
β₃ ↦ β₄ ↦ 0        β₃ ↦ 0
β₅ ↦ 0               β₄ ↦ 0
                        β₅ ↦ 0

Now, the key point. A transformation with the left-hand action has a nullspace of dimension three, since that's how many basis vectors are sent to zero. A transformation with the right-hand action has a nullspace of dimension four. Using the matrix representation above, calculation of t's nullspace

N(t) = { (x; z; −x; 0; r) | x, z, r ∈ C }

shows that it is three-dimensional, meaning that we want the left-hand action.

To produce a string basis, first pick β₂ and β₄ from R(t) ∩ N(t)

β₂ = (0; 1; 0; 0; 0)        β₄ = (0; 0; 0; 0; 1)

(other choices are possible, just be sure that {β₂, β₄} is linearly independent). For β₅ pick a vector from N(t) that is not in the span of {β₂, β₄}.

β₅ = (1; 1; −1; 0; 0)

Finally, take β₁ and β₃ such that t(β₁) = β₂ and t(β₃) = β₄.

β₁ = (0; 0; 1; 0; 0)        β₃ = (0; 0; 0; 1; 0)
Now, with respect to B = ⟨β₁, ..., β₅⟩, the matrix of t is as desired.

                    ( 0 0 0 0 0 )
                    ( 1 0 0 0 0 )
Rep_B,B(t) =   ( 0 0 0 0 0 )
                    ( 0 0 1 0 0 )
                    ( 0 0 0 0 0 )

2.13 Theorem Any nilpotent transformation t is associated with a t-string basis. While the basis is not unique, the number and the length of the strings is determined by t.

This illustrates the proof. Basis vectors are categorized into kind 1, kind 2, and kind 3. They are also shown as squares or circles, according to whether they are in the nullspace or not.

Proof. Fix a vector space V; we will argue by induction on the index of nilpotency of t: V → V. If that index is 1 then t is the zero map and any basis is a string basis β₁ ↦ 0, ..., βₙ ↦ 0. For the inductive step, assume that the theorem holds for any transformation with an index of nilpotency between 1 and k − 1 and consider the index k case.

First observe that the restriction to the rangespace t: R(t) → R(t) is also nilpotent, of index k − 1. Apply the inductive hypothesis to get a string basis for R(t), where the number and length of the strings is determined by t.

B = ⟨β₁, t(β₁), ..., t^{h₁}(β₁)⟩ ⌢ ⟨β₂, ..., t^{h₂}(β₂)⟩ ⌢ ··· ⌢ ⟨βᵢ, ..., t^{hᵢ}(βᵢ)⟩

(In the illustration these are the basis vectors of kind 1, so there are i strings shown with this kind of basis vector.)

Second, note that taking the final nonzero vector in each string gives a basis C = ⟨t^{h₁}(β₁), ..., t^{hᵢ}(βᵢ)⟩ for R(t) ∩ N(t). (These are illustrated with 1's in squares.) For, a member of R(t) is mapped to zero if and only if it is a linear combination of those basis vectors that are mapped to zero.

Extend C to a basis for all of N(t).

Ĉ = C ⌢ ⟨ξ₁, ..., ξₚ⟩

(The ξ's are the vectors of kind 2, so that Ĉ is the set of squares.) While many choices are possible for the ξ's, their number p is determined by the map t, as it is the dimension of N(t) minus the dimension of R(t) ∩ N(t).
Finally, B ⌢ Ĉ is a basis for R(t) + N(t) because any sum of something in the rangespace with something in the nullspace can be represented using elements of B for the rangespace part and elements of Ĉ for the part from the nullspace. Note that

dim( R(t) + N(t) ) = dim(R(t)) + dim(N(t)) − dim(R(t) ∩ N(t))
                          = rank(t) + nullity(t) − i
                          = dim(V) − i

and so B ⌢ Ĉ can be extended to a basis for all of V by the addition of i more vectors. Specifically, remember that each of β₁, ..., βᵢ is in R(t), and extend B ⌢ Ĉ with vectors v₁, ..., vᵢ such that t(v₁) = β₁, ..., t(vᵢ) = βᵢ. (In the illustration, these are the 3's.) The check that linear independence is preserved by this extension is Exercise 29. QED

2.14 Corollary Every nilpotent matrix is similar to a matrix that is all zeros except for blocks of subdiagonal ones. That is, every nilpotent map is represented with respect to some basis by such a matrix.

This form is unique in the sense that if a nilpotent matrix is similar to two such matrices then those two simply have their blocks ordered differently. Thus this is a canonical form for the similarity classes of nilpotent matrices, provided that we order the blocks, say, from longest to shortest.

2.15 Example The matrix

M = (  1   1 )
      ( −1 −1 )

has an index of nilpotency of two, as this calculation shows.

p      M^p                     N(M^p)
1      (  1   1 )               { (x; −x) | x ∈ C }
        ( −1 −1 )
2      zero matrix             C²

The calculation also describes how a map m represented by M must act on any string basis. With one map application the nullspace has dimension one, and so one vector of the basis is sent to zero. On a second application, the nullspace has dimension two and so the other basis vector is sent to zero. Thus, the action of the map is β₁ ↦ β₂ ↦ 0 and the canonical form of the matrix is this.

( 0 0 )
( 1 0 )

We can exhibit such an m-string basis and the change of basis matrices witnessing the matrix similarity. For the basis, take M to represent m with respect
to the standard bases, pick a β₂ ∈ N(m) and also pick a β₁ so that m(β₁) = β₂.

β₂ = (  1 )        β₁ = ( 1 )
      ( −1 )               ( 0 )

(If we take M to be a representative with respect to some nonstandard bases then this picking step is just more messy.) Recall the similarity diagram.

C² w.r.t. E₂  --- m, M --->  C² w.r.t. E₂
    id | P                           id | P
C² w.r.t. B   --- m ----->   C² w.r.t. B

The canonical form equals Rep_B,B(m) = P M P⁻¹, where

P⁻¹ = Rep_B,E₂(id) = ( 1   1 )        P = (P⁻¹)⁻¹ = ( 1   1 )
                            ( 0 −1 )                             ( 0 −1 )

and the verification of the matrix calculation is routine.

( 1   1 ) (  1   1 ) ( 1   1 )   =   ( 0 0 )
( 0 −1 ) ( −1 −1 ) ( 0 −1 )         ( 1 0 )

2.16 Example Consider a nilpotent matrix N acting on C⁵ whose calculations show the nullspaces growing in this way: N(N) is two-dimensional, with two parameters u, v ∈ C; N(N²) is four-dimensional, with parameters y, z, u, v ∈ C; and N³ is the zero matrix, so that N(N³) = C⁵.

That shows that any string basis must satisfy: the nullspace after one map application has dimension two, so two basis vectors are sent directly to zero,
the nullspace after the second application has dimension four, so two additional basis vectors are sent to zero by the second iteration, and the nullspace after three applications is of dimension five, so the final basis vector is sent to zero in three hops.

β₁ ↦ β₂ ↦ β₃ ↦ 0        β₄ ↦ β₅ ↦ 0

To produce such a basis, first pick two independent vectors β₃ and β₅ from N(n), then add β₂, β₄ ∈ N(n²) such that n(β₂) = β₃ and n(β₄) = β₅, and finish by adding β₁ ∈ N(n³) = C⁵ such that n(β₁) = β₂.

Exercises

2.17 What is the index of nilpotency of the left-shift operator, here acting on the space of triples of reals? (x, y, z) ↦ (0, x, y)
2.18 For each string basis state the index of nilpotency and give the dimension of the rangespace and nullspace of each iteration of the nilpotent map.
(a) β₁ ↦ β₂ ↦ 0
     β₃ ↦ β₄ ↦ 0
(b) β₁ ↦ β₂ ↦ β₃ ↦ 0
     β₄ ↦ 0
     β₅ ↦ 0
     β₆ ↦ 0
(c) β₁ ↦ β₂ ↦ β₃ ↦ 0
Also give the canonical form of the matrix.
2.19 Decide which of these matrices are nilpotent.
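For deciding questions like Exercise 2.19 by machine, Lemma 1.3 gives a handy test: an n×n matrix is nilpotent if and only if its n-th power is the zero matrix, since the index of nilpotency can be at most the dimension of the space. A minimal sketch (the helper name `is_nilpotent` is ours, not the text's; for matrices with large entries a numerical tolerance check like this can misbehave, so treat it as an illustration):

```python
import numpy as np

def is_nilpotent(A, tol=1e-9):
    # An n x n matrix is nilpotent iff A^n is (numerically) the zero matrix.
    n = A.shape[0]
    return bool(np.allclose(np.linalg.matrix_power(A, n), 0, atol=tol))

print(is_nilpotent(np.array([[0.0, 0], [1, 0]])))    # True: the string matrix of Example 2.3
print(is_nilpotent(np.array([[1.0, 1], [-1, -1]])))  # True: the matrix of Example 2.15
print(is_nilpotent(np.eye(2)))                       # False: the identity is never nilpotent
```

The test avoids computing every power separately: if any power of A is zero then the n-th power already is, so one matrix power suffices.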
More information4. Linear transformations as a vector space 17
4 Linear transformations as a vector space 17 d) 1 2 0 0 1 2 0 0 1 0 0 0 1 2 3 4 32 Let a linear transformation in R 2 be the reflection in the line = x 2 Find its matrix 33 For each linear transformation
More informationLinear Algebra. Jim Hefferon. x 1 2 x 3 1. x 1
Linear Algebra Jim Hefferon 3 2 2 3 x 3 2 x 2 x 3 6 8 2 6 2 8 Notation R real numbers N natural numbers: {,, 2,...} C complex numbers {......} set of... such that...... sequence; like a set but order matters
More informationMATH 310, REVIEW SHEET 2
MATH 310, REVIEW SHEET 2 These notes are a very short summary of the key topics in the book (and follow the book pretty closely). You should be familiar with everything on here, but it s not comprehensive,
More informationMath 110 Linear Algebra Midterm 2 Review October 28, 2017
Math 11 Linear Algebra Midterm Review October 8, 17 Material Material covered on the midterm includes: All lectures from Thursday, Sept. 1st to Tuesday, Oct. 4th Homeworks 9 to 17 Quizzes 5 to 9 Sections
More informationOrthogonality. 6.1 Orthogonal Vectors and Subspaces. Chapter 6
Chapter 6 Orthogonality 6.1 Orthogonal Vectors and Subspaces Recall that if nonzero vectors x, y R n are linearly independent then the subspace of all vectors αx + βy, α, β R (the space spanned by x and
More informationEigenvalues and Eigenvectors
Sec. 6.1 Eigenvalues and Eigenvectors Linear transformations L : V V that go from a vector space to itself are often called linear operators. Many linear operators can be understood geometrically by identifying
More informationVector Spaces, Orthogonality, and Linear Least Squares
Week Vector Spaces, Orthogonality, and Linear Least Squares. Opening Remarks.. Visualizing Planes, Lines, and Solutions Consider the following system of linear equations from the opener for Week 9: χ χ
More informationRemark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial
More informationSolution to Homework 1
Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false
More informationFinite Mathematics : A Business Approach
Finite Mathematics : A Business Approach Dr. Brian Travers and Prof. James Lampes Second Edition Cover Art by Stephanie Oxenford Additional Editing by John Gambino Contents What You Should Already Know
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More information1 Invariant subspaces
MATH 2040 Linear Algebra II Lecture Notes by Martin Li Lecture 8 Eigenvalues, eigenvectors and invariant subspaces 1 In previous lectures we have studied linear maps T : V W from a vector space V to another
More informationConceptual Questions for Review
Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.
More informationLINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS
LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,
More informationLinear Algebra I. Ronald van Luijk, 2015
Linear Algebra I Ronald van Luijk, 2015 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents Dependencies among sections 3 Chapter 1. Euclidean space: lines and hyperplanes 5 1.1. Definition
More informationMATH 320, WEEK 11: Eigenvalues and Eigenvectors
MATH 30, WEEK : Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors We have learned about several vector spaces which naturally arise from matrix operations In particular, we have learned about the
More informationThe Jordan Canonical Form
The Jordan Canonical Form The Jordan canonical form describes the structure of an arbitrary linear transformation on a finite-dimensional vector space over an algebraically closed field. Here we develop
More informationMath 369 Exam #2 Practice Problem Solutions
Math 369 Exam #2 Practice Problem Solutions 2 5. Is { 2, 3, 8 } a basis for R 3? Answer: No, it is not. To show that it is not a basis, it suffices to show that this is not a linearly independent set.
More information40h + 15c = c = h
Chapter One Linear Systems I Solving Linear Systems Systems of linear equations are common in science and mathematics. These two examples from high school science [Onan] give a sense of how they arise.
More informationMATH 2331 Linear Algebra. Section 2.1 Matrix Operations. Definition: A : m n, B : n p. Example: Compute AB, if possible.
MATH 2331 Linear Algebra Section 2.1 Matrix Operations Definition: A : m n, B : n p ( 1 2 p ) ( 1 2 p ) AB = A b b b = Ab Ab Ab Example: Compute AB, if possible. 1 Row-column rule: i-j-th entry of AB:
More informationVector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture
Week9 Vector Spaces 9. Opening Remarks 9.. Solvable or not solvable, that s the question Consider the picture (,) (,) p(χ) = γ + γ χ + γ χ (, ) depicting three points in R and a quadratic polynomial (polynomial
More informationMath 113 Winter 2013 Prof. Church Midterm Solutions
Math 113 Winter 2013 Prof. Church Midterm Solutions Name: Student ID: Signature: Question 1 (20 points). Let V be a finite-dimensional vector space, and let T L(V, W ). Assume that v 1,..., v n is a basis
More informationLecture 7: Positive Semidefinite Matrices
Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.
More informationPractice Final Exam Solutions
MAT 242 CLASS 90205 FALL 206 Practice Final Exam Solutions The final exam will be cumulative However, the following problems are only from the material covered since the second exam For the material prior
More informationFirst we introduce the sets that are going to serve as the generalizations of the scalars.
Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................
More informationMatrix Multiplication
228 hapter Three Maps etween Spaces IV2 Matrix Multiplication After representing addition and scalar multiplication of linear maps in the prior subsection, the natural next operation to consider is function
More informationDaily Update. Math 290: Elementary Linear Algebra Fall 2018
Daily Update Math 90: Elementary Linear Algebra Fall 08 Lecture 7: Tuesday, December 4 After reviewing the definitions of a linear transformation, and the kernel and range of a linear transformation, we
More informationThe Cayley-Hamilton Theorem and the Jordan Decomposition
LECTURE 19 The Cayley-Hamilton Theorem and the Jordan Decomposition Let me begin by summarizing the main results of the last lecture Suppose T is a endomorphism of a vector space V Then T has a minimal
More informationContents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124
Matrices Math 220 Copyright 2016 Pinaki Das This document is freely redistributable under the terms of the GNU Free Documentation License For more information, visit http://wwwgnuorg/copyleft/fdlhtml Contents
More informationMATH 369 Linear Algebra
Assignment # Problem # A father and his two sons are together 00 years old. The father is twice as old as his older son and 30 years older than his younger son. How old is each person? Problem # 2 Determine
More informationComputationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity:
Diagonalization We have seen that diagonal and triangular matrices are much easier to work with than are most matrices For example, determinants and eigenvalues are easy to compute, and multiplication
More information2: LINEAR TRANSFORMATIONS AND MATRICES
2: LINEAR TRANSFORMATIONS AND MATRICES STEVEN HEILMAN Contents 1. Review 1 2. Linear Transformations 1 3. Null spaces, range, coordinate bases 2 4. Linear Transformations and Bases 4 5. Matrix Representation,
More informationMATH JORDAN FORM
MATH 53 JORDAN FORM Let A,, A k be square matrices of size n,, n k, respectively with entries in a field F We define the matrix A A k of size n = n + + n k as the block matrix A 0 0 0 0 A 0 0 0 0 A k It
More information