Notes on the matrix exponential


Erik Wahlén, February 14

1 Introduction

The purpose of these notes is to describe how one can compute the matrix exponential $e^A$ when $A$ is not diagonalisable. This is done in Teschl by transforming $A$ into Jordan normal form. As we will see here, it is not necessary to go this far. It suffices to transform $A$ into block form, where each block only has one eigenvalue (up to multiplicity). For completeness we also present a proof of the Jordan normal form at the end. The material in these notes is roughly the same as in Chapter 3.8 of Teschl, but the presentation and the proofs are a bit different. We also give more examples.

We begin with a brief review of linear algebra. Recall that a complex $n \times n$ matrix $A$ can be identified with the linear operator $x \mapsto Ax$ on $\mathbb{C}^n$. $A$ is simply the matrix for this linear operator in the standard basis $\{e_1, \dots, e_n\}$, where $e_1 = (1, 0, 0, \dots, 0)$, $e_2 = (0, 1, 0, \dots, 0)$, etc. If we choose a new basis $\{f_1, \dots, f_n\}$, then the matrix for the operator in the new basis is $B = T^{-1}AT$, where $T$ is the matrix whose columns consist of the coordinates for the vectors $f_j$ in the old basis (the standard basis). Recall also that the matrix $B$ can be found directly by calculating $Af_j$ for each $j$ and writing this vector in the basis $\{f_1, \dots, f_n\}$. The coordinates will be the $j$:th column of $B$. In general we want to pick a basis such that $B$ becomes as simple as possible.

It is often more convenient to consider an abstract complex $n$-dimensional vector space $V$ and a linear operator $A$ on $V$. We assume that a basis for $V$ has been selected and identify $A$ with the matrix in this basis. Recall that the kernel (or null space) and range of $A$ are defined by $\ker A = \{x \in V : Ax = 0\}$ and $\operatorname{range} A = \{Ax : x \in V\}$. The kernel and the range are both linear subspaces of $V$ and the dimension theorem says that $\dim \ker A + \dim \operatorname{range} A = n$.
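In floating point the dimension theorem can be checked directly, computing the kernel with a numerical null space routine. A small sketch (the matrix is an arbitrary example; NumPy and SciPy are assumed):

```python
import numpy as np
from scipy.linalg import null_space

# An arbitrary example of dim ker A + dim range A = n.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])
n = A.shape[0]
dim_ker = null_space(A).shape[1]          # dimension of the kernel
dim_range = np.linalg.matrix_rank(A)      # dimension of the range
print(dim_ker, dim_range, dim_ker + dim_range == n)   # 1 2 True
```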

Recall also that $\lambda \in \mathbb{C}$ is called an eigenvalue of $A$ if there exists some non-zero vector $x$ such that $Ax = \lambda x$. The vector $x$ is called an eigenvector of $A$ corresponding to the eigenvalue $\lambda$. The subspace $\ker(A - \lambda I)$, that is, the subspace spanned by the eigenvectors belonging to $\lambda$, is called the eigenspace corresponding to $\lambda$. The number $\dim \ker(A - \lambda I)$ is called the geometric multiplicity of $\lambda$. Note that $\lambda \in \mathbb{C}$ is an eigenvalue if and only if it is a root of the characteristic polynomial $p_{\mathrm{char}}(z) = \det(A - zI)$. By the fundamental theorem of algebra we can write $p_{\mathrm{char}}(z)$ as a product of first degree polynomials,

$$p_{\mathrm{char}}(z) = (-1)^n (z - \lambda_1)^{a_1} (z - \lambda_2)^{a_2} \cdots (z - \lambda_k)^{a_k},$$

where $\lambda_1, \dots, \lambda_k$ are the distinct eigenvalues of $A$. The positive integer $a_j$ is called the algebraic multiplicity of the eigenvalue $\lambda_j$. The corresponding geometric multiplicity will be denoted $g_j$.

If it is possible to find a basis for $V$ consisting of eigenvectors, then the matrix $T^{-1}AT$ will be particularly simple. Indeed, $Af_j = \lambda_j f_j$, so the new matrix will have the diagonal form

$$D = \begin{pmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{pmatrix}.$$

One says that $A$ has been diagonalised. Unfortunately, not all matrices can be diagonalised.

Example 1.1. Consider the matrix

$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$

The characteristic polynomial is $\lambda^2$, so the only eigenvalue is $\lambda = 0$. On the other hand,

$$Ax = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ 0 \end{pmatrix},$$

so that the only eigenvectors are $(z, 0)$ where $z \in \mathbb{C}$, $z \neq 0$. Clearly, we cannot find a basis consisting of eigenvectors.

There are however some important special cases in which one can diagonalise the matrix. For the proofs of the following results we refer to any textbook in linear algebra.

Proposition 1.1. Suppose that $A$ has $n$ distinct eigenvalues. Then there exists a basis of eigenvectors for $A$.

Proposition 1.2. Suppose that $A$ is Hermitian, that is, $A^\ast = A$, where $A^\ast = \bar{A}^t$ (the bar denoting complex conjugation). Then there exists an orthonormal basis of eigenvectors for $A$.

Remark. The last property also holds when $A$ is normal, meaning that $A^\ast A = A A^\ast$. Hermitian matrices are obviously normal. So are unitary matrices since they satisfy $A^\ast A = I = A A^\ast$.
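Proposition 1.2 is easy to test numerically: np.linalg.eigh diagonalises a Hermitian matrix with a unitary eigenvector matrix. A quick sketch with a randomly generated Hermitian matrix:

```python
import numpy as np

# Build a Hermitian matrix and diagonalise it (Proposition 1.2).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2                  # Hermitian by construction
w, U = np.linalg.eigh(A)                  # w real, columns of U orthonormal
print(np.allclose(U.conj().T @ U, np.eye(4)))        # True: orthonormal basis
print(np.allclose(A, U @ np.diag(w) @ U.conj().T))   # True: A diagonalised
```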

2 Decomposition into invariant subspaces

We will now consider what to do when $A$ can't be diagonalised. Let $V$ be a complex vector space and let $V_1, \dots, V_k$ be subspaces. We say that $V$ is the direct sum of $V_1, \dots, V_k$ if each vector $x \in V$ can be written in a unique way as $x = x_1 + x_2 + \cdots + x_k$, where $x_j \in V_j$, $j = 1, \dots, k$. If this is the case we use the notation $V = V_1 \oplus V_2 \oplus \cdots \oplus V_k$. We say that a subspace $W$ of $V$ is invariant under $A$ if $x \in W \Rightarrow Ax \in W$.

Example 2.1. Suppose that $A$ has $n$ distinct eigenvalues $\lambda_1, \dots, \lambda_n$ with corresponding eigenvectors $u_1, \dots, u_n$. It then follows that the vectors $u_1, \dots, u_n$ are linearly independent and thus form a basis for $V$. Let

$$\ker(A - \lambda_j I) = \{z u_j : z \in \mathbb{C}\}, \quad j = 1, \dots, n,$$

be the corresponding eigenspaces. Then

$$V = \ker(A - \lambda_1 I) \oplus \ker(A - \lambda_2 I) \oplus \cdots \oplus \ker(A - \lambda_n I)$$

by the definition of a basis. It is also clear that each eigenspace is invariant under $A$.

More generally, suppose that $A$ has $k$ distinct eigenvalues $\lambda_1, \dots, \lambda_k$ and that the geometric multiplicity $g_j$ of each $\lambda_j$ equals the algebraic multiplicity $a_j$. Let $\ker(A - \lambda_j I)$, $j = 1, \dots, k$, be the corresponding eigenspaces. We can then find a basis for each eigenspace consisting of $g_j$ eigenvectors. The union of these bases consists of $g_1 + \cdots + g_k = a_1 + \cdots + a_k = n$ elements and is linearly independent, since eigenvectors belonging to different eigenvalues are linearly independent. We thus obtain a basis for $V$ and it follows that

$$V = \ker(A - \lambda_1 I) \oplus \ker(A - \lambda_2 I) \oplus \cdots \oplus \ker(A - \lambda_k I).$$

In this basis, $A$ has the matrix

$$D = \begin{pmatrix} \lambda_1 I_1 & & \\ & \ddots & \\ & & \lambda_k I_k \end{pmatrix},$$

where each $I_j$ is a $g_j \times g_j$ unit matrix. In other words, $D$ is a diagonal matrix with the eigenvalues on the diagonal, each repeated $g_j$ times.

Given a polynomial $p(z) = \alpha_m z^m + \alpha_{m-1} z^{m-1} + \cdots + \alpha_1 z + \alpha_0$, we define

$$p(A) = \alpha_m A^m + \alpha_{m-1} A^{m-1} + \cdots + \alpha_1 A + \alpha_0 I.$$
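The definition of $p(A)$ translates directly into code; the sketch below evaluates it with Horner's scheme (the helper name poly_of_matrix and the example data are illustrative choices):

```python
import numpy as np

def poly_of_matrix(coeffs, A):
    """p(A) for p(z) = coeffs[0] z^m + ... + coeffs[-1], via Horner's scheme."""
    P = np.zeros_like(A)
    for c in coeffs:
        P = P @ A + c * np.eye(A.shape[0])
    return P

# p(z) = z^2 - 3z + 2 = (z - 1)(z - 2) annihilates this example matrix,
# whose eigenvalues are 1 and 2:
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
print(poly_of_matrix([1.0, -3.0, 2.0], A))   # the zero matrix
```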

Lemma 2.1. There exists a non-zero polynomial $p$ such that $p(A) = 0$.

Proof. Note that $\mathbb{C}^{n \times n}$ is an $n^2$-dimensional vector space. It follows that the $n^2 + 1$ matrices $I, A, A^2, \dots, A^{n^2}$ are linearly dependent. But this means that there exist numbers $\alpha_0, \dots, \alpha_{n^2}$, not all zero, such that

$$\alpha_{n^2} A^{n^2} + \alpha_{n^2 - 1} A^{n^2 - 1} + \cdots + \alpha_1 A + \alpha_0 I = 0,$$

that is, $p(A) = 0$, where $p(z) = \alpha_{n^2} z^{n^2} + \cdots + \alpha_1 z + \alpha_0$.

Let $p_{\min}(z)$ be a monic polynomial (with leading coefficient 1) of minimal degree such that $p_{\min}(A) = 0$. If $p(z)$ is any polynomial such that $p(A) = 0$ it follows that $p(z) = q(z) p_{\min}(z)$ for some polynomial $q$. To see this, use the division algorithm on $p$ and $p_{\min}$: $p(z) = q(z) p_{\min}(z) + r(z)$, where $r = 0$ or $\deg r < \deg p_{\min}$. Thus $r(A) = p(A) - q(A) p_{\min}(A) = 0$. But this implies that $r(z) = 0$, since $p_{\min}$ has minimal degree. This also shows that the polynomial $p_{\min}$ is unique. It is called the minimal polynomial for $A$. By the fundamental theorem of algebra, we can write the minimal polynomial as a product of first degree polynomials,

$$p_{\min}(z) = (z - \lambda_1)^{m_1} (z - \lambda_2)^{m_2} \cdots (z - \lambda_k)^{m_k}, \qquad (2.1)$$

where the numbers $\lambda_j$ are distinct and each $m_j \geq 1$. Note that we don't know yet that the roots $\lambda_j$ of the minimal polynomial coincide with the eigenvalues of $A$. This will be shown in Theorem 2.1 below.
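The proof of Lemma 2.1 is effectively an algorithm: search for the first linear dependence among the vectorised powers $I, A, A^2, \dots$ The following rough numerical sketch does exactly that (the function name is ours, and floating-point tolerances make it fragile for badly conditioned matrices):

```python
import numpy as np

def minimal_polynomial(A, tol=1e-10):
    """Monic coefficients [1, c_{m-1}, ..., c_0] of p_min, found as the first
    linear dependence among vec(I), vec(A), vec(A^2), ..."""
    n = A.shape[0]
    powers = [np.eye(n).ravel()]                          # vec(A^0)
    while True:
        target = (powers[-1].reshape(n, n) @ A).ravel()   # vec(A^m)
        K = np.column_stack(powers)                       # vec(A^0..A^{m-1})
        c = np.linalg.lstsq(K, target, rcond=None)[0]
        if np.linalg.norm(K @ c - target) < tol:          # dependence found
            return np.concatenate([[1.0], -c[::-1]])
        powers.append(target)

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
print(minimal_polynomial(A))   # [ 1. -4.  4.], i.e. p_min(z) = (z - 2)^2
```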

Lemma 2.2. Suppose that $p(z) = p_1(z) p_2(z)$ where $p_1$ and $p_2$ are relatively prime. If $p(A) = 0$ we have that $V = \ker p_1(A) \oplus \ker p_2(A)$ and each subspace $\ker p_j(A)$ is invariant under $A$.

Proof. The invariance follows from $p_j(A) A x = A p_j(A) x = 0$, $x \in \ker p_j(A)$. Since $p_1$ and $p_2$ are relatively prime, it follows by Euclid's algorithm that there exist polynomials $q_1, q_2$ such that $p_1(z) q_1(z) + p_2(z) q_2(z) = 1$. Thus $p_1(A) q_1(A) + p_2(A) q_2(A) = I$. Applying this identity to the vector $x \in V$, we obtain

$$x = \underbrace{p_1(A) q_1(A) x}_{x_2} + \underbrace{p_2(A) q_2(A) x}_{x_1},$$

where

$$p_2(A) x_2 = p_2(A) p_1(A) q_1(A) x = p(A) q_1(A) x = 0,$$

so that $x_2 \in \ker p_2(A)$. Similarly $x_1 \in \ker p_1(A)$. Thus $V = \ker p_1(A) + \ker p_2(A)$. On the other hand, if

$$x_1 + x_2 = x_1' + x_2', \quad x_j, x_j' \in \ker p_j(A), \ j = 1, 2,$$

we obtain that

$$y = x_1 - x_1' = x_2' - x_2 \in \ker p_1(A) \cap \ker p_2(A),$$

so that

$$y = p_1(A) q_1(A) y + p_2(A) q_2(A) y = q_1(A) p_1(A) y + q_2(A) p_2(A) y = 0.$$

It follows that the representation $x = x_1 + x_2$ is unique and therefore $V = \ker p_1(A) \oplus \ker p_2(A)$.

Theorem 2.1. With $\lambda_1, \dots, \lambda_k$ and $m_1, \dots, m_k$ as in (2.1) we have

$$V = \ker(A - \lambda_1 I)^{m_1} \oplus \cdots \oplus \ker(A - \lambda_k I)^{m_k},$$

where each $\ker(A - \lambda_j I)^{m_j}$ is invariant under $A$. The numbers $\lambda_1, \dots, \lambda_k$ are the eigenvalues of $A$.

Proof. We begin by noting that the polynomials $(z - \lambda_j)^{m_j}$, $j = 1, \dots, k$, are relatively prime. Repeated application of Lemma 2.2 therefore shows that

$$V = \ker(A - \lambda_1 I)^{m_1} \oplus \cdots \oplus \ker(A - \lambda_k I)^{m_k},$$

with each $\ker(A - \lambda_j I)^{m_j}$ invariant. Consider the linear operator $A \colon \ker(A - \lambda_j I)^{m_j} \to \ker(A - \lambda_j I)^{m_j}$. It is clear that $\ker(A - \lambda_j I)^{m_j} \neq \{0\}$, for otherwise $p_{\min}$ would not be minimal. Since every linear operator on a (non-trivial) finite dimensional complex vector space has an eigenvalue, it follows that there is some non-zero element $u \in \ker(A - \lambda_j I)^{m_j}$ with $Au = \lambda u$, $\lambda \in \mathbb{C}$. But then

$$0 = (A - \lambda_j I)^{m_j} u = (\lambda - \lambda_j)^{m_j} u,$$

so $\lambda = \lambda_j$. This shows that the roots $\lambda_j$ of the minimal polynomial are eigenvalues of $A$. On the other hand, if $u$ is an eigenvector of $A$ corresponding to the eigenvalue $\lambda$, we have

$$0 = p_{\min}(A) u = (A - \lambda_1 I)^{m_1} \cdots (A - \lambda_k I)^{m_k} u = (\lambda - \lambda_1)^{m_1} \cdots (\lambda - \lambda_k)^{m_k} u,$$

so $\lambda = \lambda_j$ for some $j$, that is, every eigenvalue is a root of the minimal polynomial.
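In floating point the generalised eigenspaces appearing in Theorem 2.1 can be computed as numerical null spaces; since each $m_j$ is at most $n$, the exponent $n$ can always be used. A sketch with an arbitrary example matrix:

```python
import numpy as np
from scipy.linalg import null_space

# Arbitrary example with eigenvalues 2, 2 and 5.
A = np.array([[2.0, 1.0, 1.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
n = A.shape[0]

for lam in [2.0, 5.0]:
    # ker(A - lam I)^n: the exponent n always suffices, since m_j <= n.
    E = null_space(np.linalg.matrix_power(A - lam * np.eye(n), n))
    print(lam, E.shape[1])    # dimensions 2 and 1; they sum to n = 3
```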

6 The subspace ker(a λ j I) m j is called the generalised eigenspace corresponding to λ j and a non-zero vector x ker(a λ j I) m j is called a generalised eigenvector. The number m j is the smallest exponent m such that (A λ j I) m vanishes on ker(a λ j I) m j. Suppose for a contradiction that e.g. (A λ 1 I) m1 1 u = for all u ker(a λ 1 I) m 1. Writing x V as x = x 1 + x according to the decomposition V = ker(a λ 1 I) m 1 ker p(a), where p(z) = (z λ 2 ) m2 (z λ k ) m k, we would then obtain that (A λ 1 I) m 1 1 p(a)x = p(a)(a λ 1 I) m 1 1 x 1 + (A λ 1 I) m 1 1 p(a) x =, contradicting the denition of the minimal polynomial. If we select a basis {u j,1,..., u j,nj } for each generalised eigenspace, then the union {u 1,1,..., u 1,n1, u 2,1,..., u 2,n2,..., u k,1,..., u k,nk } will be a basis for V. Since each generalised eigenspace is invariant under the linear operator A, the matrix for A in this basis will have the block form B = B 1... where each A j is a n j n j square matrix. Moreover, A j only has one eigenvalue λ j. Set N j = A λ j I j, where I j is an n j n j unit matrix. Then N m j j = by the denition of the generalised eigenspaces. A linear operator N with the property that N m = for some m is called nilpotent. Finally, we note that the dimension of the generalised eigenspace ker(a λ j I) m j equals the algebraic multiplicity of the eigenvalue λ j, that is, n j = a j. This follows since B k, ( 1) n (λ λ 1 ) a1 (λ λ k ) a k = det(b λi) In summary we have proved the following result. = det(b 1 λi 1 ) det(b k λi k ) = ( 1) n (λ λ 1 ) n1 (λ λ k ) n k. Theorem 2.2. Let A be an n n matrix. There exists a basis for C n in which A has the block form B 1... and B j = λ j I j + N j, where λ 1,..., λ k are the distinct eigenvalues of A, I j is an a j a j unit matrix and N j is nilpotent. Before using this to compute the matrix exponential, we mention a very beautiful result whose proof follows easily from the previous theorem. B k, 6

Before using this to compute the matrix exponential, we mention a very beautiful result whose proof follows easily from the previous theorem.

Theorem 2.3 (Cayley-Hamilton). Let $p_{\mathrm{char}}(z) = \det(A - zI)$ be the characteristic polynomial of $A$. Then $p_{\min} \mid p_{\mathrm{char}}$, so that in particular $p_{\mathrm{char}}(A) = 0$.

Proof. The exponent $m_j$ of the factor $(z - \lambda_j)^{m_j}$ in the minimal polynomial is the smallest exponent $m$ such that $N_j^m = 0$. Let $x$ be such that $N_j^{m_j} x = 0$ and $N_j^{m_j - 1} x \neq 0$. We then claim that the vectors $x, N_j x, \dots, N_j^{m_j - 1} x$ are linearly independent. Indeed, multiplying the equation

$$\alpha_0 x + \alpha_1 N_j x + \cdots + \alpha_{m_j - 1} N_j^{m_j - 1} x = 0$$

by $N_j^{m_j - 1}$ we obtain

$$\alpha_0 N_j^{m_j - 1} x = \alpha_0 N_j^{m_j - 1} x + \alpha_1 N_j^{m_j} x + \cdots + \alpha_{m_j - 1} N_j^{2 m_j - 2} x = 0,$$

so that $\alpha_0 = 0$. Proceeding inductively, we obtain that $\alpha_0 = \alpha_1 = \cdots = \alpha_{m_j - 1} = 0$, proving the claim. It follows that $m_j \leq a_j$, since these $m_j$ linearly independent vectors lie in the $a_j$-dimensional generalised eigenspace, and hence that the minimal polynomial divides the characteristic polynomial, since

$$p_{\min}(z) = (z - \lambda_1)^{m_1} \cdots (z - \lambda_k)^{m_k} \quad \text{and} \quad p_{\mathrm{char}}(z) = (-1)^n (z - \lambda_1)^{a_1} \cdots (z - \lambda_k)^{a_k}.$$

The final statement follows from the fact that $p_{\min}(A) = 0$.

3 The matrix exponential

Recall that the unique solution of the initial value problem

$$x' = Ax, \quad x(0) = x_0,$$

is given by $x(t) = e^{tA} x_0$. If $B$ is the block form of $A$ and $A = T B T^{-1}$, we obtain that

$$e^{tA} = T e^{tB} T^{-1}, \qquad (3.1)$$

where

$$e^{tB} = \begin{pmatrix} e^{tB_1} & & \\ & \ddots & \\ & & e^{tB_k} \end{pmatrix}$$

and

$$e^{tB_j} = e^{t(\lambda_j I_j + N_j)} = e^{t\lambda_j I_j} e^{tN_j} = e^{\lambda_j t} \left( I + tN_j + \cdots + \frac{t^{m_j - 1}}{(m_j - 1)!} N_j^{m_j - 1} \right),$$

since $N_j^m = 0$ for $m \geq m_j$.
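Formula (3.1) together with the finite series for $e^{tB_j}$ fits in a short program. The sketch below (a rough implementation assuming real eigenvalues, with the function name ours, checked against scipy.linalg.expm) computes each generalised eigenspace, expresses $N_j$ in its coordinates and sums the series:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm, null_space

def expm_via_blocks(A, eigenvalues, t):
    """Sketch of (3.1): e^{tA} = T e^{tB} T^{-1}, with each block
    e^{tB_j} = e^{lambda_j t}(I + tN_j + ... + t^{n_j-1} N_j^{n_j-1}/(n_j-1)!).
    `eigenvalues` lists the distinct eigenvalues (assumed real here)."""
    n = A.shape[0]
    cols, blocks = [], []
    for lam in eigenvalues:
        # Generalised eigenspace ker(A - lam I)^n (exponent n >= m_j).
        E = null_space(np.linalg.matrix_power(A - lam * np.eye(n), n))
        nj = E.shape[1]
        # N_j = matrix of A - lam I restricted to this subspace
        # (E has orthonormal columns, so E^T acts as its pseudoinverse).
        Nj = E.T @ (A - lam * np.eye(n)) @ E
        S = sum(np.linalg.matrix_power(t * Nj, m) / factorial(m)
                for m in range(nj))            # finite sum: N_j^{n_j} = 0
        cols.append(E)
        blocks.append(np.exp(lam * t) * S)
    T = np.hstack(cols)
    etB = np.zeros((n, n))
    i = 0
    for blk in blocks:                          # assemble e^{tB} blockwise
        k = blk.shape[0]
        etB[i:i + k, i:i + k] = blk
        i += k
    return T @ etB @ np.linalg.inv(T)

A = np.array([[2.0, 1.0, 1.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
print(np.allclose(expm_via_blocks(A, [2.0, 5.0], 0.7), expm(0.7 * A)))  # True
```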

We now clearly see the advantage of this block form. In general, the solution of the initial-value problem will be a sum of terms of the form $t^m e^{\lambda_j t}$. If $A$ has a basis of eigenvectors, there will only be terms of the form $e^{\lambda_j t}$.

The following example shows what happens for $2 \times 2$ matrices which are not diagonalisable. The same method can be applied to any such matrix.

Example 3.1. Let

$$A = \begin{pmatrix} -3 & -4 \\ 1 & 1 \end{pmatrix}.$$

The characteristic polynomial is $(z + 1)^2$, so $-1$ is the only eigenvalue. We have

$$N = A + I = \begin{pmatrix} -2 & -4 \\ 1 & 2 \end{pmatrix},$$

from which it follows that the only eigenvectors are $zu$, with $u = (2, -1)$ and $z \in \mathbb{C}$, $z \neq 0$. Since $(A + I)^2 = 0$, it follows that the generalised eigenspace is $\mathbb{C}^2$ (this can also be realised directly). We find that

$$e^{tA} = e^{t(-I + N)} = e^{-t} e^{tN} = e^{-t}(I + tN) = \begin{pmatrix} (1 - 2t)e^{-t} & -4te^{-t} \\ te^{-t} & (1 + 2t)e^{-t} \end{pmatrix}.$$

For $3 \times 3$ matrices there are more possibilities.

Example 3.2. Let

$$A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 2 & 0 \\ -1 & 0 & -1 \end{pmatrix}.$$

The characteristic polynomial of $A$ is $p_{\mathrm{char}}(z) = -z^2(z - 2)$. Thus, $A$ has the only eigenvalues $\lambda_1 = 0$ and $\lambda_2 = 2$ with algebraic multiplicities $a_1 = 2$ and $a_2 = 1$, respectively. We find that

$$Ax = 0 \Leftrightarrow x = z(1, 0, -1), \qquad Ax = 2x \Leftrightarrow x = z(0, 1, 0), \quad z \in \mathbb{C}.$$

Thus $u_1 = (1, 0, -1)$ and $u_2 = (0, 1, 0)$ are eigenvectors corresponding to $\lambda_1$ and $\lambda_2$, respectively. The generalised eigenspace corresponding to $\lambda_2$ is simply the usual eigenspace $\ker(A - 2I)$, but the one corresponding to $\lambda_1$ must be $\ker A^{m_1}$, with $m_1 \geq 2$ (the eigenspace of $\lambda_1$ is only one-dimensional, while $a_1 = 2$). By the Cayley-Hamilton theorem, we must also have $m_1 \leq 2$, so $m_1 = 2$. Calculating

$$A^2 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$

we find e.g. the basis $\{u_1, u_1'\}$, where $u_1' = (1, 0, 0)$, for $\ker A^2$. Since $Au_1' = u_1$, the new matrix is

$$B = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$

and $A = T B T^{-1}$ with

$$T = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & 0 & 0 \end{pmatrix} \quad \text{and} \quad T^{-1} = \begin{pmatrix} 0 & 0 & -1 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$

We have

$$e^{tB_1} = I + tN_1 = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}$$

and $e^{tB_2} = e^{2t}$. Hence,

$$e^{tB} = \begin{pmatrix} 1 & t & 0 \\ 0 & 1 & 0 \\ 0 & 0 & e^{2t} \end{pmatrix}$$

and

$$e^{tA} = T e^{tB} T^{-1} = \begin{pmatrix} 1 + t & 0 & t \\ 0 & e^{2t} & 0 \\ -t & 0 & 1 - t \end{pmatrix}.$$

Example 3.3. Let

$$A = \begin{pmatrix} 3 & 1 & -1 \\ 0 & 2 & 0 \\ 1 & 1 & 1 \end{pmatrix}.$$

The characteristic polynomial of $A$ is $p_{\mathrm{char}}(z) = -(z - 2)^3$. Thus, $A$ has the only eigenvalue 2 with algebraic multiplicity 3. The generalised eigenspace is the whole of $\mathbb{C}^3$. On the other hand,

$$A - 2I = \begin{pmatrix} 1 & 1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & -1 \end{pmatrix} \neq 0, \qquad (A - 2I)^2 = 0,$$

so that $p_{\min}(z) = (z - 2)^2$ and the generalised eigenspace is $\ker(A - 2I)^2 = \mathbb{C}^3$. $A$ is already in block form and, with $N = A - 2I$,

$$e^{tA} = e^{2t}(I + tN) = e^{2t} \begin{pmatrix} 1 + t & t & -t \\ 0 & 1 & 0 \\ t & t & 1 - t \end{pmatrix} = \begin{pmatrix} (1 + t)e^{2t} & te^{2t} & -te^{2t} \\ 0 & e^{2t} & 0 \\ te^{2t} & te^{2t} & (1 - t)e^{2t} \end{pmatrix}.$$
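The closed forms in Examples 3.1 and 3.3 can be verified against a library routine, for instance scipy.linalg.expm; a quick check (with the matrices as printed above):

```python
import numpy as np
from scipy.linalg import expm

t = 0.8

A1 = np.array([[-3.0, -4.0],
               [ 1.0,  1.0]])               # Example 3.1
assert np.allclose(np.exp(-t) * (np.eye(2) + t * (A1 + np.eye(2))),
                   expm(t * A1))

A3 = np.array([[3.0, 1.0, -1.0],
               [0.0, 2.0,  0.0],
               [1.0, 1.0,  1.0]])           # Example 3.3
N = A3 - 2.0 * np.eye(3)
assert np.allclose(np.exp(2.0 * t) * (np.eye(3) + t * N), expm(t * A3))
print("closed forms agree with expm")
```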

Example 3.4. Let $A$ be a $3 \times 3$ matrix with $p_{\mathrm{char}}(z) = -(z - 2)^3$, so that again 2 is the only eigenvalue and the generalised eigenspace is the whole of $\mathbb{C}^3$, but for which this time

$$A - 2I \neq 0, \qquad (A - 2I)^2 \neq 0, \qquad (A - 2I)^3 = 0,$$

so that $p_{\min}(z) = (z - 2)^3$ and the generalised eigenspace is $\ker(A - 2I)^3 = \mathbb{C}^3$. Again, $A$ is already in block form, but this time $m = 3$, so that with $N = A - 2I$,

$$e^{tA} = e^{2t} \left( I + tN + \frac{t^2}{2} N^2 \right),$$

whose entries are linear combinations of $e^{2t}$, $te^{2t}$ and $\frac{t^2}{2} e^{2t}$.

The $4 \times 4$ case can be analysed in a similar way. In general, the computations will get more involved the higher $n$ is. Most computer algebra systems have routines for computing the matrix exponential. In Maple this can be done using the command MatrixExponential from the LinearAlgebra package.

4 The Jordan normal form*

Theorem 4.1. Let $A$ be an $n \times n$ matrix. There exists an invertible $n \times n$ matrix $T$ such that $T^{-1} A T = J$, where $J$ is a block matrix,

$$J = \begin{pmatrix} J_1 & & \\ & \ddots & \\ & & J_m \end{pmatrix},$$

and each block $J_j$ is a square matrix of the form

$$J_j = \lambda I + N = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}, \qquad (4.1)$$

where $\lambda$ is an eigenvalue of $A$, $I$ is a unit matrix and $N$ has ones on the line directly above the diagonal and zeros everywhere else.
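Computer algebra systems can also produce the Jordan normal form directly. For instance, SymPy returns $T$ and $J$ with $A = T J T^{-1}$; a quick check on the matrix from Example 3.3 (assuming SymPy is available):

```python
from sympy import Matrix

# The matrix from Example 3.3; jordan_form returns (T, J) with A = T J T^{-1}.
A = Matrix([[3, 1, -1],
            [0, 2,  0],
            [1, 1,  1]])
T, J = A.jordan_form()
print(J)    # one 2x2 and one 1x1 Jordan block for the eigenvalue 2
assert A == T * J * T.inv()
```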

Remark. There is also an alternative version for real matrices, see Teschl.

By first using Theorem 2.2, we can choose a basis in which $A$ has the block form

$$B = \begin{pmatrix} B_1 & & \\ & \ddots & \\ & & B_k \end{pmatrix}.$$

The theorem is proved by picking a basis for each generalised subspace $\ker(A - \lambda_j I)^{m_j}$ so that $B_j$ takes the form $J_j$. By considering each generalised eigenspace separately, we can assume from the start that $A$ only has one eigenvalue, which we call $\lambda$. Moreover, $A = \lambda I + N$, where $N$ is nilpotent. We let $m$ be the smallest positive integer such that $N^m = 0$.

Suppose that $m = n$. This means that there is some vector $u$ such that $N^{n-1} u \neq 0$. By looking at the proof of Theorem 2.3 it follows that the vectors $u, Nu, \dots, N^{n-1} u$ are linearly independent. $\{N^{n-1} u, \dots, Nu, u\}$ is therefore a basis for $V$. The matrix for $N$ in this basis is

$$\begin{pmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{pmatrix},$$

which means that we are done.

In general, a set of non-zero vectors $u, Nu, \dots, N^{l-1} u$, with $N^l u = 0$, is called a Jordan chain. We will prove the theorem in general by showing that there is a basis for $V$ consisting of Jordan chains.

Proof. We prove the theorem by induction on the dimension of $V$. Clearly the theorem holds if $V$ has dimension 1. Suppose now that the theorem holds for all complex vector spaces of dimension less than $n$, where $n \geq 2$, and assume that $\dim V = n$. Since $N$ is nilpotent it is not injective and therefore $\dim \operatorname{range} N < n$ (by the dimension theorem). By the induction hypothesis, we can therefore find a basis of Jordan chains

$$u_i, Nu_i, \dots, N^{l_i - 1} u_i, \quad i = 1, \dots, k,$$

for $\operatorname{range} N$. For each $u_i$ we can find a $v_i \in V$ such that $Nv_i = u_i$ (since $u_i \in \operatorname{range} N$). That is, each Jordan chain in the basis for $\operatorname{range} N$ can be extended by one element. We claim that the vectors

$$v_i, Nv_i, N^2 v_i, \dots, N^{l_i} v_i, \quad i = 1, \dots, k, \qquad (4.2)$$

are linearly independent. Indeed, suppose that

$$\sum_{i=1}^{k} \sum_{j=0}^{l_i} \alpha_{i,j} N^j v_i = 0. \qquad (4.3)$$

Applying $N$ to this equality, we find that

$$\sum_{i=1}^{k} \sum_{j=0}^{l_i - 1} \alpha_{i,j} N^j u_i = \sum_{i=1}^{k} \sum_{j=0}^{l_i} \alpha_{i,j} N^{j+1} v_i = 0,$$

which, by the induction hypothesis, implies that $\alpha_{i,j} = 0$, $1 \leq i \leq k$, $0 \leq j \leq l_i - 1$. Looking at (4.3) this means that

$$\sum_{i=1}^{k} \alpha_{i, l_i} N^{l_i - 1} u_i = \sum_{i=1}^{k} \alpha_{i, l_i} N^{l_i} v_i = 0,$$

which again implies that $\alpha_{i, l_i} = 0$, $1 \leq i \leq k$, by our induction hypothesis.

Extend the vectors in (4.2) to a basis for $V$ by possibly adding vectors $\tilde{w}_1, \dots, \tilde{w}_K$. For each $i$ we have $N\tilde{w}_i \in \operatorname{range} N$, so we can find an element $\hat{w}_i$ in the span of the vectors in (4.2) such that $N\tilde{w}_i = N\hat{w}_i$. But then $w_i = \tilde{w}_i - \hat{w}_i \in \ker N$ and the vectors

$$v_i, Nv_i, N^2 v_i, \dots, N^{l_i} v_i, \quad i = 1, \dots, k, \qquad w_1, \dots, w_K$$

constitute a basis for $V$ consisting of Jordan chains (the elements $w_i$ are chains of length 1).

The matrix $J$ is not completely unique, since we e.g. can change the order of the Jordan blocks. It turns out that this is the only thing which is not unique. In other words, both the number of blocks and their sizes are uniquely determined. Let us prove this. It suffices to consider a nilpotent operator $N \colon V \to V$. Let $\beta$ be the total number of blocks and $\beta(k)$ the number of blocks of size $k \times k$. Then $\dim \ker N = \beta$, and $\dim \ker N^2$ differs from $\dim \ker N$ by $\beta - \beta(1)$. In the same manner, we find that

$$\dim \ker N = \beta,$$
$$\dim \ker N^2 = \dim \ker N + \beta - \beta(1),$$
$$\vdots$$
$$\dim \ker N^{k+1} = \dim \ker N^k + \beta - \beta(1) - \cdots - \beta(k).$$

It follows by induction that each $\beta(k)$ is uniquely determined by $N$.

Note that the number of Jordan blocks in the matrix $J$ equals the number of Jordan chains, so that there may be several Jordan blocks corresponding to the same eigenvalue. The sum of the lengths of the Jordan chains equals the dimension of the generalised eigenspace. Let $p_{\mathrm{char}}(z) = \det(A - zI)$ be the characteristic polynomial of $A$. Recall that $p_{\mathrm{char}}$ is independent of basis, so that $p_{\mathrm{char}}(z) = \det(J - zI)$. Expanding repeatedly along the first column we find that

$$p_{\mathrm{char}}(z) = (-1)^n (z - \lambda_1)^{n_1} \cdots (z - \lambda_k)^{n_k},$$

where $n_j = \dim \ker(A - \lambda_j I)^{m_j}$ is the dimension of the generalised eigenspace corresponding to $\lambda_j$. Thus $n_j = a_j$, the algebraic multiplicity of $\lambda_j$. By the remarks above about the uniqueness of $J$, it follows that the geometric multiplicity $g_j$ of each eigenvalue equals the number of Jordan chains for that eigenvalue.
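The uniqueness argument doubles as an algorithm: subtracting consecutive equations above gives $\beta(k) = 2d_k - d_{k-1} - d_{k+1}$, where $d_k = \dim \ker N^k$ and $d_0 = 0$, so the block sizes can be read off from ranks of powers of $N$. A numerical sketch (the helper name is ours):

```python
import numpy as np

def jordan_block_sizes(N, tol=1e-10):
    """{k: beta(k)} for a nilpotent N, from d_k = dim ker N^k via
    beta(k) = 2 d_k - d_{k-1} - d_{k+1}, with d_0 = 0."""
    n = N.shape[0]
    d = [0] + [n - np.linalg.matrix_rank(np.linalg.matrix_power(N, k), tol=tol)
               for k in range(1, n + 2)]
    beta = {k: 2 * d[k] - d[k - 1] - d[k + 1] for k in range(1, n + 1)}
    return {k: b for k, b in beta.items() if b > 0}

# One chain of length 3 (e3 -> e2 -> e1) and one of length 1 (e4):
N = np.zeros((4, 4))
N[0, 1] = N[1, 2] = 1.0
print(jordan_block_sizes(N))   # {1: 1, 3: 1}
```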

Example 4.1. Consider the matrix

$$A = \begin{pmatrix} -3 & -4 \\ 1 & 1 \end{pmatrix}$$

from Example 3.1. We showed that it has the only eigenvalue $-1$ with corresponding eigenspace $\{zu : z \in \mathbb{C}\}$, $u = (2, -1)$. To find the Jordan normal form we need to find the Jordan chain corresponding to $u$. Since

$$A + I = \begin{pmatrix} -2 & -4 \\ 1 & 2 \end{pmatrix},$$

the equation $(A + I)u' = u$ has the general solution $u' = (-1, 0) + zu$. We can therefore take $\{(2, -1), (-1, 0)\}$ as our Jordan basis. The Jordan normal form is

$$J = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix}.$$

The $3 \times 3$ matrix in Example 3.2 can be analysed in a similar way. The matrix in Example 3.3 requires more work.

Example 4.2. The matrix

$$A = \begin{pmatrix} 3 & 1 & -1 \\ 0 & 2 & 0 \\ 1 & 1 & 1 \end{pmatrix}$$

in Example 3.3 has the only eigenvalue 2 with algebraic multiplicity 3. The geometric multiplicity is 2 since

$$Ax = 2x \Leftrightarrow x_1 + x_2 - x_3 = 0.$$

The eigenspace is e.g. spanned by the vectors $(1, -1, 0)$ and $(1, 0, 1)$. It is immediately clear that a Jordan basis will consist of one Jordan chain of length 2 and one of length 1 and that the Jordan normal form is

$$J = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}.$$

However, we can't just take an arbitrary eigenvector $u$ and solve the equation $(A - 2I)u' = u$, since it's not certain that $u \in \operatorname{range}(A - 2I)$. We therefore begin by computing

$$A - 2I = \begin{pmatrix} 1 & 1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & -1 \end{pmatrix}.$$

Notice that $\operatorname{range}(A - 2I)$ is spanned by the vector $u_1 = (1, 0, 1)$. This is also an eigenvector (note that $(A - 2I)^2 = 0$). Next, we find a solution of the equation $(A - 2I)u_1' = u_1$, e.g. $u_1' = (1, 0, 0)$. Finally, we add an eigenvector which is not parallel to $u_1$, e.g. $u_2 = (0, 1, 1)$. Then $\{u_1, u_1', u_2\}$ is a Jordan basis. The matrix in Example 3.4 can be dealt with in a similar way by first considering $\operatorname{range}(A - 2I)^2$.
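The Jordan basis found in Example 4.2 can be checked with the change-of-basis formula; a small NumPy computation:

```python
import numpy as np

A = np.array([[3.0, 1.0, -1.0],
              [0.0, 2.0,  0.0],
              [1.0, 1.0,  1.0]])
T = np.array([[1.0, 1.0, 0.0],    # columns: u1 = (1,0,1), u1' = (1,0,0),
              [0.0, 0.0, 1.0],    #          u2 = (0,1,1)
              [1.0, 0.0, 1.0]])
print(np.round(np.linalg.inv(T) @ A @ T, 10))   # [[2,1,0],[0,2,0],[0,0,2]]
```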

Exercises

1. Compute $e^A$ by summing the power series when a) $A = \dots$, b) $A = \dots$

2. Compute $e^{tA}$ by diagonalising the matrix, where $A = \dots$

3. Solve the initial-value problem $x'(t) = \dots \, x(t)$, $x(0) = \dots$

4. Show that $e^A$ is invertible with $(e^A)^{-1} = e^{-A}$.

5. Show that $(e^A)^\ast = e^{A^\ast}$.

6. Show that $e^S$ is unitary if $S$ is skew symmetric, that is, $S^\ast = -S$.

7. Show that the following identities (for all $t \in \mathbb{R}$) imply $AB = BA$:
a) $A e^{tB} = e^{tB} A$,
b) $e^{tA} e^{tB} = e^{t(A+B)}$.

8. Let $A_1 = \dots$, $A_2 = \dots$, $A_3 = \dots$ Calculate the generalised eigenspaces of each $A_j$ and find a matrix $T_j$ such that $T_j^{-1} A_j T_j$ is in block form. What is the minimal polynomial of $A_j$?

9. Calculate $e^{tA_j}$ for the matrices $A_j$ in the previous exercise.

10. The matrix $A = \dots$ has the (algebraically) double eigenvalues 1 and 2. Find a matrix $T$ such that $B = T^{-1} A T$ is in block form. What is $B$?

11. Consider the initial value problem

$$\begin{cases} x_1' = x_1 + 3x_2, \\ x_2' = 3x_1 + x_2, \end{cases} \qquad x(0) = x_0.$$

For which initial data $x_0$ does the solution converge to zero as $t \to \infty$?

12. Can you find a general condition on the eigenvalues of $A$ which guarantees that all solutions of the IVP $x' = Ax$, $x(0) = x_0$, converge to zero as $t \to \infty$?

13. The matrices $A_1$ and $A_2$ in Exercise 8 have the same eigenvalues. If you've solved Exercise 9 correctly, you will notice that all solutions of the IVP corresponding to $A_1$ are bounded for $t \geq 0$ while there are unbounded solutions of the IVP corresponding to $A_2$. Explain the difference and try to formulate a general principle.
