Generalized eigenvector - Wikipedia, the free encyclopedia


Generalized eigenvector
From Wikipedia, the free encyclopedia

In linear algebra, for a matrix A there may not always exist a full set of linearly independent eigenvectors that form a complete basis; that is, the matrix may not be diagonalizable. This happens when the algebraic multiplicity of at least one eigenvalue λ is greater than its geometric multiplicity (the nullity of the matrix A − λI, or the dimension of its nullspace). In such cases, a generalized eigenvector of A is a nonzero vector v, associated with λ having algebraic multiplicity k ≥ 1, satisfying

(A − λI)^k v = 0.

The set of all generalized eigenvectors for a given λ, together with the zero vector, forms the generalized eigenspace for λ. Ordinary eigenvectors and eigenspaces are obtained for k = 1.

Contents
1 For defective matrices
2 Examples
2.1 Example 1
2.2 Example 2
3 Other meanings of the term
4 The Nullity of (A − λI)^k
4.1 Introduction
4.2 Existence of Eigenvalues
4.3 Constructive proof of Schur's triangular form
4.4 Nullity Theorem's Proof
5 Motivation of the Procedure
5.1 Introduction
5.2 Notation
5.3 Preliminary Observations
5.4 Recursive Procedure
5.5 Generalized Eigenspace Decomposition
5.6 Powers of a Matrix using generalized eigenvectors
5.6.1 the minimal polynomial of a matrix
5.6.2 using confluent Vandermonde matrices
5.6.3 using difference equations
5.7 Chains of generalized eigenvectors
5.7.1 Differential equations y′ = Ay
5.7.2 Revisiting the powers of a matrix
5.8 Ordinary linear difference equations
6 References

For defective matrices

Generalized eigenvectors are needed to form a complete basis of a defective matrix, which is a matrix having fewer linearly independent eigenvectors than eigenvalues (counting multiplicity). Over an

algebraically closed field, the generalized eigenvectors do allow choosing a complete basis, as follows from the Jordan form of a matrix. In particular, suppose that an eigenvalue λ of a matrix A has an algebraic multiplicity m but fewer corresponding eigenvectors. We form a sequence of m eigenvectors and generalized eigenvectors x₁, x₂, …, x_m that are linearly independent and satisfy

(A − λI)x_k = α_{k,1}x₁ + ⋯ + α_{k,k−1}x_{k−1}

for some coefficients α_{k,1}, …, α_{k,k−1}, for k = 1, …, m. It follows that

(A − λI)^k x_k = 0.

The vectors x₁, …, x_m can always be chosen, but are not uniquely determined by the above relations. If the geometric multiplicity (dimension of the eigenspace) of λ is p, one can choose the first p vectors to be eigenvectors, but the remaining m − p vectors are only generalized eigenvectors.

Examples

Example 1

Suppose

A = [ 1  1 ]
    [ 0  1 ].

Then there is one eigenvalue λ = 1 with an algebraic multiplicity m of 2. There are several ways to see that one generalized eigenvector will be necessary. The easiest is to notice that this matrix is in Jordan normal form but is not diagonal, meaning that this is not a diagonalizable matrix. Since there is one superdiagonal entry, there will be one generalized eigenvector (or one could note that the vector space is of dimension 2, so there can be only one generalized eigenvector). Alternatively, one could compute the dimension of the nullspace of A − I to be p = 1, and thus there are m − p = 1 generalized eigenvectors.

Computing the ordinary eigenvector v₁ = (1, 0)^T is left to the reader (see the eigenvector page for examples). Using this eigenvector, we compute the generalized eigenvector v₂ by solving

(A − λI)v₂ = v₁.

Writing out the values:

[ 0  1 ] [ v₂₁ ]   [ 1 ]
[ 0  0 ] [ v₂₂ ] = [ 0 ].

This simplifies to v₂₂ = 1. The component v₂₁ has no restrictions and thus can be any scalar. So the generalized eigenvector is v₂ = (*, 1)^T, where the * indicates that any value is fine. Usually picking 0 is easiest.
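The arithmetic can be checked numerically; a minimal numpy sketch (the vectors are the ones computed above, with * taken to be 0):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = A - 1.0 * np.eye(2)                    # (A - lambda I) with lambda = 1

v1 = np.array([1.0, 0.0])                  # ordinary eigenvector
v2 = np.array([0.0, 1.0])                  # generalized eigenvector, * = 0

print(N @ v1)                              # [0. 0.]        (A - I) v1 = 0
print(N @ v2)                              # [1. 0.] = v1   (A - I) v2 = v1
print(np.linalg.matrix_power(N, 2) @ v2)   # [0. 0.]        (A - I)^2 v2 = 0
```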

Example 2

The matrix

A = [  1   0  0  0  0 ]
    [  3   1  0  0  0 ]
    [  6   3  2  0  0 ]
    [ 10   6  3  2  0 ]
    [ 15  10  6  3  2 ]

has eigenvalues λ₁ = 1 and λ₂ = 2 with algebraic multiplicities k₁ = 2 and k₂ = 3, but geometric multiplicities p₁ = 1 and p₂ = 1. The generalized eigenspaces of A are the null spaces of (A − I)² and (A − 2I)³, and bases for them are calculated below.
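The calculation can be reproduced with a computer algebra system; a sympy sketch (jordan_form and rank are sympy's functions, and the chains it returns are one valid choice, since generalized eigenvectors are not unique):

```python
import sympy as sp

A = sp.Matrix([[ 1,  0, 0, 0, 0],
               [ 3,  1, 0, 0, 0],
               [ 6,  3, 2, 0, 0],
               [10,  6, 3, 2, 0],
               [15, 10, 6, 3, 2]])

# dimensions of the generalized eigenspaces N((A - lam I)^k)
for lam, k in [(1, 2), (2, 3)]:
    dim = 5 - ((A - lam * sp.eye(5))**k).rank()
    print(lam, dim)              # 1 -> 2 and 2 -> 3: the algebraic multiplicities

M, J = A.jordan_form()           # A = M J M**-1; columns of M are Jordan chains
sp.pprint(J)                     # Jordan blocks J_2(1) and J_3(2)
```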

This results in a basis for each of the generalized eigenspaces of A. Together the two bases span the space of all 5-dimensional column vectors. The Jordan canonical form A = M J M⁻¹ is obtained, where the columns of M are the generalized eigenvectors and

J = [ 1  1  0  0  0 ]
    [ 0  1  0  0  0 ]
    [ 0  0  2  1  0 ]
    [ 0  0  0  2  1 ]
    [ 0  0  0  0  2 ].

Other meanings of the term

The usage of generalized eigenfunction differs from this; it is part of the theory of rigged Hilbert spaces, so that for a linear operator on a function space this may be something different. One can also use the term generalized eigenvector for an eigenvector of the generalized eigenvalue problem A v = λ B v.

The Nullity of (A − λI)^k

Introduction

In this section it is shown that, when λ is an eigenvalue of a matrix A with algebraic multiplicity k, then the null space of (A − λI)^k has dimension k.

Existence of Eigenvalues

Consider an n×n matrix A. The determinant of A has the fundamental properties of being n-linear and alternating. Additionally det(I) = 1, for I the n×n identity matrix. From the determinant's definition it can be seen that for a triangular matrix T = (t_{ij}),

det(T) = t₁₁ t₂₂ ⋯ t_{nn}.

5 5 of 30 18/03/ :00 det(t) = (t ii ). There are three elementary row operations, scalar multiplication, interchange of two rows, and the addition of a scalar multiple of one row to another. Multiplication of a row of A by α results in a new matrix whose determinant is α det(a). Interchange of two rows changes the sign of the determinant, and the addition of a scalar multiple of one row to another does not affect the determinant. The following simple theorem holds, but requires a little proof. Theorem: The equation A x = 0 has a solution x 0, if and only if det(a) = 0. proof: Given the equation A x = 0 attempt to solve it using the elementary row operations of addition of a scalar multiple of one row to another and row interchanges only, until an equivalent equation U x = 0 has been reached, with U an upper triangular matrix. Since det(u) = ±det(a) and det(u) = (u ii ) we have that det(a) = 0 if and only if at least one u ii = 0. The back substitution procedure as performed after Gaussian Elimination will allow placing at least one non zero element in x when there is a u ii = 0. When all u ii 0 back substitution will require x = 0. Theorem: The equation A x = λ x has a solution x 0, if and only if det( λ I A) = 0. proof: The equation A x = λ x is equivalent to ( λ I A) x = 0. Constructive proof of Schur's triangular form The proof of the main result of this section will rely on the similarity transformation as stated and proven next. Theorem: Schur Transformation to Triangular Form Theorem For any n n matrix A, there exists a triangular matrix T and a unitary matrix Q, such that A Q = Q T. (The transformations are not unique, but are related.) Proof: Let λ 1, be an eigenvalue of the n n matrix A and x be an associated eigenvector, so that A x = λ 1 x. Normalize the length of x so that x = 1. Q should have x as its first column and have its columns an orthonormal basis for C n. Now, A Q = Q U 1, with U 1 of the form:

Let the induction hypothesis be that the theorem holds for all (n−1)×(n−1) matrices; from the construction so far, it holds for n = 2. Choose a unitary Q₀ so that U₀ Q₀ = Q₀ U₂, with U₂ of the upper triangular form produced by the hypothesis. Define Q₁ by

Q₁ = [ 1  0  ]
     [ 0  Q₀ ].

Now:

U₁ Q₁ = [ λ₁  r Q₀   ]  = [ λ₁  r Q₀  ]  = [ 1  0  ] [ λ₁  r Q₀ ].
        [ 0   U₀ Q₀  ]    [ 0   Q₀ U₂ ]    [ 0  Q₀ ] [ 0   U₂  ]

Summarizing, U₁ Q₁ = Q₁ U₃ with

U₃ = [ λ₁  r Q₀ ]
     [ 0   U₂  ],

which is upper triangular. Now, A Q = Q U₁ and U₁ Q₁ = Q₁ U₃, where Q and Q₁ are unitary and U₃ is upper triangular. Thus A Q Q₁ = Q Q₁ U₃. Since the product of two unitary matrices is unitary, the proof is done.

Nullity Theorem's Proof

From A Q = Q U one gets A = Q U Q*. It is easy to see that (xI − A) = Q(xI − U)Q* and hence det(xI − A) = det(xI − U). So the characteristic polynomial of A is the same as that of U and is given by p(x) = (x − λ₁)(x − λ₂)⋯(x − λₙ). (Q is unitary, so Q⁻¹ = Q*.)

Observe that the construction used in the proof above allows choosing any order for the eigenvalues of A that will end up as the diagonal elements of the upper triangular matrix U obtained. The algebraic multiplicity of an eigenvalue is the count of the number of times it occurs on the diagonal.
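The decomposition is available in standard libraries; a scipy sketch of A Q = Q T, with scipy's complex Schur routine standing in for the construction in the proof:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

T, Q = schur(A, output='complex')           # A = Q T Q*, Q unitary, T triangular
print(np.allclose(A, Q @ T @ Q.conj().T))   # True
print(np.allclose(np.tril(T, -1), 0))       # True: T is upper triangular
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.diag(T))))     # the eigenvalues sit on the diagonal
```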

Now it can be supposed, for a given eigenvalue λ of algebraic multiplicity k, that U has been contrived so that λ occurs as the first k diagonal elements. Place (U − λI) in block form as below; the lower-left block has only elements of zero:

U − λI = [ B  C ]
         [ 0  T ],

where B is the k×k triangular block with all elements on or below the diagonal equal to 0, T is the (n−k)×(n−k) upper triangular block with diagonal elements βᵢ = λᵢ − λ ≠ 0 for i = k+1, …, n, and C is a k×(n−k) block. It is easy to verify that

(U − λI)^k = [ B^k  C_k ]
             [ 0    T^k ]

for some k×(n−k) block C_k.

Now, almost trivially, B^k has only elements of 0 and T^k is triangular with all non-zero diagonal elements. Just observe that if a column vector v = (v₁, v₂, …, v_k)^T is multiplied by B, then after the first multiplication the last (k-th) component is zero; after the second multiplication the second-to-last ((k−1)-th) component is zero as well, and so on. The conclusion that (U − λI)^k has rank n−k and nullity k follows. It is only left to observe, since (A − λI)^k = Q(U − λI)^k Q*, that (A − λI)^k has rank n−k and nullity k also: a unitary, or any other, similarity transformation by a non-singular matrix preserves rank. The main result is now proven.

Theorem: If λ is an eigenvalue of a matrix A with algebraic multiplicity k, then the null space of (A − λI)^k has dimension k.

An important observation is that raising the power of (A − λI) above k will not affect the rank and nullity any further.
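A numerical check of the theorem on the matrix of Example 2, where λ = 2 has algebraic multiplicity k = 3; the nullity climbs to k and then freezes:

```python
import numpy as np

A = np.array([[ 1,  0, 0, 0, 0],
              [ 3,  1, 0, 0, 0],
              [ 6,  3, 2, 0, 0],
              [10,  6, 3, 2, 0],
              [15, 10, 6, 3, 2]], dtype=float)
N = A - 2.0 * np.eye(5)                     # (A - lambda I) with lambda = 2

for m in range(1, 6):
    nullity = 5 - np.linalg.matrix_rank(np.linalg.matrix_power(N, m))
    print(m, nullity)                       # 1, 2, 3, 3, 3
```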

Motivation of the Procedure

Introduction

In the preceding section it was shown that, when an n×n matrix A has an eigenvalue λ of algebraic multiplicity k, then the null space of (A − λI)^k has dimension k. The generalized eigenspace of A, λ will be defined to be the null space of (A − λI)^k. Many authors prefer to call this the kernel of (A − λI)^k. Notice that if an n×n matrix has eigenvalues λ₁, λ₂, …, λ_r with algebraic multiplicities k₁, k₂, …, k_r, then k₁ + k₂ + ⋯ + k_r = n. It will turn out that any two generalized eigenspaces of A, associated with different eigenvalues, have a trivial intersection of {0}. From this it follows that the generalized eigenspaces of A combined span Cⁿ, the set of all n-dimensional column vectors of complex numbers. The motivation for using a recursive procedure, starting with the eigenvectors of A and solving for a basis of the generalized eigenspace of A, λ using the matrix (A − λI), will be expounded on.

Notation

Some notation is introduced to help abbreviate statements.
Cⁿ is the vector space of all n-dimensional column vectors of complex numbers.
The null space of A is N(A) = {x : A x = 0}.
V ⊆ W will mean V is a subset of W; V ⊊ W will mean V is a proper subset of W.
A(V) = {y : y = A x, for some x ∈ V}.
W \ V will mean {x : x ∈ W and x ∉ V}.
The range of A is A(Cⁿ) and will be denoted by R(A).
dim(V) will stand for the dimension of V.
{0} will stand for the trivial subspace of Cⁿ.

Preliminary Observations

Throughout this discussion it is assumed that A is an n×n matrix of complex numbers. Since A^m x = A(A^{m−1}x), the inclusions N(A) ⊆ N(A²) ⊆ ⋯ ⊆ N(A^{m−1}) ⊆ N(A^m) are obvious. Since A^m x = A^{m−1}(A x), the inclusions R(A) ⊇ R(A²) ⊇ ⋯ ⊇ R(A^{m−1}) ⊇ R(A^m) are clear too.

Theorem: When the more trivial case N(A²) = N(A) does not hold, there exists k ≥ 2 such that the inclusions
N(A) ⊊ N(A²) ⊊ ⋯ ⊊ N(A^{k−1}) ⊊ N(A^k) = N(A^{k+1}) = ⋯,
R(A) ⊋ R(A²) ⊋ ⋯ ⊋ R(A^{k−1}) ⊋ R(A^k) = R(A^{k+1}) = ⋯
hold, proper up to the k-th power.

proof: 0 ≤ dim(R(A^{m+1})) ≤ dim(R(A^m)), so eventually dim(R(A^{m+1})) = dim(R(A^m)) for some m. From the inclusion R(A^{m+1}) ⊆ R(A^m) it is seen that a basis for R(A^{m+1}) is a basis for R(A^m) too; that is, R(A^{m+1}) = R(A^m). Since R(A^{m+1}) = A(R(A^m)), when R(A^{m+1}) = R(A^m) it will be R(A^{m+2}) = A(R(A^{m+1})) = A(R(A^m)) = R(A^{m+1}). By the rank-nullity theorem, it will also be the case that dim(N(A^{m+2})) = dim(N(A^{m+1})) = dim(N(A^m)) for the same m. From the inclusions N(A^m) ⊆ N(A^{m+1}) ⊆ N(A^{m+2}), it is clear that a basis for N(A^{m+2}) is also a basis for N(A^{m+1}) and N(A^m). So N(A^{m+2}) = N(A^{m+1}) = N(A^m). Now k is the first m for which this happens.

Since certain expressions will occur many times in the following, some more notation is introduced:
A_{λ,k} = (A − λI)^k,
N_{λ,k} = N((A − λI)^k) = N(A_{λ,k}),
R_{λ,k} = R((A − λI)^k) = R(A_{λ,k}).

From the inclusions N_{λ,1} ⊆ N_{λ,2} ⊆ ⋯ ⊆ N_{λ,k−1} ⊆ N_{λ,k} = N_{λ,k+1} = ⋯ it follows that

N_{λ,k} \ {0} = ∪_{m=1}^{k} (N_{λ,m} \ N_{λ,m−1}), with N_{λ,0} = {0}.

When λ is an eigenvalue of A in the statement above, k will not exceed the algebraic multiplicity of λ, and can be less. In fact k = 1 exactly when there is a full set of linearly independent eigenvectors for λ. Consider now the case k ≥ 2. Now, x ∈ N_{λ,m} \ N_{λ,m−1} if and only if A_{λ,m}x = 0 and A_{λ,m−1}x ≠ 0. Make the observation that A_{λ,m}x = 0 and A_{λ,m−1}x ≠ 0 hold if and only if A_{λ,m−1}(A_{λ,1}x) = 0 and A_{λ,m−2}(A_{λ,1}x) ≠ 0. So x ∈ N_{λ,m} \ N_{λ,m−1} if and only if A_{λ,1}x ∈ N_{λ,m−1} \ N_{λ,m−2}.
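The stabilizing chains can be watched numerically; a sketch on an assumed 4×4 nilpotent Jordan block, for which k = 4:

```python
import numpy as np

B = np.diag(np.ones(3), k=1)          # single 4x4 Jordan block with eigenvalue 0
for m in range(1, 6):
    rank = np.linalg.matrix_rank(np.linalg.matrix_power(B, m))
    print(m, 4 - rank, rank)          # nullity 1,2,3,4,4  /  rank 3,2,1,0,0
```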

Recursive Procedure

Consider a matrix A with an eigenvalue λ of algebraic multiplicity k ≥ 2, such that there are not k linearly independent eigenvectors associated with λ. It is desired to extend the eigenvectors to a basis for N_{λ,k}, that is, a basis for the generalized eigenvectors associated with λ. There exists some 2 ≤ r ≤ k such that
N_{λ,1} ⊊ N_{λ,2} ⊊ ⋯ ⊊ N_{λ,r−1} ⊊ N_{λ,r} = N_{λ,r+1} = ⋯,
N_{λ,r} \ {0} = ∪_{m=1}^{r} (N_{λ,m} \ N_{λ,m−1}), with N_{λ,0} = {0}.

The eigenvectors lie in N_{λ,1} \ {0}, so let x₁, …, x_{r₁} be a basis for N_{λ,1}. Note that each N_{λ,m} is a subspace, and so a basis for N_{λ,m−1} can be extended to a basis for N_{λ,m}. Because of this we can expect to find some r₂ = dim(N_{λ,2}) − dim(N_{λ,1}) linearly independent vectors x_{r₁+1}, …, x_{r₁+r₂} such that x₁, …, x_{r₁}, x_{r₁+1}, …, x_{r₁+r₂} is a basis for N_{λ,2}.

Now, x ∈ N_{λ,2} \ N_{λ,1} if and only if A_{λ,1}x ∈ N_{λ,1} \ {0}. Thus we can expect that for each x ∈ {x_{r₁+1}, …, x_{r₁+r₂}},
A_{λ,1}x = α₁x₁ + ⋯ + α_{r₁}x_{r₁}, for some α₁, …, α_{r₁} depending on x.

Suppose we have reached the stage in the construction where m−1 sets
{x₁, …, x_{r₁}}, {x_{r₁+1}, …, x_{r₁+r₂}}, …, {x_{r₁+⋯+r_{m−2}+1}, …, x_{r₁+⋯+r_{m−1}}},
such that x₁, …, x_{r₁+⋯+r_{m−1}} is a basis for N_{λ,m−1}, have been found. We can expect to find some r_m = dim(N_{λ,m}) − dim(N_{λ,m−1}) linearly independent vectors x_{r₁+⋯+r_{m−1}+1}, …, x_{r₁+⋯+r_m} such that x₁, …, x_{r₁+⋯+r_m} is a basis for N_{λ,m}.

Again, x ∈ N_{λ,m} \ N_{λ,m−1} if and only if A_{λ,1}x ∈ N_{λ,m−1} \ N_{λ,m−2}. Thus we can expect that for each x ∈ {x_{r₁+⋯+r_{m−1}+1}, …, x_{r₁+⋯+r_m}},
A_{λ,1}x = α₁x₁ + ⋯ + α_{r₁+⋯+r_{m−1}}x_{r₁+⋯+r_{m−1}}, for some coefficients α₁, …, α_{r₁+⋯+r_{m−1}} depending on x.
Some of the α_{r₁+⋯+r_{m−2}+1}, …, α_{r₁+⋯+r_{m−1}} will be non-zero, since A_{λ,1}x must lie in N_{λ,m−1} \ N_{λ,m−2}. The procedure is continued until m = r.

The αᵢ are not truly arbitrary and must be chosen accordingly, so that sums α₁x₁ + α₂x₂ + ⋯ are in the range of A_{λ,1}.

Generalized Eigenspace Decomposition

As was stated in the Introduction, if an n×n matrix has eigenvalues λ₁, λ₂, …, λ_r with algebraic multiplicities k₁, k₂, …, k_r, then k₁ + k₂ + ⋯ + k_r = n. When V₁ and V₂ are two subspaces satisfying V₁ ∩ V₂ = {0}, their direct sum is defined and notated by V₁ ⊕ V₂ = {v₁ + v₂ : v₁ ∈ V₁ and v₂ ∈ V₂}. V₁ ⊕ V₂ is also a subspace and dim(V₁ ⊕ V₂) = dim(V₁) + dim(V₂). Since dim(N_{λᵢ,kᵢ}) = kᵢ for i = 1, 2, …, r, after it is shown that N_{λᵢ,kᵢ} ∩ N_{λⱼ,kⱼ} = {0} for i ≠ j, we have the main result.

Theorem (Generalized Eigenspace Decomposition Theorem):
Cⁿ = N_{λ₁,k₁} ⊕ N_{λ₂,k₂} ⊕ ⋯ ⊕ N_{λ_r,k_r}.

This follows easily after we prove the theorem below.

Theorem: Let λ be an eigenvalue of A and β ≠ λ. Then A_{β,r}(N_{λ,m} \ N_{λ,m−1}) = N_{λ,m} \ N_{λ,m−1}, for any positive integers m and r.

proof: If x ∈ N_{λ,1} \ {0}, then A_{λ,1}x = (A − λI)x = 0, so A x = λx and A_{β,1}x = (A − βI)x = (λ − β)x. So A_{β,1}x ∈ N_{λ,1} \ {0} and A_{β,1}((λ − β)⁻¹x) = x. It holds that A_{β,1}(N_{λ,1} \ {0}) = N_{λ,1} \ {0}.

Now, x ∈ N_{λ,m} \ N_{λ,m−1} if and only if A_{λ,m}x = (A − λI)A_{λ,m−1}x = 0 and A_{λ,m−1}x ≠ 0. In the case x ∈ N_{λ,m} \ N_{λ,m−1}, A_{λ,m−1}x ∈ N_{λ,1} \ {0} and A_{β,1}A_{λ,m−1}x = (λ − β)A_{λ,m−1}x ≠ 0. The operators A_{β,1} and A_{λ,m−1} commute. Thus A_{λ,m}(A_{β,1}x) = 0 and A_{λ,m−1}(A_{β,1}x) ≠ 0, which means A_{β,1}x ∈ N_{λ,m} \ N_{λ,m−1}.

Now, let our induction hypothesis be A_{β,1}(N_{λ,m} \ N_{λ,m−1}) = N_{λ,m} \ N_{λ,m−1}. The relation A_{β,1}x = (λ − β)x + A_{λ,1}x holds.

For y ∈ N_{λ,m+1} \ N_{λ,m}, let x = (λ − β)⁻¹y + z. Then
A_{β,1}x = y + (λ − β)⁻¹A_{λ,1}y + (λ − β)z + A_{λ,1}z = y + (λ − β)⁻¹A_{λ,1}y + A_{β,1}z.
Now, A_{λ,1}y ∈ N_{λ,m} \ N_{λ,m−1} and, by the induction hypothesis, there exists z ∈ N_{λ,m} \ N_{λ,m−1} that solves A_{β,1}z = −(λ − β)⁻¹A_{λ,1}y. It follows that x ∈ N_{λ,m+1} \ N_{λ,m} and solves A_{β,1}x = y. So A_{β,1}(N_{λ,m+1} \ N_{λ,m}) = N_{λ,m+1} \ N_{λ,m}. Repeatedly applying A_{β,r} = A_{β,1}A_{β,r−1} finishes the proof.

In fact, from the theorem just proved, for i ≠ j, A_{λᵢ,kᵢ}(N_{λⱼ,kⱼ}) = N_{λⱼ,kⱼ}. Now, suppose that N_{λᵢ,kᵢ} ∩ N_{λⱼ,kⱼ} ≠ {0} for some i ≠ j. Choose x ∈ N_{λᵢ,kᵢ} ∩ N_{λⱼ,kⱼ}, x ≠ 0. Since x ∈ N_{λᵢ,kᵢ}, it follows that A_{λᵢ,kᵢ}x = 0. Since x ∈ N_{λⱼ,kⱼ}, it follows that A_{λᵢ,kᵢ}x ≠ 0, because A_{λᵢ,kᵢ} preserves dimension on N_{λⱼ,kⱼ}. So it must be that N_{λᵢ,kᵢ} ∩ N_{λⱼ,kⱼ} = {0} for i ≠ j. This concludes the proof of the Generalized Eigenspace Decomposition Theorem.

Powers of a Matrix using generalized eigenvectors

Assume A is an n×n matrix with eigenvalues λ₁, λ₂, …, λ_r of algebraic multiplicities k₁, k₂, …, k_r. For notational convenience A_{λ,0} = I. Note that A_{β,1} = (λ − β)I + A_{λ,1} and apply the binomial theorem (C(s, m) denotes the binomial coefficient):

A_{β,s} = ((λ − β)I + A_{λ,1})^s = Σ_{m=0}^{s} C(s, m)(λ − β)^{s−m} A_{λ,m}.

When λ is an eigenvalue of algebraic multiplicity k and x ∈ N_{λ,k}, then A_{λ,m}x = 0 for m ≥ k, so in this case:

A_{β,s}x = Σ_{m=0}^{min(s,k−1)} C(s, m)(λ − β)^{s−m} A_{λ,m}x.

Since Cⁿ = N_{λ₁,k₁} ⊕ N_{λ₂,k₂} ⊕ ⋯ ⊕ N_{λ_r,k_r}, any x in Cⁿ can be expressed as x = x₁ + x₂ + ⋯ + x_r, with each xᵢ ∈ N_{λᵢ,kᵢ}. Hence:

A_{β,s}x = Σ_{i=1}^{r} Σ_{m=0}^{min(s,kᵢ−1)} C(s, m)(λᵢ − β)^{s−m} A_{λᵢ,m} xᵢ.

The columns of A_{β,s} are obtained by letting x vary across the standard basis vectors. The case A_{0,s} is the power A^s of A.
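A numerical sketch of the formula, specialized to β = 0 so that it yields A^s x; the matrix of Example 2 and the generalized eigenvector x = e₃, which satisfies (A − 2I)³x = 0, are reused as assumptions:

```python
import numpy as np
from math import comb

A = np.array([[ 1,  0, 0, 0, 0],
              [ 3,  1, 0, 0, 0],
              [ 6,  3, 2, 0, 0],
              [10,  6, 3, 2, 0],
              [15, 10, 6, 3, 2]], dtype=float)
lam, k, s = 2.0, 3, 8
x = np.array([0.0, 0.0, 1.0, 0.0, 0.0])     # lies in N((A - 2I)^3)
N = A - lam * np.eye(5)

lhs = np.linalg.matrix_power(A, s) @ x
rhs = sum(comb(s, m) * lam**(s - m) * (np.linalg.matrix_power(N, m) @ x)
          for m in range(min(s, k - 1) + 1))
print(np.allclose(lhs, rhs))                # True
```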

the minimal polynomial of a matrix

Assume A is an n×n matrix with eigenvalues λ₁, λ₂, …, λ_r of algebraic multiplicities k₁, k₂, …, k_r. For each i define α(λᵢ), the null index of λᵢ, to be the smallest positive integer α such that N_{λᵢ,α} = N_{λᵢ,kᵢ}. It is often the case that α(λᵢ) < kᵢ. Then

p(x) = Π_{i=1}^{r} (x − λᵢ)^{α(λᵢ)}

is the minimal polynomial for A. To see this, note that p(A) = Π A_{λᵢ,α(λᵢ)} and the factors can be commuted in any order. So p(A)(N_{λⱼ,kⱼ}) = {0}, because A_{λⱼ,α(λⱼ)}(N_{λⱼ,kⱼ}) = {0}. Being that Cⁿ = N_{λ₁,k₁} ⊕ N_{λ₂,k₂} ⊕ ⋯ ⊕ N_{λ_r,k_r}, it is clear that p(A) = 0. Now p(x) cannot be of less degree, because A_{β,1}(N_{λⱼ,kⱼ}) = N_{λⱼ,kⱼ} when β ≠ λⱼ, and so A_{λⱼ,α(λⱼ)} must be a factor of p(A), for each j.
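The null indices, and with them the minimal polynomial, can be computed directly from ranks; a sympy sketch (for the Example 2 matrix each eigenvalue has a single Jordan block, so the minimal polynomial coincides with the characteristic polynomial):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[ 1,  0, 0, 0, 0],
               [ 3,  1, 0, 0, 0],
               [ 6,  3, 2, 0, 0],
               [10,  6, 3, 2, 0],
               [15, 10, 6, 3, 2]])
n = A.rows

p = sp.Integer(1)
for lam in A.eigenvals():                        # eigenvalues 1 and 2
    nullities = [n - ((A - lam*sp.eye(n))**a).rank() for a in range(1, n + 1)]
    alpha = 1 + nullities.index(max(nullities))  # first power reaching the maximum
    p *= (x - lam)**alpha
print(p)                                         # (x - 1)**2 * (x - 2)**3

PA = sp.zeros(n, n)                              # evaluate p(A) by Horner's rule
for c in sp.Poly(p, x).all_coeffs():
    PA = PA*A + c*sp.eye(n)
print(PA == sp.zeros(n, n))                      # True: p(A) = 0
```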

16 16 of 30 18/03/ :00 Then p(a) = 0 and A n = (a 0 I + a 1 A + a 2 A a n-1 A n-1 ). So A n+m = b m, 0 I + b m, 1 A + b m, 2 A b m, n-1 A n-1, where the b m, 0, b m, 1, b m, 2,..., b m, n-1, satisfy the recurrence relation b m, 0 = a 0 b m-1, n-1, b m, 1 = b m-1, 0 a 1 b m-1, n-1, b m, 2 = b m-1, 1 a 2 b m-1, n-1,..., b m, n-1 = b m-1, n-2 a n-1 b m-1, n-1 with b 0, 0 = b 0, 1 = b 0, 2 =... = b 0, n-2 = 0, and b 0, n-1 = 1. This alone will reduce the number of multiplications needed to calculate a higher power of A by a factor of n 2, as compared to simply multiplying A n+m by A. In fact the b m, 0, b m, 1, b m, 2,..., b m, n-1, can be calculated by a formula. Consider first when A has distinct eigenvalues λ 1, λ 2,..., λ n. Since p(λ i ) = 0, for each i, the λ i satisfy the recurrence relation also. So: The matrix V in the equation is the well studied Vandermonde's, for which formulas for it's determinant and inverse are known. In the case that λ 2 = λ 1, consider instead when λ 1 is near λ 2, and subtract row 1 from row 2, which does not affect the determinant.

After dividing the second row by (λ₂ − λ₁), the determinant will be affected by the removal of this factor and still be non-zero. Taking the limit as λ₁ → λ₂, the new system has the second row differentiated: its entries become 0, 1, 2λ₂, …, (n−1)λ₂^{n−2}, with right-hand side (n−1+m)λ₂^{n−2+m}, and the new system remains non-singular.

In the case that λ₃ = λ₂ also, consider like before when λ₂ is near λ₃, and subtract row 1 from row 3, which does not affect the determinant. Next divide row three by (λ₃ − λ₂), then subtract row 2 from the new row 3, and follow by dividing the resulting row 3 by (λ₃ − λ₂) again. This will affect the determinant by removing a factor of (λ₃ − λ₂)². Each element of row 3 is then, in the limit, of the form ½f″(λ₃), where f(λ) = λ^s is the function defining column s.

The effect is to differentiate twice and multiply by one half. If the multiplicity of the eigenvalue were even higher, then the next row would be differentiated three times and multiplied by 1/3!. The progression is (1/s!)f^{(s)}, with the constant coming from the coefficients of the derivatives in the Taylor expansion. This is done for each eigenvalue of algebraic multiplicity greater than 1.

example

Suppose a matrix A has characteristic polynomial

p(x) = (x − 1)²(x − 2)³ = x⁵ − 8x⁴ + 25x³ − 38x² + 28x − 8.

The b_{m,0}, b_{m,1}, b_{m,2}, b_{m,3}, b_{m,4}, for which

A^{4+m} = b_{m,0}I + b_{m,1}A + b_{m,2}A² + b_{m,3}A³ + b_{m,4}A⁴,

satisfy the confluent Vandermonde system

[ 1  1  1   1   1  ] [ b_{m,0} ]   [ 1                    ]
[ 0  1  2   3   4  ] [ b_{m,1} ]   [ 4+m                  ]
[ 1  2  4   8  16  ] [ b_{m,2} ] = [ 2^{4+m}              ]
[ 0  1  4  12  32  ] [ b_{m,3} ]   [ (4+m)2^{3+m}         ]
[ 0  0  1   6  24  ] [ b_{m,4} ]   [ ½(4+m)(3+m)2^{2+m}   ],

where the first two rows correspond to λ = 1 (value and first derivative) and the last three to λ = 2 (value, first derivative, and half the second derivative).

using difference equations

Returning to the recurrence relation for b_{m,0}, b_{m,1}, …, b_{m,n−1},

b_{m,0} = −a₀b_{m−1,n−1}, b_{m,1} = b_{m−1,0} − a₁b_{m−1,n−1}, b_{m,2} = b_{m−1,1} − a₂b_{m−1,n−1}, …, b_{m,n−1} = b_{m−1,n−2} − a_{n−1}b_{m−1,n−1},

with b_{0,0} = b_{0,1} = ⋯ = b_{0,n−2} = 0 and b_{0,n−1} = 1. Upon substituting the first relation into the second, b_{m,1} = −a₀b_{m−2,n−1} − a₁b_{m−1,n−1}, and now this one into the next, b_{m,2} = b_{m−1,1} − a₂b_{m−1,n−1} = −a₀b_{m−3,n−1} − a₁b_{m−2,n−1} − a₂b_{m−1,n−1}, and so on, the following difference equation is found:

b_{m,n−1} = −a₀b_{m−n,n−1} − a₁b_{m−n+1,n−1} − a₂b_{m−n+2,n−1} − ⋯ − a_{n−2}b_{m−2,n−1} − a_{n−1}b_{m−1,n−1}.

Equivalently, the sequence c defined by this difference equation with c₀ = c₁ = ⋯ = c_{n−2} = 0 and c_{n−1} = 1 satisfies c_{n−1+m} = b_{m,n−1}. See the subsection on linear difference equations for more explanation.
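A numpy sketch of the recurrence for the example polynomial; the companion matrix of p is an assumed stand-in for any matrix with this characteristic polynomial:

```python
import numpy as np

# p(x) = (x-1)^2 (x-2)^3, so (a0, ..., a4) = (-8, 28, -38, 25, -8)
a = np.array([-8.0, 28.0, -38.0, 25.0, -8.0])
n, m_steps = 5, 8

b = np.zeros(n); b[n - 1] = 1.0            # m = 0 encodes A^(n-1)
for _ in range(m_steps):                   # advance to A^(n-1+8) = A^12
    top = b[n - 1]
    b[1:] = b[:-1] - a[1:]*top
    b[0] = -a[0]*top

C = np.zeros((n, n))                       # companion matrix: charpoly = p
C[1:, :n - 1] = np.eye(n - 1)
C[:, n - 1] = -a
lhs = np.linalg.matrix_power(C, n - 1 + m_steps)
rhs = sum(b[j] * np.linalg.matrix_power(C, j) for j in range(n))
print(np.allclose(lhs, rhs))               # True
```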

Chains of generalized eigenvectors

Some notation and results from previous sections are restated.
A is an n×n matrix of complex numbers.
A_{λ,k} = (A − λI)^k.
N_{λ,k} = N((A − λI)^k) = N(A_{λ,k}).
For V₁ ∩ V₂ = {0}, V₁ ⊕ V₂ = {v₁ + v₂ : v₁ ∈ V₁ and v₂ ∈ V₂}.

Assume A has eigenvalues λ₁, λ₂, …, λ_r of algebraic multiplicities k₁, k₂, …, k_r. For each i define α(λᵢ), the null index of λᵢ, to be the smallest positive integer α such that N_{λᵢ,α} = N_{λᵢ,kᵢ}. It is always the case that α(λᵢ) ≤ kᵢ. When α(λ) ≥ 2:
N_{λ,1} ⊊ N_{λ,2} ⊊ ⋯ ⊊ N_{λ,α−1} ⊊ N_{λ,α} = N_{λ,α+1} = ⋯,
N_{λ,α} \ {0} = ∪_{m=1}^{α} (N_{λ,m} \ N_{λ,m−1}), with N_{λ,0} = {0},
x ∈ N_{λ,m} \ N_{λ,m−1} if and only if A_{λ,1}x ∈ N_{λ,m−1} \ N_{λ,m−2}.

Define a chain of generalized eigenvectors to be a set {x₁, x₂, …, x_m} such that x₁ ∈ N_{λ,m} \ N_{λ,m−1} and x_{i+1} = A_{λ,1}xᵢ. Then x_m ≠ 0 and A_{λ,1}x_m = 0. When x₁ ∈ N_{λ,1} \ {0}, {x₁} can be, for the sake of not requiring extra terminology, considered trivially a chain. When a disjoint collection of chains combined forms a basis set for N_{λ,α(λ)}, they are often referred to as Jordan chains, and they are the vectors used for the columns of a transformation matrix in the Jordan canonical form. When a disjoint collection of chains that combined form a basis set is needed satisfying β_{i+1}x_{i+1} = A_{λ,1}xᵢ for some scalars βᵢ, chains as already defined can be scaled for this purpose. What will be proven here is that such a disjoint collection of chains can always be constructed.

Before the proof is started, recall a few facts about direct sums. When the notation V₁ ⊕ V₂ is used, it is assumed V₁ ∩ V₂ = {0}. For x = v₁ + v₂ with v₁ ∈ V₁ and v₂ ∈ V₂, x = 0 if and only if v₁ = v₂ = 0.

In the discussion below, δᵢ = dim(N_{λ,i}) − dim(N_{λ,i−1}), with δ₁ = dim(N_{λ,1}).

First consider when N_{λ,2} \ N_{λ,1} ≠ {0}. Then a basis for N_{λ,1} can be extended to a basis for N_{λ,2}. If δ₂ = 1, then there exists x₁ ∈ N_{λ,2} \ N_{λ,1} such that N_{λ,2} = N_{λ,1} ⊕ span{x₁}. Let x₂ = A_{λ,1}x₁. Then x₂ ∈ N_{λ,1} \ {0}, with x₁ and x₂ linearly independent. If dim(N_{λ,2}) = 2, then since {x₁, x₂} is a chain we are through. Otherwise x₁, x₂ can be extended to a basis x₁, x₂, …, x_{δ₁+1} for N_{λ,2}. The sets {x₁, x₂}, {x₃}, …, {x_{δ₁+1}} form a disjoint collection of chains. In the case that δ₂ > 1, then there exist

linearly independent x₁, x₂, …, x_{δ₂} ∈ N_{λ,2} \ N_{λ,1} such that N_{λ,2} = N_{λ,1} ⊕ span{x₁, x₂, …, x_{δ₂}}. Let yᵢ = A_{λ,1}xᵢ. Then yᵢ ∈ N_{λ,1} \ {0} for i = 1, 2, …, δ₂. To see the y₁, y₂, …, y_{δ₂} are linearly independent, assume that for some β₁, β₂, …, β_{δ₂},
β₁y₁ + β₂y₂ + ⋯ + β_{δ₂}y_{δ₂} = 0.
Then for x = β₁x₁ + β₂x₂ + ⋯ + β_{δ₂}x_{δ₂}, x ∈ N_{λ,1} and x ∈ span{x₁, x₂, …, x_{δ₂}}, which implies x = 0 and β₁ = β₂ = ⋯ = β_{δ₂} = 0. Since span{y₁, y₂, …, y_{δ₂}} ⊆ N_{λ,1}, the vectors x₁, …, x_{δ₂}, y₁, …, y_{δ₂} are a linearly independent set. If δ₂ = δ₁, then the sets {x₁, y₁}, {x₂, y₂}, …, {x_{δ₂}, y_{δ₂}} form a disjoint collection of chains that, when combined, are a basis set for N_{λ,2}. If δ₁ > δ₂, then x₁, …, x_{δ₂}, y₁, …, y_{δ₂} can be extended to a basis for N_{λ,2} by some vectors x_{δ₂+1}, …, x_{δ₁} in N_{λ,1}, so that {x₁, y₁}, {x₂, y₂}, …, {x_{δ₂}, y_{δ₂}}, {x_{δ₂+1}}, …, {x_{δ₁}} forms a disjoint collection of chains.

To reduce redundancy, in the next paragraphs, when δ = 1 the notation x₁, x₂, …, x_δ will be understood simply to mean just x₁, and when δ = 2 to mean x₁, x₂.

So far it has been shown that, if linearly independent x₁, x₂, …, x_{δ₂} ∈ N_{λ,2} \ N_{λ,1} are chosen such that N_{λ,2} = N_{λ,1} ⊕ span{x₁, x₂, …, x_{δ₂}}, then there exists a disjoint collection of chains with each of the x₁, x₂, …, x_{δ₂} being the first member, or top, of one of the chains. Furthermore, this collection of vectors, when combined, forms a basis for N_{λ,2}.

Now, let the induction hypothesis be that, if linearly independent x₁, x₂, …, x_{δ_m} ∈ N_{λ,m} \ N_{λ,m−1} are chosen such that N_{λ,m} = N_{λ,m−1} ⊕ span{x₁, x₂, …, x_{δ_m}}, then there exists a disjoint collection of chains with each of the x₁, x₂, …, x_{δ_m} being the first member or top of one of the chains; furthermore, this collection of vectors, when combined, forms a basis for N_{λ,m}.

Consider m < α(λ). A basis for N_{λ,m} can always be extended to a basis for N_{λ,m+1}. So linearly independent x₁, x₂, …, x_{δ_{m+1}} ∈ N_{λ,m+1} \ N_{λ,m}, such that N_{λ,m+1} = N_{λ,m} ⊕ span{x₁, x₂, …, x_{δ_{m+1}}}, can be chosen. Let yᵢ = A_{λ,1}xᵢ. Then yᵢ ∈ N_{λ,m} \ N_{λ,m−1} for i = 1, 2, …, δ_{m+1}. To see the y₁, y₂, …, y_{δ_{m+1}} are linearly independent, assume that for some β₁, β₂, …, β_{δ_{m+1}},

β₁y₁ + β₂y₂ + ⋯ + β_{δ_{m+1}}y_{δ_{m+1}} = 0.
Then for x = β₁x₁ + β₂x₂ + ⋯ + β_{δ_{m+1}}x_{δ_{m+1}}, x ∈ N_{λ,1} and x ∈ span{x₁, x₂, …, x_{δ_{m+1}}}, which implies x = 0 and β₁ = β₂ = ⋯ = β_{δ_{m+1}} = 0. In addition, span{y₁, y₂, …, y_{δ_{m+1}}} ∩ N_{λ,m−1} = {0}. To see this, assume that for some β₁, β₂, …, β_{δ_{m+1}},
β₁y₁ + β₂y₂ + ⋯ + β_{δ_{m+1}}y_{δ_{m+1}} ∈ N_{λ,m−1}.
Then for x = β₁x₁ + β₂x₂ + ⋯ + β_{δ_{m+1}}x_{δ_{m+1}}, x ∈ N_{λ,m} and x ∈ span{x₁, x₂, …, x_{δ_{m+1}}}, which implies x = 0 and β₁ = β₂ = ⋯ = β_{δ_{m+1}} = 0.

The proof is nearly done. At this point suppose that b₁, b₂, …, b_{d_{m−1}} is any basis for N_{λ,m−1}. Then B = span{b₁, b₂, …, b_{d_{m−1}}} ⊕ span{y₁, y₂, …, y_{δ_{m+1}}} is a subspace of N_{λ,m}. If B ≠ N_{λ,m}, then b₁, …, b_{d_{m−1}}, y₁, …, y_{δ_{m+1}} can be extended to a basis for N_{λ,m} by some set of vectors z₁, z₂, …, z_{δ_m−δ_{m+1}}, in which case N_{λ,m} = N_{λ,m−1} ⊕ span{y₁, …, y_{δ_{m+1}}} ⊕ span{z₁, …, z_{δ_m−δ_{m+1}}}.

If δ_m = δ_{m+1}, then N_{λ,m} = N_{λ,m−1} ⊕ span{y₁, …, y_{δ_{m+1}}}, or if δ_m > δ_{m+1}, then N_{λ,m} = N_{λ,m−1} ⊕ span{z₁, …, z_{δ_m−δ_{m+1}}, y₁, …, y_{δ_{m+1}}}.

In either case apply the induction hypothesis to get that there exists a disjoint collection of chains with each of the y₁, y₂, …, y_{δ_{m+1}} being the first member or top of one of the chains; furthermore, this collection of vectors, when combined, forms a basis for N_{λ,m}. Now, yᵢ = A_{λ,1}xᵢ for i = 1, 2, …, δ_{m+1}, so each of the chains beginning with yᵢ can be extended upwards into N_{λ,m+1} \ N_{λ,m} to a chain beginning with xᵢ. Since N_{λ,m+1} = N_{λ,m} ⊕ span{x₁, x₂, …, x_{δ_{m+1}}}, the combined vectors of the new chains form a basis for N_{λ,m+1}.

Differential equations y′ = Ay

Let A be an n×n matrix of complex numbers and λ an eigenvalue of A, with associated eigenvector x. Suppose y(t) is an n-dimensional vector-valued function, sufficiently smooth so that y′(t) is continuous. The restriction that y(t) be smooth can be relaxed somewhat, but is not the main focus of this discussion. The solutions to the equation y′(t) = Ay(t) are sought. The first observation is that y(t) = e^{λt}x will be a solution. When A does not have n linearly independent eigenvectors, solutions of this kind will not provide the total of n needed for a fundamental basis set.

In view of the existence of chains of generalized eigenvectors, seek a solution of the form y(t) = e^{λt}x₁ + te^{λt}x₂. Then
y′(t) = λe^{λt}x₁ + e^{λt}x₂ + λte^{λt}x₂ = e^{λt}(λx₁ + x₂) + te^{λt}(λx₂)
and
Ay(t) = e^{λt}Ax₁ + te^{λt}Ax₂.
In view of this, y(t) will be a solution to y′(t) = Ay(t) when Ax₁ = λx₁ + x₂ and Ax₂ = λx₂; that is, when (A − λI)x₁ = x₂ and (A − λI)x₂ = 0. Equivalently, when {x₁, x₂} is a chain of generalized eigenvectors.

Continuing with this reasoning, seek a solution of the form y(t) = e^{λt}x₁ + te^{λt}x₂ + t²e^{λt}x₃. Then
y′(t) = λe^{λt}x₁ + e^{λt}x₂ + λte^{λt}x₂ + 2te^{λt}x₃ + λt²e^{λt}x₃ = e^{λt}(λx₁ + x₂) + te^{λt}(λx₂ + 2x₃) + t²e^{λt}(λx₃)
and
Ay(t) = e^{λt}Ax₁ + te^{λt}Ax₂ + t²e^{λt}Ax₃.
Like before, y(t) will be a solution when Ax₁ = λx₁ + x₂, Ax₂ = λx₂ + 2x₃, and Ax₃ = λx₃; that is, when (A − λI)x₁ = x₂, (A − λI)x₂ = 2x₃, and (A − λI)x₃ = 0. Since it will hold that (A − λI)(2x₃) = 0 also, this is equivalent to {x₁, x₂, 2x₃} being a chain of generalized eigenvectors.

More generally, to find the progression, seek a solution of the form
y(t) = e^{λt}x₁ + te^{λt}x₂ + t²e^{λt}x₃ + t³e^{λt}x₄ + ⋯ + t^{m−2}e^{λt}x_{m−1} + t^{m−1}e^{λt}x_m.
Then
y′(t) = e^{λt}(λx₁ + x₂) + te^{λt}(λx₂ + 2x₃) + t²e^{λt}(λx₃ + 3x₄) + t³e^{λt}(λx₄ + 4x₅) + ⋯ + t^{m−2}e^{λt}(λx_{m−1} + (m−1)x_m) + t^{m−1}e^{λt}(λx_m)
and
Ay(t) = e^{λt}Ax₁ + te^{λt}Ax₂ + t²e^{λt}Ax₃ + t³e^{λt}Ax₄ + ⋯ + t^{m−2}e^{λt}Ax_{m−1} + t^{m−1}e^{λt}Ax_m.
Again, y(t) will be a solution when
Ax₁ = λx₁ + x₂, Ax₂ = λx₂ + 2x₃, Ax₃ = λx₃ + 3x₄, Ax₄ = λx₄ + 4x₅, …, Ax_{m−2} = λx_{m−2} + (m−2)x_{m−1}, Ax_{m−1} = λx_{m−1} + (m−1)x_m,

and Ax_m = λx_m. That is, when (A − λI)x₁ = x₂, (A − λI)x₂ = 2x₃, (A − λI)x₃ = 3x₄, (A − λI)x₄ = 4x₅, …, (A − λI)x_{m−2} = (m−2)x_{m−1}, (A − λI)x_{m−1} = (m−1)x_m, and (A − λI)x_m = 0. Since it will hold that (A − λI)((m−1)! x_m) = 0 also, this is equivalent to {x₁, 1! x₂, 2! x₃, 3! x₄, …, (m−2)! x_{m−1}, (m−1)! x_m} being a chain of generalized eigenvectors.

Now, the basis set for all solutions will be found through a disjoint collection of chains of generalized eigenvectors of the matrix A. Assume A has eigenvalues λ₁, λ₂, …, λ_r of algebraic multiplicities k₁, k₂, …, k_r. For a given eigenvalue λᵢ there is a collection of s (with s depending on i) disjoint chains of generalized eigenvectors
C_{i,1} = {¹z₁, ¹z₂, …, ¹z_{j1}}, C_{i,2} = {²z₁, ²z₂, …, ²z_{j2}}, …, C_{i,s} = {ˢz₁, ˢz₂, …, ˢz_{js}},
that when combined form a basis set for N_{λᵢ,kᵢ}. The total number of vectors in this set will be j1 + j2 + ⋯ + js = kᵢ. Sets in this collection may have only one or two members, so in this discussion understand the notation {ᵝz₁, ᵝz₂, …, ᵝz_{jβ}} to mean {ᵝz₁} when jβ = 1, {ᵝz₁, ᵝz₂} when jβ = 2, and so forth. Since this notation is cumbersome with many indices, in the next paragraphs any particular C_{i,β}, when more explanation is not needed, may just be notated as C = {z₁, z₂, …, z_j}.

For each such chain set C = {z₁, z₂, …, z_j}, the sets {z_j}, {z_{j−1}, z_j}, {z_{j−2}, z_{j−1}, z_j}, …, {z₂, z₃, …, z_j}, {z₁, z₂, …, z_j} are also chains. This notation is understood to mean, when C = {z₁}, just {z₁}; when C = {z₁, z₂}, just {z₂}, {z₁, z₂}; when C = {z₁, z₂, z₃}, just {z₃}, {z₂, z₃}, {z₁, z₂, z₃}; and so on.

The conclusion at the top of the discussion was that
y(t) = e^{λt}x₁ is a solution when {x₁} is a chain;
y(t) = e^{λt}x₁ + te^{λt}x₂ is a solution when {x₁, 1! x₂} is a chain;
y(t) = e^{λt}x₁ + te^{λt}x₂ + t²e^{λt}x₃ is a solution when {x₁, 1! x₂, 2! x₃} is a chain.
The progression continues to
y(t) = e^{λt}x₁ + te^{λt}x₂ + t²e^{λt}x₃ + t³e^{λt}x₄ + ⋯ + t^{m−2}e^{λt}x_{m−1} + t^{m−1}e^{λt}x_m,

which is a solution when {x₁, 1! x₂, 2! x₃, 3! x₄, …, (m−2)! x_{m−1}, (m−1)! x_m} is a chain of generalized eigenvectors.

In light of the preceding calculations, all that must be done is to provide the proper scaling for each of the chains arising from the set C = {z₁, z₂, …, z_j}. The progression for the solutions is given by
y(t) = e^{λt}z_j, for the chain {z_j};
y(t) = e^{λt}z_{j−1} + (1/1!)te^{λt}z_j, for the chain {z_{j−1}, 1!(1/1!)z_j};
y(t) = e^{λt}z_{j−2} + (1/1!)te^{λt}z_{j−1} + (1/2!)t²e^{λt}z_j, for the chain {z_{j−2}, 1!(1/1!)z_{j−1}, 2!(1/2!)z_j};
y(t) = e^{λt}z_{j−3} + (1/1!)te^{λt}z_{j−2} + (1/2!)t²e^{λt}z_{j−1} + (1/3!)t³e^{λt}z_j, for the chain {z_{j−3}, 1!(1/1!)z_{j−2}, 2!(1/2!)z_{j−1}, 3!(1/3!)z_j};
and so on, until
y(t) = e^{λt}z₁ + (1/1!)te^{λt}z₂ + (1/2!)t²e^{λt}z₃ + ⋯ + (1/(j−1)!)t^{j−1}e^{λt}z_j,
for the chain of generalized eigenvectors {z₁, 1!(1/1!)z₂, 2!(1/2!)z₃, …, (j−2)!(1/(j−2)!)z_{j−1}, (j−1)!(1/(j−1)!)z_j}.

What is left to show is that, when all the solutions constructed from the chain sets as described are considered, they form a fundamental set of solutions. To do this it has to be shown that there are n of them and that they are linearly independent. Reiterating, for a given eigenvalue λᵢ there is a collection of s (with s depending on i) disjoint chains of generalized eigenvectors C_{i,1} = {¹z₁, …, ¹z_{j1(i)}}, C_{i,2} = {²z₁, …, ²z_{j2(i)}}, …, C_{i,s} = {ˢz₁, …, ˢz_{js(i)}}, that when combined form a basis set for N_{λᵢ,kᵢ}. The total number of vectors in this set will be j1(i) + j2(i) + ⋯ + js(i) = kᵢ. Thus the total number of all such basis vectors, and so solutions, is k₁ + k₂ + ⋯ + k_r = n.

Each solution is one of the forms y(t) = e^{λt}x₁, y(t) = e^{λt}x₁ + te^{λt}x₂, y(t) = e^{λt}x₁ + te^{λt}x₂ + t²e^{λt}x₃, …. Now each basis vector v_j, for j = 1, 2, …, n, of the combined set of generalized eigenvectors occurs as x₁ in one of these expressions precisely once. That is, for each j there is one y_j(t) = e^{λt}v_j + ⋯. Since y_j(0) = e^{λ·0}v_j = v_j, the set of solutions is linearly independent at t = 0.
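The scaling can be verified symbolically; a sympy sketch on an assumed 3×3 Jordan block with λ = 2:

```python
import sympy as sp

t = sp.symbols('t')
lam = 2
A = sp.Matrix([[lam, 1, 0],
               [0, lam, 1],
               [0, 0, lam]])
I3 = sp.eye(3)

z1 = sp.Matrix([0, 0, 1])             # top of the chain
z2 = (A - lam*I3) * z1                # (0, 1, 0)
z3 = (A - lam*I3) * z2                # (1, 0, 0), and (A - lam I) z3 = 0

# y(t) = e^(lam t) (z1 + t z2 + (t^2/2!) z3)
y = sp.exp(lam*t) * (z1 + t*z2 + sp.Rational(1, 2)*t**2*z3)
print(sp.simplify(y.diff(t) - A*y))   # zero vector: y solves y' = A y
```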

Revisiting the powers of a matrix

As a notational convenience, A_{λ,0} = I. Note that A = λI + A_{λ,1} and apply the binomial theorem:

A^s = (λI + A_{λ,1})^s = Σ_{r=0}^{s} C(s, r) λ^{s−r} A_{λ,r}.

Assume λ is an eigenvalue of A, and let {x₁, x₂, …, x_m} be a chain of generalized eigenvectors such that x₁ ∈ N_{λ,m} \ N_{λ,m−1}, x_{i+1} = A_{λ,1}xᵢ, x_m ≠ 0, and A_{λ,1}x_m = 0. Then x_{r+1} = A_{λ,r}x₁ for r = 0, 1, …, m−1, so

A^s x₁ = Σ_{r=0}^{s} C(s, r) λ^{s−r} A_{λ,r} x₁ = Σ_{r=0}^{s} C(s, r) λ^{s−r} x_{r+1}.

So for s ≤ m−1,
A^s x₁ = Σ_{r=0}^{s} C(s, r) λ^{s−r} x_{r+1},
and for s ≥ m−1, since A_{λ,m}x₁ = 0,
A^s x₁ = Σ_{r=0}^{m−1} C(s, r) λ^{s−r} x_{r+1}.

Ordinary linear difference equations

Ordinary linear difference equations are equations of the sort
y_n = a y_{n−1} + b,
y_n = a y_{n−1} + b y_{n−2} + c,
or, more generally,
y_n = a_m y_{n−1} + a_{m−1} y_{n−2} + ⋯ + a₂ y_{n−m+1} + a₁ y_{n−m} + a₀,
with initial conditions y₀, y₁, y₂, …, y_{m−2}, y_{m−1}. The case a₁ = 0 can be excluded, since it represents an equation of less degree. They have a characteristic polynomial

p(x) = x^m − a_m x^{m−1} − a_{m−1} x^{m−2} − ⋯ − a₂ x − a₁.

To solve a difference equation it is first observed that, if y_n and z_n are both solutions, then (y_n − z_n) is a solution of the homogeneous equation
y_n = a_m y_{n−1} + a_{m−1} y_{n−2} + ⋯ + a₂ y_{n−m+1} + a₁ y_{n−m}.
So a particular solution to the difference equation must be found, together with all solutions of the homogeneous equation, to get the general solution for the difference equation. Another observation to make is that, if y_n is a solution to the inhomogeneous equation, then z_n = y_{n+1} − y_n is also a solution to the homogeneous equation. So all solutions of the homogeneous equation will be found first.

When β is a root of p(x) = 0, then y_n = βⁿ is a solution to the homogeneous equation, since
y_n − a_m y_{n−1} − a_{m−1} y_{n−2} − ⋯ − a₂ y_{n−m+1} − a₁ y_{n−m}
becomes, upon the substitution y_n = βⁿ,
βⁿ − a_m β^{n−1} − a_{m−1} β^{n−2} − ⋯ − a₂ β^{n−m+1} − a₁ β^{n−m} = β^{n−m}(β^m − a_m β^{m−1} − a_{m−1} β^{m−2} − ⋯ − a₂ β − a₁) = β^{n−m} p(β) = 0.

When β is a repeated root of p(x) = 0, then y_n = nβ^{n−1} is a solution to the homogeneous equation, since
nβ^{n−1} − a_m(n−1)β^{n−2} − a_{m−1}(n−2)β^{n−3} − ⋯ − a₂(n−m+1)β^{n−m} − a₁(n−m)β^{n−m−1}
= (n−m)β^{n−m−1}(β^m − a_m β^{m−1} − a_{m−1} β^{m−2} − ⋯ − a₂ β − a₁) + β^{n−m−1}(mβ^{m−1} − (m−1)a_m β^{m−2} − (m−2)a_{m−1} β^{m−3} − ⋯ − 2a₃β − a₂)
= (n−m)β^{n−m−1} p(β) + β^{n−m−1} p′(β) = 0.

After reaching this point in the calculation the mystery is solved. Just notice that when β is a root of p(x) = 0 with multiplicity k, then for s = 1, 2, …, k−1,
d^s(β^{n−m} p(β))/dβ^s = 0.
Referring this back to the original expression
βⁿ − a_m β^{n−1} − a_{m−1} β^{n−2} − ⋯ − a₂ β^{n−m+1} − a₁ β^{n−m} = β^{n−m} p(β),
it is seen that y_n = d^s(βⁿ)/dβ^s are solutions to the homogeneous equation. For example, if β is a root of multiplicity 3, then y_n = n(n−1)β^{n−2} is a solution. In any case this gives m linearly independent solutions to the homogeneous equation.
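The root-multiplicity solutions are easy to test; a sketch using p(x) = (x−1)²(x−2)³, the polynomial of the example below:

```python
# homogeneous recurrence for p(x) = (x-1)^2 (x-2)^3:
#   y_n = 8 y_{n-1} - 25 y_{n-2} + 38 y_{n-3} - 28 y_{n-4} + 8 y_{n-5}
c = [8.0, -25.0, 38.0, -28.0, 8.0]          # coefficients of y_{n-1}, ..., y_{n-5}

def solves(f):
    return all(abs(f(n) - sum(ck * f(n - 1 - k) for k, ck in enumerate(c))) < 1e-6
               for n in range(5, 20))

print(solves(lambda n: 1.0))                     # True: root 1
print(solves(lambda n: float(n)))                # True: root 1, multiplicity 2
print(solves(lambda n: 2.0**n))                  # True: root 2
print(solves(lambda n: n * 2.0**(n - 1)))        # True: root 2, multiplicity 2
print(solves(lambda n: n*(n - 1) * 2.0**(n - 2)))  # True: root 2, multiplicity 3
```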

To look for a particular solution, first consider the simplest equation
y_n = a y_{n−1} + b.
It has a particular solution y_{p,n} given by
y_{p,0} = 0, y_{p,1} = b, y_{p,2} = (1 + a)b, …, y_{p,n} = (1 + a + a² + ⋯ + a^{n−1})b, ….
Its homogeneous equation y_n = a y_{n−1} has solutions y_n = aⁿy₀. So z_n = y_{n+1} − y_n = aⁿb can be telescoped to get
y_n = (y_n − y_{n−1}) + (y_{n−1} − y_{n−2}) + ⋯ + (y₂ − y₁) + (y₁ − y₀) + y₀ = z_{n−1} + z_{n−2} + ⋯ + z₁ + z₀ + y₀ = (1 + a + a² + ⋯ + a^{n−1})b,
the particular solution with y₀ = 0.

Now, returning to the general problem, consider the equation
y_n = a_m y_{n−1} + a_{m−1} y_{n−2} + ⋯ + a₂ y_{n−m+1} + a₁ y_{n−m} + a₀.
When y_{p,n} is a particular solution with y_{p,0} = 0, then z_n = y_{p,n+1} − y_{p,n} is a solution to the homogeneous equation with z₀ = y_{p,1}. So z_n = y_{p,n+1} − y_{p,n} can be telescoped to get
y_{p,n} = (y_{p,n} − y_{p,n−1}) + (y_{p,n−1} − y_{p,n−2}) + ⋯ + (y_{p,2} − y_{p,1}) + (y_{p,1} − y_{p,0}) + y_{p,0} = z_{n−1} + z_{n−2} + ⋯ + z₁ + z₀.
Considering
y_{p,m} = a_m y_{p,m−1} + a_{m−1} y_{p,m−2} + ⋯ + a₂ y_{p,1} + a₁ y_{p,0} + a₀
and rewriting the equation in the zᵢ,
z_{m−1} + z_{m−2} + ⋯ + z₁ + z₀ = a_m(z_{m−2} + z_{m−3} + ⋯ + z₁ + z₀) + a_{m−1}(z_{m−3} + z_{m−4} + ⋯ + z₁ + z₀) + a_{m−2}(z_{m−4} + z_{m−5} + ⋯ + z₁ + z₀) + ⋯ + a₃(z₁ + z₀) + a₂(z₀) + a₀
and
z_{m−1} = (a_m − 1)z_{m−2} + (a_m + a_{m−1} − 1)z_{m−3} + (a_m + a_{m−1} + a_{m−2} − 1)z_{m−4} + ⋯ + (a_m + a_{m−1} + ⋯ + a₄ + a₃ − 1)z₁ + (a_m + a_{m−1} + ⋯ + a₃ + a₂ − 1)z₀ + a₀.

Since a solution of the homogeneous equation can be found for any initial conditions z₀, z₁, z₂, …, z_{m−2}, z_{m−1}, reasoning conversely, find such zᵢ satisfying the equation just above and define y_{p,n} by the relations y_{p,0} = 0, y_{p,n} = z_{n−1} + z_{n−2} + ⋯ + z₁ + z₀.

One choice is, for example, z_{m−1} = a₀, z₀ = z₁ = z₂ = ⋯ = z_{m−2} = 0. This solution solves the problem for all initial values equal to zero. The general solution to the inhomogeneous equation is given by
y_n = y_{p,n} + γ₁w(1)_n + γ₂w(2)_n + ⋯ + γ_{m−1}w(m−1)_n + γ_mw(m)_n,
where w(1)_n, w(2)_n, …, w(m−1)_n, w(m)_n are a basis for the homogeneous equation, and γ₁, γ₂, …, γ_{m−1}, γ_m are scalars.

example

y_n = 8y_{n−1} − 25y_{n−2} + 38y_{n−3} − 28y_{n−4} + 8y_{n−5} + 1, with initial conditions y₀ = 0, y₁ = 0, y₂ = 0, y₃ = 0, and y₄ = 0.

The characteristic polynomial for the equation is
p(x) = x⁵ − 8x⁴ + 25x³ − 38x² + 28x − 8 = (x − 1)²(x − 2)³.
The homogeneous equation has independent solutions w1_n = 1ⁿ = 1, w2_n = n·1^{n−1} = n, and w3_n = 2ⁿ, w4_n = n2^{n−1}, w5_n = n(n−1)2^{n−2}. The solution to the homogeneous equation
z_n = −3w1_n − w2_n + 3w3_n − 2w4_n + ½w5_n
satisfies the initial conditions z₄ = 1, z₀ = z₁ = z₂ = z₃ = 0. A particular solution can be found by y_{p,0} = 0, y_{p,n} = z_{n−1} + z_{n−2} + ⋯ + z₁ + z₀. Calculating sums:
Σw1 = w1_{n−1} + w1_{n−2} + ⋯ + w1₁ + w1₀ = n.
Σw2 = w2_{n−1} + w2_{n−2} + ⋯ + w2₁ + w2₀ = (n−1)n/2.
Σw3 = w3_{n−1} + w3_{n−2} + ⋯ + w3₁ + w3₀ = 2ⁿ − 1.
Σw4 = w4_{n−1} + w4_{n−2} + ⋯ + w4₁ + w4₀ = (n−2)2^{n−1} + 1.
Σw5 = w5_{n−1} + w5_{n−2} + ⋯ + w5₁ + w5₀ = (n² − 5n + 8)2^{n−2} − 2.
Sums of these kinds are found by differentiating (xⁿ − 1)/(x − 1). Now,
y_{p,n} = −3Σw1 − Σw2 + 3Σw3 − 2Σw4 + ½Σw5
solves the initial value problem of this example.
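Iterating the equation and comparing with the assembled particular solution confirms the example; a numpy-free sketch:

```python
# particular solution assembled from the telescoped sums above
def y_p(n):
    sums = [n,                                   # sum of w1
            (n - 1)*n/2,                         # sum of w2
            2.0**n - 1,                          # sum of w3
            (n - 2)*2.0**(n - 1) + 1,            # sum of w4
            (n**2 - 5*n + 8)*2.0**(n - 2) - 2]   # sum of w5
    coef = [-3.0, -1.0, 3.0, -2.0, 0.5]
    return sum(c*s for c, s in zip(coef, sums))

y = [0.0]*5                                      # zero initial values
for n in range(5, 15):
    y.append(8*y[n-1] - 25*y[n-2] + 38*y[n-3] - 28*y[n-4] + 8*y[n-5] + 1)

print(all(abs(y[n] - y_p(n)) < 1e-6 for n in range(15)))   # True
```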

At this point it is worthwhile to notice that all the terms that are combinations of scalar multiples of basis elements can be removed. These are any multiples of 1, n, 2ⁿ, n2^{n−1}, and n(n−1)2^{n−2}. So instead the particular solution
y_{p,n} = −½n²
may be preferred. This solution has non-zero initial values, which must be taken into account:
y₀ = 0, y₁ = −1/2, y₂ = −2, y₃ = −9/2, and y₄ = −8.

References

Axler, Sheldon (1997). Linear Algebra Done Right (2nd ed.). Springer.


More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

SECTION 3.3. PROBLEM 22. The null space of a matrix A is: N(A) = {X : AX = 0}. Here are the calculations of AX for X = a,b,c,d, and e. =

SECTION 3.3. PROBLEM 22. The null space of a matrix A is: N(A) = {X : AX = 0}. Here are the calculations of AX for X = a,b,c,d, and e. = SECTION 3.3. PROBLEM. The null space of a matrix A is: N(A) {X : AX }. Here are the calculations of AX for X a,b,c,d, and e. Aa [ ][ ] 3 3 [ ][ ] Ac 3 3 [ ] 3 3 [ ] 4+4 6+6 Ae [ ], Ab [ ][ ] 3 3 3 [ ]

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW SPENCER BECKER-KAHN Basic Definitions Domain and Codomain. Let f : X Y be any function. This notation means that X is the domain of f and Y is the codomain of f. This means that for

More information

235 Final exam review questions

235 Final exam review questions 5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an

More information

Linear Algebra Practice Problems

Linear Algebra Practice Problems Linear Algebra Practice Problems Math 24 Calculus III Summer 25, Session II. Determine whether the given set is a vector space. If not, give at least one axiom that is not satisfied. Unless otherwise stated,

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form Chapter 5. Linear Algebra A linear (algebraic) equation in n unknowns, x 1, x 2,..., x n, is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b where a 1, a 2,..., a n and b are real numbers. 1

More information

2 b 3 b 4. c c 2 c 3 c 4

2 b 3 b 4. c c 2 c 3 c 4 OHSx XM511 Linear Algebra: Multiple Choice Questions for Chapter 4 a a 2 a 3 a 4 b b 1. What is the determinant of 2 b 3 b 4 c c 2 c 3 c 4? d d 2 d 3 d 4 (a) abcd (b) abcd(a b)(b c)(c d)(d a) (c) abcd(a

More information

4. Determinants.

4. Determinants. 4. Determinants 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 2 2 determinant 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 3 3 determinant 4.1.

More information

EE5120 Linear Algebra: Tutorial 6, July-Dec Covers sec 4.2, 5.1, 5.2 of GS

EE5120 Linear Algebra: Tutorial 6, July-Dec Covers sec 4.2, 5.1, 5.2 of GS EE0 Linear Algebra: Tutorial 6, July-Dec 07-8 Covers sec 4.,.,. of GS. State True or False with proper explanation: (a) All vectors are eigenvectors of the Identity matrix. (b) Any matrix can be diagonalized.

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

Definition: An n x n matrix, "A", is said to be diagonalizable if there exists a nonsingular matrix "X" and a diagonal matrix "D" such that X 1 A X

Definition: An n x n matrix, A, is said to be diagonalizable if there exists a nonsingular matrix X and a diagonal matrix D such that X 1 A X DIGONLIZTION Definition: n n x n matrix, "", is said to be diagonalizable if there exists a nonsingular matrix "X" and a diagonal matrix "D" such that X X D. Theorem: n n x n matrix, "", is diagonalizable

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Eigenvalue Problems; Similarity Transformations Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Eigenvalue

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Eigenvalues and Eigenvectors A =

Eigenvalues and Eigenvectors A = Eigenvalues and Eigenvectors Definition 0 Let A R n n be an n n real matrix A number λ R is a real eigenvalue of A if there exists a nonzero vector v R n such that A v = λ v The vector v is called an eigenvector

More information

Solving a system by back-substitution, checking consistency of a system (no rows of the form

Solving a system by back-substitution, checking consistency of a system (no rows of the form MATH 520 LEARNING OBJECTIVES SPRING 2017 BROWN UNIVERSITY SAMUEL S. WATSON Week 1 (23 Jan through 27 Jan) Definition of a system of linear equations, definition of a solution of a linear system, elementary

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

(b) If a multiple of one row of A is added to another row to produce B then det(b) =det(a).

(b) If a multiple of one row of A is added to another row to produce B then det(b) =det(a). .(5pts) Let B = 5 5. Compute det(b). (a) (b) (c) 6 (d) (e) 6.(5pts) Determine which statement is not always true for n n matrices A and B. (a) If two rows of A are interchanged to produce B, then det(b)

More information

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix.

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. Basis Definition. Let V be a vector space. A linearly independent spanning set for V is called a basis.

More information

LINEAR ALGEBRA SUMMARY SHEET.

LINEAR ALGEBRA SUMMARY SHEET. LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Example: Filter output power. maximization. Definition. Eigenvalues, eigenvectors and similarity. Example: Stability of linear systems.

Example: Filter output power. maximization. Definition. Eigenvalues, eigenvectors and similarity. Example: Stability of linear systems. Lecture 2: Eigenvalues, eigenvectors and similarity The single most important concept in matrix theory. German word eigen means proper or characteristic. KTH Signal Processing 1 Magnus Jansson/Emil Björnson

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

The Jordan Normal Form and its Applications

The Jordan Normal Form and its Applications The and its Applications Jeremy IMPACT Brigham Young University A square matrix A is a linear operator on {R, C} n. A is diagonalizable if and only if it has n linearly independent eigenvectors. What happens

More information

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

More information

Solving Homogeneous Systems with Sub-matrices

Solving Homogeneous Systems with Sub-matrices Pure Mathematical Sciences, Vol 7, 218, no 1, 11-18 HIKARI Ltd, wwwm-hikaricom https://doiorg/112988/pms218843 Solving Homogeneous Systems with Sub-matrices Massoud Malek Mathematics, California State

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

Linear Algebra Formulas. Ben Lee

Linear Algebra Formulas. Ben Lee Linear Algebra Formulas Ben Lee January 27, 2016 Definitions and Terms Diagonal: Diagonal of matrix A is a collection of entries A ij where i = j. Diagonal Matrix: A matrix (usually square), where entries

More information

Math 323 Exam 2 Sample Problems Solution Guide October 31, 2013

Math 323 Exam 2 Sample Problems Solution Guide October 31, 2013 Math Exam Sample Problems Solution Guide October, Note that the following provides a guide to the solutions on the sample problems, but in some cases the complete solution would require more work or justification

More information

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,

More information

MATH SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER Given vector spaces V and W, V W is the vector space given by

MATH SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER Given vector spaces V and W, V W is the vector space given by MATH 110 - SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER 2009 GSI: SANTIAGO CAÑEZ 1. Given vector spaces V and W, V W is the vector space given by V W = {(v, w) v V and w W }, with addition and scalar

More information

Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

More information

Math 346 Notes on Linear Algebra

Math 346 Notes on Linear Algebra Math 346 Notes on Linear Algebra Ethan Akin Mathematics Department Fall, 2014 1 Vector Spaces Anton Chapter 4, Section 4.1 You should recall the definition of a vector as an object with magnitude and direction

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Spectrum (functional analysis) - Wikipedia, the free encyclopedia

Spectrum (functional analysis) - Wikipedia, the free encyclopedia 1 of 6 18/03/2013 19:45 Spectrum (functional analysis) From Wikipedia, the free encyclopedia In functional analysis, the concept of the spectrum of a bounded operator is a generalisation of the concept

More information

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 45 Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems Peter J. Hammond latest revision 2017 September

More information

Computationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity:

Computationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity: Diagonalization We have seen that diagonal and triangular matrices are much easier to work with than are most matrices For example, determinants and eigenvalues are easy to compute, and multiplication

More information

Math 217: Eigenspaces and Characteristic Polynomials Professor Karen Smith

Math 217: Eigenspaces and Characteristic Polynomials Professor Karen Smith Math 217: Eigenspaces and Characteristic Polynomials Professor Karen Smith (c)2015 UM Math Dept licensed under a Creative Commons By-NC-SA 4.0 International License. Definition: Let V T V be a linear transformation.

More information

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true.

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true. 1 Which of the following statements is always true? I The null space of an m n matrix is a subspace of R m II If the set B = {v 1,, v n } spans a vector space V and dimv = n, then B is a basis for V III

More information

After we have found an eigenvalue λ of an n n matrix A, we have to find the vectors v in R n such that

After we have found an eigenvalue λ of an n n matrix A, we have to find the vectors v in R n such that 7.3 FINDING THE EIGENVECTORS OF A MATRIX After we have found an eigenvalue λ of an n n matrix A, we have to find the vectors v in R n such that A v = λ v or (λi n A) v = 0 In other words, we have to find

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Algorithms to Compute Bases and the Rank of a Matrix

Algorithms to Compute Bases and the Rank of a Matrix Algorithms to Compute Bases and the Rank of a Matrix Subspaces associated to a matrix Suppose that A is an m n matrix The row space of A is the subspace of R n spanned by the rows of A The column space

More information

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ.

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ. Linear Algebra 1 M.T.Nair Department of Mathematics, IIT Madras 1 Eigenvalues and Eigenvectors 1.1 Definition and Examples Definition 1.1. Let V be a vector space (over a field F) and T : V V be a linear

More information

3. Associativity: (a + b)+c = a +(b + c) and(ab)c = a(bc) for all a, b, c F.

3. Associativity: (a + b)+c = a +(b + c) and(ab)c = a(bc) for all a, b, c F. Appendix A Linear Algebra A1 Fields A field F is a set of elements, together with two operations written as addition (+) and multiplication ( ), 1 satisfying the following properties [22]: 1 Closure: a

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

More information

Math Final December 2006 C. Robinson

Math Final December 2006 C. Robinson Math 285-1 Final December 2006 C. Robinson 2 5 8 5 1 2 0-1 0 1. (21 Points) The matrix A = 1 2 2 3 1 8 3 2 6 has the reduced echelon form U = 0 0 1 2 0 0 0 0 0 1. 2 6 1 0 0 0 0 0 a. Find a basis for the

More information