REVIEW FOR EXAM III

The exam covers the material on similarity and diagonalization, the portions of the text on systems of differential equations and on Markov chains, and the sections on orthogonality reviewed below.

SIMILARITY AND DIAGONALIZATION.

1. Two matrices A and B are similar if there is an invertible matrix P such that P^{-1}AP = B. A matrix D is diagonal if d_{i,j} = 0 whenever i ≠ j. A matrix is diagonalizable if it is similar to a diagonal matrix. Note that every diagonal matrix is diagonalizable (just take P = I), but a diagonalizable matrix may or may not be diagonal.

2. Suppose A is diagonalizable, with P^{-1}AP = D. Let d_1, ..., d_n be the diagonal entries of D, i.e. d_j = d_{j,j}. Let p_1, ..., p_n be the columns of P. Rewriting our similarity equation as AP = PD we have

[Ap_1  Ap_2  ...  Ap_n] = [d_1 p_1  d_2 p_2  ...  d_n p_n].

This shows that Ap_j = d_j p_j, i.e. d_j is an eigenvalue of A and p_j is an eigenvector belonging to d_j. Note that these eigenvectors and eigenvalues occur in the same order in the matrices P and D. For example, if P = [p_1 p_2 p_3] and D has diagonal entries d_1 = d_2 = λ_1 and d_3 = λ_2 = 7, then p_1 and p_2 are eigenvectors belonging to λ_1 and p_3 is an eigenvector belonging to 7. The eigenvalue λ_1 appears twice since exactly two of the columns of P are eigenvectors belonging to it; the eigenvalue 7 appears once since exactly one of the columns is an eigenvector belonging to it.

Since P is an invertible matrix, its columns form a basis for R^n. Thus we have the important fact that if A is diagonalizable, then there is a basis for R^n consisting of eigenvectors of A. Each group of columns of P which are eigenvectors belonging to the same eigenvalue λ_i is a linearly independent subset of E_{λ_i}, and so the number of elements in the group must be less than or equal to the dimension m_i of E_{λ_i}, which in turn is at most the algebraic multiplicity n_i of λ_i. But if we toss all the groups together we get n vectors (the columns of P), so

n ≤ m_1 + ... + m_k ≤ n_1 + ... + n_k ≤ n.

The only way this can happen is for the number of vectors in each group to be exactly m_i and for m_i to equal n_i.

3. The argument above works in reverse. Let B_{λ_i} be the eigenbasis corresponding to λ_i. Toss all of these sets together to get a set B. It can be shown that B is linearly independent. If each m_i = n_i, then the number of elements of B is m_1 + ... + m_k = n_1 + ... + n_k = n, so B is a basis for R^n consisting of eigenvectors of A. Let P be the matrix whose columns are the elements of B. Then P is invertible. Let D be the diagonal matrix whose diagonal entries are the eigenvalues of A repeated with the appropriate multiplicities and in the order corresponding to the columns of P. (Look at the example in the previous section.) Then AP = PD, hence P^{-1}AP = D, and we have diagonalized A.
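If you want to check a diagonalization numerically, the relation AP = PD is easy to test. Here is a minimal sketch in Python/NumPy; the matrix A below is an arbitrary made-up example, not one from this review.

```python
import numpy as np

# An arbitrary diagonalizable matrix chosen only for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose columns are eigenvectors.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Column j of P goes with diagonal entry j of D, so AP = PD and P^{-1}AP = D.
print(np.allclose(A @ P, P @ D))                 # True
print(np.allclose(np.linalg.inv(P) @ A @ P, D))  # True
```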

For example, if instead of being given P and D we are given that the eigenvalues of A are λ_1 and λ_2 = 7, that B_{λ_1} = {p_1, p_2}, and that B_{λ_2} = {p_3}, then we would take P = [p_1 p_2 p_3] and D to be the diagonal matrix with diagonal entries λ_1, λ_1, 7. Note that in order to find D you do not need to find P! Just follow the procedure above to construct P and D at the same time.

Note also that if some m_i < n_i, then A is not diagonalizable, so don't try to diagonalize it! You will not have enough eigenvectors in B to create P, so don't write down some silly matrix in an attempt to do so. For example, if A were 3 × 3 with eigenvalues λ_1 (of algebraic multiplicity 2) and λ_2 = 7, but B_{λ_1} contained only one vector, we would just say that A is not diagonalizable.

4. There are some shortcuts for showing that a matrix is diagonalizable. If A has n distinct eigenvalues, then since each m_i ≥ 1 you are guaranteed to have enough vectors in B, so A is diagonalizable. (Of course the converse is false; if A does not have n distinct eigenvalues it may or may not be diagonalizable. You then have to do the hard work to find out which is true.)

5. Suppose A is diagonalizable, with P^{-1}AP = D. Then A = PDP^{-1}, from which it follows that for any positive integer k we have A^k = PD^kP^{-1}. Now D^k is easy to compute; it is just the diagonal matrix whose diagonal entries are the k-th powers of the diagonal entries of D. This can be used to give you a general formula for A^k. This in turn can sometimes be used to find the limit A^∞ of A^k as k → ∞. (The limit does not always exist.) This can be applied to situations where you have some initial vector x_0 and successive vectors x_{k+1} = Ax_k and want to find the limit x_∞ as k → ∞; it is just A^∞ x_0. The matrix A often arises in word problems where the next value of a certain vector is a linear function of its current value. Writing down A requires common sense and a careful reading of the problem. See the later section on Markov chains.
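The formula A^k = PD^kP^{-1} from item 5 can be checked numerically. A small sketch (again with a made-up matrix, not one taken from the text):

```python
import numpy as np

# A made-up diagonalizable matrix, used only to illustrate the power formula.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, P = np.linalg.eig(A)
k = 10

# D^k is just the diagonal matrix of k-th powers of the diagonal entries of D.
Dk = np.diag(eigenvalues ** k)
Ak = P @ Dk @ np.linalg.inv(P)

print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True
```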

ORTHOGONALITY.

1. Recall that the dot product of two column vectors x = (x_1, ..., x_n)^T and y = (y_1, ..., y_n)^T in R^n is the number x·y = x_1 y_1 + x_2 y_2 + ... + x_n y_n. Note that this is the same number you get by taking the product of the 1 × n matrix (row vector) x^T = [x_1, x_2, ..., x_n] and the n × 1 matrix (column vector) y to get a 1 × 1 matrix:

x^T y = [x_1, x_2, ..., x_n] [y_1; y_2; ...; y_n] = [x_1 y_1 + x_2 y_2 + ... + x_n y_n].

2. Two vectors x and y are orthogonal or perpendicular if x·y = 0. Note that if these are column vectors in R^n then this is the same as saying that x^T y = 0.

ORTHOGONAL COMPLEMENTS.

1. Given a subspace W of R^n, the set W⊥ of all vectors x in R^n such that x is orthogonal to every vector in W is called the orthogonal complement of W. It is a subspace of R^n. If the dimension of W is k, then the dimension of W⊥ is n − k. Also (W⊥)⊥ = W and W ∩ W⊥ = {0}.

2. If W = span(w_1, ..., w_k), then W⊥ is the set of all x such that w_1·x = 0, ..., w_k·x = 0. This is equivalent to saying that w_1^T x = 0, ..., w_k^T x = 0, and so W⊥ is just the nullspace of the matrix A whose rows are the vectors w_1^T, ..., w_k^T. Thus the usual technique gives you a basis for W⊥. If {w_1, ..., w_k} is a basis for W, then the basis for W⊥ will have n − k elements {v_{k+1}, ..., v_n}, and the union of these two sets will be a basis for R^n.

Here is an example. Suppose W = span(w_1, w_2) is a plane in R^3. To find W⊥ we must solve the equations w_1·x = 0 and w_2·x = 0; that is, we find the nullspace of the 2 × 3 matrix whose rows are w_1^T and w_2^T. Its REF has one free variable, say x_3 = t, and expresses x_1 and x_2 in terms of t, so the solutions form the span of a single vector v_3 and W⊥ = span(v_3). We then have a basis for R^3 consisting of w_1, w_2, and v_3.

3. Suppose that we are given an m × n matrix A. Recall that the row space row(A) is the subspace of R^n spanned by the rows of A. From the discussion above we see that row(A)⊥ = null(A), where null(A) is the nullspace of A. Recall that the column space col(A) is the subspace of R^m spanned by the columns of A. Then col(A)⊥ = null(A^T), the nullspace of A^T.

4. row(A), col(A), null(A), and null(A^T) are called the four fundamental subspaces associated to A.
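Since W⊥ is the nullspace of the matrix whose rows span W, it can also be computed numerically. A minimal sketch (the spanning vectors w1 and w2 below are arbitrary illustrations; SciPy's null_space routine does the row reduction for us):

```python
import numpy as np
from scipy.linalg import null_space

# Arbitrary spanning set for a subspace W of R^3 (illustration only).
w1 = np.array([1.0, 2.0, 1.0])
w2 = np.array([0.0, 1.0, 1.0])

# W-perp is the nullspace of the matrix whose rows are w1^T and w2^T.
A = np.vstack([w1, w2])
W_perp_basis = null_space(A)   # columns form an (orthonormal) basis of W-perp

print(np.allclose(A @ W_perp_basis, 0))  # True: orthogonal to w1 and w2
print(W_perp_basis.shape)                # (3, 1): dim W-perp = n - k = 3 - 2 = 1
```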

ORTHOGONAL PROJECTIONS.

1. Given a vector v in R^n, there are unique vectors w in W and w⊥ in W⊥ such that v = w + w⊥. To find them you first find bases {w_1, ..., w_k} for W and {v_{k+1}, ..., v_n} for W⊥ as in the previous paragraph. You solve the equation r_1 w_1 + ... + r_k w_k + r_{k+1} v_{k+1} + ... + r_n v_n = v for r_1, ..., r_n in the usual way. Then

w = r_1 w_1 + ... + r_k w_k,   w⊥ = r_{k+1} v_{k+1} + ... + r_n v_n.

2. Here is an example. Let W be as in the previous example, and let v be a given vector in R^3. We solve the equation r_1 w_1 + r_2 w_2 + r_3 v_3 = v to get r_1, r_2, and r_3. This gives us w = r_1 w_1 + r_2 w_2 and w⊥ = r_3 v_3.

3. The vector w is called the orthogonal projection of v onto W. Note that if {v_1, ..., v_k} is an orthogonal basis for W (defined below), then there is a much easier procedure, described below, which you should use. If it is not orthogonal, then you have to fall back on the method just described.

ORTHOGONAL AND ORTHONORMAL SETS.

1. A set {v_1, ..., v_k} of vectors is orthogonal if v_i·v_j = 0 for all i ≠ j. A set {q_1, ..., q_k} of vectors is orthonormal if it is orthogonal and each vector has length 1, i.e. ‖q_i‖ = 1. Any orthogonal set of non-zero vectors can be transformed into an orthonormal set by dividing each vector by its length, i.e. q_i = v_i/‖v_i‖.

2. Orthogonal sets of non-zero vectors are automatically linearly independent. So if you have an orthogonal spanning set of non-zero vectors for a subspace W of R^n, it is automatically a basis for W. There is a bonus: it is very easy to find the coordinates of a vector with respect to an orthogonal basis. If w is in W, then

w = ((w·v_1)/(v_1·v_1)) v_1 + ... + ((w·v_k)/(v_k·v_k)) v_k.

For an orthonormal basis things are even simpler:

w = (w·q_1) q_1 + ... + (w·q_k) q_k.
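The "bonus" coordinate formula is easy to check numerically. A small sketch with a made-up orthogonal basis:

```python
import numpy as np

# A made-up orthogonal (not orthonormal) basis of a plane W in R^3.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
assert np.isclose(v1 @ v2, 0.0)           # the basis really is orthogonal

# A vector that already lies in W = span(v1, v2).
w = 3.0 * v1 - 2.0 * v2

# Coordinates from w = ((w.v1)/(v1.v1)) v1 + ((w.v2)/(v2.v2)) v2.
c1 = (w @ v1) / (v1 @ v1)
c2 = (w @ v2) / (v2 @ v2)
print(c1, c2)                              # 3.0 -2.0
print(np.allclose(c1 * v1 + c2 * v2, w))   # True
```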

3. If you have an orthogonal basis {v_1, ..., v_k} for W, then it is easy to compute the orthogonal projection w of a vector v. Recall that v = w + w⊥, where w⊥ is in W⊥. Then

v·v_i = (w + w⊥)·v_i = w·v_i + w⊥·v_i = w·v_i + 0 = w·v_i,

since w⊥ is orthogonal to every vector in W. So, substituting v·v_i for w·v_i into the formula above for w, we get

w = ((v·v_1)/(v_1·v_1)) v_1 + ... + ((v·v_k)/(v_k·v_k)) v_k.

Here is an example. Let W = span(v_1, v_2), where {v_1, v_2} is an orthogonal basis, and let v be as before. Then w = ((v·v_1)/(v_1·v_1)) v_1 + ((v·v_2)/(v_2·v_2)) v_2. The formula for w using an orthonormal basis for W is likewise obtained by replacing w·q_i by v·q_i in the formula from the previous section.
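Here is a small numerical sketch of this projection formula, using a made-up orthogonal basis and a made-up vector v:

```python
import numpy as np

# Made-up orthogonal basis for a plane W in R^3 and an arbitrary vector v.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
v = np.array([2.0, 5.0, 7.0])

# w = ((v.v1)/(v1.v1)) v1 + ((v.v2)/(v2.v2)) v2 is the projection of v onto W.
w = (v @ v1) / (v1 @ v1) * v1 + (v @ v2) / (v2 @ v2) * v2
w_perp = v - w

print(w)                                   # [2. 5. 0.]
# w_perp should be orthogonal to everything in W.
print(np.isclose(w_perp @ v1, 0.0), np.isclose(w_perp @ v2, 0.0))  # True True
```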

THE GRAM-SCHMIDT PROCESS.

1. Suppose {x_1, ..., x_k} is any basis for W. It can be transformed to an orthogonal basis for W by the following Gram-Schmidt process.

v_1 = x_1.
p_1 is the orthogonal projection of x_2 onto the subspace W_1 = span(v_1); v_2 = x_2 − p_1.
p_2 is the orthogonal projection of x_3 onto the subspace W_2 = span(v_1, v_2); v_3 = x_3 − p_2.

Continue in this pattern until you have used up all the original vectors. The projections p_i are computed using the formula for w given earlier, applied to the vector x_{i+1} and the orthogonal basis {v_1, ..., v_i}.

2. Here is an example. Starting from a basis {x_1, x_2, x_3} we compute

v_1 = x_1,
p_1 = ((x_2·v_1)/(v_1·v_1)) v_1,  v_2 = x_2 − p_1,
p_2 = ((x_3·v_1)/(v_1·v_1)) v_1 + ((x_3·v_2)/(v_2·v_2)) v_2,  v_3 = x_3 − p_2.

3. The arithmetic can be made easier by rescaling the v_i: as soon as you create a v_i, see whether or not it contains fractions. If it does, then replace it by a new v_i obtained by multiplying by some non-zero integer to cancel the denominators; then proceed to use the new v_i in further calculations. In our example we would thus multiply the old v_2 by a suitable integer to obtain a new v_2, and then compute p_2 and v_3 = x_3 − p_2 using this new v_2. Do not rescale the p_i! This would result in the v_i not being orthogonal.
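The steps above translate directly into code. A minimal sketch (no rescaling trick, just the basic process, applied to an arbitrary made-up basis of R^3):

```python
import numpy as np

def gram_schmidt(xs):
    """Turn a list of linearly independent vectors into an orthogonal list."""
    vs = []
    for x in xs:
        # p = orthogonal projection of x onto the span of the v's found so far.
        p = sum(((x @ v) / (v @ v)) * v for v in vs) if vs else np.zeros_like(x)
        vs.append(x - p)
    return vs

x1 = np.array([1.0, 1.0, 0.0])
x2 = np.array([1.0, 0.0, 1.0])
x3 = np.array([0.0, 1.0, 1.0])

v1, v2, v3 = gram_schmidt([x1, x2, x3])
print(np.isclose(v1 @ v2, 0), np.isclose(v1 @ v3, 0), np.isclose(v2 @ v3, 0))  # all True
```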

4. Sometimes one wants to ultimately get an orthonormal basis {q_1, ..., q_k}. To do this you take the v_i above and divide by their lengths to get q_i = v_i/‖v_i‖. Be sure to read the problem carefully to determine whether an orthogonal or an orthonormal basis is required as the final answer.

ORTHOGONAL MATRICES.

1. A matrix Q is orthogonal if the columns of Q form an orthonormal set. This is equivalent to Q^{-1} = Q^T. It is also equivalent to the set of rows of Q being an orthonormal set. Warning: in order to be an orthogonal matrix it is not enough for the columns (respectively rows) to be an orthogonal set; they must satisfy the stronger condition of being orthonormal. This is an unfortunate inconsistency in the terminology of linear algebra, but it has been firmly established for many decades, and we are stuck with it.

2. If Q is an orthogonal matrix then multiplication by it preserves dot products, lengths, and angles, i.e. (Qx)·(Qy) = x·y, ‖Qx‖ = ‖x‖, and the angle between Qx and Qy is equal to the angle between x and y.

THE QR FACTORIZATION.

1. Let A be an n × n matrix whose set of columns {a_1, ..., a_n} is linearly independent. Then there is an orthogonal matrix Q and an upper triangular matrix R such that A = QR. (A worked 2 × 2 example appears in item 5 below.)

2. The way this arises is as follows: Gram-Schmidt the set B = {a_1, ..., a_n} to get an orthogonal set B' = {v_1, ..., v_n}. Then normalize this new set to get an orthonormal set B'' = {q_1, ..., q_n}. Each q_i is a multiple of v_i, which in turn is a linear combination of the a_j's, so each q_i is a linear combination of the a_j's. This process can be reversed to express each a_i as a linear combination of the q_j's. In matrix form this gives A = QR, where the entries of R are the coefficients in these linear combinations.

3. How do you find Q? Use {q_1, ..., q_n} as the columns of Q, so Q = [q_1 q_2 ... q_n].

4. How do you find R? Since you want A = QR, multiply both sides of this equation on the left by Q^{-1}. Since Q is orthogonal we have Q^{-1} = Q^T, and so R = Q^T A.

5. So, if we were given a 2 × 2 matrix A with linearly independent columns, we would first Gram-Schmidt the columns of A to get an orthogonal set {v_1, v_2}. To make the arithmetic easier it would probably be a good idea to rescale the second vector by multiplying by a suitable integer. Each of these vectors is then divided by its length, giving the orthonormal set {q_1, q_2}, and we use these as the columns of Q.

6. We next transpose Q and multiply matrices to get R = Q^T A. Note that if you were given Q, you do not need to do the Gram-Schmidt and normalization part of the process; just compute R = Q^T A as above.

7. Finally, when you are asked for a factorization GIVE A FACTORIZATION! This means actually write out the two matrices side by side in the correct order, so that your final answer looks like A = QR with the numerical matrices Q and R written out next to each other.
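The recipe "Gram-Schmidt and normalize the columns to get Q, then R = Q^T A" is easy to check against NumPy's built-in QR routine. A sketch with an arbitrary made-up matrix (np.linalg.qr may return Q and R with some signs flipped relative to a hand computation; both are valid factorizations):

```python
import numpy as np

# Arbitrary matrix with linearly independent columns (illustration only).
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])

# Hand-style recipe: Gram-Schmidt the columns, normalize to get Q, then R = Q^T A.
a1, a2 = A[:, 0], A[:, 1]
v1 = a1
v2 = a2 - (a2 @ v1) / (v1 @ v1) * v1
Q = np.column_stack([v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)])
R = Q.T @ A

print(np.allclose(Q @ R, A))             # True: we really have a factorization
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: Q is orthogonal
print(np.allclose(np.triu(R), R))        # True: R is upper triangular
```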

ORTHOGONAL DIAGONALIZATION.

1. A matrix A is orthogonally diagonalizable if there is an orthogonal matrix Q and a diagonal matrix D such that Q^{-1}AQ = D. The basic fact is that A is orthogonally diagonalizable if and only if A is symmetric. So, notice that while every orthogonally diagonalizable matrix is diagonalizable, not every diagonalizable matrix is orthogonally diagonalizable, since there are examples of diagonalizable matrices which are not symmetric.

2. If A is symmetric, hence orthogonally diagonalizable, then the eigenspaces E_{λ_i} and E_{λ_j} corresponding to distinct eigenvalues λ_i ≠ λ_j are perpendicular to each other, i.e. every vector in one eigenspace is perpendicular to every vector in the other eigenspace (i.e. their dot product is zero).

3. Here is how you orthogonally diagonalize a symmetric matrix A. Find the eigenvalues λ_1, ..., λ_k and the eigenbases B_{λ_1}, ..., B_{λ_k} in the usual way. Now you do something new. First use the Gram-Schmidt process to transform each basis B_{λ_i} to an orthogonal basis B'_{λ_i}, then divide each of the resulting vectors by its length to get an orthonormal basis B''_{λ_i}. Note that if the basis has just one element, there is nothing to do in the first of these steps, so you proceed to the second step. If there is more than one element in the basis, then you do have something to do in the first step; don't skip it and go to the second step. Also, note that you are doing this two-step procedure on each of the eigenbases individually. Don't toss all of the eigenbases together and then apply the procedure; this would still give the right answer, but it would make the computation much longer and more complicated! So apply the two-step procedure to B_{λ_1} to get an orthonormal eigenbasis B''_{λ_1} for E_{λ_1}, then apply the procedure to B_{λ_2} to get an orthonormal eigenbasis B''_{λ_2} for E_{λ_2}, and so on until you have done this for all k eigenvalues. Then you create the matrix Q using these bases the same way as before: you toss all the B''_{λ_i} together and use these vectors as the n columns of Q. You create the diagonal matrix D the same way as before, putting the λ_i down the diagonal, having as many copies of λ_i as there are vectors in B''_{λ_i}, and having them in the same order as the columns of Q.
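Before working an example by hand (item 4 below), note that this whole procedure can be checked numerically: NumPy's eigh routine, which is meant for symmetric matrices, already returns an orthonormal set of eigenvectors. A hedged sketch with a made-up symmetric matrix:

```python
import numpy as np

# A made-up symmetric matrix (so it is orthogonally diagonalizable).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# eigh is for symmetric (Hermitian) matrices; its eigenvector matrix is orthogonal.
eigenvalues, Q = np.linalg.eigh(A)
D = np.diag(eigenvalues)

print(np.allclose(Q.T @ Q, np.eye(3)))   # True: the columns of Q are orthonormal
print(np.allclose(Q.T @ A @ Q, D))       # True: Q^{-1}AQ = Q^T A Q = D
```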

4. Here is an example. Suppose the symmetric matrix A has eigenvalues λ_1 and λ_2 and that the corresponding eigenbases are B_{λ_1} = {x_1, x_2} and B_{λ_2} = {x_3}.

We first apply the Gram-Schmidt procedure to B_{λ_1}. We start with B_{λ_1} = {x_1, x_2} and compute

v_1 = x_1,
p_1 = ((x_2·v_1)/(v_1·v_1)) v_1,  v_2 = x_2 − p_1.

We now have an orthogonal basis B'_{λ_1} = {v_1, v_2} for E_{λ_1}. We next divide each of these vectors by its length to get a unit vector: we compute ‖v_1‖ and set q_1 = v_1/‖v_1‖, then compute ‖v_2‖ and set q_2 = v_2/‖v_2‖. We now have an orthonormal basis B''_{λ_1} = {q_1, q_2}.

We make two observations at this point. First, note that as soon as we obtained the vector v_2 we could have rescaled it by multiplying by an integer to clear fractions, giving a new v_2 and hence B'_{λ_1} = {v_1, v_2} with the new v_2; we would then compute the lengths of v_1 and of the new v_2 and divide by them to get B''_{λ_1} = {q_1, q_2} as before.

Second, note that we did NOT go on to toss the vector x_3 into our Gram-Schmidt machine along with x_1 and x_2. Remember that we are processing ONE eigenbasis at a time, NOT all of them together. The vector x_3 is not an element of our basis B_{λ_1}, so we do not include it in our processing of B_{λ_1} to get B'_{λ_1} and B''_{λ_1}.

Now we move on to the eigenbasis B_{λ_2} = {x_3}. Since there is only one vector, the Gram-Schmidt process reduces to just setting v_3 = x_3, so B'_{λ_2} = B_{λ_2} = {v_3}. Next we compute ‖v_3‖ and set q_3 = v_3/‖v_3‖. Thus we have the orthonormal basis B''_{λ_2} = {q_3}.

Finally we assemble the matrices Q and D: Q = [q_1 q_2 q_3], and D is the diagonal matrix with diagonal entries λ_1, λ_1, λ_2.

SYSTEMS OF LINEAR DIFFERENTIAL EQUATIONS.

1. Suppose the variable x is a function of the variable t. We write this as x = x(t) and let x' = x'(t) be the derivative dx/dt of x with respect to t.

2. The differential equation x' = ax has general solution x = ce^{at}, where c is a constant.

3. Now suppose that we have several variables x_1, ..., x_n which are functions of t. We can assemble these into a vector x = (x_1, ..., x_n)^T and let x' = (x_1', ..., x_n')^T.

4. If we have several differential equations of the form x_1' = d_1 x_1, ..., x_n' = d_n x_n, we can write them in the form x' = Dx, where D is the diagonal matrix with diagonal entries d_1, ..., d_n. Note that these equations have nothing to do with each other, and we have solutions x_1 = c_1 e^{d_1 t}, ..., x_n = c_n e^{d_n t}. This is called an uncoupled system.
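An uncoupled system can be checked numerically before moving on to coupled systems: the sketch below (with made-up diagonal entries and initial values) integrates x' = Dx with SciPy and compares the result against the formula x_i(t) = c_i e^{d_i t}.

```python
import numpy as np
from scipy.integrate import solve_ivp

d = np.array([-1.0, 0.5])      # made-up diagonal entries d_1, d_2
x0 = np.array([2.0, 3.0])      # initial values, so c_i = x_i(0)
D = np.diag(d)

sol = solve_ivp(lambda t, x: D @ x, t_span=(0.0, 2.0), y0=x0,
                t_eval=[2.0], rtol=1e-8, atol=1e-10)

exact = x0 * np.exp(d * 2.0)   # x_i(t) = c_i e^{d_i t} evaluated at t = 2
print(sol.y[:, -1], exact)
print(np.allclose(sol.y[:, -1], exact))  # True, up to solver tolerance
```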

5. Now suppose that we have a system of the form x' = Ax, where A is not a diagonal matrix. This is called a coupled system. We wish to solve it by changing variables to get an uncoupled system. We can do this if A is diagonalizable. We have that P^{-1}AP = D, where P is an invertible matrix and D is a diagonal matrix. We make a change of variables by setting x = Py, where y = (y_1, ..., y_n)^T. It turns out that x' = Py', and substituting these into x' = Ax gives Py' = APy, and so y' = P^{-1}APy = Dy, which is the desired uncoupled system.

6. For example, suppose we have a coupled system x' = Ax in two variables, and suppose A has eigenvalues λ_1 and λ_2 with associated eigenvectors p_1 and p_2, respectively. Therefore A is diagonalizable with D the diagonal matrix with diagonal entries λ_1, λ_2 and P = [p_1 p_2]. We set x = Py. This gives us the uncoupled system y_1' = λ_1 y_1, y_2' = λ_2 y_2, which has solution y_1 = c_1 e^{λ_1 t}, y_2 = c_2 e^{λ_2 t}. To get the solutions to the original system we use x = Py. Note that this can be written in the convenient form

x = c_1 e^{λ_1 t} p_1 + c_2 e^{λ_2 t} p_2.

7. Sometimes one wants a solution which satisfies certain initial conditions, such as given values of x_1(0) and x_2(0). Setting t = 0 in the formula above gives x(0) = c_1 p_1 + c_2 p_2. This is just a matter of solving the system of linear equations with augmented matrix [p_1 p_2 | x(0)]: its RREF gives c_1 and c_2, and substituting these back into the formula above gives the desired solution x.
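The change of variables x = Py can be tested numerically: the solution of x' = Ax with x(0) = x_0 is P e^{Dt} P^{-1} x_0, which should agree with SciPy's matrix exponential e^{At} x_0. A sketch with a made-up diagonalizable matrix:

```python
import numpy as np
from scipy.linalg import expm

# Made-up coupled system x' = Ax (illustration only).
A = np.array([[0.0, 1.0],
              [2.0, 1.0]])
x0 = np.array([1.0, 0.0])
t = 1.5

eigenvalues, P = np.linalg.eig(A)
# Solve the uncoupled system y' = Dy, then transform back with x = Py.
y0 = np.linalg.solve(P, x0)            # y(0) = P^{-1} x(0), i.e. the constants c_i
y_t = np.exp(eigenvalues * t) * y0     # y_i(t) = c_i e^{lambda_i t}
x_t = P @ y_t

print(np.allclose(x_t, expm(A * t) @ x0))  # True
```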

MARKOV CHAINS.

1. A good way to understand Markov chains is to recall the example we used in class. See lecture notes # and # for some of the details omitted below. Suppose the population of the U.S. is constant, with no people leaving or entering the country. Start with x_0 being the population of California and y_0 being the population of the rest of the country (outside California). Suppose that each year 10% of the population outside CA moves into CA, while 20% of the population inside CA moves out. Let x_1 and y_1 be the populations inside and outside CA one year later.

Since 20% of the population of CA has moved out, we have that 80% remain. Thus one part of x_1 is 0.8x_0. The other part consists of the 10% of the people outside who have moved in, i.e. 0.1y_0. Thus the total is x_1 = 0.8x_0 + 0.1y_0. Since 20% of the CA population has moved out, they give one part 0.2x_0 of the population y_1 outside CA. The other part consists of the 90% of the population outside who have remained outside, giving 0.9y_0. Thus the total population outside is y_1 = 0.2x_0 + 0.9y_0.

We can describe the situation by the matrix equation

(x_1, y_1)^T = [0.8 0.1; 0.2 0.9] (x_0, y_0)^T.

You should be able to translate a word problem like this into this matrix form. Note that each column of this matrix sums to 1. Such a matrix is called a stochastic matrix.

2. Let x_0 = (x_0, y_0)^T, x_1 = (x_1, y_1)^T, and A = [0.8 0.1; 0.2 0.9]. Then we can describe this transition as x_1 = Ax_0. Since we are assuming that the same pattern happens year after year, we can describe the population two years after the start by x_2 = Ax_1 = AAx_0 = A^2 x_0. In general, after k years we have x_k = A^k x_0.

3. In general, computing a high power of a matrix is difficult, but there is one case in which it is easy. Suppose A is diagonalizable, with P^{-1}AP = D. Then A = PDP^{-1}, from which it follows that for any positive integer k we have A^k = PD^kP^{-1}. (We have the pairs P^{-1}P cancelling out.) Now D^k is easy to compute; it is just the diagonal matrix whose diagonal entries are the k-th powers of the diagonal entries of D. This can be used to give you a general formula for A^k. This in turn can be used to try to find the limit A^∞ of A^k as k → ∞. One would have A^∞ = PD^∞P^{-1}. Of course, for this to work we need the powers d_i^k which occur on the diagonal of D^k to have limits. If |d_i| < 1, then d_i^k → 0, while if d_i = 1, then d_i^k = 1 for all k.

4. We return to our example. If you do the usual eigenstuff for our A you get p_A(λ) = λ^2 − 1.7λ + 0.7 = (λ − 1)(λ − 0.7), so the eigenvalues of A are λ = 1 and λ = 0.7. We find the eigenbases to be B_1 = {(1, 2)^T} and B_{0.7} = {(1, −1)^T}. This gives P = [1 1; 2 −1] and D = [1 0; 0 0.7]. Thus as k → ∞ we have 1^k → 1 and 0.7^k → 0, so D^∞ = [1 0; 0 0]. We will have

A^∞ = PD^∞P^{-1} = [1 1; 2 −1] [1 0; 0 0] [1/3 1/3; 2/3 −1/3]

= [1 0; 2 0] [1/3 1/3; 2/3 −1/3] = [1/3 1/3; 2/3 2/3].

So in the long run we have the population vector

x_∞ = A^∞ x_0 = [1/3 1/3; 2/3 2/3] (x_0, y_0)^T = ((1/3)(x_0 + y_0), (2/3)(x_0 + y_0))^T.

This says that in the long run one third of the total population is inside CA and the other two thirds is outside.
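The long-run behaviour can be checked directly: raising A to a large power should give (approximately) the matrix with both columns equal to (1/3, 2/3)^T. A small NumPy sketch of this check (the initial populations are made up):

```python
import numpy as np

# The CA / rest-of-country transition matrix from the example above.
A = np.array([[0.8, 0.1],
              [0.2, 0.9]])

# A^k for large k approximates the limit A^infinity.
A_inf = np.linalg.matrix_power(A, 200)
print(A_inf)                    # approximately [[1/3, 1/3], [2/3, 2/3]]

# Any starting population ends up split 1/3 inside CA and 2/3 outside.
x0 = np.array([10.0, 50.0])     # made-up initial populations (e.g. in millions)
print(A_inf @ x0)               # approximately [20., 40.] = (total/3, 2*total/3)
```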