DM554 Linear and Integer Programming. Lecture 9. Diagonalization. Marco Chiarandini


DM554 Linear and Integer Programming
Lecture 9: Diagonalization
Marco Chiarandini
Department of Mathematics & Computer Science, University of Southern Denmark

Outline
1. More on Linear Transformations
2. Eigenvalues, Eigenvectors, Diagonalization
3. Applications of Diagonalization

Resume
- Linear transformations and proofs that a given mapping is linear
- Range and null space, rank and nullity of a transformation, rank-nullity theorem
- Two-way relationship between matrices and linear transformations
- Change from standard to arbitrary basis
- Change of basis from B to B'

Outline
1. More on Linear Transformations
2. Eigenvalues, Eigenvectors, Diagonalization
3. Applications of Diagonalization

Change of Basis for a Linear Transformation
We saw how to find A for a transformation T : R^n -> R^m using the standard basis in both R^n and R^m. Now: is there a matrix that represents T with respect to two arbitrary bases B and B'?
Theorem. Let T : R^n -> R^m be a linear transformation, and let B = {v_1, v_2, ..., v_n} and B' = {v'_1, v'_2, ..., v'_m} be bases of R^n and R^m. Then for all x in R^n,
[T(x)]_{B'} = M [x]_B
where M = A_{[B,B']} is the m x n matrix whose ith column is [T(v_i)]_{B'}, the coordinate vector of T(v_i) with respect to the basis B'.
Proof:
- change B to standard: x = P_B [x]_B for all x in R^n (P_B is the n x n transition matrix of B)
- perform the linear transformation in standard coordinates: T(x) = Ax = A P_B [x]_B
- change to basis B': [u]_{B'} = P_{B'}^{-1} u for all u in R^m (P_{B'} is the m x m transition matrix of B')
Hence [T(x)]_{B'} = P_{B'}^{-1} A P_B [x]_B, so M = P_{B'}^{-1} A P_B.

How is M computed?
P_B = [v_1 v_2 ... v_n]
A P_B = A [v_1 v_2 ... v_n] = [Av_1 Av_2 ... Av_n]
Since Av_i = T(v_i):
A P_B = [T(v_1) T(v_2) ... T(v_n)]
M = P_{B'}^{-1} A P_B = [P_{B'}^{-1} T(v_1) P_{B'}^{-1} T(v_2) ... P_{B'}^{-1} T(v_n)]
M = [[T(v_1)]_{B'} [T(v_2)]_{B'} ... [T(v_n)]_{B'}]
Hence, if we change the bases from the standard bases of R^n and R^m, the matrix representation of T changes.

Similarity
Particular case m = n:
Theorem. Let T : R^n -> R^n be a linear transformation and B = {x_1, x_2, ..., x_n} be a basis of R^n. Let A be the matrix corresponding to T in standard coordinates, T(x) = Ax, and let P = [x_1 x_2 ... x_n] be the matrix whose columns are the vectors of B. Then for all x in R^n,
[T(x)]_B = P^{-1}AP [x]_B
In other words, the matrix A_{[B,B]} = P^{-1}AP performs the same linear transformation as the matrix A, but expressed in terms of the basis B.

Similarity
Definition. A square matrix C is similar to the matrix A (they represent the same linear transformation) if there is an invertible matrix P such that C = P^{-1}AP.
Similarity defines an equivalence relation:
- (reflexive) a matrix A is similar to itself
- (symmetric) if C is similar to A, then A is similar to C: C = P^{-1}AP implies A = Q^{-1}CQ with Q = P^{-1}
- (transitive) if D is similar to C, and C to A, then D is similar to A
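
To see the invariance concretely, here is a minimal numpy sketch (the code and numpy itself are illustrative additions, not part of the slides); A and P are the matrices used in the diagonalization example later in this lecture:

```python
import numpy as np

# Similar matrices share the same eigenvalues: check on C = P^{-1} A P.
A = np.array([[7.0, -15.0],
              [2.0, -4.0]])
P = np.array([[5.0, 3.0],
              [2.0, 1.0]])          # any invertible P works

C = np.linalg.inv(P) @ A @ P        # C is similar to A

print(np.sort(np.linalg.eigvals(A)))  # [1. 2.]
print(np.sort(np.linalg.eigvals(C)))  # [1. 2.]  -- same spectrum
```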

Example
[figure: the circle x^2 + y^2 = 1 and the ellipse x^2 + 4y^2 = 4 drawn in the xy-plane]
x^2 + y^2 = 1    circle in standard form
x^2 + 4y^2 = 4   ellipse in standard form
5x^2 + 5y^2 - 6xy = 2   ???
Try rotating by pi/4 anticlockwise:
A_T = [cos t  -sin t; sin t  cos t] = [1/sqrt(2)  -1/sqrt(2); 1/sqrt(2)  1/sqrt(2)] = P,   v = P[v]_B
Setting [x; y] = P [X; Y] and substituting gives
X^2 + 4Y^2 = 1
an ellipse in standard form in the rotated coordinates.

Example
Let T : R^2 -> R^2 be given by
T([x; y]) = [x + 3y; -x + 5y]
What is its effect on the xy-plane? Let's change the basis to
B = {v_1, v_2} = {[1; 1], [3; 1]}
Find the matrix of T in this basis: C = P^{-1}AP, where A is the matrix of T in the standard basis and P is the transition matrix from B to the standard basis:
C = P^{-1}AP = (1/2) [-1 3; 1 -1] [1 3; -1 5] [1 3; 1 1] = [4 0; 0 2]

Example (contd.)
The B coordinates of the B basis vectors are
[v_1]_B = [1; 0],   [v_2]_B = [0; 1]
so in B coordinates T is a stretch by a factor 4 in the direction v_1 and by a factor 2 in the direction v_2:
[T(v_1)]_B = [4 0; 0 2] [1; 0] = [4; 0] = 4 [v_1]_B
The effect of T is the same no matter what the basis is; only the matrices change! So also in standard coordinates we must have:
Av_1 = 4v_1,   Av_2 = 2v_2
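
A quick numerical verification of this example (an illustrative numpy sketch; variable names are assumptions):

```python
import numpy as np

# A is the matrix of T in the standard basis; the columns of P are
# the basis vectors v1 = (1,1) and v2 = (3,1).
A = np.array([[1.0, 3.0],
              [-1.0, 5.0]])
P = np.array([[1.0, 3.0],
              [1.0, 1.0]])

C = np.linalg.inv(P) @ A @ P
print(np.round(C, 10))          # [[4. 0.] [0. 2.]]

v1, v2 = P[:, 0], P[:, 1]
print(A @ v1, 4 * v1)           # equal: A v1 = 4 v1
print(A @ v2, 2 * v2)           # equal: A v2 = 2 v2
```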

Resume
- Matrix representation of a transformation with respect to two given bases
- Similarity of square matrices

Outline
1. More on Linear Transformations
2. Eigenvalues, Eigenvectors, Diagonalization
3. Applications of Diagonalization

Eigenvalues and Eigenvectors
(All matrices from now on are square n x n matrices, and all vectors are in R^n.)
Definition. Let A be a square matrix. The number λ is said to be an eigenvalue of A if for some non-zero vector x,
Ax = λx
Any non-zero vector x for which this equation holds is called an eigenvector for the eigenvalue λ, or an eigenvector of A corresponding to the eigenvalue λ.

Finding Eigenvalues
Determine the solutions of the matrix equation Ax = λx. Let's put it in standard form, using λx = λI x:
(A - λI)x = 0
Bx = 0 has solutions other than x = 0 precisely when det(B) = 0, hence we want det(A - λI) = 0:
Definition (Characteristic polynomial). The polynomial |A - λI| is called the characteristic polynomial of A, and the equation |A - λI| = 0 is called the characteristic equation of A.

Example A = A λi = [ ] 7 15 2 4 [ ] [ ] [ ] 7 15 1 0 7 λ 15 λ = 2 4 0 1 2 4 λ The characteristic polynomial is A λi = 7 λ 15 2 4 λ = (7 λ)( 4 λ) + 30 = λ 2 3λ + 2 The characteristic equation is λ 2 3λ + 2 = (λ 1)(λ 2) = 0 hence 1 and 2 are the only eigenvalues of A 16

Finding Eigenvectors
Find a non-trivial solution of (A - λI)x = 0 for each eigenvalue λ (zero vectors are not eigenvectors!).
Example
A = [7 -15; 2 -4]
Eigenvector for λ = 1:
A - I = [6 -15; 2 -5]  --RREF-->  [1 -5/2; 0 0]   v = t [5; 2], t in R
Eigenvector for λ = 2:
A - 2I = [5 -15; 2 -6]  --RREF-->  [1 -3; 0 0]   v = t [3; 1], t in R
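
np.linalg.eig returns unit-length eigenvectors as columns, so to compare with the hand computation we rescale each one; any nonzero scalar multiple is an equally valid eigenvector (illustrative sketch):

```python
import numpy as np

A = np.array([[7.0, -15.0],
              [2.0, -4.0]])

vals, vecs = np.linalg.eig(A)
print(vals)                       # [1. 2.] (order may vary)
for lam, v in zip(vals, vecs.T):
    print(lam, v / v[1])          # [2.5 1.] ~ (5,2),  [3. 1.] ~ (3,1)
```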

Example
A = [4 0 4; 0 4 4; 4 4 8]
The characteristic equation is
|A - λI| = |4-λ  0  4; 0  4-λ  4; 4  4  8-λ|
= (4-λ)((4-λ)(8-λ) - 16) + 4(-4(4-λ))
= (4-λ)((4-λ)(8-λ) - 16) - 16(4-λ)
= (4-λ)((4-λ)(8-λ) - 32)
= (4-λ) λ (λ - 12)
hence the eigenvalues are 4, 0 and 12.
Eigenvector for λ = 4, solve (A - 4I)x = 0:
A - 4I = [0 0 4; 0 0 4; 4 4 4]  --RREF-->  [1 1 0; 0 0 1; 0 0 0]   v = t [1; -1; 0], t in R

Example
A = [-3 -1 -2; 1 -1 1; 1 1 0]
The characteristic equation is
|A - λI| = |-3-λ  -1  -2; 1  -1-λ  1; 1  1  -λ|
= (-3-λ)(λ^2 + λ - 1) + (-λ - 1) - 2(2 + λ)
= -(λ^3 + 4λ^2 + 5λ + 2)
If we discover that -1 is a solution, then (λ + 1) is a factor of the polynomial:
λ^3 + 4λ^2 + 5λ + 2 = (λ + 1)(aλ^2 + bλ + c)
from which we find a = 1, b = 3, c = 2, and
(λ + 1)(λ^2 + 3λ + 2) = (λ + 1)(λ + 1)(λ + 2) = (λ + 1)^2 (λ + 2)
The eigenvalue -1 has multiplicity 2.

Eigenspaces
The set of eigenvectors corresponding to an eigenvalue λ, together with the zero vector 0, is a subspace of R^n, because it coincides with the null space N(A - λI).
Definition (Eigenspace). If A is an n x n matrix and λ is an eigenvalue of A, then the eigenspace of the eigenvalue λ is the null space N(A - λI), a subspace of R^n.
The set S = {x | Ax = λx} is always a subspace, but dim(S) >= 1 only if λ is an eigenvalue.

Eigenvalues and the Matrix
Links between the eigenvalues and properties of the matrix:
- Let A be an n x n matrix; then the characteristic polynomial has degree n:
p(λ) = |A - λI| = (-1)^n (λ^n + a_{n-1} λ^{n-1} + ... + a_0)
- In terms of the eigenvalues λ_1, λ_2, ..., λ_n, the characteristic polynomial is:
p(λ) = |A - λI| = (-1)^n (λ - λ_1)(λ - λ_2) ... (λ - λ_n)
Theorem. The determinant of an n x n matrix A is equal to the product of its eigenvalues.
Proof: setting λ = 0 in the two expressions above,
p(0) = |A| = (-1)^n a_0 = (-1)^n (-1)^n λ_1 λ_2 ... λ_n = λ_1 λ_2 ... λ_n

The trace of a square matrix A is the sum of the entries on its main diagonal.
Theorem. The trace of an n x n matrix is equal to the sum of its eigenvalues.
Proof:
|A - λI| = (-1)^n (λ^n + a_{n-1} λ^{n-1} + ... + a_0) = (-1)^n (λ - λ_1)(λ - λ_2) ... (λ - λ_n)
The proof follows by comparing the coefficients of (-λ)^{n-1} on the two sides.
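
Both theorems are easy to check numerically on the 3 x 3 example above (illustrative numpy sketch, not part of the slides):

```python
import numpy as np

# det(A) = product of eigenvalues, trace(A) = sum of eigenvalues.
A = np.array([[4.0, 0.0, 4.0],
              [0.0, 4.0, 4.0],
              [4.0, 4.0, 8.0]])     # eigenvalues 4, 0, 12

vals = np.linalg.eigvals(A)
print(np.round(np.prod(vals), 6), np.round(np.linalg.det(A), 6))  # 0.0 0.0
print(np.round(np.sum(vals), 6), np.trace(A))                     # 16.0 16.0
```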

Recall: square matrices A and M are similar if there is an invertible matrix P such that P^{-1}AP = M.
Definition (Diagonalizable matrix). The matrix A is diagonalizable if it is similar to a diagonal matrix; that is, if there is a diagonal matrix D and an invertible matrix P such that P^{-1}AP = D.
Example
A = [7 -15; 2 -4],   P = [5 3; 2 1],   P^{-1} = [-1 3; 2 -5]
P^{-1}AP = D = [1 0; 0 2]
How was such a matrix P found? When is a matrix diagonalizable?

General Method
Let's assume A is diagonalizable, so P^{-1}AP = D, where
D = diag(λ_1, λ_2, ..., λ_n) = [λ_1 0 ... 0; 0 λ_2 ... 0; ...; 0 0 ... λ_n]
Then AP = PD, and writing P = [v_1 ... v_n]:
AP = A [v_1 ... v_n] = [Av_1 ... Av_n]
PD = [v_1 ... v_n] diag(λ_1, ..., λ_n) = [λ_1 v_1 ... λ_n v_n]
Hence: Av_1 = λ_1 v_1, Av_2 = λ_2 v_2, ..., Av_n = λ_n v_n

- Since P^{-1} exists, none of the v_i in Av_i = λ_i v_i is the zero vector, or else P would have a zero column. This says that the λ_i and v_i are eigenvalues with corresponding eigenvectors, and that the eigenvectors are linearly independent (the columns of an invertible matrix are linearly independent).
- The converse is also true: if Av_i = λ_i v_i for n linearly independent vectors v_i, then P = [v_1 ... v_n] is invertible, AP = PD, and hence P^{-1}AP = P^{-1}PD = D.
Theorem. An n x n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors.
Theorem. An n x n matrix A is diagonalizable if and only if there is a basis of R^n consisting only of eigenvectors of A.

Example
A = [7 -15; 2 -4]
1 and 2 are the eigenvalues, with eigenvectors:
v_1 = [5; 2],   v_2 = [3; 1]
P = [v_1 v_2] = [5 3; 2 1]

Example
A = [4 0 4; 0 4 4; 4 4 8]
has eigenvalues 4, 0, 12 and corresponding eigenvectors:
v_1 = [1; -1; 0],   v_2 = [1; 1; -1],   v_3 = [1; 1; 2]
P = [1 1 1; -1 1 1; 0 -1 2]   D = [4 0 0; 0 0 0; 0 0 12]
We can choose any order, provided we are consistent:
P = [1 1 1; 1 -1 1; -1 0 2]   D = [0 0 0; 0 4 0; 0 0 12]
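
A sketch of the same computation in numpy, showing that the order of the eigenvalues in D must match the column order of P (illustrative; any consistent order works):

```python
import numpy as np

A = np.array([[4.0, 0.0, 4.0],
              [0.0, 4.0, 4.0],
              [4.0, 4.0, 8.0]])

P = np.array([[1.0, 1.0, 1.0],
              [-1.0, 1.0, 1.0],
              [0.0, -1.0, 2.0]])    # columns: eigenvectors for 4, 0, 12

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))              # diag(4, 0, 12)
```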

Geometrical Interpretation
Let's look at A as the matrix representing a linear transformation T = T_A in standard coordinates, i.e., T(x) = Ax.
- Assume A has a set of n linearly independent eigenvectors B = {v_1, v_2, ..., v_n} corresponding to the eigenvalues λ_1, λ_2, ..., λ_n; then B is a basis of R^n.
- What is the matrix representing T with respect to the basis B? A_{[B,B]} = P^{-1}AP, where P = [v_1 v_2 ... v_n] (check the earlier theorem from today).
- Hence the matrices A and A_{[B,B]} are similar; they represent the same linear transformation: A in the standard basis, A_{[B,B]} in the basis B of eigenvectors of A.
- A_{[B,B]} = [[T(v_1)]_B [T(v_2)]_B ... [T(v_n)]_B], and for these vectors in particular T(v_i) = Av_i = λ_i v_i; hence A_{[B,B]} = D, a diagonal matrix.

What does this tell us about the linear transformation T_A?
For any x in R^n with
[x]_B = [b_1; b_2; ...; b_n]
its image under T is easy to calculate in B coordinates:
[T(x)]_B = diag(λ_1, λ_2, ..., λ_n) [b_1; b_2; ...; b_n] = [λ_1 b_1; λ_2 b_2; ...; λ_n b_n]
It is a stretch in the direction of the eigenvector v_i by a factor λ_i! The line x = t v_i, t in R, is fixed by the linear transformation T, in the sense that every point on the line is stretched to another point on the same line.

Similar Matrices: Geometric Interpretation
Let A and B = P^{-1}AP be similar.
Geometrically:
- T_A is a linear transformation in standard coordinates;
- T_B is the same linear transformation T in coordinates with respect to the basis given by the columns of P.
We have seen that T has the intrinsic property of fixed lines and stretches. This property does not depend on the coordinate system used to express the vectors. Hence:
Theorem. Similar matrices have the same eigenvalues, and the same corresponding eigenvectors expressed in coordinates with respect to different bases.
Algebraically, A and B have the same characteristic polynomial and hence the same eigenvalues:
|B - λI| = |P^{-1}AP - λI| = |P^{-1}AP - λP^{-1}IP| = |P^{-1}(A - λI)P| = |P^{-1}| |A - λI| |P| = |A - λI|

For the eigenvectors: P is the transition matrix from the basis S (the columns of P) to standard coordinates:
v = P[v]_S,   [v]_S = P^{-1}v
Using Av = λv:
B[v]_S = P^{-1}AP[v]_S = P^{-1}Av = P^{-1}λv = λP^{-1}v = λ[v]_S
hence [v]_S is an eigenvector of B corresponding to the eigenvalue λ.

Diagonalizable Matrices
Example
A = [4 1; -1 2]
has characteristic polynomial λ^2 - 6λ + 9 = (λ - 3)^2. The eigenvectors satisfy:
[1 1; -1 -1] [x_1; x_2] = [0; 0]   v = [-1, 1]^T
hence any two eigenvectors are scalar multiples of each other and are linearly dependent. The matrix A is therefore not diagonalizable.

Example
A = [0 -1; 1 0]
has characteristic equation λ^2 + 1 = 0 and hence has no real eigenvalues.

Theorem. If an n x n matrix A has n different eigenvalues, then it has a set of n linearly independent eigenvectors and is therefore diagonalizable.
Proof: by contradiction.
Note: having n linearly independent eigenvectors is a necessary condition for diagonalizability, but having n different eigenvalues is not:
Example
A = [3 1 1; 0 2 0; 1 1 3]
The characteristic equation is (λ - 2)^2 (λ - 4) = 0, hence 2 has multiplicity 2. Can we find two corresponding linearly independent eigenvectors?

Example (contd.)
A - 2I = [1 1 1; 0 0 0; 1 1 1]  --RREF-->  [1 1 1; 0 0 0; 0 0 0]
x = s [-1; 1; 0] + t [-1; 0; 1] = s v_1 + t v_2,   s, t in R
The two vectors are linearly independent.
A - 4I = [-1 1 1; 0 -2 0; 1 1 -1]  --RREF-->  [1 0 -1; 0 1 0; 0 0 0]   v_3 = [1; 0; 1]
P = [1 -1 -1; 0 1 0; 1 0 1]   P^{-1}AP = [4 0 0; 0 2 0; 0 0 2]

Example
A = [-3 -1 -2; 1 -1 1; 1 1 0]
The eigenvalue λ_1 = -1 has multiplicity 2; λ_2 = -2.
A + I = [-2 -1 -2; 1 0 1; 1 1 1]  --RREF-->  [1 0 1; 0 1 0; 0 0 0]
The rank is 2. The null space N(A + I) therefore has dimension 1 (rank-nullity theorem). We find only one linearly independent eigenvector: x = [-1, 0, 1]^T. Hence the matrix A cannot be diagonalized.

Multiplicity
Definition (Algebraic and geometric multiplicity). An eigenvalue λ_0 of a matrix A has:
- algebraic multiplicity k if k is the largest integer such that (λ - λ_0)^k is a factor of the characteristic polynomial;
- geometric multiplicity k if k is the dimension of the eigenspace of λ_0, i.e., dim(N(A - λ_0 I)).
Theorem. For any eigenvalue of a square matrix, the geometric multiplicity is no more than the algebraic multiplicity.
Theorem. A matrix is diagonalizable if and only if all its eigenvalues are real numbers and, for each eigenvalue, the geometric multiplicity equals the algebraic multiplicity.
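
The geometric multiplicity can be computed as n - rank(A - λ_0 I), by the rank-nullity theorem. Here is an illustrative numpy check on the non-diagonalizable example above (not part of the slides):

```python
import numpy as np

# lambda = -1 has algebraic multiplicity 2 but geometric multiplicity 1,
# so this matrix is not diagonalizable.
A = np.array([[-3.0, -1.0, -2.0],
              [1.0, -1.0, 1.0],
              [1.0, 1.0, 0.0]])

n = A.shape[0]
for lam in (-1.0, -2.0):
    geo = n - np.linalg.matrix_rank(A - lam * np.eye(n))
    print(lam, geo)     # -1.0 -> 1 (< algebraic 2),  -2.0 -> 1
```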

Summary
- Characteristic polynomial and characteristic equation of a matrix
- Eigenvalues, eigenvectors, diagonalization
- Finding eigenvalues and eigenvectors
- Eigenspaces
- Eigenvalues are related to the determinant and the trace of a matrix
- Diagonalizing a diagonalizable matrix
- Conditions for diagonalizability
- Diagonalization as a change of basis, similarity
- Geometric effect of a linear transformation via diagonalization

Outline
1. More on Linear Transformations
2. Eigenvalues, Eigenvectors, Diagonalization
3. Applications of Diagonalization

Uses of Diagonalization
- finding powers of matrices
- solving systems of simultaneous linear difference equations
- Markov chains
- systems of differential equations

Powers of Matrices
A^n = A A A ... A (n times)
If we can write P^{-1}AP = D, then A = PDP^{-1} and
A^n = (PDP^{-1})(PDP^{-1})(PDP^{-1}) ... (PDP^{-1})   (n factors)
    = PD(P^{-1}P)D(P^{-1}P)D(P^{-1}P) ... DP^{-1}
    = P (D D D ... D) P^{-1}
    = P D^n P^{-1}
This gives a closed formula to calculate the powers of a matrix, since D^n is simply diag(λ_1^n, ..., λ_n^n).
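
A minimal numpy sketch of the closed formula, compared against repeated multiplication (illustrative; n = 8 is an arbitrary choice):

```python
import numpy as np

# A^n = P D^n P^{-1}, with A, P, D from the example earlier in the lecture.
A = np.array([[7.0, -15.0],
              [2.0, -4.0]])
P = np.array([[5.0, 3.0],
              [2.0, 1.0]])
D = np.diag([1.0, 2.0])

n = 8
A_n = P @ (D ** n) @ np.linalg.inv(P)    # D**n is entrywise, stays diagonal
print(np.allclose(A_n, np.linalg.matrix_power(A, n)))  # True
print(np.round(A_n))
```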

Difference Equations
A difference equation is an equation linking the terms of a sequence to previous terms, e.g.:
x_{t+1} = 5x_t - 1
is a first-order difference equation.
- A first-order difference equation is fully determined if we know the first term of the sequence (the initial condition).
- A solution is an expression for the terms x_t as a function of t:
x_{t+1} = a x_t   =>   x_t = a^t x_0

System of Difference Equations
Suppose the sequences x_t and y_t are related as follows: x_0 = 1, y_0 = 1 and, for t >= 0,
x_{t+1} = 7x_t - 15y_t
y_{t+1} = 2x_t - 4y_t
This is a coupled system of difference equations. Let
x_t = [x_t; y_t]
Then x_{t+1} = A x_t, with x_0 = [1, 1]^T and
A = [7 -15; 2 -4]
Then:
x_1 = A x_0
x_2 = A x_1 = A(A x_0) = A^2 x_0
x_3 = A x_2 = A(A^2 x_0) = A^3 x_0
...
x_t = A^t x_0
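
The closed form can be checked against direct iteration (illustrative numpy sketch; t = 10 is arbitrary):

```python
import numpy as np

# x_t = A^t x_0 = P D^t P^{-1} x_0 for the coupled system above.
A = np.array([[7.0, -15.0],
              [2.0, -4.0]])
P = np.array([[5.0, 3.0],
              [2.0, 1.0]])            # columns: eigenvectors for 1 and 2
x0 = np.array([1.0, 1.0])

t = 10
x_closed = P @ np.diag([1.0 ** t, 2.0 ** t]) @ np.linalg.inv(P) @ x0

x_iter = x0
for _ in range(t):                    # direct iteration for comparison
    x_iter = A @ x_iter

print(np.allclose(x_closed, x_iter))  # True
print(x_closed)                       # [-9206. -3068.]
```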

Markov Chains
Suppose two supermarkets compete for customers in a region with 20000 shoppers. Assume no shopper goes to both supermarkets in a week. The table gives the probability that a shopper changes from one supermarket to another in a week:

          From A   From B   From none
To A       0.70     0.15     0.30
To B       0.20     0.80     0.20
To none    0.10     0.05     0.50

(note that the probabilities in each column add up to 1)
Suppose that at the end of week 0 it is known that 10000 shoppers went to A, 8000 to B, and 2000 to none. Can we predict the number of shoppers at each supermarket in any future week t? And the long-term distribution?

Formulation as a system of difference equations: let x_t, y_t, z_t be the fractions of shoppers going to supermarket A, to B, or to none in week t; then we have the difference equation
x_t = A x_{t-1},   A = [0.70 0.15 0.30; 0.20 0.80 0.20; 0.10 0.05 0.50],   x_t = [x_t; y_t; z_t]
A Markov chain (or process) is a closed system of a fixed population distributed into n different states, transitioning between the states during specific time intervals. The transition probabilities are known and collected in a transition matrix A (all coefficients non-negative, and the entries in each column sum to 1); the entries of the state vector x_t sum to 1.

A solution is given by (assuming A is diagonalizable):
x_t = A^t x_0 = (P D^t P^{-1}) x_0
Let x_0 = P z_0, with z_0 = P^{-1} x_0 = [b_1 b_2 ... b_n]^T the representation of x_0 in the basis of eigenvectors; then:
x_t = P D^t P^{-1} x_0 = b_1 λ_1^t v_1 + b_2 λ_2^t v_2 + ... + b_n λ_n^t v_n
In our example the eigenvalues are 1, 0.6 and 0.4:
x_t = b_1 (1)^t v_1 + b_2 (0.6)^t v_2 + b_3 (0.4)^t v_3
Since lim_{t->inf} 1^t = 1 and lim_{t->inf} 0.6^t = lim_{t->inf} 0.4^t = 0, the long-term distribution is
q = b_1 v_1 = 0.125 [3; 4; 1] = [0.375; 0.500; 0.125]
Theorem: if A is the transition matrix of a regular Markov chain, then λ = 1 is an eigenvalue of multiplicity 1 and all other eigenvalues satisfy |λ| < 1.
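
A numerical check of the long-term behaviour (illustrative numpy sketch; the 50 iterations are an arbitrary cutoff by which 0.6^t and 0.4^t are negligible):

```python
import numpy as np

# Long-term distribution: the eigenvector for eigenvalue 1,
# rescaled so that its entries sum to 1.
A = np.array([[0.70, 0.15, 0.30],
              [0.20, 0.80, 0.20],
              [0.10, 0.05, 0.50]])

vals, vecs = np.linalg.eig(A)
i = np.argmin(np.abs(vals - 1.0))    # index of the eigenvalue 1
q = np.real(vecs[:, i])
q = q / q.sum()
print(np.round(np.real(vals), 2))    # [1.  0.6 0.4] (order may vary)
print(q)                             # [0.375 0.5   0.125]

x = np.array([0.5, 0.4, 0.1])        # week-0 shares: 10000/8000/2000 of 20000
for _ in range(50):                  # iterate x_t = A x_{t-1}
    x = A @ x
print(np.round(x, 3))                # converges to q
```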