Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems


Peter J. Hammond, University of Warwick, EC9A0 Maths for Economists
Latest revision 2017 September 24th; typeset from diffeqslides17c.tex

Lecture Outline

Systems of Linear Difference Equations
Complementary, Particular, and General Solutions
Constant Coefficient Matrix
Some Particular Solutions
Diagonalizing a Non-Symmetric Matrix
Uncoupling via Diagonalization
Stability of Linear Systems
Stability of Non-Linear Systems

Systems of Linear Difference Equations

Many empirical economic models involve simultaneous time series for several different variables. Consider a first-order linear difference equation $x_{t+1} - A_t x_t = f_t$ for an $n$-dimensional process $\mathbb{T} \ni t \mapsto x_t \in \mathbb{R}^n$, where each matrix $A_t$ is $n \times n$. We will prove by induction on $t$ that for $t = 0, 1, 2, \ldots$ there exist suitable $n \times n$ matrices $P_{t,k}$ ($k = 0, 1, \ldots, t$) such that, given any possible value of the initial state vector $x_0$ and of the forcing terms $f_t$ ($t = 0, 1, 2, \ldots$), the unique solution can be expressed as
$$x_t = P_{t,0} x_0 + \sum_{k=1}^{t} P_{t,k} f_{k-1}$$
The proof will also involve deriving a recurrence relation for these matrices.
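Before deriving the closed form, the system is easy to explore numerically. The following is a minimal Python sketch, not part of the original notes, that simulates $x_{t+1} = A_t x_t + f_t$ by direct iteration; the random matrices $A_t$, forcing terms $f_t$, and dimensions are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 10

# Arbitrary time-varying coefficient matrices A_t and forcing terms f_t.
A = [0.5 * rng.standard_normal((n, n)) for _ in range(T)]
f = [rng.standard_normal(n) for _ in range(T)]

x = rng.standard_normal(n)   # initial state x_0
path = [x]
for t in range(T):           # iterate x_{t+1} = A_t x_t + f_t
    x = A[t] @ x + f[t]
    path.append(x)

print(np.round(path[-1], 3))  # the state x_T
```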

Early Terms of the Matrix Solution

Because $x_0 = P_{0,0} x_0$, the first term is obviously $P_{0,0} = I$ when $t = 0$. Next, $x_1 = A_0 x_0 + f_0$ when $t = 1$ implies that $P_{1,0} = A_0$ and $P_{1,1} = I$. Next, the solution for $t = 2$ is
$$x_2 = A_1 x_1 + f_1 = A_1 A_0 x_0 + A_1 f_0 + f_1$$
This matches the formula $x_t = P_{t,0} x_0 + \sum_{k=1}^{t} P_{t,k} f_{k-1}$ when $t = 2$ provided that $P_{2,0} = A_1 A_0$, $P_{2,1} = A_1$, and $P_{2,2} = I$.

Matrix Solution

Now, substituting the two expansions $x_t = P_{t,0} x_0 + \sum_{k=1}^{t} P_{t,k} f_{k-1}$ and $x_{t+1} = P_{t+1,0} x_0 + \sum_{k=1}^{t+1} P_{t+1,k} f_{k-1}$ into both sides of the original equation $x_{t+1} = A_t x_t + f_t$ gives
$$P_{t+1,0} x_0 + \sum_{k=1}^{t+1} P_{t+1,k} f_{k-1} = A_t \left( P_{t,0} x_0 + \sum_{k=1}^{t} P_{t,k} f_{k-1} \right) + f_t$$
Equating the matrix coefficients of $x_0$ and of each $f_{k-1}$ in this equation implies that for general $t$ one has
$$P_{t+1,k} = A_t P_{t,k} \quad \text{for } k = 0, 1, \ldots, t, \qquad \text{with } P_{t+1,t+1} = I$$

Matrix Solution, II

The equation $P_{t+1,k} = A_t P_{t,k}$ for $k = 0, 1, \ldots, t$ implies that
$$P_{t,0} = A_{t-1} A_{t-2} \cdots A_0 \quad \text{when } k = 0$$
$$P_{t,k} = A_{t-1} A_{t-2} \cdots A_k \quad \text{when } k = 1, 2, \ldots, t$$
or, after defining the product of the empty set of matrices as $I$,
$$P_{t,k} = \prod_{s=1}^{t-k} A_{t-s}$$
Inserting these into our formula $x_t = P_{t,0} x_0 + \sum_{k=1}^{t} P_{t,k} f_{k-1}$ implies that
$$x_t = \left( \prod_{s=1}^{t} A_{t-s} \right) x_0 + \sum_{k=1}^{t} \left( \prod_{s=1}^{t-k} A_{t-s} \right) f_{k-1}$$
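As a sanity check on this formula, the following sketch (an illustration, not from the notes) compares direct iteration with the product-form solution, using the convention that $P_{t,k} = A_{t-1} A_{t-2} \cdots A_k$ and that the empty product is $I$; all data are arbitrary.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)
n, t = 3, 6
A = [rng.standard_normal((n, n)) for _ in range(t)]
f = [rng.standard_normal(n) for _ in range(t)]
x0 = rng.standard_normal(n)

# Direct iteration of x_{s+1} = A_s x_s + f_s.
x = x0
for s in range(t):
    x = A[s] @ x + f[s]

# Closed form: P_{t,k} = A_{t-1} ... A_k, with the empty product equal to I.
def P(k):
    return reduce(lambda M, s: M @ A[s], range(t - 1, k - 1, -1), np.eye(n))

x_closed = P(0) @ x0 + sum(P(k) @ f[k - 1] for k in range(1, t + 1))
assert np.allclose(x, x_closed)
```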


Complementary Solutions to the Homogeneous Equation

We are considering the general first-order linear difference equation $x_{t+1} - A_t x_t = f_t$ in $\mathbb{R}^n$, where each $A_t$ is an $n \times n$ matrix. The associated homogeneous equation takes the form
$$x_t - A_t x_{t-1} = 0 \quad (\text{for all } t \in \mathbb{N})$$
Its general solution is the $n$-dimensional linear subspace of functions $\mathbb{N} \ni t \mapsto x_t \in \mathbb{R}^n$ satisfying
$$x_t = \left( \prod_{s=1}^{t} A_s \right) x_0 \quad (\text{for all } t \in \mathbb{N})$$
where $x_0 \in \mathbb{R}^n$ is an arbitrary constant vector, and the product is taken in the order $A_t A_{t-1} \cdots A_1$ because the matrices need not commute.

From Particular to General Solutions

The homogeneous equation takes the form $x_{t+1} - A_t x_t = 0$. An associated inhomogeneous equation takes the form $x_{t+1} - A_t x_t = f_t$ for a general vector forcing term $f_t \in \mathbb{R}^n$. Let $x^P_t$ denote a particular solution of the inhomogeneous equation, and $x^G_t$ any alternative general solution of the same equation. Our assumptions imply that, for each $t = 1, 2, \ldots$, one has
$$x^P_{t+1} - A_t x^P_t = f_t \quad \text{and} \quad x^G_{t+1} - A_t x^G_t = f_t$$
Subtracting the first equation from the second implies that
$$x^G_{t+1} - x^P_{t+1} - A_t (x^G_t - x^P_t) = 0$$
This shows that $x^H_t := x^G_t - x^P_t$ solves the homogeneous equation.

Characterizing the General Solution

So the general solution $x^G_t$ of the inhomogeneous equation $x_{t+1} - A_t x_t = f_t$ with forcing term $f_t$ is the sum $x^P_t + x^H_t$ of:
any particular solution $x^P_t$ of the inhomogeneous equation;
the general complementary solution $x^H_t$ of the homogeneous equation $x_{t+1} - A_t x_t = 0$.

Linearity in the Forcing Term

Theorem. Suppose that $x^P_t$ and $y^P_t$ are particular solutions of the two respective difference equations
$$x_{t+1} - A_t x_t = d_t \quad \text{and} \quad y_{t+1} - A_t y_t = e_t$$
Then, for any scalars $\alpha$ and $\beta$, the linear combination $z^P_t := \alpha x^P_t + \beta y^P_t$ is a particular solution of the equation $z_{t+1} - A_t z_t = \alpha d_t + \beta e_t$.

This can be proved by routine algebra. Consider any equation of the form $x_{t+1} - A_t x_t = f_t$ whose right-hand side is a linear combination $f_t = \sum_{k=1}^{n} \alpha_k f^k_t$ of the $n$ forcing vectors $(f^1_t, \ldots, f^n_t)$. The theorem implies that a particular solution is the corresponding linear combination $x^P_t = \sum_{k=1}^{n} \alpha_k x^{P_k}_t$ of particular solutions to the $n$ equations $x_{t+1} - A_t x_t = f^k_t$.


The Autonomous Case

The general first-order equation in $\mathbb{R}^n$ can be written as $x_{t+1} = F_t(x_t)$ where $\mathbb{T} \times \mathbb{R}^n \ni (t, x) \mapsto F_t(x) \in \mathbb{R}^n$. In the autonomous case, the function $(t, x) \mapsto F_t(x)$ reduces to $x \mapsto F(x)$, independent of $t$. In the linear case with constant coefficients, the function $x \mapsto F(x)$ takes the affine form $F(x) = Ax + f$. That is, $x_{t+1} = A x_t + f$. In our previous formula, products like $\prod_{s=1}^{t-k} A_{t-s}$ reduce to powers $A^{t-k}$. Specifically, $P_{t,k} = A^{t-k}$, where $A^0 = I$. The solution to $x_{t+1} = A x_t + f$ is therefore
$$x_t = A^t x_0 + \sum_{k=1}^{t} A^{t-k} f$$

Summing the Geometric Series

Recall that the trick for finding $s_t := 1 + a + a^2 + \cdots + a^{t-1}$ is to multiply each side by $1 - a$. Because all terms except the first and last cancel, this trick yields the equation $(1 - a) s_t = 1 - a^t$. Hence $s_t = (1 - a)^{-1} (1 - a^t)$ provided that $a \neq 1$. Applying the same trick to $S_t := I + A + A^2 + \cdots + A^{t-1}$ yields the two matrix equations
$$(I - A) S_t = I - A^t = S_t (I - A)$$
Provided that $(I - A)^{-1}$ exists, we can pre-multiply $(I - A) S_t = I - A^t$ and post-multiply $I - A^t = S_t (I - A)$ on each side by this inverse to get
$$S_t = (I - A)^{-1} (I - A^t) = (I - A^t)(I - A)^{-1}$$
This leads to the solution
$$x_t = A^t x_0 + S_t f = A^t x_0 + (I - A^t)(I - A)^{-1} f$$
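A quick numerical check of this closed form, assuming $I - A$ is invertible; this sketch is not from the notes, and the matrix and vectors are arbitrary test data.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 3, 8
A = 0.4 * rng.standard_normal((n, n))   # scaled so I - A is safely invertible
f = rng.standard_normal(n)
x0 = rng.standard_normal(n)

# Iterate x_{s+1} = A x_s + f.
x = x0
for _ in range(t):
    x = A @ x + f

# Closed form x_t = A^t x_0 + (I - A^t)(I - A)^{-1} f.
At = np.linalg.matrix_power(A, t)
x_closed = At @ x0 + (np.eye(n) - At) @ np.linalg.solve(np.eye(n) - A, f)
assert np.allclose(x, x_closed)
```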


First-Order Linear Equation with a Constant Matrix

Recall that the solution to the general first-order linear equation $x_t - A_t x_{t-1} = f_t$ takes the form
$$x_t = \left( \prod_{s=1}^{t} A_{t-s} \right) x_0 + \sum_{k=1}^{t} \left( \prod_{s=1}^{t-k} A_{t-s} \right) f_{k-1}$$
From now on, we restrict attention to a constant coefficient matrix $A_t = A$, independent of $t$. Then the solution reduces to
$$x_t = A^t x_0 + \sum_{s=1}^{t} A^{t-s} f_{s-1}$$
Indeed, this is easily verified by induction. One particular solution, of course, comes from taking $x_0 = 0$, implying that
$$x_t = \sum_{s=1}^{t} A^{t-s} f_{s-1}$$
Now we will start to analyse this particular solution for some special forcing terms $f_t$.

Special Case

The special case we consider is when there exists a fixed vector $f_0 \in \mathbb{R}^n \setminus \{0\}$ such that $x_t - A x_{t-1} = f_t = \mu^t f_0$ for the discrete exponential or power function $\mathbb{Z}_+ \ni t \mapsto \mu^t$. Then the particular solution satisfying $x_0 = 0$ is $x_t = S_t f_0$, where $S_t := \sum_{k=1}^{t} \mu^{k-1} A^{t-k}$. Note that
$$S_t (A - \mu I) = \sum_{k=1}^{t} \left( \mu^{k-1} A^{t-k+1} - \mu^k A^{t-k} \right) = A^t - \mu^t I$$
because the sum telescopes. In the non-degenerate case when $\mu$ is not an eigenvalue of $A$, so $A - \mu I$ is non-singular, it follows that
$$S_t = (A^t - \mu^t I)(A - \mu I)^{-1}$$
Then the particular solution we are looking for takes the form $x^P_t = (A^t - \mu^t I) \bar{f}$ for the particular fixed vector $\bar{f} := (A - \mu I)^{-1} f_0$.
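The following sketch (illustrative, not from the notes) verifies this particular solution numerically. Note the timing convention: to match the solution formula $x_t = \sum_{s=1}^{t} A^{t-s} f_{s-1}$, the forcing term entering at step $s$ is $\mu^{s-1} f_0$. All numerical values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, t, mu = 3, 7, 0.9
A = rng.standard_normal((n, n))
f0 = rng.standard_normal(n)

# Particular solution by direct iteration from x_0 = 0:
# x_s = A x_{s-1} + mu^{s-1} f0 for s = 1, ..., t.
x = np.zeros(n)
for s in range(1, t + 1):
    x = A @ x + mu**(s - 1) * f0

# Closed form x_t^P = (A^t - mu^t I)(A - mu I)^{-1} f0,
# valid when mu is not an eigenvalue of A (true almost surely here).
I = np.eye(n)
At = np.linalg.matrix_power(A, t)
fbar = np.linalg.solve(A - mu * I, f0)
assert np.allclose(x, (At - mu**t * I) @ fbar)
```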


Characteristic Roots and Eigenvalues

Recall the characteristic equation $|A - \lambda I| = 0$. It is a polynomial equation of degree $n$ in the unknown scalar $\lambda$. By the fundamental theorem of algebra, it has a set $\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ of $n$ characteristic roots, some of which may be repeated. These roots may be real, or appear in conjugate pairs $\lambda = \alpha \pm i\beta \in \mathbb{C}$ where $\alpha, \beta \in \mathbb{R}$. Because the $\lambda_i$ are characteristic roots, one has
$$|A - \lambda I| = \prod_{i=1}^{n} (\lambda_i - \lambda)$$
When $\lambda$ solves $|A - \lambda I| = 0$, there is a non-trivial eigenspace $E_\lambda$ of eigenvectors $x \neq 0$ that solve the equation $Ax = \lambda x$. Then $\lambda$ is an eigenvalue.

Linearly Independent Eigenvectors

In the matrix algebra lectures, we proved this result:

Theorem. Let $A$ be an $n \times n$ matrix, with a collection $\lambda_1, \lambda_2, \ldots, \lambda_m$ of $m \leq n$ distinct eigenvalues. Suppose the non-zero vectors $u^1, u^2, \ldots, u^m$ in $\mathbb{C}^n$ are corresponding eigenvectors satisfying
$$A u^k = \lambda_k u^k \quad \text{for } k = 1, 2, \ldots, m$$
Then the set $\{u^1, u^2, \ldots, u^m\}$ must be linearly independent.

We also discussed similar and diagonalizable matrices.

An Eigenvector Matrix

Suppose the $n \times n$ matrix $A$ has the maximum possible number $n$ of linearly independent eigenvectors, namely $\{u^j\}_{j=1}^n$. A sufficient, but not necessary, condition for this is that $|A - \lambda I| = 0$ has $n$ distinct characteristic roots. Define the $n \times n$ eigenvector matrix $V = (u^j)_{j=1}^n$ whose columns are the linearly independent eigenvectors. By definition of eigenvalue and eigenvector, for $j = 1, 2, \ldots, n$ one has $A u^j = \lambda_j u^j$. The $j$th column of the $n \times n$ matrix $AV$ is $A u^j$, which equals $\lambda_j u^j$. But with $\Lambda := \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$, the elements of $\Lambda$ satisfy $(\Lambda)_{kj} = \delta_{kj} \lambda_j$. So the elements of $V\Lambda$ satisfy
$$(V\Lambda)_{ij} = \sum_{k=1}^{n} (V)_{ik} \delta_{kj} \lambda_j = (V)_{ij} \lambda_j = \lambda_j (u^j)_i = (A u^j)_i$$
It follows that $AV = V\Lambda$ because all the elements are equal.

Diagonalization

Recall the hypothesis that the $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors $\{u^j\}_{j=1}^n$. So the eigenvector matrix $V$ is invertible. We proved on the last slide that $AV = V\Lambda$. Pre-multiplying this equation by $V^{-1}$ yields $V^{-1} A V = \Lambda$, which gives a diagonalization of $A$. Furthermore, post-multiplying $AV = V\Lambda$ by the inverse matrix $V^{-1}$ yields $A = V \Lambda V^{-1}$. This is a decomposition of $A$ into the product of:
1. the eigenvector matrix $V$;
2. the diagonal eigenvalue matrix $\Lambda$;
3. the inverse eigenvector matrix $V^{-1}$.
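In practice one can obtain $V$ and $\Lambda$ numerically, for instance with numpy.linalg.eig, whose returned matrix of eigenvector columns plays the role of $V$. A minimal sketch with an arbitrarily chosen non-symmetric matrix:

```python
import numpy as np

# An arbitrary non-symmetric matrix with distinct eigenvalues 1 and 2.
A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])

lam, V = np.linalg.eig(A)       # columns of V are eigenvectors
Lam = np.diag(lam)

# Verify AV = V Lambda and the decomposition A = V Lambda V^{-1}.
assert np.allclose(A @ V, V @ Lam)
assert np.allclose(V @ Lam @ np.linalg.inv(V), A)
```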

A Non-Diagonalizable 2 × 2 Matrix

Example. The non-symmetric matrix $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ cannot be diagonalized. Its characteristic equation is
$$0 = |A - \lambda I| = \begin{vmatrix} -\lambda & 1 \\ 0 & -\lambda \end{vmatrix} = \lambda^2$$
It follows that $\lambda = 0$ is the unique eigenvalue. The eigenvalue equation is
$$\mathbf{0} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ 0 \end{pmatrix}$$
or $x_2 = 0$, whose only solutions take the form $x_1 (1, 0)^\top$. Thus, every eigenvector is a non-zero multiple of the column vector $(1, 0)^\top$. This makes it impossible to find any set of two linearly independent eigenvectors.

A Non-Diagonalizable n × n Matrix: Specification

The following $n \times n$ matrix also has a unique eigenvalue, whose eigenspace is of dimension 1.

Example. Consider the non-symmetric $n \times n$ matrix $A$ whose elements in the first $n - 1$ rows satisfy $a_{ij} = \delta_{i,j-1}$ for $i = 1, 2, \ldots, n - 1$, but whose last row is $\mathbf{0}^\top$. Such a matrix is upper triangular, and takes the special form
$$A = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix} = \begin{pmatrix} \mathbf{0} & I_{n-1} \\ 0 & \mathbf{0}^\top \end{pmatrix}$$
in which the elements in the first $n - 1$ rows and last $n - 1$ columns make up the identity matrix.

A Non-Diagonalizable n × n Matrix: Analysis

Because $A - \lambda I$ is also upper triangular, its characteristic equation is $0 = |A - \lambda I| = (-\lambda)^n$. This has $\lambda = 0$ as an $n$-fold repeated root, so $\lambda = 0$ is the unique eigenvalue. The eigenvalue equation $Ax = \lambda x$ with $\lambda = 0$ takes the form $Ax = 0$, or
$$0 = \sum_{j=1}^{n} \delta_{i,j-1} x_j = x_{i+1} \quad (i = 1, 2, \ldots, n - 1)$$
with an extra $n$th equation of the form $0 = 0$. The only solutions take the form $x_j = 0$ for $j = 2, \ldots, n$, with $x_1$ arbitrary. So all the eigenvectors of $A$ are non-zero multiples of $e_1 = (1, 0, \ldots, 0)^\top$, implying that there is just one eigenspace, which has dimension 1.
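Numerical eigensolvers expose this defectiveness: they still return $n$ eigenvector columns, but those columns are linearly dependent. A sketch (not from the notes) for the shift matrix above with $n = 4$:

```python
import numpy as np

# The shift matrix from the example, with n = 4.
n = 4
A = np.diag(np.ones(n - 1), k=1)   # ones on the first superdiagonal

lam, V = np.linalg.eig(A)
print(lam)                          # all eigenvalues are 0 (an n-fold root)

# The returned eigenvector matrix cannot have n independent columns,
# because the only eigenspace is the one-dimensional span of e_1.
assert np.linalg.matrix_rank(V) < n
```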


Uncoupling via Diagonalization

Consider the matrix difference equation $x_t = A x_{t-1} + f_t$ for $t = 1, 2, \ldots$, with $x_0$ given. The extra forcing term $f_t$ makes the equation inhomogeneous (unless $f_t = 0$ for all $t$). Consider the case when the $n \times n$ matrix $A$ has $n$ distinct eigenvalues, or at least a set of $n$ linearly independent eigenvectors making up the columns of an invertible eigenvector matrix $V$. Define a new vector $y_t = V^{-1} x_t$ for each $t$. This new vector satisfies the transformed matrix difference equation
$$y_t = V^{-1} x_t = V^{-1} (A x_{t-1} + f_t) = V^{-1} A V y_{t-1} + e_t$$
where $e_t$ denotes the transformed forcing term $V^{-1} f_t$. The diagonalization $V^{-1} A V = \Lambda$ reduces this equation to the uncoupled matrix difference equation
$$y_t = \Lambda y_{t-1} + e_t$$
with initial condition $y_0 = V^{-1} x_0$.

Transforming the Uncoupled Equations

Consider the uncoupled matrix difference equation $y_t = \Lambda y_{t-1} + e_t$ where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. Note that, if there is any $i$ for which $\lambda_i = 0$, then the solution $y_t = (y_{ti})_{i=1}^n$ must satisfy $y_{ti} = e_{ti}$ for all $t = 1, 2, \ldots$. So we eliminate all $i$ such that $\lambda_i = 0$, and assume from now on that $\lambda_i \neq 0$ for all $i$. This assumption allows us to define the transformed vector $z_t := \Lambda^{-t} y_t$, where
$$\Lambda^{-t} = [\mathrm{diag}(\lambda_1, \ldots, \lambda_n)]^{-t} = \mathrm{diag}(\lambda_1^{-t}, \ldots, \lambda_n^{-t}) = (\Lambda^{-1})^t$$
With this transformation, evidently
$$z_t = \Lambda^{-t} y_t = \Lambda^{-t} (\Lambda y_{t-1} + e_t) = \Lambda^{-(t-1)} y_{t-1} + \Lambda^{-t} e_t = z_{t-1} + w_t$$
where $w_t$ is the transformed forcing term $\Lambda^{-t} e_t$.

The Decoupled Solution

The solution of $z_t = z_{t-1} + w_t$ is obviously $z_t = z_0 + \sum_{s=1}^{t} w_s$. Inverting the previous transformation $z_t = \Lambda^{-t} y_t$, we see that
$$y_t = \Lambda^t z_t = \Lambda^t \left( z_0 + \sum_{s=1}^{t} w_s \right)$$
But $z_0 = y_0$ and $w_s = \Lambda^{-s} e_s$, so one has
$$y_t = \Lambda^t y_0 + \sum_{s=1}^{t} \Lambda^{t-s} e_s$$
Now, each power $\Lambda^k$ is the diagonal matrix $\mathrm{diag}(\lambda_1^k, \ldots, \lambda_n^k)$. So, for each separate component $y_{ti}$ of $y_t$ and corresponding component $e_{si}$ of $e_s$, this solution can be written in the obviously uncoupled form
$$y_{ti} = (\lambda_i)^t y_{0i} + \sum_{s=1}^{t} (\lambda_i)^{t-s} e_{si} \quad (\text{for } i = 1, 2, \ldots, n)$$

The Recoupled Solution

Finally, inverting also the previous transformation $y_t = V^{-1} x_t$, while noting that $e_s = V^{-1} f_s$, one has
$$x_t = V y_t = V \Lambda^t V^{-1} x_0 + \sum_{s=1}^{t} V \Lambda^{t-s} V^{-1} f_s$$
as the solution of the original equation $x_t = V \Lambda V^{-1} x_{t-1} + f_t$.
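The following sketch (illustrative only, not from the notes) checks the recoupled solution against direct iteration for a small diagonalizable matrix; the matrix, forcing terms, and initial state are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])    # diagonalizable, with eigenvalues 1 and 2
t = 5
fs = [rng.standard_normal(2) for _ in range(t)]   # forcing terms f_1..f_t
x0 = rng.standard_normal(2)

# Direct iteration of x_s = A x_{s-1} + f_s for s = 1..t.
x = x0
for s in range(1, t + 1):
    x = A @ x + fs[s - 1]

# Recoupled solution x_t = V L^t V^{-1} x_0 + sum_s V L^{t-s} V^{-1} f_s.
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
x_closed = V @ np.diag(lam**t) @ Vinv @ x0
for s in range(1, t + 1):
    x_closed = x_closed + V @ np.diag(lam**(t - s)) @ Vinv @ fs[s - 1]

assert np.allclose(x, x_closed)
```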


Stationary States

Given an autonomous equation $x_{t+1} = F(x_t)$, a stationary state is a fixed point $x^* \in \mathbb{R}^n$ of the mapping $\mathbb{R}^n \ni x \mapsto F(x) \in \mathbb{R}^n$. It earns its name because if $x_s = x^*$ for any finite $s$, then $x_t = x^*$ for all $t = s, s + 1, \ldots$. Wherever it exists, the solution of the autonomous equation can be written as a function $x_t = \Phi^{t-s}(x_s)$ ($t = s, s + 1, \ldots$) of the state $x_s$ at time $s$, as well as of the number of periods $t - s$ that the function $F$ must be iterated in order to determine the state $x_t$ at time $t$. Indeed, the sequence of functions $\Phi^k : \mathbb{R}^n \to \mathbb{R}^n$ ($k \in \mathbb{N}$) is defined iteratively by $\Phi^k(x) = F(\Phi^{k-1}(x))$ for all $x$. Note that any stationary state $x^*$ is a fixed point of each mapping $\Phi^k$ in the sequence, as well as of $\Phi^1 \equiv F$.

Local and Global Stability

The stationary state $x^*$ is:
globally stable if $\Phi^k(x_0) \to x^*$ as $k \to \infty$, regardless of the initial state $x_0$;
locally stable if there is an (open) neighbourhood $N \subseteq \mathbb{R}^n$ of $x^*$ such that whenever $x_0 \in N$ one has $\Phi^k(x_0) \to x^*$ as $k \to \infty$.

We begin by studying linear systems, for which local stability is equivalent to global stability. Later, we will consider the local stability of non-linear systems.

Stability in the Linear Case

Recall that the autonomous linear equation takes the form $x_{t+1} = A x_t + d$. The vector $x^* \in \mathbb{R}^n$ is a stationary state if and only if $x_t = x^* \implies x_{t+1} = x^*$, which is true if and only if $x^* = A x^* + d$, or iff $x^*$ solves the linear equation $(I - A) x^* = d$. Of course, if the matrix $I - A$ is singular, then there could be either no stationary state, or a continuum of stationary states. For simplicity, we assume that $I - A$ has an inverse. Then there is a unique stationary state $x^* = (I - A)^{-1} d$.

Homogenizing the Linear Equation

Given the equation $x_{t+1} = A x_t + d$ and the stationary state $x^* = (I - A)^{-1} d$, define the new state as the deviation $y := x - x^*$ of the state $x$ from the stationary state $x^*$. This transforms the original equation $x_{t+1} = A x_t + d$ to
$$y_{t+1} + x^* = A (y_t + x^*) + d = A y_t + A x^* + d$$
Because the stationary state satisfies $x^* = A x^* + d$, this reduces the original equation to the homogeneous equation $y_{t+1} = A y_t$, whose obvious solution is $y_t = A^t y_0$.

Stability in the Diagonal Case

Suppose that $A$ is the diagonal matrix $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$. Then the powers are easy: $A^t = \Lambda^t = \mathrm{diag}(\lambda_1^t, \lambda_2^t, \ldots, \lambda_n^t)$. The homogenized vector equation $y_t = A y_{t-1}$ can be expressed component by component as the set
$$y_{i,t} = \lambda_i y_{i,t-1} \quad (i = 1, 2, \ldots, n)$$
of $n$ uncoupled difference equations in one variable. The solution of $y_t = A y_{t-1}$ with $y_0 = z = (z_1, z_2, \ldots, z_n)$ is then $y_t = (\lambda_1^t z_1, \lambda_2^t z_2, \ldots, \lambda_n^t z_n)$. Hence $y_t \to 0$ holds for all $y_0$ if and only if, for $i = 1, 2, \ldots, n$, the modulus $|\lambda_i|$ of each root $\lambda_i$ satisfies $|\lambda_i| < 1$. Recall that when $\lambda = \alpha \pm i\beta$, the modulus is $|\lambda| := \sqrt{\alpha^2 + \beta^2}$.

Warning Example

Consider the $2 \times 2$ matrix $A = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & 2 \end{pmatrix}$. The solution of the difference equation $y_t = A y_{t-1}$ with $y_0 = z = (z_1, z_2)$ is then
$$y_t = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & 2 \end{pmatrix}^t \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 2^{-t} & 0 \\ 0 & 2^t \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 2^{-t} z_1 \\ 2^t z_2 \end{pmatrix}$$
Then $y_t \to 0$ as $t \to \infty$ provided that $z_2 = 0$. But the norm $\|y_t\| \to +\infty$ whenever $z_2 \neq 0$. In this case one says that $A$ exhibits saddle point stability, because starting with $z_2 = 0$ allows convergence, but starting with $z_2 \neq 0$ ensures divergence. This explains why one says that the $n \times n$ matrix $A$ is stable just in case $A^t y \to 0$ for all $y \in \mathbb{R}^n$. The Fibonacci equation $x_{t+1} = x_t + x_{t-1}$ also exhibits saddle point stability.

A Condition for Stability

The solution $y_t = A^t y_0$ of the homogeneous equation $y_{t+1} = A y_t$ is globally stable just in case $A^t y_0 \to 0$ (equivalently, $\|A^t y_0\| \to 0$) as $t \to \infty$, regardless of $y_0$. This holds if and only if $A^t \to 0_{n \times n}$ in the sense that all $n^2$ elements of the $n \times n$ matrix $A^t$ converge to 0 as $t \to \infty$. In case the matrix $A$ is the diagonal matrix $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$, stability holds if and only if $|\lambda_i| < 1$ for $i = 1, 2, \ldots, n$. Suppose the matrix $A$ is the diagonalizable matrix $V \Lambda V^{-1}$, where $V$ is a matrix of linearly independent eigenvectors, and the diagonal elements of the diagonal matrix $\Lambda$ are eigenvalues. Then $A^t = V \Lambda^t V^{-1} \to 0$ if and only if $\Lambda^t \to 0$, which is true if and only if $|\lambda_i| < 1$ for $i = 1, 2, \ldots, n$.

The Classic Stability Condition

Definition. The $n \times n$ matrix $A$ is stable just in case $A^t$ converges element by element to the zero matrix $0_{n \times n}$ as $t \to \infty$.

Theorem. The $n \times n$ matrix $A$ is stable if and only if each of its eigenvalues $\lambda$ (real or complex) has modulus $|\lambda| < 1$.

We have already proved this result in case $A$ is diagonalizable. The same stability condition applies even if $A$ is not diagonalizable, but for such a general matrix we will prove only necessity ("only if"). Let $\lambda$ denote the eigenvalue whose modulus $|\lambda|$ is largest, and let $x \neq 0$ be an associated eigenvector. In case $|\lambda| \geq 1$, the solution $x_t = A^t x = \lambda^t x$ satisfies $\|x_t\| = |\lambda|^t \|x\| \not\to 0$, so $A$ is unstable.
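The theorem suggests a simple numerical stability test: compute the largest eigenvalue modulus (the spectral radius) and compare it with 1. A minimal sketch, not from the notes, applied to the warning example above:

```python
import numpy as np

def is_stable(A: np.ndarray) -> bool:
    """Classic condition: every eigenvalue has modulus strictly below 1."""
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1)

# The warning example: saddle point stability, hence unstable.
A = np.array([[0.5, 0.0],
              [0.0, 2.0]])
print(is_stable(A))          # False: one eigenvalue is 2

# Scaling it down pulls every eigenvalue inside the unit circle.
print(is_stable(0.4 * A))    # True: eigenvalues 0.2 and 0.8
```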


Local Stability

Consider the autonomous non-linear system $x_{t+1} = F(x_t)$ with steady state $x^*$. Let
$$J(x^*) = F'(x^*) = \left( \frac{\partial F_i}{\partial x_j}(x^*) \right)_{ij}$$
denote the $n \times n$ Jacobian matrix of partial derivatives evaluated at the steady state $x^*$.

Theorem. Suppose that the elements of the Jacobian matrix $J(x^*)$ are continuous in a neighbourhood of the steady state $x^*$. Let $\lambda$ denote the eigenvalue of $J(x^*)$ whose modulus is largest. The system is locally stable about the steady state $x^*$:
if $|\lambda| < 1$;
only if $|\lambda| \leq 1$.

In case $|\lambda| = 1$, the system may or may not be locally stable.
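A small numerical illustration of the theorem, using a hypothetical non-linear map (not from the notes) whose steady state is the origin and whose Jacobian there is diagonal:

```python
import numpy as np

# A hypothetical non-linear map with steady state x* = (0, 0),
# chosen only to illustrate the Jacobian test.
def F(x):
    return np.array([0.5 * x[0] + 0.2 * x[1]**2,
                     0.3 * x[1] + 0.1 * x[0]**2])

# The Jacobian at the origin is diag(0.5, 0.3), so the largest
# eigenvalue modulus is 0.5 < 1: locally stable by the theorem.
J = np.array([[0.5, 0.0],
              [0.0, 0.3]])
assert np.max(np.abs(np.linalg.eigvals(J))) < 1

x = np.array([0.4, -0.3])     # start near the steady state
for _ in range(50):
    x = F(x)
print(x)                       # the iterates converge towards (0, 0)
```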

Complete Metric Spaces

Let $(X, d)$ denote any metric space.

Definition. A Cauchy sequence $(x_n)_{n \in \mathbb{N}}$ in $X$ is a sequence with the property that, for every $\epsilon > 0$, there exists an $N_\epsilon \in \mathbb{N}$ for which $m, n > N_\epsilon \implies d(x_m, x_n) < \epsilon$.

Definition. A metric space $(X, d)$ is complete just in case all its Cauchy sequences converge.

Example. Recall that one definition of the real line $\mathbb{R}$ is as the smallest complete metric space that includes the set $\mathbb{Q}$ of rational numbers, with metric given by $d(r, r') = |r - r'|$ for all $r, r' \in \mathbb{Q}$.

Global Stability: Contraction Mapping Theorem

Definition. The function $F : X \to X$ is a contraction mapping on the metric space $(X, d)$ just in case there is a positive contraction factor $K < 1$ such that $d(F(x), F(y)) \leq K\, d(x, y)$ for all $x, y \in X$.

Theorem. If $F : X \to X$ is a contraction mapping on the complete metric space $(X, d)$, then for any $x_0 \in X$ the process defined by $x_t = F(x_{t-1})$ for all $t \in \mathbb{N}$ has a unique steady state $x^* \in X$ that is globally stable.

Cauchy Sequence

Because $F : X \to X$ is a contraction mapping with contraction factor $K$, and $x_t = F(x_{t-1})$ for all $t \in \mathbb{N}$, one has
$$d(x_{t+1}, x_t) = d(F(x_t), F(x_{t-1})) \leq K\, d(x_t, x_{t-1})$$
It follows by induction on $t$ that $d(x_{t+1}, x_t) \leq K^t d(x_1, x_0)$. If $n > m$, then repeated application of the triangle inequality gives
$$d(x_m, x_n) \leq \sum_{r=1}^{n-m} d(x_{m+r-1}, x_{m+r}) \leq \sum_{r=1}^{n-m} K^{m+r-1} d(x_1, x_0) = \frac{K^m - K^n}{1 - K} d(x_1, x_0) < \frac{K^m}{1 - K} d(x_1, x_0)$$
Hence $d(x_m, x_n) < \epsilon$ provided that $K^m \leq \epsilon (1 - K)/d(x_1, x_0)$ or, since $\ln K < 0$, if $m \geq (1/\ln K)[\ln \epsilon(1 - K) - \ln d(x_1, x_0)]$. This proves that $(x_t)_{t \in \mathbb{N}}$ is a Cauchy sequence.

Completing the Proof

Because $(x_t)_{t \in \mathbb{N}}$ is a Cauchy sequence, the hypothesis that $(X, d)$ is a complete metric space implies that there is a limit point $x^* \in X$ such that $x_t \to x^*$ as $t \to \infty$. Then, by the triangle inequality and the contraction property,
$$d(F(x^*), x^*) \leq d(F(x^*), x_{t+1}) + d(x_{t+1}, x^*) \leq K\, d(x^*, x_t) + d(x_{t+1}, x^*) \to 0$$
as $t \to \infty$, implying that $d(F(x^*), x^*) = 0$. Because $(X, d)$ is a metric space, it follows that $F(x^*) = x^*$, so the limit point $x^* \in X$ is a steady state. Finally, if $\tilde{x} \in X$ is any steady state, then $d(x^*, \tilde{x}) = d(F(x^*), F(\tilde{x})) \leq K\, d(x^*, \tilde{x})$. Hence $(1 - K)\, d(x^*, \tilde{x}) \leq 0$, implying that $d(x^*, \tilde{x}) = 0$ and so $\tilde{x} = x^*$.
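To see the theorem in action, the following sketch (not from the notes) iterates a contraction on $\mathbb{R}^2$, an affine map chosen so that its Lipschitz constant is below 1, and checks convergence to the unique steady state; the specific matrix and vector are arbitrary assumptions for the demo.

```python
import numpy as np

# An affine contraction F(x) = Ax + b on R^2: since ||A|| < 1 in the
# operator norm, K = ||A|| serves as the contraction factor.
A = np.array([[0.3, 0.1],
              [0.0, 0.4]])
b = np.array([1.0, -2.0])
F = lambda x: A @ x + b

x = np.zeros(2)
for _ in range(100):            # fixed-point iteration x_t = F(x_{t-1})
    x = F(x)

# The unique steady state solves x* = A x* + b, i.e. (I - A) x* = b.
x_star = np.linalg.solve(np.eye(2) - A, b)
assert np.allclose(x, x_star)
```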