Lecture Notes 6: Dynamic Equations Part A: First-Order Difference Equations in One Variable


University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 54
Lecture Notes 6: Dynamic Equations
Part A: First-Order Difference Equations in One Variable
Peter J. Hammond
Latest revision 2017 September 23rd, plus minor corrections; typeset from diffeqslides18a.tex

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 2 of 54 Lecture Outline Introduction: Difference vs. Differential Equations First-Order Difference Equations First-Order Linear Difference Equations: Introduction General First-Order Linear Equation Particular, General, and Complementary Solutions Explicit Solution as a Sum Constant and Undetermined Coefficients Stationary States and Stability for Linear First-Order Equations Local Stability of Nonlinear First-Order Equations

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 3 of 54 Walking as a Simple Difference Equation
What is the difference between difference and differential equations? It is relatively common to indicate:
by a subscript, a discrete time function like $m \mapsto x_m$;
by parentheses, a continuous time function like $t \mapsto x(t)$.
Walking on two feet can be modelled as a discrete time process, with time domain $T = \{0, 1, 2, \dots\} = \mathbb{Z}_+ = \{0\} \cup \mathbb{N}$ that counts the number of completed steps.
After $m$ steps, the respective positions $l, r \in \mathbb{R}^2$ of the left and right feet on the ground can be described by the two functions $T \ni m \mapsto (l_m, r_m)$.

Walking as a More Complicated Difference Equation
Athletics rules limit a walking step to be no longer than a stride. So a walking process that starts with the left foot might be described by the two coupled equations
$l_m = \begin{cases} \lambda(r_{m-1}) & \text{if } m \text{ is odd} \\ l_{m-1} & \text{if } m \text{ is even} \end{cases}$ and $r_m = \begin{cases} r_{m-1} & \text{if } m \text{ is odd} \\ \rho(l_{m-1}) & \text{if } m \text{ is even} \end{cases}$ for $m = 1, 2, \dots$.
Or, if the length and direction of each pace are affected by the length and direction of its predecessor, by
$l_m = \begin{cases} \lambda(r_{m-1}, l_{m-2}) & \text{if } m \text{ is odd} \\ l_{m-1} & \text{if } m \text{ is even} \end{cases}$ and $r_m = \begin{cases} r_{m-1} & \text{if } m \text{ is odd} \\ \rho(l_{m-1}, r_{m-2}) & \text{if } m \text{ is even} \end{cases}$ for $m = 1, 2, \dots$.
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 4 of 54

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 5 of 54 Walking as a Differential Equation
Newtonian physics implies that a walker's centre of mass must be a continuous function of time, described by a 3-vector valued mapping $\mathbb{R}_+ \ni t \mapsto (x(t), y(t), z(t)) \in \mathbb{R}^3$. The time domain is therefore $T := \mathbb{R}_+$.
The same will be true for the position of, for instance, the extreme end of the walker's left big toe.
Newtonian physics requires that the acceleration 3-vector described by the second derivative
$\dfrac{d^2}{dt^2}(x(t), y(t), z(t)) \in \mathbb{R}^3$
should be well defined for all $t$. The biology of survival requires it to be bounded.
Actually, the motion becomes seriously uncomfortable unless the acceleration (or deceleration) is continuous, as my driving instructor taught me more than 50 years ago!

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 6 of 54 Lecture Outline Introduction: Difference vs. Differential Equations First-Order Difference Equations First-Order Linear Difference Equations: Introduction General First-Order Linear Equation Particular, General, and Complementary Solutions Explicit Solution as a Sum Constant and Undetermined Coefficients Stationary States and Stability for Linear First-Order Equations Local Stability of Nonlinear First-Order Equations

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 7 of 54 Basic Definition
Let $T = \mathbb{Z}_+ \ni t \mapsto x_t \in X$ describe a discrete time process, with $X = \mathbb{R}$ (or $X = \mathbb{R}^m$) as the state space.
Its difference at time $t$ is defined as $\Delta x_t := x_{t+1} - x_t$.
A standard first-order difference equation takes the form
$x_{t+1} - x_t = \Delta x_t = d_t(x_t)$
where each $d_t : X \to X$, or equivalently, $T \times X \ni (t, x) \mapsto d_t(x)$.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 8 of 54 Equivalent Recurrence Relations
Obviously, the difference equation $x_{t+1} - x_t = \Delta x_t = d_t(x_t)$ is equivalent to the recurrence relation $x_{t+1} = r_t(x_t)$, where $T \times X \ni (t, x) \mapsto r_t(x) = x + d_t(x)$, or equivalently, $d_t(x) = r_t(x) - x$.
Thus difference equations and recurrence relations are entirely equivalent.
We follow standard mathematical practice in using the notation for recurrence relations, even when discussing difference equations. We may write "difference equation" even when considering a recurrence relation.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 9 of 54 Existence of Solutions
Example. Consider the difference equation $x_t = \sqrt{x_{t-1} - 1}$ with $x_0 = 5$. Evidently $x_1 = \sqrt{5 - 1} = 2$, then $x_2 = \sqrt{2 - 1} = 1$, and next $x_3 = \sqrt{1 - 1} = 0$, leaving $x_4 = \sqrt{0 - 1}$ undefined as a real number.
The domain of $(t, x) \mapsto \sqrt{x - 1}$ is limited to $D := \mathbb{Z}_+ \times [1, \infty)$.
Generally, consider a mapping $D \ni (t, x) \mapsto r_t(x)$ whose domain is restricted to a subset $D \subseteq \mathbb{Z}_+ \times X$. For the difference equation $x_{t+1} = r_t(x_t)$ to have a solution one must ensure that
$(t, x) \in D \implies (t+1, r_t(x)) \in D$ for all $t \in \mathbb{Z}_+$
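
The domain issue is easy to see numerically. Below is a minimal Python sketch (my own illustration, not part of the slides) that iterates the recurrence $x_t = \sqrt{x_{t-1} - 1}$ from $x_0 = 5$ and stops as soon as the state leaves the domain $[1, \infty)$ of $x \mapsto \sqrt{x - 1}$.

```python
# Iterate x_t = sqrt(x_{t-1} - 1) from x_0 = 5, stopping once the state
# leaves the domain [1, infinity) of the map x -> sqrt(x - 1).
from math import sqrt

x = 5.0
path = [x]
for t in range(1, 10):
    if x < 1:                      # the next step would take the square root of a negative number
        print(f"x_{t} is undefined as a real number, since x_{t-1} = {x} < 1")
        break
    x = sqrt(x - 1)
    path.append(x)

print(path)                        # [5.0, 2.0, 1.0, 0.0] -- and then x_4 is undefined
```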

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 10 of 54 Lecture Outline Introduction: Difference vs. Differential Equations First-Order Difference Equations First-Order Linear Difference Equations: Introduction General First-Order Linear Equation Particular, General, and Complementary Solutions Explicit Solution as a Sum Constant and Undetermined Coefficients Stationary States and Stability for Linear First-Order Equations Local Stability of Nonlinear First-Order Equations

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 11 of 54 Application: Wealth Accumulation in Discrete Time
Consider a consumer who, in discrete time $t = 0, 1, 2, \dots$:
starts each period $t$ with an amount $w_t$ of accumulated wealth;
receives income $y_t$;
spends an amount $e_t$;
earns interest on the residual wealth $w_t + y_t - e_t$ at the rate $r_t$.
The process of wealth accumulation is then described by any of the equivalent equations
$w_{t+1} = (1 + r_t)(w_t + y_t - e_t) = \rho_t (w_t - x_t) = \rho_t (w_t + s_t)$
where, at each time $t$: $\rho_t := 1 + r_t$ is the interest factor; $x_t = e_t - y_t$ denotes net expenditure; $s_t = y_t - e_t = -x_t$ denotes net saving.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 12 of 54 Compound Interest
Define the compound interest factor $R_t := \prod_{k=0}^{t-1}(1 + r_k) = \prod_{k=0}^{t-1} \rho_k$, with the convention that the product of zero terms equals 1, just as the sum of zero terms equals 0.
This compound interest factor is the unique solution to the recurrence relation $R_{t+1} = (1 + r_t) R_t$ that satisfies the initial condition $R_0 = 1$.
In the special case when $r_t = r$ (all $t$), it reduces to $R_t = (1 + r)^t = \rho^t$.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 13 of 54 Present Discounted Value (PDV)
We transform the difference equation $w_{t+1} = \rho_t (w_t - x_t)$ by using the compound interest factor $R_t = \prod_{k=0}^{t-1} \rho_k$ in order to discount both future wealth and expenditure. To do so, define new variables $\omega_t, \xi_t$ for the present discounted values (PDVs) of, respectively:
1. wealth $w_t$ at time $t$, as $\omega_t := (1/R_t) w_t$;
2. net expenditure $x_t$ at time $t$, as $\xi_t := (1/R_t) x_t$.
With these new variables, the wealth equation $w_{t+1} = \rho_t (w_t - x_t)$ becomes $R_{t+1} \omega_{t+1} = \rho_t R_t (\omega_t - \xi_t)$.
But $R_{t+1} = \rho_t R_t$, so eliminating this common factor reduces the equation to $\omega_{t+1} = \omega_t - \xi_t$, with the evident solution
$\omega_t = \omega_0 - \sum_{k=0}^{t-1} \xi_k$ for $t = 1, 2, \dots$.
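
A quick numerical check of this transformation, with an arbitrary made-up path of interest rates and net expenditures (a sketch of my own, not from the notes): simulate the wealth equation $w_{t+1} = \rho_t (w_t - x_t)$ directly, then confirm that discounted wealth $\omega_t = w_t / R_t$ equals $\omega_0 - \sum_{k=0}^{t-1} \xi_k$.

```python
# Simulate w_{t+1} = rho_t (w_t - x_t) and verify that the PDV of wealth
# satisfies omega_t = omega_0 - sum_{k<t} xi_k, where omega_t = w_t / R_t,
# xi_t = x_t / R_t, and R_t is the compound interest factor.
r = [0.05, 0.02, 0.07, 0.03, 0.04]            # interest rates r_t (made up)
x = [1.0, -0.5, 2.0, 0.3, 1.5]                # net expenditures x_t (made up)
w0 = 10.0

w, R = [w0], [1.0]                            # w_t and R_t, with R_0 = 1
for t in range(len(r)):
    w.append((1 + r[t]) * (w[t] - x[t]))      # wealth accumulation
    R.append((1 + r[t]) * R[t])               # R_{t+1} = (1 + r_t) R_t

omega = [w[t] / R[t] for t in range(len(w))]  # discounted wealth
xi = [x[t] / R[t] for t in range(len(x))]     # discounted net expenditure

for t in range(1, len(w)):
    assert abs(omega[t] - (omega[0] - sum(xi[:t]))) < 1e-12
print("omega_t = omega_0 - sum of discounted net expenditures: verified")
```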

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 14 of 54 Lecture Outline Introduction: Difference vs. Differential Equations First-Order Difference Equations First-Order Linear Difference Equations: Introduction General First-Order Linear Equation Particular, General, and Complementary Solutions Explicit Solution as a Sum Constant and Undetermined Coefficients Stationary States and Stability for Linear First-Order Equations Local Stability of Nonlinear First-Order Equations

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 15 of 54 General First-Order Linear Equation
The general first-order linear difference equation can be written in the form
$x_t - a_t x_{t-1} = f_t$ for $t = 1, 2, \dots, T$
for non-zero constants $a_t \in \mathbb{R}$ and a forcing term $\mathbb{N} \ni t \mapsto f_t \in \mathbb{R}$.
When this equation holds for $t = 1, 2, \dots, T$, where $T \ge 6$, it can be written in the following matrix form:
$\begin{pmatrix}
-a_1 & 1 & 0 & \cdots & 0 & 0 & 0 \\
0 & -a_2 & 1 & \cdots & 0 & 0 & 0 \\
0 & 0 & -a_3 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & -a_{T-1} & 1 & 0 \\
0 & 0 & 0 & \cdots & 0 & -a_T & 1
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_{T-2} \\ x_{T-1} \\ x_T \end{pmatrix}
=
\begin{pmatrix} f_1 \\ f_2 \\ f_3 \\ \vdots \\ f_{T-1} \\ f_T \end{pmatrix}$

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 16 of 54 Matrix Form
The matrix form of the difference equation is $Cx = f$, where:
1. $C$ is the $T \times (T+1)$ coefficient matrix whose elements are
$c_{st} = \begin{cases} -a_s & \text{if } t = s - 1 \\ 1 & \text{if } t = s \\ 0 & \text{otherwise} \end{cases}$
for $s = 1, 2, \dots, T$ and $t = 0, 1, 2, \dots, T$;
2. $x$ is the $(T+1)$-dimensional column vector $(x_t)_{t=0}^{T}$ of endogenous unknowns, to be determined;
3. $f$ is the $T$-dimensional column vector $(f_t)_{t=1}^{T}$ of exogenous shocks.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 17 of 54 Partitioned Matrix Form
The matrix equation $Cx = f$ can be written in partitioned form as
$\begin{pmatrix} U & e_T \end{pmatrix} \begin{pmatrix} x^{T-1} \\ x_T \end{pmatrix} = f$
where:
1. $U$ is an upper triangular $T \times T$ matrix;
2. $e_T = (0, 0, 0, \dots, 0, 1)^\top$ is the $T$th column vector of the canonical basis of the vector space $\mathbb{R}^T$;
3. $x^{T-1}$ denotes the column vector which is the transpose of the row $T$-vector $(x_0, x_1, x_2, \dots, x_{T-2}, x_{T-1})$.
In fact the matrix $U$ satisfies
$(U, e_T) = (-\operatorname{diag}(a_1, a_2, \dots, a_T), 0_T) + (0_{T \times 1}, I_{T \times T})$
Hence there are $T$ independent equations in $T + 1$ unknowns, leaving one degree of freedom in the solution.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 18 of 54 An Initial Condition
Consider the difference equation $x_t - a_t x_{t-1} = f_t$, or $Cx = f$ in matrix form.
An initial condition specifies an exogenous value $\bar{x}_0$ for the value $x_0$ at time 0. This removes the only degree of freedom in the system of $T$ equations in $T + 1$ unknowns.
Consider the special case when $a_t = 1$ for all $t \in \mathbb{N}$. The obvious unique solution of $x_t - x_{t-1} = f_t$ is then that each $x_t$ is the forward sum
$x_t = x_0 + \sum_{s=1}^{t} f_s$
of the initial state $x_0$ and of the $t$ exogenously specified succeeding differences $f_s$ ($s = 1, 2, \dots, t$).

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 19 of 54 A Terminal Condition
Alternatively, a terminal condition for the difference equation $x_t - x_{t-1} = f_t$ specifies an exogenous value $\bar{x}_T$ for the value $x_T$ at the terminal time $T$. It leads to a unique solution as a backward sum
$x_t = x_T - \sum_{s=0}^{T-t-1} f_{T-s}$
of the exogenously specified terminal state $x_T$, and of the $T - t$ preceding backward differences $f_{T-s}$ ($s = 0, 1, \dots, T - t - 1$).
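
For the special case $a_t \equiv 1$, the forward and backward sums are easy to check numerically. The sketch below (an illustration of my own, with made-up forcing terms) pins the same solution down once by an initial condition and once by a terminal condition.

```python
# For x_t - x_{t-1} = f_t with a_t = 1: the forward sum from an initial condition
# and the backward sum from a terminal condition each pick out one solution;
# they coincide when the terminal value is taken from the forward solution.
f = [0.5, -1.0, 2.0, 0.25, 1.5]               # forcing terms f_1, ..., f_T (made up)
T = len(f)

x0 = 3.0                                       # initial condition
forward = [x0]
for t in range(1, T + 1):
    forward.append(x0 + sum(f[:t]))            # x_t = x_0 + sum_{s=1}^t f_s

xT = forward[-1]                               # use its terminal value as x_T
backward = [None] * (T + 1)
backward[T] = xT
for t in range(T - 1, -1, -1):
    # x_t = x_T - sum_{s=0}^{T-t-1} f_{T-s}   (f_j is stored as f[j-1])
    backward[t] = xT - sum(f[T - 1 - s] for s in range(T - t))

assert all(abs(a - b) < 1e-12 for a, b in zip(forward, backward))
print(forward)
```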

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 20 of 54 Lecture Outline Introduction: Difference vs. Differential Equations First-Order Difference Equations First-Order Linear Difference Equations: Introduction General First-Order Linear Equation Particular, General, and Complementary Solutions Explicit Solution as a Sum Constant and Undetermined Coefficients Stationary States and Stability for Linear First-Order Equations Local Stability of Nonlinear First-Order Equations

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 21 of 54 Particular and General Solutions
We are interested in solving the system $Cx = f$ of $T$ equations in $T + 1$ unknowns, where $C$ is a $T \times (T+1)$ matrix. When the rank of $C$ is $T$, there is one degree of freedom.
The associated homogeneous equation $Cx = 0$ will have a one-dimensional space of solutions of the form $x^H_t = \xi \hat{x}^H_t$ ($\xi \in \mathbb{R}$), where $\hat{x}^H$ is any fixed non-zero solution.
Given any particular solution $x^P_t$ satisfying $Cx^P = f$ for the particular time series $f$ of forcing terms, the general solution $x^G_t$ must also satisfy $Cx^G = f$. Simple subtraction leads to $C(x^G - x^P) = 0$, so $x^G - x^P = x^H$ for some solution $x^H$ of the homogeneous equation $Cx = 0$.
So $x$ solves the equation $Cx = f$ iff there exists a scalar $\xi \in \mathbb{R}$ such that $x = x^P + \xi \hat{x}^H$, which leads to the formula $x^G = x^P + \xi \hat{x}^H$ for the general solution.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 22 of 54 Complementary Solutions
Consider again the general first-order linear equation, which takes the inhomogeneous form $x_t - a_t x_{t-1} = f_t$.
The associated homogeneous equation, with a zero right-hand side, takes the form
$x_t - a_t x_{t-1} = 0$ (for all $t \in \mathbb{N}$)
The associated complementary solutions make up the one-dimensional linear subspace $L$ of solutions to this homogeneous equation. The space $L$ consists of functions $\mathbb{Z}_+ \ni t \mapsto x_t \in \mathbb{R}$ satisfying
$x_t = x_0 \prod_{s=1}^{t} a_s$ (for all $t \in \mathbb{N}$)
where $x_0$ is an arbitrary scaling constant.

From Particular to General Solutions
Consider again the inhomogeneous equation $x_t - a_t x_{t-1} = f_t$ for a general RHS $f_t$. The associated homogeneous equation takes the form $x_t - a_t x_{t-1} = 0$.
Let $x^P_t$ denote a particular solution, and $x^G_t$ any alternative general solution, of the inhomogeneous equation. Our assumptions imply that, for each $t = 1, 2, \dots$, one has
$x^P_t - a_t x^P_{t-1} = f_t$ and $x^G_t - a_t x^G_{t-1} = f_t$
Subtracting the first equation from the second implies that
$x^G_t - x^P_t - a_t (x^G_{t-1} - x^P_{t-1}) = 0$
This shows that $x^H_t := x^G_t - x^P_t$ solves the homogeneous equation.
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 23 of 54

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 24 of 54 Characterizing the General Solution
Theorem. Consider the inhomogeneous equation $x_t - a_t x_{t-1} = f_t$ with forcing term $f_t$. Its general solution $x^G_t$ is the sum $x^P_t + x^H_t$ of:
any particular solution $x^P_t$ of the inhomogeneous equation;
the general complementary solution $x^H_t$ of the corresponding homogeneous equation $x_t - a_t x_{t-1} = 0$.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 25 of 54 Linearity in the Forcing Term, I
Theorem. Suppose that $x^P_t$ and $y^P_t$ are particular solutions of the two respective difference equations
$x_t - a_t x_{t-1} = d_t$ and $y_t - a_t y_{t-1} = e_t$
Then, for any scalars $\alpha$ and $\beta$, the equation $z_t - a_t z_{t-1} = \alpha d_t + \beta e_t$ has as a particular solution the corresponding linear combination $z^P_t := \alpha x^P_t + \beta y^P_t$.
Proof. Routine algebra.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 26 of 54 Linearity in the Forcing Term, II
Consider any equation of the form $x_t - a_t x_{t-1} = f_t$ where $f_t$ is a linear combination $\sum_{k=1}^{n} \alpha_k f^k_t$ of $n$ forcing terms $\langle f^k_t \rangle_{k=1}^{n}$.
The theorem implies that a particular solution is the corresponding linear combination $\sum_{k=1}^{n} \alpha_k x^{Pk}_t$ of particular solutions $\langle x^{Pk}_t \rangle_{k=1}^{n}$ to the respective $n$ equations $x_t - a_t x_{t-1} = f^k_t$ ($k = 1, 2, \dots, n$).

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 27 of 54 Lecture Outline Introduction: Difference vs. Differential Equations First-Order Difference Equations First-Order Linear Difference Equations: Introduction General First-Order Linear Equation Particular, General, and Complementary Solutions Explicit Solution as a Sum Constant and Undetermined Coefficients Stationary States and Stability for Linear First-Order Equations Local Stability of Nonlinear First-Order Equations

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 28 of 54 Solving the General Linear Equation
Consider a first-order linear difference equation $x_{t+1} = a_t x_t + f_t$ for a process $T \ni t \mapsto x_t \in \mathbb{R}$, where each $a_t \neq 0$ (to avoid trivialities).
We will prove by induction on $t$ that for $t = 0, 1, 2, \dots$ there exist suitable non-zero constants $p_{t,k}$ ($k = 0, 1, 2, \dots, t$) such that, given any possible value of the initial state $x_0$ and of the forcing terms $f_t$ ($t = 0, 1, 2, \dots$), the unique solution can be expressed as
$x_t = p_{t,0}\, x_0 + \sum_{k=1}^{t} p_{t,k}\, f_{k-1}$
The proof, of course, will also involve deriving a recurrence relation for the constants $p_{t,k}$ ($k = 0, 1, 2, \dots, t$).

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 29 of 54 Early Terms of the Solution
Because $x_0 = p_{0,0}\, x_0 = x_0$, the first term is obviously $p_{0,0} = 1$ when $t = 0$.
Next, $x_1 = a_0 x_0 + f_0$ when $t = 1$ implies that $p_{1,0} = a_0$ and $p_{1,1} = 1$.
Next, the solution for $t = 2$ is
$x_2 = a_1 x_1 + f_1 = a_1 a_0 x_0 + a_1 f_0 + f_1$
This matches the formula $x_t = p_{t,0}\, x_0 + \sum_{k=1}^{t} p_{t,k}\, f_{k-1}$ when $t = 2$ provided that: $p_{2,0} = a_1 a_0$; $p_{2,1} = a_1$; $p_{2,2} = 1$.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 30 of 54 Explicit Solution, I
Now, substituting the two expansions
$x_t = p_{t,0}\, x_0 + \sum_{k=1}^{t} p_{t,k}\, f_{k-1}$ and $x_{t+1} = p_{t+1,0}\, x_0 + \sum_{k=1}^{t+1} p_{t+1,k}\, f_{k-1}$
into both sides of the original equation $x_{t+1} = a_t x_t + f_t$ gives
$p_{t+1,0}\, x_0 + \sum_{k=1}^{t+1} p_{t+1,k}\, f_{k-1} = a_t \left( p_{t,0}\, x_0 + \sum_{k=1}^{t} p_{t,k}\, f_{k-1} \right) + f_t$
Equating the coefficients of $x_0$ and of each $f_{k-1}$ in this equation implies that for general $t$ one has
$p_{t+1,k} = a_t\, p_{t,k}$ for $k = 0, 1, \dots, t$, with $p_{t+1,t+1} = 1$

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 31 of 54 Explicit Solution, II
The equation $p_{t+1,k} = a_t\, p_{t,k}$ for $k = 0, 1, \dots, t$ implies that
$p_{t,0} = a_{t-1} a_{t-2} \cdots a_0$ when $k = 0$, and $p_{t,k} = a_{t-1} a_{t-2} \cdots a_k$ when $k = 1, 2, \dots, t$
or, after defining the product of the empty set of real numbers as 1,
$p_{t,k} = \prod_{s=1}^{t-k} a_{t-s}$
Inserting these into our formula $x_t = p_{t,0}\, x_0 + \sum_{k=1}^{t} p_{t,k}\, f_{k-1}$ gives the explicit solution
$x_t = \left( \prod_{s=1}^{t} a_{t-s} \right) x_0 + \sum_{k=1}^{t} \left( \prod_{s=1}^{t-k} a_{t-s} \right) f_{k-1}$
Putting $x_0 = 0$ gives one particular solution of $x_{t+1} = a_t x_t + f_t$, namely
$x^P_t = \sum_{k=1}^{t} \left( \prod_{s=1}^{t-k} a_{t-s} \right) f_{k-1}$
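
The following short Python sketch (my own check, with made-up coefficients) compares the explicit formula above with direct iteration of $x_{t+1} = a_t x_t + f_t$.

```python
# Verify the explicit solution of x_{t+1} = a_t x_t + f_t against direct iteration.
from math import prod

a = [0.9, 1.2, -0.5, 0.7, 1.1]                # a_0, ..., a_4 (made up, non-zero)
f = [1.0, -2.0, 0.5, 3.0, -1.0]               # f_0, ..., f_4 (made up)
x0 = 2.0

# Direct iteration.
x = [x0]
for t in range(len(a)):
    x.append(a[t] * x[t] + f[t])

# Explicit formula: p_{t,k} = a_{t-1} a_{t-2} ... a_k (empty product = 1),
# and x_t = p_{t,0} x_0 + sum_{k=1}^t p_{t,k} f_{k-1}.
def explicit(t):
    p = lambda k: prod(a[s] for s in range(k, t))     # a_k * ... * a_{t-1}
    return p(0) * x0 + sum(p(k) * f[k - 1] for k in range(1, t + 1))

for t in range(len(x)):
    assert abs(x[t] - explicit(t)) < 1e-12
print("explicit formula agrees with direct iteration")
```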

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 32 of 54 Lecture Outline Introduction: Difference vs. Differential Equations First-Order Difference Equations First-Order Linear Difference Equations: Introduction General First-Order Linear Equation Particular, General, and Complementary Solutions Explicit Solution as a Sum Constant and Undetermined Coefficients Stationary States and Stability for Linear First-Order Equations Local Stability of Nonlinear First-Order Equations

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 33 of 54 First-Order Linear Equation with a Constant Coefficient
Next, consider the equation $x_t - a x_{t-1} = f_t$, where the coefficient $a_t$ has become the constant $a \neq 0$.
Evidently one has $x_1 = a x_0 + f_1$, then
$x_2 = a x_1 + f_2 = a(a x_0 + f_1) + f_2 = a^2 x_0 + a f_1 + f_2$
and then
$x_3 = a x_2 + f_3 = a(a^2 x_0 + a f_1 + f_2) + f_3 = a^3 x_0 + a^2 f_1 + a f_2 + f_3$
etc. One can easily verify by induction the explicit formula
$x_t = a^t x_0 + \sum_{k=1}^{t} a^{t-k} f_k$
In the very special case when $a = 1$, this also accords with our earlier sum solution $x_t = x_0 + \sum_{k=1}^{t} f_k$.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 34 of 54 First Special Case
An interesting special case occurs when there exists some $\mu \neq 0$ such that $f_t$ is the discrete exponential function $\mathbb{N} \ni t \mapsto \mu^t$.
Then the solution is $x_t = a^t x_0 + S_t$ where $S_t := \sum_{k=1}^{t} a^{t-k} \mu^k$. Note that
$(a - \mu) S_t = \sum_{k=1}^{t} a^{t-k+1} \mu^k - \sum_{k=1}^{t} a^{t-k} \mu^{k+1} = a^t \mu - \mu^{t+1}$
In the non-degenerate case when $\mu \neq a$, it follows that $S_t = \mu \dfrac{a^t - \mu^t}{a - \mu}$ and so
$x_t = a^t \left( x_0 + \dfrac{\mu}{a - \mu} \right) - \mu^t \dfrac{\mu}{a - \mu}$
This solution can be written as $x_t = x^H_t + x^P_t$ where:
1. $x^H_t = \xi^H a^t$ with $\xi^H := x_0 + \mu/(a - \mu)$ is a solution of the homogeneous equation $x_t - a x_{t-1} = 0$;
2. $x^P_t = \xi^P \mu^t$ with $\xi^P := -\mu/(a - \mu)$ is a particular solution of the inhomogeneous equation $x_t - a x_{t-1} = \mu^t$.
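
A numerical sanity check of this decomposition in the non-degenerate case (a sketch of my own, with made-up values of $a$, $\mu$ and $x_0$): iterate $x_t = a x_{t-1} + \mu^t$ and compare with $\xi^H a^t + \xi^P \mu^t$.

```python
# Check x_t = xi_H * a**t + xi_P * mu**t against iteration of x_t = a x_{t-1} + mu**t,
# in the non-degenerate case mu != a.
a, mu, x0 = 0.8, 1.1, 2.0                     # made-up values with mu != a
xi_P = -mu / (a - mu)                         # coefficient of the particular solution
xi_H = x0 + mu / (a - mu)                     # coefficient of the homogeneous solution

x = x0
for t in range(1, 15):
    x = a * x + mu ** t                       # iterate the inhomogeneous equation
    closed_form = xi_H * a ** t + xi_P * mu ** t
    assert abs(x - closed_form) < 1e-9
print("closed form verified for t = 1, ..., 14")
```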

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 35 of 54 Degenerate Case When μ = a
In the degenerate case when $\mu = a$, the solution collapses to
$x_t = a^t x_0 + \sum_{k=1}^{t} a^{t-k} a^k = a^t x_0 + \sum_{k=1}^{t} a^t = a^t (x_0 + t)$
Again, this solution can be written as $x_t = x^H_t + x^P_t$ where:
1. $x^H_t = \xi^H a^t$ with $\xi^H := x_0$ is a solution of the homogeneous equation $x_t - a x_{t-1} = 0$;
2. $x^P_t = \xi^P a^t t$ with $\xi^P := 1$ is a particular solution of the inhomogeneous equation $x_t - a x_{t-1} = a^t$.
Note carefully the term in $a^t t$, where $a^t$ is multiplied by $t$.
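
And a matching check for the degenerate case $\mu = a$ (again a made-up illustration of my own), where the particular solution picks up the extra factor $t$:

```python
# Degenerate case mu = a: iterate x_t = a x_{t-1} + a**t and check x_t = a**t (x_0 + t).
a, x0 = 1.05, 2.0                             # made-up values, with mu = a
x = x0
for t in range(1, 15):
    x = a * x + a ** t
    assert abs(x - a ** t * (x0 + t)) < 1e-9
print("x_t = a^t (x_0 + t) verified")
```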

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 36 of 54 Second Special Case
Another interesting special case is when $f_t = t^r \mu^t$ for some $r \in \mathbb{N}$. Then the explicit solution we found previously takes the form $x_t = a^t x_0 + S_t$ where $S_t := \sum_{k=1}^{t} a^{t-k} k^r \mu^k$. We aim to simplify this expression for $S_t$.
We first restrict attention to the non-degenerate case when $\mu \neq a$. The solution can still be written as $x_t = x^H_t + x^P_t$ where:
1. $x^H_t = \xi^H a^t$ with scalar $\xi^H \in \mathbb{R}$ solves the homogeneous equation $x_t - a x_{t-1} = 0$;
2. $x^P_t = \xi^P(t)\, \mu^t$ is any particular solution of the inhomogeneous equation $x_t - a x_{t-1} = t^r \mu^t$.
The issue is finding a useful form of the function $t \mapsto \xi^P(t)$ that makes $\xi^P(t)\, \mu^t$ a solution of $x_t - a x_{t-1} = t^r \mu^t$.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 37 of 54 Method of Undetermined Coefficients
We will find a particular solution $x^P_t = \xi^P(t)\, \mu^t$ of the inhomogeneous difference equation $x_t - a x_{t-1} = t^r \mu^t$, where $\xi^P(t) = \sum_{k=0}^{r} \xi_k t^k$ is a polynomial in $t$ of degree $r$, the power of $t$ on the right-hand side.
The coefficients $(\xi_0, \xi_1, \dots, \xi_r)$ of the polynomial are undetermined till we consider the difference equation itself.
Note that, by the binomial theorem,
$\xi^P(t-1) = \sum_{k=0}^{r} \xi_k (t-1)^k = \sum_{k=0}^{r} \xi_k \sum_{j=0}^{k} \binom{k}{j} t^j (-1)^{k-j} = \sum_{j=0}^{r} \left[ \sum_{k=0}^{r} 1_{j \le k}\, \xi_k \binom{k}{j} (-1)^{k-j} \right] t^j = \sum_{j=0}^{r} \left[ \sum_{k=j}^{r} \xi_k \binom{k}{j} (-1)^{k-j} \right] t^j$
where $1_{j \le k}$ denotes 1 if $j \le k$, but 0 if $j > k$.

Determining the Undetermined Coefficients
For $x^P_t = \mu^t \sum_{k=0}^{r} \xi_k t^k$ to solve $x_t - a x_{t-1} = t^r \mu^t$, we need
$\mu^t \sum_{j=0}^{r} \xi_j t^j - a \mu^{t-1} \sum_{j=0}^{r} \left[ \sum_{k=j}^{r} \xi_k \binom{k}{j} (-1)^{k-j} \right] t^j = t^r \mu^t$
First consider the non-degenerate case $\mu \neq a$.
Equating coefficients of $t^r$ implies that $\mu^t \xi_r - a \mu^{t-1} \xi_r = \mu^t$. Dividing by $\mu^{t-1}$ gives $\mu \xi_r - a \xi_r = \mu$, and so $\xi_r = \mu(\mu - a)^{-1}$.
For $j = 0, 1, \dots, r-1$, equating coefficients of $t^j$ implies that
$\mu^t \xi_j - a \mu^{t-1} \sum_{k=j}^{r} \xi_k \binom{k}{j} (-1)^{k-j} = 0$
so $\xi_j = a(\mu - a)^{-1} \sum_{k=j+1}^{r} \xi_k \binom{k}{j} (-1)^{k-j}$.
In principle one can solve this system of $r+1$ equations in the $r+1$ unknowns $(\xi_r, \xi_{r-1}, \xi_{r-2}, \dots, \xi_0)$ by backward recursion, starting with $\xi_r = \mu(\mu - a)^{-1}$ and ending at $\xi_0$.
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 38 of 54
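
Here is a minimal Python sketch of the backward recursion just described (an illustration of my own, not from the slides), in the non-degenerate case $\mu \neq a$, followed by a check that $x^P_t = \xi^P(t)\, \mu^t$ really solves $x_t - a x_{t-1} = t^r \mu^t$.

```python
# Backward recursion for the undetermined coefficients xi_r, xi_{r-1}, ..., xi_0,
# then a numerical check of the resulting particular solution.
from math import comb

def undetermined_coefficients(a, mu, r):
    """Return [xi_0, ..., xi_r] solving the triangular system by backward recursion."""
    xi = [0.0] * (r + 1)
    xi[r] = mu / (mu - a)                      # xi_r = mu (mu - a)^{-1}
    for j in range(r - 1, -1, -1):             # j = r-1, ..., 0
        s = sum(xi[k] * comb(k, j) * (-1) ** (k - j) for k in range(j + 1, r + 1))
        xi[j] = a / (mu - a) * s
    return xi

a, mu, r = 0.5, 1.2, 3                          # made-up values with mu != a
xi = undetermined_coefficients(a, mu, r)
xiP = lambda t: sum(xi[k] * t ** k for k in range(r + 1))   # the polynomial xi^P(t)
xP = lambda t: xiP(t) * mu ** t                              # candidate particular solution

# Check the difference equation at a few dates: the residual should vanish.
for t in range(1, 8):
    residual = xP(t) - a * xP(t - 1) - t ** r * mu ** t
    assert abs(residual) < 1e-8
print("particular solution verified")
```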

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 39 of 54 Degenerate Case
But in the degenerate case $\mu = a$, the equation $\mu \xi_r - a \xi_r = \mu$ for $\xi_r$ has no solution, so the method does not work.
Instead, to solve $x_t - a x_{t-1} = t^r a^t$, we introduce the new variable $y_t = a^{-t} x_t$. Then
$y_t = a^{-t}(a x_{t-1} + t^r a^t) = y_{t-1} + t^r$
The solution is $y_t = y_0 + S_r(t)$, where $S_r(n) := \sum_{j=1}^{n} j^r$ is the much studied sum of $r$th powers of the first $n$ integers. Hence $x_t = a^t [x_0 + S_r(t)]$.
Theorem. The sums $S_r(n)$ satisfy the recurrence relation
$(r+1) S_r(n) = (n+1)^{r+1} - 1 - \sum_{k=0}^{r-1} \binom{r+1}{k} S_k(n)$

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 40 of 54 Proof of Theorem
For $j = 1, 2, \dots, n$, the binomial theorem implies that
$(j+1)^{r+1} = \sum_{k=0}^{r} \binom{r+1}{k} j^k + j^{r+1}$
Summing over $j$, then interchanging the order of summation, gives
$(n+1)^{r+1} - 1 = \sum_{j=1}^{n} \left[ (j+1)^{r+1} - j^{r+1} \right] = \sum_{j=1}^{n} \sum_{k=0}^{r} \binom{r+1}{k} j^k = \sum_{k=0}^{r} \binom{r+1}{k} S_k(n)$
Isolating the last term gives
$(n+1)^{r+1} - 1 = \sum_{k=0}^{r-1} \binom{r+1}{k} S_k(n) + \binom{r+1}{r} S_r(n)$
and so, after rearranging,
$(r+1) S_r(n) = (n+1)^{r+1} - 1 - \sum_{k=0}^{r-1} \binom{r+1}{k} S_k(n)$

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 41 of 54 Important Corollary
Corollary. Each sum $S_r(n) := \sum_{j=1}^{n} j^r$ of the $r$th powers of the first $n$ natural numbers equals a polynomial $\sum_{i=0}^{r+1} a_{ri} n^i$ of degree $r+1$ in $n$, whose coefficients are rational, with constant term $a_{r0} = 0$ and leading coefficient $a_{r,r+1} = 1/(r+1)$.
Using the theorem, we prove the corollary by induction, starting with the obvious $S_0(n) := \sum_{j=1}^{n} j^0 = n$ and the well known $S_1(n) := \sum_{j=1}^{n} j = \tfrac{1}{2} n(n+1) = \tfrac{1}{2} n + \tfrac{1}{2} n^2$.
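
The recurrence also gives a convenient way to compute these sums. A small sketch of my own (assuming nothing beyond the recurrence on the previous slide): generate $S_0(n), S_1(n), \dots$ from the recurrence and compare with brute-force summation.

```python
# Compute S_r(n) = sum of the r-th powers of 1..n from the recurrence
# (r+1) S_r(n) = (n+1)^{r+1} - 1 - sum_{k<r} C(r+1, k) S_k(n),
# and compare with the brute-force sums.
from math import comb

def power_sums(r_max, n):
    """Return [S_0(n), S_1(n), ..., S_{r_max}(n)] computed by the recurrence."""
    S = []
    for r in range(r_max + 1):
        rhs = (n + 1) ** (r + 1) - 1 - sum(comb(r + 1, k) * S[k] for k in range(r))
        assert rhs % (r + 1) == 0          # the recurrence always divides exactly
        S.append(rhs // (r + 1))
    return S

n = 10
S = power_sums(5, n)
for r, value in enumerate(S):
    assert value == sum(j ** r for j in range(1, n + 1))
print(S)   # e.g. S_1(10) = 55, S_2(10) = 385, S_3(10) = 3025, ...
```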

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 42 of 54 Proof by Induction
Indeed, consider the induction hypothesis that $S_q(n)$ satisfies the corollary for $q = 0, 1, 2, \dots, r-1$. This hypothesis implies in particular that each $S_k(n)$ with $k < r$ is a polynomial of degree $k+1$ in $n$.
It follows that the right-hand side of the equation
$(r+1) S_r(n) = (n+1)^{r+1} - 1 - \sum_{k=0}^{r-1} \binom{r+1}{k} S_k(n)$
is obviously a polynomial of degree at most $r+1$ in $n$. Moreover, it has rational coefficients, a constant term 0, and a leading coefficient 1 attached to the highest power $n^{r+1}$.
Dividing by $r+1$ then shows that $S_r(n)$ is itself a polynomial of degree $r+1$ with rational coefficients, constant term 0, and leading coefficient $1/(r+1)$, which completes the induction step.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 43 of 54 Main Theorem
Theorem. Consider the inhomogeneous first-order linear difference equation $x_t - a x_{t-1} = t^r \mu^t$, where $a \neq 0$ and $r \in \mathbb{Z}_+$. Then there exists a particular solution of the form $x^P_t = Q(t)\, \mu^t$, where the function $t \mapsto Q(t)$ is a polynomial which:
in the regular case when $\mu \neq a$, has degree $r$;
in the degenerate case when $\mu = a$, has degree $r+1$.
The general solution takes the form $x_t = x^P_t + x^C_t$ where:
$x^P_t = Q(t)\, \mu^t$ is any particular solution of the inhomogeneous equation;
$x^C_t$ is any member of the one-dimensional linear space of complementary solutions to the corresponding homogeneous equation $x_t - a x_{t-1} = 0$.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 44 of 54 Lecture Outline Introduction: Difference vs. Differential Equations First-Order Difference Equations First-Order Linear Difference Equations: Introduction General First-Order Linear Equation Particular, General, and Complementary Solutions Explicit Solution as a Sum Constant and Undetermined Coefficients Stationary States and Stability for Linear First-Order Equations Local Stability of Nonlinear First-Order Equations

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 45 of 54 Stationary States, I
The general first-order equation $x_{t+1} = f_t(x_t)$ is non-autonomous; for it to become autonomous, there should be a mapping $x \mapsto f(x)$ that is independent of $t$.
Given an autonomous equation $x_{t+1} = f(x_t)$, a stationary state is a fixed point $x^* \in \mathbb{R}$ of the mapping $x \mapsto f(x)$.
It earns its name because if $x_s = x^*$ for any finite $s$, then $x_t = x^*$ for all $t = s, s+1, \dots$.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 46 of 54 Stationary States, II
Wherever it exists, the solution of the autonomous equation can be written as a function $x_t = \Phi^{t-s}(x_s)$ ($t = s, s+1, \dots$) of the state $x_s$ at the initial time $s$, as well as of the number of periods $t - s$ that the function $x \mapsto f(x)$ must be iterated in order to determine the state $x_t$ at time $t$.
Indeed, the sequence of functions $\Phi^k : \mathbb{R} \to \mathbb{R}$ ($k \in \mathbb{N}$) is defined iteratively by $\Phi^k(x) = f(\Phi^{k-1}(x))$ for all $x$.
Note that any stationary state $x^*$ is a fixed point of each mapping $\Phi^k$ in the sequence, as well as of $\Phi^1 \equiv f$.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 47 of 54 Local and Global Stability
The stationary state $x^*$ is:
globally stable if $\Phi^k(x_0) \to x^*$ as $k \to \infty$, regardless of the initial state $x_0$;
locally stable if there is an (open) neighbourhood $N \subseteq \mathbb{R}$ of $x^*$ such that whenever $x_0 \in N$ one has $\Phi^k(x_0) \to x^*$ as $k \to \infty$.
Generally, global stability implies local stability, but not conversely. Global stability also implies that the steady state $x^*$ is unique.
We begin by studying stability for linear equations, where local stability is equivalent to global stability. Later, we will consider the local stability of non-linear equations.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 48 of 54 Stationary States of a Linear Equation
Consider the linear (or rather, affine) equation $x_{t+1} = a x_t + f$ for a fixed forcing term $f$.
A stationary state $x^*$ has the defining property that $x_t = x^* \implies x_{t+1} = x^*$, which is satisfied if and only if $x^* = a x^* + f$.
In case $a = 1$, there is:
no stationary state unless $f = 0$;
the whole real line $\mathbb{R}$ of stationary states if $f = 0$.
Otherwise, if $a \neq 1$, the only stationary state is $x^* = (1 - a)^{-1} f$.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 49 of 54 Stability of a Linear Equation
If $a \neq 1$, let us denote by $y_t := x_t - x^*$ the deviation of the state $x_t$ from the stationary state $x^* = (1 - a)^{-1} f$. Then
$y_{t+1} = x_{t+1} - x^* = a x_t + f - x^* = a(y_t + x^*) + f - x^* = a y_t$
This equation has the obvious solution $y_t = y_0 a^t$, or equivalently $x_t = x^* + (x_0 - x^*) a^t$.
The solution is evidently both locally and globally stable if and only if $a^t \to 0$ as $t \to \infty$, which is true if and only if $|a| < 1$.
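
The dichotomy $|a| < 1$ versus $|a| > 1$ is easy to see in simulation. Below is a small Python sketch of my own (with arbitrary numbers), showing the deviation $x_t - x^*$ shrinking geometrically when $|a| < 1$ and exploding when $|a| > 1$.

```python
# Iterate x_{t+1} = a x_t + f and track the deviation from x* = f / (1 - a).
def deviations(a, f, x0, T=10):
    x_star = f / (1 - a)                       # stationary state (requires a != 1)
    x, devs = x0, []
    for _ in range(T):
        x = a * x + f
        devs.append(x - x_star)
    return devs

print(deviations(a=0.5, f=2.0, x0=10.0))       # |a| < 1: deviations -> 0 (stable)
print(deviations(a=1.5, f=2.0, x0=10.0))       # |a| > 1: deviations blow up (unstable)
```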

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 50 of 54 Lecture Outline Introduction: Difference vs. Differential Equations First-Order Difference Equations First-Order Linear Difference Equations: Introduction General First-Order Linear Equation Particular, General, and Complementary Solutions Explicit Solution as a Sum Constant and Undetermined Coefficients Stationary States and Stability for Linear First-Order Equations Local Stability of Nonlinear First-Order Equations

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 51 of 54 Stationary States
Let $I$ be an open interval of the real line, and $I \ni x \mapsto f(x) \in I$ a general, possibly nonlinear, function. A fixed point $x^* \in I$ of $f$ satisfies $f(x^*) = x^*$.
Consider the autonomous difference equation $x_{t+1} = f(x_t)$. For each natural number $n \in \mathbb{N}$, define the iterated function $I \ni x \mapsto f^n(x) \in I$ so that $f^1(x) = f(x)$ and $f^n(x) = f(f^{n-1}(x))$ for $n = 2, 3, \dots$.
A stationary state is any $x^* \in I$ with the property that $f^n(x^*) = x^*$ for all $n \in \mathbb{N}$. This implies in particular (taking $n = 1$) that $x^* = f^1(x^*) = f(x^*)$, so any stationary state must be a fixed point of $f$. Conversely, it is obvious that any fixed point of $f$ must be a stationary state.

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 52 of 54 Local Stability
The steady state $x^*$ is locally stable just in case there is a neighbourhood $N$ of $x^*$ such that whenever $x \in N$ one has $f^n(x) \to x^*$ as $n \to \infty$.
Theorem. Let $x^*$ be an equilibrium state of $x_{t+1} = f(x_t)$. Suppose that $x \mapsto f(x)$ is continuously differentiable in an open interval $I \subseteq \mathbb{R}$ that includes $x^*$. Then:
1. $|f'(x^*)| < 1$ implies that $x^*$ is locally stable;
2. $|f'(x^*)| > 1$ implies that $x^*$ is locally unstable.
The proof below uses the fact that, by the mean value theorem, if $x_t \in I$, then there exists a $c_t \in I$ between $x_t$ and $x^*$ such that $f'(c_t)$ is a mean value of $f'(x)$ in the sense that
$x_{t+1} - x^* = f(x_t) - f(x^*) = f'(c_t)(x_t - x^*)$

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 53 of 54 Proof of Local Stability
By the hypotheses that $f'$ is continuous on $I$ and $|f'(x^*)| < 1$, there exist an $\epsilon > 0$ and a $k \in (0, 1)$ such that:
1. $(x^* - \epsilon, x^* + \epsilon) \subseteq I$;
2. $|f'(x)| \le k$ for all $x \in (x^* - \epsilon, x^* + \epsilon)$.
Suppose that $|x_t - x^*| < \epsilon$. Then the $c_t$ between $x_t$ and $x^*$ where $x_{t+1} - x^* = f'(c_t)(x_t - x^*)$ must satisfy $|c_t - x^*| < \epsilon$. Hence $|f'(c_t)| \le k$, implying that
$|x_{t+1} - x^*| = |f'(c_t)|\,|x_t - x^*| \le k |x_t - x^*| < k\epsilon < \epsilon$
By induction on $t$, if $|x_0 - x^*| < \epsilon$, it follows that $|x_t - x^*| \le \epsilon$ and in fact $|x_t - x^*| \le k^t |x_0 - x^*|$ for $t = 1, 2, \dots$. Hence $|x_t - x^*| \to 0$ as $t \to \infty$.
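
To illustrate the theorem numerically, here is a small sketch of my own (the example maps are mine, not from the slides): $f(x) = \cos x$ has a fixed point $x^* \approx 0.739$ with $|f'(x^*)| = |\sin x^*| \approx 0.67 < 1$, so iterates started nearby converge to it; $f(x) = x^2$ has the fixed point $x^* = 1$ with $f'(1) = 2 > 1$, so iterates started close to 1 (but not at 1) move away from it.

```python
from math import cos

def iterate(f, x, n):
    """Return the orbit x, f(x), f(f(x)), ... of length n + 1."""
    orbit = [x]
    for _ in range(n):
        x = f(x)
        orbit.append(x)
    return orbit

# Locally stable: the fixed point of cos near 0.739..., where |f'(x*)| = |sin x*| < 1.
print(iterate(cos, 1.0, 25)[-1])                # ~0.7390851..., close to the fixed point

# Locally unstable: the fixed point x* = 1 of f(x) = x**2, where f'(1) = 2 > 1.
print(iterate(lambda x: x * x, 0.99, 10)[-1])   # -> nearly 0, moves away from 1
print(iterate(lambda x: x * x, 1.01, 10)[-1])   # blows up, also moves away from 1
```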

Proof of Local Instability
By the hypotheses that $f'$ is continuous on $I$ and $|f'(x^*)| > 1$, there exist an $\epsilon > 0$ and a $K > 1$ such that:
1. $(x^* - \epsilon, x^* + \epsilon) \subseteq I$;
2. $|f'(x)| \ge K$ for all $x \in (x^* - \epsilon, x^* + \epsilon)$.
Suppose that there exist $s, r \in \mathbb{N}$ such that $x_t \in I$ and $0 < |x_t - x^*| < \epsilon$ for all $t \in T_{s,r} := \{s, s+1, \dots, s+r-1\}$, the set of $r$ successive times starting from time $s$.
Then for each $t \in T_{s,r}$, any $c_t \in I$ between $x_t$ and $x^*$ where $x_{t+1} - x^* = f'(c_t)(x_t - x^*)$ must satisfy $|c_t - x^*| < \epsilon$. This implies that $|x_{t+1} - x^*| \ge K |x_t - x^*|$ for all $t \in T_{s,r}$.
By induction on $r$, it follows that $|x_{s+r} - x^*| \ge K^r |x_s - x^*|$. So $|x_{s+r} - x^*| \ge K^r |x_s - x^*| > \epsilon$ for $r$ large enough.
This proves there is no $s \in \mathbb{N}$ such that $|x_t - x^*| < \epsilon$ for all $t \ge s$. It follows that $x_t \not\to x^*$ as $t \to \infty$.
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 54 of 54