Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems

University of Warwick, EC9A0 Maths for Economists, Peter J. Hammond

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems
Peter J. Hammond, latest revision 2017 September 24th, typeset from diffeqslides17c.tex

Lecture Outline

Systems of Linear Difference Equations
Complementary, Particular, and General Solutions
Constant Coefficient Matrix
Some Particular Solutions
Diagonalizing a Non-Symmetric Matrix
Uncoupling via Diagonalization
Stability of Linear Systems
Stability of Non-Linear Systems

Systems of Linear Difference Equations

Many empirical economic models involve simultaneous time series for several different variables. Consider a first-order linear difference equation $x_{t+1} - A_t x_t = f_t$ for an $n$-dimensional process $\mathbb{T} \ni t \mapsto x_t \in \mathbb{R}^n$, where each matrix $A_t$ is $n \times n$.

We will prove by induction on $t$ that for $t = 0, 1, 2, \ldots$ there exist suitable $n \times n$ matrices $P_{t,k}$ ($k = 0, 1, 2, \ldots, t$) such that, given any possible value of the initial state vector $x_0$ and of the forcing terms $f_t$ ($t = 0, 1, 2, \ldots$), the unique solution can be expressed as
$$x_t = P_{t,0} x_0 + \sum_{k=1}^{t} P_{t,k} f_{k-1}$$
The proof, of course, will also involve deriving a recurrence relation for these matrices.

Early Terms of the Matrix Solution

Because $x_0 = P_{0,0} x_0$ must hold for every $x_0$, the first term is obviously $P_{0,0} = I$ when $t = 0$. Next, $x_1 = A_0 x_0 + f_0$ when $t = 1$ implies that $P_{1,0} = A_0$ and $P_{1,1} = I$.

The solution for $t = 2$ is
$$x_2 = A_1 x_1 + f_1 = A_1 A_0 x_0 + A_1 f_0 + f_1$$
This matches the formula $x_t = P_{t,0} x_0 + \sum_{k=1}^{t} P_{t,k} f_{k-1}$ when $t = 2$ provided that $P_{2,0} = A_1 A_0$, $P_{2,1} = A_1$, and $P_{2,2} = I$.

Matrix Solution

Now, substituting the two expansions $x_t = P_{t,0} x_0 + \sum_{k=1}^{t} P_{t,k} f_{k-1}$ and $x_{t+1} = P_{t+1,0} x_0 + \sum_{k=1}^{t+1} P_{t+1,k} f_{k-1}$ into both sides of the original equation $x_{t+1} = A_t x_t + f_t$ gives
$$P_{t+1,0} x_0 + \sum_{k=1}^{t+1} P_{t+1,k} f_{k-1} = A_t \left( P_{t,0} x_0 + \sum_{k=1}^{t} P_{t,k} f_{k-1} \right) + f_t$$
Equating the matrix coefficients of $x_0$ and of each $f_{k-1}$ in this equation implies that for general $t$ one has
$$P_{t+1,k} = A_t P_{t,k} \quad \text{for } k = 0, 1, \ldots, t, \qquad \text{with } P_{t+1,t+1} = I$$

Matrix Solution, II

The equation $P_{t+1,k} = A_t P_{t,k}$ for $k = 0, 1, \ldots, t$ implies that
$$P_{t,0} = A_{t-1} A_{t-2} \cdots A_0 \ \text{ when } k = 0; \qquad P_{t,k} = A_{t-1} A_{t-2} \cdots A_k \ \text{ when } k = 1, 2, \ldots, t$$
or, after defining the product of the empty set of matrices as $I$,
$$P_{t,k} = \prod_{s=1}^{t-k} A_{t-s}$$
Inserting these into our formula $x_t = P_{t,0} x_0 + \sum_{k=1}^{t} P_{t,k} f_{k-1}$ implies that
$$x_t = \left( \prod_{s=1}^{t} A_{t-s} \right) x_0 + \sum_{k=1}^{t} \left( \prod_{s=1}^{t-k} A_{t-s} \right) f_{k-1}$$
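
The notes contain no code, but the product formula above is easy to check numerically. The following Python sketch (not part of the original slides; it uses numpy, with randomly generated matrices $A_t$ and forcing terms $f_t$ as hypothetical stand-ins) compares direct iteration of $x_{t+1} = A_t x_t + f_t$ with the closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 6
A = [rng.standard_normal((n, n)) for _ in range(T)]   # A_0, ..., A_{T-1}
f = [rng.standard_normal(n) for _ in range(T)]        # f_0, ..., f_{T-1}
x0 = rng.standard_normal(n)

# Direct iteration of x_{t+1} = A_t x_t + f_t
x = x0
for t in range(T):
    x = A[t] @ x + f[t]

def prod(mats):
    # Ordered product A_{t-1} ... A_k; the empty product is the identity I
    P = np.eye(n)
    for M in mats:
        P = M @ P          # left-multiply so the latest matrix is leftmost
    return P

# Closed form: x_T = (A_{T-1}...A_0) x_0 + sum_{k=1}^{T} (A_{T-1}...A_k) f_{k-1}
x_closed = prod(A[:T]) @ x0 + sum(prod(A[k:T]) @ f[k - 1]
                                  for k in range(1, T + 1))
assert np.allclose(x, x_closed)
```

Note the ordered product: later matrices multiply from the left, matching $P_{t,k} = A_{t-1} A_{t-2} \cdots A_k$.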

Complementary Solutions to the Homogeneous Equation

We are considering the general first-order linear difference equation $x_{t+1} - A_t x_t = f_t$ in $\mathbb{R}^n$, where each $A_t$ is an $n \times n$ matrix. The associated homogeneous equation takes the form
$$x_t - A_t x_{t-1} = 0 \quad \text{(for all } t \in \mathbb{N}\text{)}$$
Its general solution is the $n$-dimensional linear subspace of functions $\mathbb{N} \ni t \mapsto x_t \in \mathbb{R}^n$ satisfying
$$x_t = A_t A_{t-1} \cdots A_1 \, x_0 \quad \text{(for all } t \in \mathbb{N}\text{)}$$
where $x_0 \in \mathbb{R}^n$ is an arbitrary constant vector.

From Particular to General Solutions

The homogeneous equation takes the form $x_{t+1} - A_t x_t = 0$. An associated inhomogeneous equation takes the form $x_{t+1} - A_t x_t = f_t$ for a general vector forcing term $f_t \in \mathbb{R}^n$.

Let $x^P_t$ denote a particular solution of the inhomogeneous equation, and $x^G_t$ any alternative solution of the same equation. Our assumptions imply that, for each $t = 1, 2, \ldots$, one has
$$x^P_{t+1} - A_t x^P_t = f_t \quad \text{and} \quad x^G_{t+1} - A_t x^G_t = f_t$$
Subtracting the first equation from the second implies that
$$x^G_{t+1} - x^P_{t+1} - A_t (x^G_t - x^P_t) = 0$$
This shows that $x^H_t := x^G_t - x^P_t$ solves the homogeneous equation.

Characterizing the General Solution

So the general solution $x^G_t$ of the inhomogeneous equation $x_{t+1} - A_t x_t = f_t$ with forcing term $f_t$ is the sum $x^P_t + x^H_t$ of:
- any particular solution $x^P_t$ of the inhomogeneous equation;
- the general complementary solution $x^H_t$ of the homogeneous equation $x_{t+1} - A_t x_t = 0$.

Linearity in the Forcing Term

Theorem: Suppose that $x^P_t$ and $y^P_t$ are particular solutions of the two respective difference equations
$$x_{t+1} - A_t x_t = d_t \quad \text{and} \quad y_{t+1} - A_t y_t = e_t$$
Then, for any scalars $\alpha$ and $\beta$, the linear combination $z^P_t := \alpha x^P_t + \beta y^P_t$ is a particular solution of the equation $z_{t+1} - A_t z_t = \alpha d_t + \beta e_t$. This can be proved by routine algebra.

Consider any equation of the form $x_{t+1} - A_t x_t = f_t$ whose right-hand side is a linear combination $f_t = \sum_{k=1}^{n} \alpha_k f^k_t$ of the $n$ forcing vectors $(f^1_t, \ldots, f^n_t)$. The theorem implies that a particular solution is the corresponding linear combination $x^P_t = \sum_{k=1}^{n} \alpha_k x^{P,k}_t$ of particular solutions to the $n$ equations $x_{t+1} - A_t x_t = f^k_t$.

The Autonomous Case

The general first-order equation in $\mathbb{R}^n$ can be written as $x_{t+1} = F_t(x_t)$, where $\mathbb{T} \times \mathbb{R}^n \ni (t, x) \mapsto F_t(x) \in \mathbb{R}^n$. In the autonomous case, the function $(t, x) \mapsto F_t(x)$ reduces to $x \mapsto F(x)$, independent of $t$.

In the linear case with constant coefficients, the function $x \mapsto F(x)$ takes the affine form $F(x) = Ax + f$. That is, $x_{t+1} = A x_t + f$. In our previous formula, products like $\prod_{s=1}^{t-k} A_{t-s}$ reduce to powers $A^{t-k}$. Specifically, $P_{t,k} = A^{t-k}$, where $A^0 = I$. The solution to $x_{t+1} = A x_t + f$ is therefore
$$x_t = A^t x_0 + \sum_{k=1}^{t} A^{t-k} f$$

Summing the Geometric Series

Recall that the trick for finding $s_t := 1 + a + a^2 + \cdots + a^{t-1}$ is to multiply each side by $1 - a$. Because all terms except the first and last cancel, this trick yields the equation $(1 - a) s_t = 1 - a^t$. Hence $s_t = (1 - a)^{-1}(1 - a^t)$, provided that $a \neq 1$.

Applying the same trick to $S_t := I + A + A^2 + \cdots + A^{t-1}$ yields the two matrix equations
$$(I - A) S_t = I - A^t = S_t (I - A)$$
Provided that $(I - A)^{-1}$ exists, we can pre-multiply $(I - A) S_t = I - A^t$ and post-multiply $I - A^t = S_t (I - A)$ on each side by this inverse to get
$$S_t = (I - A)^{-1}(I - A^t) = (I - A^t)(I - A)^{-1}$$
This leads to the solution
$$x_t = A^t x_0 + S_t f = A^t x_0 + (I - A^t)(I - A)^{-1} f$$
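
The closed form above can be verified numerically. Here is a short Python sketch (an addition to the notes, using numpy; the coefficient matrix and vectors are arbitrary hypothetical choices with $I - A$ invertible):

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.2, 0.3]])          # hypothetical coefficient matrix
f = np.array([1.0, -1.0])
x0 = np.array([2.0, 0.0])
t = 20

# Iterate x_{t+1} = A x_t + f directly
x = x0.copy()
for _ in range(t):
    x = A @ x + f

# Closed form via the matrix geometric series, valid when I - A is invertible
I = np.eye(2)
At = np.linalg.matrix_power(A, t)
x_closed = At @ x0 + (I - At) @ np.linalg.solve(I - A, f)
assert np.allclose(x, x_closed)
```

Using `np.linalg.solve(I - A, f)` rather than forming $(I - A)^{-1}$ explicitly is the standard numerically stable choice.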

First-Order Linear Equation with a Constant Matrix

Recall that the solution to the general first-order linear equation $x_{t+1} - A_t x_t = f_t$ takes the form
$$x_t = \left( \prod_{s=1}^{t} A_{t-s} \right) x_0 + \sum_{k=1}^{t} \left( \prod_{s=1}^{t-k} A_{t-s} \right) f_{k-1}$$
From now on, we restrict attention to a constant coefficient matrix $A_t = A$, independent of $t$. Then the solution reduces to
$$x_t = A^t x_0 + \sum_{s=1}^{t} A^{t-s} f_{s-1}$$
Indeed, this is easily verified by induction. One particular solution, of course, comes from taking $x_0 = 0$, implying that
$$x_t = \sum_{s=1}^{t} A^{t-s} f_{s-1}$$
Now we will start to analyse this particular solution for some special forcing terms $f_t$.

Special Case

The special case we consider is when there exists a fixed vector $f^0 \in \mathbb{R}^n \setminus \{0\}$ such that $x_{t+1} - A x_t = f_t = \mu^t f^0$ for the discrete exponential or power function $\mathbb{Z}_+ \ni t \mapsto \mu^t$. Then the particular solution satisfying $x_0 = 0$ is $x_t = S_t f^0$, where $S_t := \sum_{k=1}^{t} \mu^{k-1} A^{t-k}$.

Note that the sum telescopes:
$$S_t (A - \mu I) = \sum_{k=1}^{t} \left( \mu^{k-1} A^{t-k+1} - \mu^k A^{t-k} \right) = A^t - \mu^t I$$
In the non-degenerate case when $\mu$ is not an eigenvalue of $A$, so $A - \mu I$ is non-singular, it follows that
$$S_t = (A^t - \mu^t I)(A - \mu I)^{-1}$$
Then the particular solution we are looking for takes the form $x^P_t = (A^t - \mu^t I) \tilde{f}$ for the particular fixed vector $\tilde{f} := (A - \mu I)^{-1} f^0$.
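
This particular solution can also be checked numerically. The Python sketch below (an addition to the notes, with a hypothetical matrix $A$ and a value of $\mu$ that is not one of its eigenvalues) iterates $x_{t+1} = A x_t + \mu^t f^0$ from $x_0 = 0$ and compares the result with the closed form:

```python
import numpy as np

A = np.array([[0.6, 0.2],
              [0.1, 0.4]])          # hypothetical matrix
f0 = np.array([1.0, 2.0])
mu = 0.9                            # not an eigenvalue of A (they are ~0.67 and ~0.33)
t = 12

# Direct iteration of x_{t+1} = A x_t + mu^t f0, starting from x_0 = 0
x = np.zeros(2)
for s in range(t):
    x = A @ x + mu**s * f0

# Closed-form particular solution x_t = (A^t - mu^t I)(A - mu I)^{-1} f0
I = np.eye(2)
ftilde = np.linalg.solve(A - mu * I, f0)
x_closed = (np.linalg.matrix_power(A, t) - mu**t * I) @ ftilde
assert np.allclose(x, x_closed)
```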

Characteristic Roots and Eigenvalues

Recall the characteristic equation $|A - \lambda I| = 0$. It is a polynomial equation of degree $n$ in the unknown scalar $\lambda$. By the fundamental theorem of algebra, it has a set $\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ of $n$ characteristic roots, some of which may be repeated. These roots may be real, or may appear in conjugate pairs $\lambda = \alpha \pm i\beta \in \mathbb{C}$ where $\alpha, \beta \in \mathbb{R}$. Because the $\lambda_i$ are the characteristic roots, one has
$$|A - \lambda I| = \prod_{i=1}^{n} (\lambda_i - \lambda)$$
When $\lambda$ solves $|A - \lambda I| = 0$, there is a non-trivial eigenspace $E_\lambda$ of eigenvectors $x \neq 0$ that solve the equation $Ax = \lambda x$. Then $\lambda$ is an eigenvalue.

Linearly Independent Eigenvectors

In the matrix algebra lectures, we proved this result:

Theorem: Let $A$ be an $n \times n$ matrix with a collection $\lambda_1, \lambda_2, \ldots, \lambda_m$ of $m \leq n$ distinct eigenvalues. Suppose the non-zero vectors $u^1, u^2, \ldots, u^m$ in $\mathbb{C}^n$ are corresponding eigenvectors satisfying
$$A u^k = \lambda_k u^k \quad \text{for } k = 1, 2, \ldots, m$$
Then the set $\{u^1, u^2, \ldots, u^m\}$ must be linearly independent.

We also discussed similar and diagonalizable matrices.

An Eigenvector Matrix

Suppose the $n \times n$ matrix $A$ has the maximum possible number of $n$ linearly independent eigenvectors, namely $\{u^j\}_{j=1}^n$. A sufficient, but not necessary, condition for this is that $|A - \lambda I| = 0$ has $n$ distinct characteristic roots.

Define the $n \times n$ eigenvector matrix $V = (u^j)_{j=1}^n$ whose columns are the linearly independent eigenvectors. By definition of eigenvalue and eigenvector, for $j = 1, 2, \ldots, n$ one has $A u^j = \lambda_j u^j$. The $j$th column of the $n \times n$ matrix $AV$ is $A u^j$, which equals $\lambda_j u^j$.

But with $\Lambda := \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$, the elements of $\Lambda$ satisfy $(\Lambda)_{kj} = \delta_{kj} \lambda_j$. So the elements of $V\Lambda$ satisfy
$$(V\Lambda)_{ij} = \sum_{k=1}^{n} (V)_{ik} \delta_{kj} \lambda_j = (V)_{ij} \lambda_j = \lambda_j (u^j)_i = (A u^j)_i$$
It follows that $AV = V\Lambda$ because all corresponding elements are equal.

Diagonalization

Recall the hypothesis that the $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors $\{u^j\}_{j=1}^n$, so the eigenvector matrix $V$ is invertible. We proved on the last slide that $AV = V\Lambda$.

Pre-multiplying this equation by $V^{-1}$ yields $V^{-1} A V = \Lambda$, which gives a diagonalization of $A$. Furthermore, post-multiplying $AV = V\Lambda$ by the inverse matrix $V^{-1}$ yields $A = V \Lambda V^{-1}$. This is a decomposition of $A$ into the product of:
1. the eigenvector matrix $V$;
2. the diagonal eigenvalue matrix $\Lambda$;
3. the inverse eigenvector matrix $V^{-1}$.
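
A minimal numerical illustration of this decomposition, added here as a sketch (using numpy's `eig`, whose returned matrix of column eigenvectors plays the role of $V$; the matrix is an arbitrary hypothetical example with distinct eigenvalues):

```python
import numpy as np

# A non-symmetric but diagonalizable matrix (distinct eigenvalues 2 and 3)
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

lam, V = np.linalg.eig(A)       # columns of V are eigenvectors
Lam = np.diag(lam)

# V^{-1} A V = Lambda  and  A = V Lambda V^{-1}
assert np.allclose(np.linalg.inv(V) @ A @ V, Lam)
assert np.allclose(V @ Lam @ np.linalg.inv(V), A)

# Powers become easy: A^t = V Lambda^t V^{-1}
t = 5
assert np.allclose(V @ np.diag(lam**t) @ np.linalg.inv(V),
                   np.linalg.matrix_power(A, t))
```

The last assertion anticipates why diagonalization is useful for difference equations: powers of $A$ reduce to elementwise powers of the eigenvalues.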

A Non-Diagonalizable 2×2 Matrix

Example: The non-symmetric matrix $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ cannot be diagonalized. Its characteristic equation is
$$0 = |A - \lambda I| = \begin{vmatrix} -\lambda & 1 \\ 0 & -\lambda \end{vmatrix} = \lambda^2$$
It follows that $\lambda = 0$ is the unique eigenvalue. The eigenvalue equation is
$$0 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ 0 \end{pmatrix}$$
or $x_2 = 0$. Thus, every eigenvector is a non-zero multiple of the column vector $(1, 0)^\top$. This makes it impossible to find any set of two linearly independent eigenvectors.

A Non-Diagonalizable n×n Matrix: Specification

The following $n \times n$ matrix also has a unique eigenvalue, whose eigenspace is of dimension 1.

Example: Consider the non-symmetric $n \times n$ matrix $A$ whose elements in the first $n - 1$ rows satisfy $a_{ij} = \delta_{i,j-1}$ for $i = 1, 2, \ldots, n-1$, but whose last row is $0$. Such a matrix is upper triangular, and takes the special block form
$$A = \begin{pmatrix} \mathbf{0} & I_{n-1} \\ 0 & \mathbf{0}^\top \end{pmatrix}$$
in which the elements in the first $n - 1$ rows and last $n - 1$ columns make up the identity matrix $I_{n-1}$.

A Non-Diagonalizable n×n Matrix: Analysis

Because $A - \lambda I$ is also upper triangular, its characteristic equation is $0 = |A - \lambda I| = (-\lambda)^n$. This has $\lambda = 0$ as an $n$-fold repeated root, so $\lambda = 0$ is the unique eigenvalue.

The eigenvalue equation $Ax = \lambda x$ with $\lambda = 0$ takes the form $Ax = 0$, or
$$0 = \sum_{j=1}^{n} \delta_{i,j-1} x_j = x_{i+1} \quad (i = 1, 2, \ldots, n-1)$$
with an extra $n$th equation of the form $0 = 0$. The only solutions take the form $x_j = 0$ for $j = 2, \ldots, n$, with $x_1$ arbitrary. So all the eigenvectors of $A$ are non-zero multiples of $e^1 = (1, 0, \ldots, 0)^\top$, implying that there is just one eigenspace, which has dimension 1.

Uncoupling via Diagonalization

Consider the matrix difference equation $x_t = A x_{t-1} + f_t$ for $t = 1, 2, \ldots$, with $x_0$ given. The extra forcing term $f_t$ makes the equation inhomogeneous (unless $f_t = 0$ for all $t$).

Consider the case when the $n \times n$ matrix $A$ has $n$ distinct eigenvalues, or at least a set of $n$ linearly independent eigenvectors making up the columns of an invertible eigenvector matrix $V$. Define a new vector $y_t = V^{-1} x_t$ for each $t$. This new vector satisfies the transformed matrix difference equation
$$y_t = V^{-1} x_t = V^{-1}(A x_{t-1} + f_t) = V^{-1} A V y_{t-1} + e_t$$
where $e_t$ denotes the transformed forcing term $V^{-1} f_t$. The diagonalization $V^{-1} A V = \Lambda$ reduces this equation to the uncoupled matrix difference equation
$$y_t = \Lambda y_{t-1} + e_t$$
with initial condition $y_0 = V^{-1} x_0$.

Transforming the Uncoupled Equations

Consider the uncoupled matrix difference equation $y_t = \Lambda y_{t-1} + e_t$ where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. Note that, if there is any $i$ for which $\lambda_i = 0$, then the solution $y_t = (y_{t,i})_{i=1}^n$ must satisfy $y_{t,i} = e_{t,i}$ for all $t = 1, 2, \ldots$. So we eliminate all $i$ such that $\lambda_i = 0$, and assume from now on that $\lambda_i \neq 0$ for all $i$.

This assumption allows us to define the transformed vector $z_t := \Lambda^{-t} y_t$, where
$$\Lambda^{-t} = [\mathrm{diag}(\lambda_1, \ldots, \lambda_n)]^{-t} = \mathrm{diag}(\lambda_1^{-t}, \ldots, \lambda_n^{-t}) = (\Lambda^{-1})^t$$
With this transformation, evidently
$$z_t = \Lambda^{-t} y_t = \Lambda^{-t}(\Lambda y_{t-1} + e_t) = \Lambda^{-(t-1)} y_{t-1} + \Lambda^{-t} e_t = z_{t-1} + w_t$$
where $w_t$ is the transformed forcing term $\Lambda^{-t} e_t$.

The Decoupled Solution

The solution of $z_t = z_{t-1} + w_t$ is obviously $z_t = z_0 + \sum_{s=1}^{t} w_s$. Inverting the previous transformation $z_t = \Lambda^{-t} y_t$, we see that
$$y_t = \Lambda^t z_t = \Lambda^t \left( z_0 + \sum_{s=1}^{t} w_s \right)$$
But $z_0 = y_0$ and $w_s = \Lambda^{-s} e_s$, so one has
$$y_t = \Lambda^t y_0 + \sum_{s=1}^{t} \Lambda^{t-s} e_s$$
Now, each power $\Lambda^k$ is the diagonal matrix $\mathrm{diag}(\lambda_1^k, \ldots, \lambda_n^k)$. So, for each separate component $y_{t,i}$ of $y_t$ and corresponding component $e_{s,i}$ of $e_s$, this solution can be written in the obviously uncoupled form
$$y_{t,i} = \lambda_i^t y_{0,i} + \sum_{s=1}^{t} \lambda_i^{t-s} e_{s,i} \quad (i = 1, 2, \ldots, n)$$

The Recoupled Solution

Finally, inverting also the previous transformation $y_t = V^{-1} x_t$, while noting that $e_s = V^{-1} f_s$, one has
$$x_t = V y_t = V \Lambda^t V^{-1} x_0 + \sum_{s=1}^{t} V \Lambda^{t-s} V^{-1} f_s$$
as the solution of the original equation $x_t = V \Lambda V^{-1} x_{t-1} + f_t$.
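
The whole uncouple-and-recouple argument can be checked end to end in a few lines of Python (an addition to the notes; numpy, with a hypothetical diagonalizable matrix whose eigenvalues are real, and random forcing terms):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.5, 0.3],
              [0.2, 0.4]])          # hypothetical diagonalizable matrix
x0 = rng.standard_normal(2)
T = 8
f = [np.zeros(2)] + [rng.standard_normal(2) for _ in range(T)]  # f_1, ..., f_T

# Direct iteration of x_t = A x_{t-1} + f_t
x = x0.copy()
for t in range(1, T + 1):
    x = A @ x + f[t]

# Recoupled solution: x_T = V L^T V^{-1} x_0 + sum_{s=1}^T V L^{T-s} V^{-1} f_s
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
x_closed = V @ np.diag(lam**T) @ Vinv @ x0
for s in range(1, T + 1):
    x_closed = x_closed + V @ np.diag(lam**(T - s)) @ Vinv @ f[s]
assert np.allclose(x, x_closed)
```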

Stationary States

Given an autonomous equation $x_{t+1} = F(x_t)$, a stationary state is a fixed point $x^* \in \mathbb{R}^n$ of the mapping $\mathbb{R}^n \ni x \mapsto F(x) \in \mathbb{R}^n$. It earns its name because if $x_s = x^*$ for any finite $s$, then $x_t = x^*$ for all $t = s, s+1, \ldots$.

Wherever it exists, the solution of the autonomous equation can be written as a function $x_t = \Phi^{t-s}(x_s)$ ($t = s, s+1, \ldots$) of the state $x_s$ at time $s$, as well as of the number of periods $t - s$ that the function $F$ must be iterated in order to determine the state $x_t$ at time $t$. Indeed, the sequence of functions $\Phi^k : \mathbb{R}^n \to \mathbb{R}^n$ ($k \in \mathbb{N}$) is defined iteratively by $\Phi^k(x) = F(\Phi^{k-1}(x))$ for all $x$. Note that any stationary state $x^*$ is a fixed point of each mapping $\Phi^k$ in the sequence, as well as of $\Phi^1 \equiv F$.

Local and Global Stability

The stationary state $x^*$ is:
- globally stable if $\Phi^k(x_0) \to x^*$ as $k \to \infty$, regardless of the initial state $x_0$;
- locally stable if there is an (open) neighbourhood $N \subseteq \mathbb{R}^n$ of $x^*$ such that whenever $x_0 \in N$ one has $\Phi^k(x_0) \to x^*$ as $k \to \infty$.

We begin by studying linear systems, for which local stability is equivalent to global stability. Later, we will consider the local stability of non-linear systems.

Stability in the Linear Case

Recall that the autonomous linear equation takes the form $x_{t+1} = A x_t + d$. The vector $x^* \in \mathbb{R}^n$ is a stationary state if and only if $x_t = x^* \implies x_{t+1} = x^*$, which is true if and only if $x^* = A x^* + d$, or iff $x^*$ solves the linear equation $(I - A) x^* = d$.

Of course, if the matrix $I - A$ is singular, then there could be either no stationary state, or a continuum of stationary states. For simplicity, we assume that $I - A$ has an inverse. Then there is a unique stationary state $x^* = (I - A)^{-1} d$.
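
As a quick numerical sanity check (an added sketch, with an arbitrary hypothetical matrix for which $I - A$ is invertible), the stationary state computed this way does stay put under the map $x \mapsto Ax + d$:

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
d = np.array([1.0, 2.0])

# Unique stationary state x* = (I - A)^{-1} d, assuming I - A is invertible
x_star = np.linalg.solve(np.eye(2) - A, d)

# A stationary state satisfies x* = A x* + d
assert np.allclose(A @ x_star + d, x_star)
```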

Homogenizing the Linear Equation

Given the equation $x_{t+1} = A x_t + d$ and the stationary state $x^* = (I - A)^{-1} d$, define the new state as the deviation $y := x - x^*$ of the state $x$ from the stationary state $x^*$. This transforms the original equation $x_{t+1} = A x_t + d$ to
$$y_{t+1} + x^* = A(y_t + x^*) + d = A y_t + A x^* + d$$
Because the stationary state satisfies $x^* = A x^* + d$, this reduces the original equation $x_{t+1} = A x_t + d$ to the homogeneous equation $y_{t+1} = A y_t$, whose obvious solution is $y_t = A^t y_0$.

Stability in the Diagonal Case

Suppose that $A$ is the diagonal matrix $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$. Then the powers are easy: $A^t = \Lambda^t = \mathrm{diag}(\lambda_1^t, \lambda_2^t, \ldots, \lambda_n^t)$. The homogenized vector equation $y_t = A y_{t-1}$ can be expressed component by component as the set
$$y_{i,t} = \lambda_i y_{i,t-1} \quad (i = 1, 2, \ldots, n)$$
of $n$ uncoupled difference equations in one variable. The solution of $y_t = A y_{t-1}$ with $y_0 = z = (z_1, z_2, \ldots, z_n)$ is then $y_t = (\lambda_1^t z_1, \lambda_2^t z_2, \ldots, \lambda_n^t z_n)$.

Hence $y_t \to 0$ holds for all $y_0$ if and only if, for $i = 1, 2, \ldots, n$, the modulus $|\lambda_i|$ of each root $\lambda_i$ satisfies $|\lambda_i| < 1$. Recall that when $\lambda = \alpha \pm i\beta$, the modulus is $|\lambda| := \sqrt{\alpha^2 + \beta^2}$.

Warning Example

Consider the 2×2 matrix $A = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & 2 \end{pmatrix}$. The solution of the difference equation $y_t = A y_{t-1}$ with $y_0 = z = (z_1, z_2)$ is then
$$y_t = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & 2 \end{pmatrix}^t \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 2^{-t} & 0 \\ 0 & 2^t \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 2^{-t} z_1 \\ 2^t z_2 \end{pmatrix}$$
Then $y_t \to 0$ as $t \to \infty$ provided that $z_2 = 0$. But the norm $\|y_t\| \to +\infty$ whenever $z_2 \neq 0$. In this case one says that $A$ exhibits saddle point stability, because starting with $z_2 = 0$ allows convergence, but starting with $z_2 \neq 0$ ensures divergence.

This explains why one says that the $n \times n$ matrix $A$ is stable just in case $A^t y \to 0$ for all $y \in \mathbb{R}^n$. The Fibonacci equation $x_{t+1} = x_t + x_{t-1}$ also exhibits saddle point stability.
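
The saddle point behaviour is easy to see numerically; this added Python sketch iterates the example above from a point on the stable axis and from a point pushed off it by a tiny perturbation:

```python
import numpy as np

A = np.diag([0.5, 2.0])     # the warning example: eigenvalues 1/2 and 2

def iterate(z, t):
    # y_t = A^t z for the homogeneous system y_t = A y_{t-1}
    return np.linalg.matrix_power(A, t) @ z

# Starting on the stable axis (z_2 = 0): convergence to 0
assert np.linalg.norm(iterate(np.array([1.0, 0.0]), 50)) < 1e-10

# Any tiny z_2 != 0 eventually dominates: divergence
assert np.linalg.norm(iterate(np.array([1.0, 1e-6]), 50)) > 1e6
```

Even a perturbation of $10^{-6}$ in the unstable direction is amplified by $2^{50}$, which is why convergence requires starting exactly on the stable axis.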

A Condition for Stability

The solution $y_t = A^t y_0$ of the homogeneous equation $y_{t+1} = A y_t$ is globally stable just in case $A^t y_0 \to 0$ as $t \to \infty$, regardless of $y_0$. This holds if and only if $A^t \to 0_{n \times n}$, in the sense that all $n^2$ elements of the $n \times n$ matrix $A^t$ converge to $0$ as $t \to \infty$.

In case the matrix $A$ is the diagonal matrix $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$, stability holds if and only if $|\lambda_i| < 1$ for $i = 1, 2, \ldots, n$. Suppose the matrix $A$ is the diagonalizable matrix $V \Lambda V^{-1}$, where $V$ is a matrix of linearly independent eigenvectors, and the diagonal elements of the diagonal matrix $\Lambda$ are eigenvalues. Then $A^t = V \Lambda^t V^{-1} \to 0$ if and only if $\Lambda^t \to 0$, which is true if and only if $|\lambda_i| < 1$ for $i = 1, 2, \ldots, n$.

The Classic Stability Condition

Definition: The $n \times n$ matrix $A$ is stable just in case $A^t$ converges element by element to the zero matrix $0_{n \times n}$ as $t \to \infty$.

Theorem: The $n \times n$ matrix $A$ is stable if and only if each of its eigenvalues $\lambda$ (real or complex) has modulus $|\lambda| < 1$.

We have already proved this result in case $A$ is diagonalizable, but the same stability condition applies even if $A$ is not diagonalizable. For such a general matrix we will prove only necessity ("only if"). Let $\lambda$ denote the eigenvalue whose modulus $|\lambda|$ is largest, and let $x \neq 0$ be an associated eigenvector. In case $|\lambda| \geq 1$, the solution $x_t = A^t x = \lambda^t x$ satisfies $\|x_t\| = |\lambda|^t \|x\| \geq \|x\| > 0$, so $A$ is unstable.
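
In practice the classic condition is applied by checking the spectral radius; the following added Python sketch does so for a hypothetical stable matrix and for the companion matrix of the Fibonacci equation, whose largest eigenvalue is the golden ratio:

```python
import numpy as np

def is_stable(A):
    # A is stable iff every eigenvalue has modulus < 1 (spectral radius < 1)
    return np.max(np.abs(np.linalg.eigvals(A))) < 1

stable = np.array([[0.5, 0.4],
                   [0.0, 0.3]])     # eigenvalues 0.5 and 0.3
fib = np.array([[0.0, 1.0],
                [1.0, 1.0]])        # companion matrix of x_{t+1} = x_t + x_{t-1}

assert is_stable(stable)
assert not is_stable(fib)           # golden-ratio eigenvalue exceeds 1 in modulus

# Stability means A^t -> 0 element by element
assert np.allclose(np.linalg.matrix_power(stable, 200), 0)
```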

Local Stability

Consider the autonomous non-linear system $x_{t+1} = F(x_t)$ with steady state $x^*$. Let
$$J(x^*) = F'(x^*) = \left( \frac{\partial F_i}{\partial x_j}(x^*) \right)_{ij}$$
denote the $n \times n$ Jacobian matrix of partial derivatives evaluated at the steady state $x^*$.

Theorem: Suppose that the elements of the Jacobian matrix $J(x^*)$ are continuous in a neighbourhood of the steady state $x^*$. Let $\lambda$ denote the eigenvalue of $J(x^*)$ whose modulus is largest. The system is locally stable about the steady state $x^*$:
- if $|\lambda| < 1$;
- only if $|\lambda| \leq 1$.

In case $|\lambda| = 1$, the system may or may not be locally stable.

Complete Metric Spaces

Let $(X, d)$ denote any metric space.

Definition: A Cauchy sequence $(x_n)_{n \in \mathbb{N}}$ in $X$ is a sequence with the property that, for every $\epsilon > 0$, there exists an $N_\epsilon \in \mathbb{N}$ for which $m, n > N_\epsilon \implies d(x_m, x_n) < \epsilon$.

Definition: A metric space $(X, d)$ is complete just in case all its Cauchy sequences converge.

Example: Recall that one definition of the real line $\mathbb{R}$ is as the smallest complete metric space that includes the set $\mathbb{Q}$ of rational numbers, with metric given by $d(r, r') = |r - r'|$ for all $r, r' \in \mathbb{Q}$.

Global Stability: Contraction Mapping Theorem

Definition: The function $F : X \to X$ is a contraction mapping on the metric space $(X, d)$ just in case there is a positive contraction factor $K < 1$ such that $d(F(x), F(y)) \leq K \, d(x, y)$ for all $x, y \in X$.

Theorem: If $F : X \to X$ is a contraction mapping on the complete metric space $(X, d)$, then for any $x_0 \in X$ the process defined by $x_t = F(x_{t-1})$ for all $t \in \mathbb{N}$ has a unique steady state $x^* \in X$ that is globally stable.
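
A one-dimensional numerical illustration of the theorem (added here as a sketch): the hypothetical map $F(x) = x/2 + 1$ is a contraction on $\mathbb{R}$ with factor $K = 1/2$, so iteration from any starting point converges to the unique fixed point $x^* = 2$:

```python
# F(x) = x/2 + 1 is a contraction on R with factor K = 1/2
F = lambda x: 0.5 * x + 1.0

x = 100.0                       # arbitrary starting point
for _ in range(100):
    x = F(x)

# The unique globally stable steady state solves x = x/2 + 1, i.e. x = 2
assert abs(x - 2.0) < 1e-12
```

The distance to the fixed point shrinks by the factor $K$ at every step, which is exactly the geometric bound used in the Cauchy-sequence argument on the next slide.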

Cauchy Sequence

Because $F : X \to X$ is a contraction mapping with contraction factor $K$, and $x_t = F(x_{t-1})$ for all $t \in \mathbb{N}$, one has $d(x_{t+1}, x_t) = d(F(x_t), F(x_{t-1})) \leq K d(x_t, x_{t-1})$. It follows by induction on $t$ that $d(x_{t+1}, x_t) \leq K^t d(x_1, x_0)$.

If $n > m$, then repeated application of the triangle inequality gives
$$d(x_m, x_n) \leq \sum_{r=1}^{n-m} d(x_{m+r-1}, x_{m+r}) \leq \sum_{r=1}^{n-m} K^{m+r-1} d(x_1, x_0) = \frac{K^m - K^n}{1 - K} d(x_1, x_0) < \frac{K^m}{1 - K} d(x_1, x_0)$$
Hence $d(x_m, x_n) < \epsilon$ provided that $K^m \leq \epsilon(1 - K)/d(x_1, x_0)$ or, since $\ln K < 0$, if $m \geq (1/\ln K)\left[ \ln(\epsilon(1 - K)) - \ln d(x_1, x_0) \right]$. This proves that $(x_t)_{t \in \mathbb{N}}$ is a Cauchy sequence.

Completing the Proof

Because $(x_t)_{t \in \mathbb{N}}$ is a Cauchy sequence, the hypothesis that $(X, d)$ is a complete metric space implies that there is a limit point $x^* \in X$ such that $x_t \to x^*$ as $t \to \infty$. Then, by the triangle inequality and the contraction property,
$$d(F(x^*), x^*) \leq d(F(x^*), x_{t+1}) + d(x_{t+1}, x^*) \leq K d(x^*, x_t) + d(x_{t+1}, x^*) \to 0 \ \text{ as } t \to \infty$$
implying that $d(F(x^*), x^*) = 0$. Because $(X, d)$ is a metric space, it follows that $F(x^*) = x^*$, so the limit point $x^* \in X$ is a steady state.

Finally, if $\bar{x} \in X$ is any steady state, then $d(x^*, \bar{x}) = d(F(x^*), F(\bar{x})) \leq K d(x^*, \bar{x})$. Hence $(1 - K) d(x^*, \bar{x}) \leq 0$, implying that $d(x^*, \bar{x}) = 0$ and so $\bar{x} = x^*$.


More information

Math 1553, Introduction to Linear Algebra

Math 1553, Introduction to Linear Algebra Learning goals articulate what students are expected to be able to do in a course that can be measured. This course has course-level learning goals that pertain to the entire course, and section-level

More information

1. In this problem, if the statement is always true, circle T; otherwise, circle F.

1. In this problem, if the statement is always true, circle T; otherwise, circle F. Math 1553, Extra Practice for Midterm 3 (sections 45-65) Solutions 1 In this problem, if the statement is always true, circle T; otherwise, circle F a) T F If A is a square matrix and the homogeneous equation

More information

MTH 464: Computational Linear Algebra

MTH 464: Computational Linear Algebra MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)

More information

Math 2331 Linear Algebra

Math 2331 Linear Algebra 5. Eigenvectors & Eigenvalues Math 233 Linear Algebra 5. Eigenvectors & Eigenvalues Shang-Huan Chiu Department of Mathematics, University of Houston schiu@math.uh.edu math.uh.edu/ schiu/ Shang-Huan Chiu,

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Further Mathematical Methods (Linear Algebra)

Further Mathematical Methods (Linear Algebra) Further Mathematical Methods (Linear Algebra) Solutions For The 2 Examination Question (a) For a non-empty subset W of V to be a subspace of V we require that for all vectors x y W and all scalars α R:

More information

Name: Final Exam MATH 3320

Name: Final Exam MATH 3320 Name: Final Exam MATH 3320 Directions: Make sure to show all necessary work to receive full credit. If you need extra space please use the back of the sheet with appropriate labeling. (1) State the following

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Chapter 3. Determinants and Eigenvalues

Chapter 3. Determinants and Eigenvalues Chapter 3. Determinants and Eigenvalues 3.1. Determinants With each square matrix we can associate a real number called the determinant of the matrix. Determinants have important applications to the theory

More information

Linear Algebra Review

Linear Algebra Review January 29, 2013 Table of contents Metrics Metric Given a space X, then d : X X R + 0 and z in X if: d(x, y) = 0 is equivalent to x = y d(x, y) = d(y, x) d(x, y) d(x, z) + d(z, y) is a metric is for all

More information

MAC Module 12 Eigenvalues and Eigenvectors. Learning Objectives. Upon completing this module, you should be able to:

MAC Module 12 Eigenvalues and Eigenvectors. Learning Objectives. Upon completing this module, you should be able to: MAC Module Eigenvalues and Eigenvectors Learning Objectives Upon completing this module, you should be able to: Solve the eigenvalue problem by finding the eigenvalues and the corresponding eigenvectors

More information

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation.

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation. 1 2 Linear Systems In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation 21 Matrix ODEs Let and is a scalar A linear function satisfies Linear superposition ) Linear

More information

MAC Module 12 Eigenvalues and Eigenvectors

MAC Module 12 Eigenvalues and Eigenvectors MAC 23 Module 2 Eigenvalues and Eigenvectors Learning Objectives Upon completing this module, you should be able to:. Solve the eigenvalue problem by finding the eigenvalues and the corresponding eigenvectors

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

Eigenvalues and Eigenvectors A =

Eigenvalues and Eigenvectors A = Eigenvalues and Eigenvectors Definition 0 Let A R n n be an n n real matrix A number λ R is a real eigenvalue of A if there exists a nonzero vector v R n such that A v = λ v The vector v is called an eigenvector

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T 1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

Generalized eigenvector - Wikipedia, the free encyclopedia

Generalized eigenvector - Wikipedia, the free encyclopedia 1 of 30 18/03/2013 20:00 Generalized eigenvector From Wikipedia, the free encyclopedia In linear algebra, for a matrix A, there may not always exist a full set of linearly independent eigenvectors that

More information

4 Matrix Diagonalization and Eigensystems

4 Matrix Diagonalization and Eigensystems 14.102, Math for Economists Fall 2004 Lecture Notes, 9/21/2004 These notes are primarily based on those written by George Marios Angeletos for the Harvard Math Camp in 1999 and 2000, and updated by Stavros

More information

1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued)

1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued) 1 A linear system of equations of the form Sections 75, 78 & 81 a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2 a m1 x 1 + a m2 x 2 + + a mn x n = b m can be written in matrix

More information

JUST THE MATHS SLIDES NUMBER 9.6. MATRICES 6 (Eigenvalues and eigenvectors) A.J.Hobson

JUST THE MATHS SLIDES NUMBER 9.6. MATRICES 6 (Eigenvalues and eigenvectors) A.J.Hobson JUST THE MATHS SLIDES NUMBER 96 MATRICES 6 (Eigenvalues and eigenvectors) by AJHobson 96 The statement of the problem 962 The solution of the problem UNIT 96 - MATRICES 6 EIGENVALUES AND EIGENVECTORS 96

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

c c c c c c c c c c a 3x3 matrix C= has a determinant determined by

c c c c c c c c c c a 3x3 matrix C= has a determinant determined by Linear Algebra Determinants and Eigenvalues Introduction: Many important geometric and algebraic properties of square matrices are associated with a single real number revealed by what s known as the determinant.

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n

More information

Review of Some Concepts from Linear Algebra: Part 2

Review of Some Concepts from Linear Algebra: Part 2 Review of Some Concepts from Linear Algebra: Part 2 Department of Mathematics Boise State University January 16, 2019 Math 566 Linear Algebra Review: Part 2 January 16, 2019 1 / 22 Vector spaces A set

More information

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic

More information

Lecture 1: Review of linear algebra

Lecture 1: Review of linear algebra Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations

More information

Lecture 7. Econ August 18

Lecture 7. Econ August 18 Lecture 7 Econ 2001 2015 August 18 Lecture 7 Outline First, the theorem of the maximum, an amazing result about continuity in optimization problems. Then, we start linear algebra, mostly looking at familiar

More information

MATH 221, Spring Homework 10 Solutions

MATH 221, Spring Homework 10 Solutions MATH 22, Spring 28 - Homework Solutions Due Tuesday, May Section 52 Page 279, Problem 2: 4 λ A λi = and the characteristic polynomial is det(a λi) = ( 4 λ)( λ) ( )(6) = λ 6 λ 2 +λ+2 The solutions to the

More information

MAT 1302B Mathematical Methods II

MAT 1302B Mathematical Methods II MAT 1302B Mathematical Methods II Alistair Savage Mathematics and Statistics University of Ottawa Winter 2015 Lecture 19 Alistair Savage (uottawa) MAT 1302B Mathematical Methods II Winter 2015 Lecture

More information

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether

More information

MATH 310, REVIEW SHEET 2

MATH 310, REVIEW SHEET 2 MATH 310, REVIEW SHEET 2 These notes are a very short summary of the key topics in the book (and follow the book pretty closely). You should be familiar with everything on here, but it s not comprehensive,

More information

6 Inner Product Spaces

6 Inner Product Spaces Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space

More information

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018 Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry

More information

MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS

MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS There will be eight problems on the final. The following are sample problems. Problem 1. Let F be the vector space of all real valued functions on

More information

Linear Algebra II Lecture 13

Linear Algebra II Lecture 13 Linear Algebra II Lecture 13 Xi Chen 1 1 University of Alberta November 14, 2014 Outline 1 2 If v is an eigenvector of T : V V corresponding to λ, then v is an eigenvector of T m corresponding to λ m since

More information

Lecture Notes 1: Matrix Algebra Part C: Pivoting and Matrix Decomposition

Lecture Notes 1: Matrix Algebra Part C: Pivoting and Matrix Decomposition University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 46 Lecture Notes 1: Matrix Algebra Part C: Pivoting and Matrix Decomposition Peter J. Hammond Autumn 2012, revised Autumn 2014 University

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

Functional Analysis Review

Functional Analysis Review Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all

More information

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

INTRODUCTION TO LIE ALGEBRAS. LECTURE 2.

INTRODUCTION TO LIE ALGEBRAS. LECTURE 2. INTRODUCTION TO LIE ALGEBRAS. LECTURE 2. 2. More examples. Ideals. Direct products. 2.1. More examples. 2.1.1. Let k = R, L = R 3. Define [x, y] = x y the cross-product. Recall that the latter is defined

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

MATH 1553-C MIDTERM EXAMINATION 3

MATH 1553-C MIDTERM EXAMINATION 3 MATH 553-C MIDTERM EXAMINATION 3 Name GT Email @gatech.edu Please read all instructions carefully before beginning. Please leave your GT ID card on your desk until your TA scans your exam. Each problem

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Generalized Eigenvectors and Jordan Form

Generalized Eigenvectors and Jordan Form Generalized Eigenvectors and Jordan Form We have seen that an n n matrix A is diagonalizable precisely when the dimensions of its eigenspaces sum to n. So if A is not diagonalizable, there is at least

More information

Lecture 10 - Eigenvalues problem

Lecture 10 - Eigenvalues problem Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems

More information

Topic 1: Matrix diagonalization

Topic 1: Matrix diagonalization Topic : Matrix diagonalization Review of Matrices and Determinants Definition A matrix is a rectangular array of real numbers a a a m a A = a a m a n a n a nm The matrix is said to be of order n m if it

More information

Math 217: Eigenspaces and Characteristic Polynomials Professor Karen Smith

Math 217: Eigenspaces and Characteristic Polynomials Professor Karen Smith Math 217: Eigenspaces and Characteristic Polynomials Professor Karen Smith (c)2015 UM Math Dept licensed under a Creative Commons By-NC-SA 4.0 International License. Definition: Let V T V be a linear transformation.

More information

and let s calculate the image of some vectors under the transformation T.

and let s calculate the image of some vectors under the transformation T. Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A =

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A = STUDENT S COMPANIONS IN BASIC MATH: THE ELEVENTH Matrix Reloaded by Block Buster Presumably you know the first part of matrix story, including its basic operations (addition and multiplication) and row

More information

Chapter 4 Euclid Space

Chapter 4 Euclid Space Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix DIAGONALIZATION Definition We say that a matrix A of size n n is diagonalizable if there is a basis of R n consisting of eigenvectors of A ie if there are n linearly independent vectors v v n such that

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Contents Eigenvalues and Eigenvectors. Basic Concepts. Applications of Eigenvalues and Eigenvectors 8.3 Repeated Eigenvalues and Symmetric Matrices 3.4 Numerical Determination of Eigenvalues and Eigenvectors

More information

ICS 6N Computational Linear Algebra Eigenvalues and Eigenvectors

ICS 6N Computational Linear Algebra Eigenvalues and Eigenvectors ICS 6N Computational Linear Algebra Eigenvalues and Eigenvectors Xiaohui Xie University of California, Irvine xhx@uci.edu Xiaohui Xie (UCI) ICS 6N 1 / 34 The powers of matrix Consider the following dynamic

More information

Math 205, Summer I, Week 4b:

Math 205, Summer I, Week 4b: Math 205, Summer I, 2016 Week 4b: Chapter 5, Sections 6, 7 and 8 (5.5 is NOT on the syllabus) 5.6 Eigenvalues and Eigenvectors 5.7 Eigenspaces, nondefective matrices 5.8 Diagonalization [*** See next slide

More information

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ.

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ. Linear Algebra 1 M.T.Nair Department of Mathematics, IIT Madras 1 Eigenvalues and Eigenvectors 1.1 Definition and Examples Definition 1.1. Let V be a vector space (over a field F) and T : V V be a linear

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

Maths for Signals and Systems Linear Algebra in Engineering

Maths for Signals and Systems Linear Algebra in Engineering Maths for Signals and Systems Linear Algebra in Engineering Lectures 13-14, Tuesday 11 th November 2014 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE LONDON Eigenvectors

More information

B5.6 Nonlinear Systems

B5.6 Nonlinear Systems B5.6 Nonlinear Systems 1. Linear systems Alain Goriely 2018 Mathematical Institute, University of Oxford Table of contents 1. Linear systems 1.1 Differential Equations 1.2 Linear flows 1.3 Linear maps

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Linear Algebra in Actuarial Science: Slides to the lecture

Linear Algebra in Actuarial Science: Slides to the lecture Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information