Eigenvalues in Applications


Abstract

We look at the role of eigenvalues and eigenvectors in various applications. Specifically, we consider differential equations, Markov chains, population growth, and consumption.

1 Differential equations

We consider linear differential equations of the form

    du/dt = Au.    (1)

The complex matrix A is n × n and

    u = u(t) = (u_1(t), u_2(t), ..., u_n(t))^T.    (2)

The equations are linear, since if v and w are two solutions, then so is u = αv + βw, as shown by

    du/dt = d/dt (αv + βw) = α dv/dt + β dw/dt = αAv + βAw = A(αv + βw) = Au.

1.1 Scalar case (n = 1)

In the scalar case, i.e., n = 1, the differential equation (1) reduces to the scalar differential equation

    du/dt = λu,    (3)

where λ ∈ C. Its general solution is u = e^{λt} c, where c is an arbitrary constant. By specifying an initial condition u(0) = u_0 we can determine c from u_0 = u(0) = e^0 c = c. Thus, the general solution given an initial condition is

    u(t) = e^{λt} u_0.    (4)

The real part of λ determines growth or decay according to

    Re(λ) < 0: exponential decay,
    Re(λ) > 0: exponential growth,
    Re(λ) = 0: neither growth nor decay.

A nonzero imaginary part adds oscillation in the form of a sine wave.
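As a minimal numerical sketch of solution (4), the values of λ and u_0 below are illustrative choices, not from the text; the loop confirms that Re(λ) controls growth or decay while a purely imaginary λ leaves the magnitude constant.

    import numpy as np

    u0 = 1.0
    for lam in (-0.5, 0.5, 1j):              # Re < 0 decays, Re > 0 grows, imaginary oscillates
        t = np.linspace(0.0, 10.0, 5)        # a few sample times
        u = np.exp(lam * t) * u0             # general solution (4) with u(0) = u0
        print(lam, np.round(np.abs(u), 3))   # |u(t)| decays, grows, or stays at 1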

1.2 General case (n > 1)

In the general case, i.e., n > 1, we can solve (1) similarly to the scalar case if the matrix A is diagonalizable. Using the spectral decomposition A = SΛS^{-1} and multiplying both sides of (1) from the left by S^{-1}, we get

    d/dt (S^{-1} u) = Λ (S^{-1} u).    (5)

After changing variables from u to y = S^{-1} u we have

    dy/dt = Λy,    (6)

which is nothing more than the set

    dy_1/dt = λ_1 y_1,
    dy_2/dt = λ_2 y_2,
    ...
    dy_n/dt = λ_n y_n,    (7)

of n decoupled scalar differential equations. The solutions to (7) are y_i(t) = e^{λ_i t} c_i for i = 1, 2, ..., n, where the c_i are arbitrary constants. Using linear algebra notation, we have the solutions

    y(t) = diag(e^{λ_1 t}, e^{λ_2 t}, ..., e^{λ_n t}) c.    (8)

The constants c are determined from the n initial conditions u(0) = u_0 of the original differential equation (1) by

    u(0) = S y(0) = S c = u_0, so c = S^{-1} u_0.    (9)

We put the pieces back together and find the general solution

    u(t) = S y(t) = S diag(e^{λ_1 t}, e^{λ_2 t}, ..., e^{λ_n t}) S^{-1} u_0    (10)

to the differential equation (1) with initial condition u_0.
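Here is a small sketch of formula (10), assuming A is diagonalizable; the 2 × 2 matrix A, the initial value u_0, and the time t are made-up illustrative values.

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])             # illustrative diagonalizable matrix
    u0 = np.array([1.0, 0.0])
    t = 0.7

    lam, S = np.linalg.eig(A)                # A = S diag(lam) S^{-1}
    c = np.linalg.solve(S, u0)               # c = S^{-1} u0, formula (9)
    u_t = S @ (np.exp(lam * t) * c)          # u(t) = S diag(e^{lam t}) S^{-1} u0, formula (10)
    print(np.allclose(S @ c, u0))            # True: at t = 0 the formula recovers u(0) = u0
    print(u_t)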

1.3 Matrix exponential

We would like to express the general solution of (1) as

    u(t) = e^{At} u_0.    (11)

But how do we generalize the exponential function to matrices? Recall that the scalar exponential function e^x is defined by the infinite series

    e^x = 1 + x + (1/2)x^2 + (1/6)x^3 + ···.    (12)

Let us define the matrix exponential e^A by simply replacing x with A in (12), as in

    e^A = I + A + (1/2)A^2 + (1/6)A^3 + ···.    (13)

The series (13) is defined for square matrices and it always converges. But is

    u(t) = e^{At} u_0    (14)

actually a solution of (1) if we use the definition (13)? The answer is yes, since the derivative of e^{At} u_0 with respect to t is

    d/dt e^{At} u_0 = (A + A^2 t + (1/2)A^3 t^2 + ···) u_0 = A e^{At} u_0.

If A has the spectral decomposition A = SΛS^{-1}, then the identity e^{At} = S e^{Λt} S^{-1} shows an immediate connection with the previous section. Note that the matrix exponential solves the differential equation even when A is not diagonalizable.
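The following sketch compares a truncated version of series (13), evaluated at At, with scipy.linalg.expm; the 2 × 2 matrix A, u_0, and t are again illustrative values, not from the text.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    u0 = np.array([1.0, 0.0])
    t = 0.5

    series = np.zeros_like(A)
    term = np.eye(2)
    for k in range(1, 30):                   # partial sums of I + At + (1/2)(At)^2 + ...
        series = series + term
        term = term @ (A * t) / k
    print(np.allclose(series @ u0, expm(A * t) @ u0))   # True: both give u(t) = e^{At} u0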

2 Markov chains

A Markov chain is a random process that can be in one of a finite number of states at any given time, and the next state depends only on the current state. We are given n^2 transition probabilities p_ij ∈ [0, 1]. The number p_ij gives the probability that the next state is state i if the current state is state j. By putting the transition probabilities in a square matrix A such that a_ij = p_ij we obtain a Markov matrix.

Definition 1 (Markov matrix). A square matrix A is a Markov matrix if it satisfies the following two properties. First, every entry in A is nonnegative. Second, every column of A adds to 1.

Two facts about Markov matrices follow directly from the definition. First, multiplying a Markov matrix A with a nonnegative vector u_0 produces a nonnegative vector u_1 = Au_0. Second, if the components of u_0 add to 1, then so do the components of u_1 = Au_0. The first fact is trivial, and the second fact can be shown as follows. Let e = (1 1 ... 1)^T be a vector of all 1s. Then

    e^T u_1 = e^T (A u_0) = (e^T A) u_0 = e^T u_0 = 1.

After k steps in a Markov chain, the initial probability distribution u_0 changes to u_k = A^k u_0. For many Markov matrices, the limit of u_k as k → ∞ exists and is unique. We say that the Markov chain approaches the steady state u_∞. The existence of a steady state also shows that 1 is an eigenvalue of A and the corresponding eigenvector is a steady state. A famous theorem due to Perron and Frobenius shows that a Markov matrix with strictly positive entries has 1 as its largest eigenvalue. That eigenvalue is also simple (multiplicity equal to 1), and the corresponding eigenvector can be scaled so that it has positive entries.

Let us show that 1 is indeed an eigenvalue of any Markov matrix. Every column of A − I sums to 1 − 1 = 0, so the rows of A − I add up to the zero row, which means that A − I is singular. Hence, λ = 1 is an eigenvalue of A.

Suppose that A is diagonalizable, i.e., A = SΛS^{-1}. We have

    u_k = A^k u_0 = S Λ^k S^{-1} u_0.    (15)

Thus, if λ_1 = 1 and |λ_i| < 1 for all i ≠ 1, then u_k approaches a steady state in the direction of the dominant eigenvector s_1.
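A small sketch of this steady-state behaviour, assuming the 3 × 3 transition matrix below (made up for illustration): each column adds to 1 and all entries are positive, so u_k = A^k u_0 approaches the eigenvector of A for the eigenvalue 1.

    import numpy as np

    A = np.array([[0.8, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.1, 0.1, 0.5]])          # Markov matrix: columns sum to 1, entries positive
    u = np.array([1.0, 0.0, 0.0])            # initial distribution, components add to 1

    for _ in range(200):                     # u_k = A^k u_0, as in formula (15)
        u = A @ u
    print(u)                                 # steady state; still nonnegative and sums to 1

    lam, S = np.linalg.eig(A)
    s1 = S[:, np.argmax(lam.real)]           # eigenvector for the largest eigenvalue, which is 1
    print(np.real(s1 / s1.sum()))            # the same steady state, scaled to sum to 1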

3 Population growth

A Leslie model describes the long-term age distribution and growth rate of a population. It is popular in, e.g., ecology, and it works as follows. Partition the population into n disjoint age groups. The populations of the age groups are collected in a vector p with n components. After one time step, the population in each age group is given by Ap, where A is an n × n Leslie matrix. The long-term growth rate and age distribution come from the largest eigenvalue and its corresponding eigenvector.

The Leslie matrix is constructed as follows. The members of the youngest age group are the product only of reproductive activity. Formally, we write

    p_1^{(k+1)} = Σ_{i=1}^{n} f_i p_i^{(k)},    (16)

where the coefficients f_i reflect the rate of reproduction in age group i. If f_i = 0, then no reproduction occurs in that age group, and if f_i = 2, then each individual in the age group produces two (unique) offspring.

In this model, all the members of the oldest age group die off in one time step. The members of the other age groups have a chance of surviving and thereby transitioning to the next age group. Formally, we write this as

    p_{i+1}^{(k+1)} = s_i p_i^{(k)},    (17)

where the coefficients s_i reflect the chance of surviving age group i and transitioning to age group i + 1. For example, consider n = 4 and combine the formulas (16) and (17) to obtain, in matrix form, the Leslie model

    p^{(k+1)} = A p^{(k)},  where  A = [ f_1  f_2  f_3  f_4
                                         s_1   0    0    0
                                          0   s_2   0    0
                                          0    0   s_3   0 ].

4 Consumption

Let A be a consumption matrix, p the production levels, and y the demand. The production levels are given by

    p = (I − A)^{-1} y.    (18)

The question is: When does (I − A)^{-1} exist, and when is it a nonnegative matrix? Intuitively, if A is small then (I − A)^{-1} exists and the economy can meet any demand, but if A is large then production consumes too much of the products and, as a consequence, the economy cannot meet the demand. What we consider small and large depends entirely on the eigenvalues of A.

We show that when the series B = I + A + A^2 + ··· converges, then B(I − A) = I and thus B = (I − A)^{-1}. Suppose that A is diagonalizable and write B as

    B = S (I + Λ + Λ^2 + ···) S^{-1}.    (19)

The infinite series within the parentheses is nothing but n independent scalar geometric series, which converge if and only if |λ_i| < 1 for all i. The matrix B is obviously nonnegative, since it is a sum of nonnegative matrices (powers of the nonnegative consumption matrix A). Hence the inverse of (I − A) exists and is nonnegative when all the eigenvalues satisfy |λ_i| < 1.
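To close, here are two quick numerical checks with made-up matrices: the long-term growth rate and age distribution of a Leslie model (Section 3), and the inverse of I − A for a consumption matrix (Section 4).

    import numpy as np

    # Leslie matrix for n = 4 age groups, with illustrative fertility and survival rates.
    L = np.array([[0.0, 1.5, 1.0, 0.0],
                  [0.7, 0.0, 0.0, 0.0],
                  [0.0, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 0.3, 0.0]])
    lam, V = np.linalg.eig(L)
    k = np.argmax(lam.real)
    print(lam[k].real)                       # dominant eigenvalue = long-term growth factor
    v = np.real(V[:, k])
    print(v / v.sum())                       # its eigenvector = long-term age distribution

    # Consumption matrix whose eigenvalues all satisfy |lambda_i| < 1.
    A = np.array([[0.2, 0.3],
                  [0.1, 0.4]])
    y = np.array([10.0, 20.0])               # demand
    p = np.linalg.solve(np.eye(2) - A, y)    # production levels, formula (18)
    B = sum(np.linalg.matrix_power(A, j) for j in range(60))  # truncated I + A + A^2 + ...
    print(np.allclose(B @ y, p))             # True: the series gives (I - A)^{-1}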