T.8. Perron-Frobenius theory of positive matrices
From: H.R. Thieme, Mathematics in Population Biology, Princeton University Press, Princeton 2003


A vector $x \in \mathbb{R}^n$ is called positive, symbolically $x > 0$, if all components are non-negative and at least one is positive. It is called strictly positive, $x \gg 0$, if all components are positive. A square matrix is called positive if all entries are non-negative numbers and the matrix is not the zero matrix. It is called quasi-positive if it is not the zero matrix and all off-diagonal entries are non-negative numbers. It is called strictly positive if all entries are strictly positive.

If $n \ge 2$, an $n \times n$ matrix $A = (a_{ik})$ is called irreducible if the following holds: for any proper non-empty subset $P$ of $\{1, \dots, n\}$ there are $k \in P$, $j \notin P$ such that $a_{jk} \neq 0$. A $1 \times 1$ matrix is called irreducible if it is not the zero matrix. Equivalently, $A$ is irreducible if and only if, for all $i, k = 1, \dots, n$, there exist numbers $j_1, \dots, j_r \in \{1, \dots, n\}$ such that $i = j_1$, $k = j_r$, and $a_{j_l j_{l+1}} \neq 0$ for all $l = 1, \dots, r-1$. A non-negative matrix $A$ is irreducible if and only if the matrix exponential $e^A$ is strictly positive.

A non-negative square matrix $A$ is called primitive if one of its powers, $A^k$, has strictly positive entries. It is easily seen that a non-negative matrix is primitive if it is irreducible and all entries in its main diagonal are strictly positive (Exercise 1).

If $A$ is a complex square matrix, a complex number $\lambda$ is called a spectral value of $A$ if the matrix $\lambda - A$ is singular. The set of spectral values of $A$ is called the spectrum of $A$ and is denoted by $\sigma(A)$. For a matrix, a spectral value is an eigenvalue and vice versa, i.e., there exists a non-zero vector $x$, called an eigenvector of $A$, such that $Ax = \lambda x$. The spectral radius of the matrix $A$, $r(A)$, is defined as
$$r(A) = \max\{|\lambda| ;\ \lambda \in \sigma(A)\},$$
while the spectral bound of the matrix $A$, $s(A)$, is defined as
$$s(A) = \max\{\Re\lambda ;\ \lambda \in \sigma(A)\}.$$
Obviously, $s(A) \le r(A)$.

Theorem 8.1. Let $A$ be a positive matrix.
Then its spectral radius, $r(A)$, is an eigenvalue associated both with a positive eigenvector of $A$ and a positive eigenvector of the transposed matrix $A^*$. In particular, $s(A) = r(A)$.
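Theorem 8.1 is easy to probe numerically. The following sketch (the matrix $A$ is a made-up example, not from the text) uses NumPy to confirm that the spectral radius of a positive matrix is itself an eigenvalue, with positive eigenvectors for both $A$ and its transpose:

```python
import numpy as np

# A non-negative, non-zero (in fact irreducible) example matrix.
A = np.array([[0.0, 2.0, 0.0],
              [1.0, 1.0, 3.0],
              [0.0, 4.0, 0.5]])

eigvals, eigvecs = np.linalg.eig(A)
r = np.abs(eigvals).max()      # spectral radius r(A)
s = eigvals.real.max()         # spectral bound s(A)

# The eigenvalue of maximal real part is the Perron root r(A) ...
i = int(np.argmax(eigvals.real))
v = eigvecs[:, i].real
v = v if v.sum() > 0 else -v   # fix the arbitrary sign returned by eig

# ... and the transpose has the same root with its own positive eigenvector.
w_vals, w_vecs = np.linalg.eig(A.T)
w = w_vecs[:, int(np.argmax(w_vals.real))].real
w = w if w.sum() > 0 else -w

print(s, r)                    # s(A) == r(A), as Theorem 8.1 asserts
```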

For a proof see Schaefer (1974), Prop. I.2.3.

Theorem 8.2. Let $A$ be a positive matrix and $\mu \ge 0$ such that $A^q z \ge \mu z$ for some natural number $q$ and some vector $z = \tilde z - \hat z$ with $\tilde z, \hat z \in \mathbb{R}^m_+$, $-z \notin \mathbb{R}^m_+$. Then the spectral radius of $A$ satisfies $r(A) \ge \mu^{1/q}$.

Proof: This is the finite-dimensional special case of Theorem 2.5 by Krasnosel'skii (1964).

Theorem 8.3. Let $A$ be a quasi-positive matrix. Then its spectral bound (modulus of stability), $s(A)$, is an eigenvalue of $A$ associated both with a positive eigenvector of $A$ and a positive eigenvector of the transposed matrix $A^*$. Moreover, if $x > 0$ is a vector and $\mu \in \mathbb{R}$ such that $Ax \ge \mu x$, there exist some vector $z > 0$ and some scalar $\lambda \ge \mu$ such that $Az = \lambda z$, and in particular $s(A) \ge \mu$.

Proof: Since all off-diagonal elements of $A$ are non-negative, the matrix $A + \nu I$ is non-negative for some (and then all) sufficiently large $\nu > 0$. Let $\lambda \in \mathbb{C}$ be an eigenvalue of $A$ such that $\Re\lambda = s(A)$. Then there exists a (possibly complex) vector $x \neq 0$ such that $Ax = \lambda x$. So $(A + \nu)x = (\lambda + \nu)x$. Let $|x| = (|x_1|, \dots, |x_n|)$ be the modulus (or absolute value) of the vector $x$. Since $A + \nu I$ is a positive matrix, $|\nu + \lambda|\,|x| = |(A + \nu)x| \le (A + \nu)|x|$. By Theorem 8.2 and Theorem 8.1, there exist some $r \ge |\nu + \lambda| \ge \nu + s(A)$ and some vector $z > 0$ such that $(A + \nu)z = rz$. So $Az = (r - \nu)z$. By definition of $s(A)$, $r - \nu \le s(A)$. Together with our previous inequality, $r - \nu = s(A)$, and $s(A)$ is an eigenvalue of $A$ associated with a non-negative eigenvector. Since $s(A) = s(A^*)$ and $A^*$ is a quasi-positive matrix, we can conclude that $s(A)$ is also associated with a positive eigenvector of $A^*$.

Now let $Ax \ge \mu x$ for some vector $x > 0$ and some $\mu \in \mathbb{R}$. Then $(A + \nu)x \ge (\nu + \mu)x$. Since $A + \nu I$ is a positive matrix, by Theorem 8.2 and Theorem 8.1 there exist some $r \ge \nu + \mu$ and some vector $z > 0$ such that $(A + \nu)z = rz$. Obviously $Az = (r - \nu)z$ and $r - \nu \ge \mu$. So we choose $\lambda = r - \nu$. By definition of $s(A)$, $\lambda \le s(A)$ and so $\mu \le s(A)$.

Theorem 8.4. Let $A$ and $D$ be positive matrices, $D$ diagonal with all diagonal elements being positive.
Then $s(A - D)$ and $r(D^{-1}A) - 1$ have the same sign, i.e., these numbers are simultaneously positive, zero, or negative.
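The sign relation in Theorem 8.4 can be checked numerically. In the following sketch (the matrices $A$ and $D$ are made-up examples), scaling $A$ up or down moves $s(A - D)$ and $r(D^{-1}A) - 1$ through zero together:

```python
import numpy as np

# Non-negative example matrix A and positive diagonal matrix D.
A = np.array([[0.0, 1.5],
              [0.5, 0.2]])
D = np.diag([1.0, 2.0])

def spectral_bound(M):
    """s(M): maximal real part of the eigenvalues of M."""
    return np.linalg.eigvals(M).real.max()

def spectral_radius(M):
    """r(M): maximal modulus of the eigenvalues of M."""
    return np.abs(np.linalg.eigvals(M)).max()

s_AD = spectral_bound(A - D)
r_DA = spectral_radius(np.linalg.inv(D) @ A)
print(np.sign(s_AD), np.sign(r_DA - 1.0))  # same sign, per Theorem 8.4

# Scaling A changes both quantities, but their signs always agree.
signs_match = all(
    np.sign(spectral_bound(c * A - D))
    == np.sign(spectral_radius(np.linalg.inv(D) @ (c * A)) - 1.0)
    for c in (0.1, 1.0, 10.0)
)
```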

Proof: Let λ = s(a D). Since the off-diagonal elements of A D are non-negative, by Theorem 8.3 there exists some vector x > 0 such that (A D)x = λx. Reorganizing terms, Ax = Dx + λx. Since D is an invertible matrix, D 1 Ax = x + λd 1 x (1 + λɛ)x with ɛ being the reciprocal of the largest of the diagonal elements in D. So r(d 1 A) 1 + λɛ. Now let r = r(d 1 A) 1. By Theorem 1 there exist some vector x > 0 such that D 1 Ax = rx. Reorganizing terms, Ax = rdx. So (A D)x = (r 1)Dx (r 1)dx with d being the smallest diagonal element of D. By Theorem 8.3, s(a D) (r 1)d. The continuous and discrete dynamical systems associated with irreducible quasipositive or even primitive matrices have a strikingly simple large-time behavior. In the following x, y = m k=1 x ky k is the canonical scalar (or inner) product on R m. Theorem 8.5. Let A be a quasi-positive irreducible matrix. Then s = s(a) is an eigenvalue of both A and A with strictly positive eigenvectors v and v and s(a) is larger than the real parts of all other eigenvalues of A. Further any non-negative solution x of the differential equation x = Ax which is not identical 0 satisfies e s(t r) x(t) x(r), v v, v v, t, r > 0. Proof: If A is a quasi-positive irreducible matrix, then A+ν is a positive irreducible matrix for a sufficiently large ν > 0. So all matrices e ta = e νt e t(a+ν) are strictly positive and so form an irreducible uniformly continuous semigroup of compact operators on the Banach lattice R m. The claim now follows from Theorem 9.11 in Heijmans, de Pagter (1987). Remarks. Theorem 8.5 has significant side effects for an irreducible quasi-positive matrix A: (a) Every subspace that is forward invariant under A and contains a positive vector also contains the eigenvector v associated with s(a). In particular (b) Eigenvalues of A different from s(a) have no positive eigenvector or positive generalized eigenvector. 
(c) There are no generalized eigenvectors associated with $s(A)$, and the eigenspace associated with $s(A)$ is one-dimensional. In other words, $s(A)$ is a simple eigenvalue.

(d) $s(A) > \Re\lambda$ for all eigenvalues $\lambda$ of $A$ that are different from $s(A)$.

Proof of (a): Let $Y$ be a subspace of $\mathbb{R}^m$ that is forward invariant under $A$ and $x_0 \in Y$ positive. Consider the solution of $x' = Ax$ with $x(0) = x_0$. Then $x(t) = e^{tA}x_0 \in Y$ for all $t \ge 0$, and so is
$$v = \lim_{t\to\infty} \frac{\langle v, v^*\rangle}{\langle x_0, v^*\rangle}\, e^{-s(A)t} x(t).$$

Proof of (d): Let $\lambda$ be an eigenvalue of $A$ that is different from $s(A)$, but with $\Re\lambda = s(A)$. Let $x$ be the solution of $x' = Ax$ with $x(0) = w$, $w$ being an eigenvector associated with $\lambda$. Then $e^{-s(A)t}x(t) = e^{i(\Im\lambda)t}w$ does not converge, contradicting the statement in Theorem 8.5.

Lemma 8.6. Let $A$ be a quasi-positive irreducible matrix and $Ax \le \lambda x$ or $A^*x \le \lambda x$ with $\lambda \in \mathbb{R}$ and $x$ being a positive vector. Then $s(A) \le \lambda$.

Proof: We consider $Ax \le \lambda x$; the other case is done similarly. By Theorem 8.5, there exists a strictly positive vector $v^*$ such that $A^*v^* = sv^*$ with $s = s(A)$. Then
$$\lambda\langle x, v^*\rangle = \langle \lambda x, v^*\rangle \ge \langle Ax, v^*\rangle = \langle x, A^*v^*\rangle = \langle x, sv^*\rangle = s\langle x, v^*\rangle.$$
Since the vector $x$ is positive and the vector $v^*$ strictly positive, $\langle x, v^*\rangle > 0$, and $\lambda \ge s = s(A)$ follows by division.

Remark. Choosing $x$ with $x_j = 1$ for $j = 1, \dots, m$ in Lemma 8.6 provides the estimates
$$s(A) \le \max_{1\le j\le m} \sum_{k=1}^m a_{jk}, \qquad s(A) \le \max_{1\le k\le m} \sum_{j=1}^m a_{jk}.$$
Since this vector $x$ is strictly positive, it is actually sufficient that $A$ is quasi-positive.

Theorem 8.7. Let $A$ be a primitive matrix with spectral radius $r = r(A) = s(A)$. Then
$$r^{-k} A^k x \to \frac{\langle x, v^*\rangle}{\langle v, v^*\rangle}\, v, \qquad k \to \infty,$$
where $v$ and $v^*$ are strictly positive eigenvectors of $A$ and $A^*$ associated with the eigenvalue $r$ according to Theorem 8.5.

A perhaps more intuitive equivalent formulation is the following one.

Corollary 8.8. Let $A$ be a primitive matrix with spectral radius $r = r(A) = s(A)$. Then, for every positive vector $x$ and for every norm on $\mathbb{R}^m$,
$$\frac{A^k x}{\|A^k x\|} \to \frac{v}{\|v\|}, \qquad k \to \infty,$$
where $v$ is a positive eigenvector of $A$ associated with the eigenvalue $r$ according to Theorem 8.5.

Proof: Corollary 8.8 obviously follows from Theorem 8.7 by the continuity of the norm. The converse follows by choosing the norm $\|x\| = \langle |x|, v^*\rangle$, where $|x|$ is the vector $(|x_1|, \dots, |x_n|)$ and $v^*$ a strictly positive eigenvector of $A^*$ associated with $r$.

Theorem 8.7 is only valid for primitive matrices. Actually, for non-negative matrices, primitivity is equivalent to the ergodicity statement in Theorem 8.7 (Schaefer, 1974, I.Proposition 7.3). But mean ergodicity still holds for irreducible matrices, which means that the convergence in Theorem 8.7 holds in average (Schaefer, 1974, end of Section I.6). Notice that the convergence statement in Theorem 8.7 implies the convergence statement in Theorem 8.9 (Exercise 2).

Theorem 8.9. Let $A$ be an irreducible positive matrix with spectral radius $r = r(A)$. Then
$$\frac{1}{k+1}\sum_{j=0}^k r^{-j} A^j x \to \frac{\langle x, v^*\rangle}{\langle v, v^*\rangle}\, v, \qquad k \to \infty,$$
where $v$ and $v^*$ are the strictly positive eigenvectors of $A$ and $A^*$ associated with the eigenvalue $r$ according to Theorem 8.5.

Noticing that $\langle A^j x, v^*\rangle = r^j\langle x, v^*\rangle$, Theorem 8.9 can be reformulated as follows.

Theorem 8.10. Let $A$ be an irreducible positive matrix with spectral radius $r = r(A)$ and $x$ a positive vector. Then
$$\frac{1}{k+1}\sum_{j=0}^k \frac{A^j x}{\langle A^j x, v^*\rangle} \to \frac{v}{\langle v, v^*\rangle}, \qquad k \to \infty,$$
where $v$ and $v^*$ are the strictly positive eigenvectors of $A$ and $A^*$ associated with the eigenvalue $r$ according to Theorem 8.5.
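The contrast between primitive and merely irreducible matrices can be seen numerically. The sketch below (both matrices are made-up examples) first runs the power iteration of Corollary 8.8 on a primitive matrix, then shows that for a non-primitive irreducible matrix the plain powers oscillate while the Cesàro averages of Theorem 8.9 still converge:

```python
import numpy as np

# Corollary 8.8: for a primitive matrix, normalised powers A^k x
# converge to the Perron direction v / ||v||.
A = np.array([[1.0, 2.0],
              [3.0, 0.5]])       # non-negative, irreducible, positive diagonal entries -> primitive
x = np.array([1.0, 0.0])         # a positive starting vector
for _ in range(100):             # power iteration: x <- A x / ||A x||
    x = A @ x
    x /= np.linalg.norm(x)

vals, vecs = np.linalg.eig(A)
v = vecs[:, int(np.argmax(vals.real))].real
v /= np.linalg.norm(v)
v = v if v.sum() > 0 else -v
print(x, v)                      # x has converged to v / ||v||

# Theorem 8.9: B is irreducible but not primitive (period 2), so B^j y
# oscillates; the averages (1/(k+1)) sum_j r^{-j} B^j y still converge.
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
r = 1.0                          # spectral radius of B
y = np.array([1.0, 0.0])
terms = []
z = y.copy()
for j in range(1000):
    terms.append(z / r**j)       # r^{-j} B^j y
    z = B @ z
avg = np.mean(terms, axis=0)
print(avg)                       # -> (<y, v*>/<v, v*>) v = [0.5, 0.5]
```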

Exercises

1. Show that an irreducible non-negative matrix is primitive if all entries in the main diagonal are strictly positive.

2. Let $(z(l))$ be a convergent sequence of vectors in a normed vector space. Show that the averages
$$\frac{1}{l+1}\sum_{j=0}^{l} z(j)$$
converge to the same limit as $l \to \infty$.
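As a numerical illustration of Exercise 1 (not a proof), the following sketch uses a made-up irreducible matrix with strictly positive diagonal, a cycle $1 \to 2 \to 3 \to 1$ plus $0.5$ on the diagonal, and finds the first power with strictly positive entries:

```python
import numpy as np

# Irreducible non-negative matrix with strictly positive main diagonal.
A = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [1.0, 0.0, 0.5]])

# Find the smallest k with A^k strictly positive (primitivity).
k_found = None
P = np.eye(3)
for k in range(1, 10):
    P = P @ A
    if (P > 0).all():
        k_found = k
        break
print(k_found)                   # here already A^2 is strictly positive
```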