The Leslie Matrix


The Leslie Matrix

The Leslie matrix is a generalization of the above. It describes annual increases in the various age categories of a population. As above we write p_{n+1} = A p_n, where p_n and A are given by:

\[
\mathbf{p}_n = \begin{pmatrix} p_n^1 \\ p_n^2 \\ \vdots \\ p_n^m \end{pmatrix},
\qquad
A = \begin{pmatrix}
\alpha_1 & \alpha_2 & \cdots & \alpha_{m-1} & \alpha_m \\
\sigma_1 & 0 & \cdots & 0 & 0 \\
0 & \sigma_2 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & \sigma_{m-1} & 0
\end{pmatrix}
\qquad (3.42)
\]

Here \alpha_i is the number of births in age class i in year n, and \sigma_i is the probability that i-year-olds survive to i+1 years old.

The Leslie Matrix (/2)

Long-term population demographics are found as with Eqn.(3.21), using the eigenvalues \lambda_i of A in Eqn.(3.42): setting \det(A - \lambda I) = 0 gives the Leslie characteristic equation:

\[
\lambda^n - \alpha_1 \lambda^{n-1} - \alpha_2 \sigma_1 \lambda^{n-2} - \alpha_3 \sigma_1 \sigma_2 \lambda^{n-3} - \cdots - \alpha_n \prod_{i=1}^{n-1} \sigma_i = 0
\qquad (3.43)
\]
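The construction of A in Eqn.(3.42) and the coefficients of Eqn.(3.43) can be checked numerically. A minimal NumPy sketch (the `leslie` helper is our own, not from the lecture), using the salmon rates of Example 3.4 below:

```python
import numpy as np

def leslie(alphas, sigmas):
    """Assemble the Leslie matrix of Eqn.(3.42): births alphas along the
    top row, survival probabilities sigmas on the subdiagonal."""
    m = len(alphas)
    A = np.zeros((m, m))
    A[0, :] = alphas
    A[np.arange(1, m), np.arange(m - 1)] = sigmas
    return A

A = leslie([0.0, 4.0, 3.0], [0.5, 0.25])  # the salmon rates of Example 3.4

# Coefficients of det(A - lambda*I), highest power first; per Eqn.(3.43)
# they should be [1, -a1, -a2*s1, -a3*s1*s2] = [1, 0, -2, -0.375]
coeffs = np.poly(A)
print(np.round(coeffs, 3))
```

`np.poly` returns the characteristic-polynomial coefficients of a square matrix, so this confirms Eqn.(3.43) term by term for this example.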

The Leslie Matrix (/3)

Eqn.(3.43) has one positive eigenvalue \lambda and corresponding eigenvector v. For a general solution like Eqn.(3.19),

\[
\mathbf{P}_n = c_1 \lambda_1^n \mathbf{v}_1 + c_2 \lambda_2^n \mathbf{v}_2 + \cdots + c_m \lambda_m^n \mathbf{v}_m,
\]

the dominant eigenvalue \lambda_1 = \lambda gives the long-term solution:

\[
\mathbf{P}_n \approx c_1 \lambda_1^n \mathbf{v}_1
\qquad (3.44)
\]

with stable age distribution \mathbf{v}_1 = \mathbf{v}. The relative magnitudes of its elements give the stable-state proportions.

The Leslie Matrix (/4): Example 3.4

Example 3.4: Leslie Matrix for a Salmon Population

Salmon have 3 age classes, and females in the 2nd and 3rd produce 4 and 3 offspring, respectively, each season. Suppose 50% of females in the 1st age class survive to the 2nd age class and 25% of females in the 2nd age class live on into the 3rd. The Leslie matrix (c.f. Eqn.(3.42)) for this population is:

\[
A = \begin{pmatrix} 0 & 4 & 3 \\ 0.5 & 0 & 0 \\ 0 & 0.25 & 0 \end{pmatrix}
\qquad (3.45)
\]

Fig. 3.4 shows the growth of the age classes in the population.

Leslie Matrix (/5): Example 3.4

FIGURE 3.4: Growth of Salmon Age Classes

The Leslie Matrix (/6): Example 3.4

The eigenvalues of the Leslie matrix may be shown to be

\[
\Lambda = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}
= \begin{pmatrix} 1.5 & 0 & 0 \\ 0 & -1.309 & 0 \\ 0 & 0 & -0.191 \end{pmatrix}
\qquad (3.46)
\]

and the eigenvector matrix S to be given by

\[
S = \begin{pmatrix} 0.9474 & 0.9320 & 0.2259 \\ 0.3158 & -0.356 & -0.591 \\ 0.0526 & 0.0680 & 0.7741 \end{pmatrix}
\qquad (3.47)
\]

Dominant eigenvector: (0.9474, 0.3158, 0.0526)^T, which can be normalized (divide by its sum) to (0.72, 0.24, 0.04)^T.
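These eigenvalues and the normalized dominant eigenvector can be reproduced with NumPy; a small check, not part of the original slides:

```python
import numpy as np

A = np.array([[0.0, 4.0, 3.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.25, 0.0]])  # Eqn.(3.45)

evals, evecs = np.linalg.eig(A)
order = np.argsort(-evals.real)        # dominant eigenvalue first
evals = evals[order].real              # all three eigenvalues are real here
print(np.round(evals, 3))              # 1.5, then the two negative roots

v1 = evecs[:, order[0]].real
v1 = v1 / v1.sum()                     # normalize so entries sum to 1
print(np.round(v1, 2))                 # the stable age distribution
```

Dividing the dominant eigenvector by its sum reproduces the (0.72, 0.24, 0.04) proportions quoted on the slide.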

The Leslie Matrix (/7): Example 3.4

Example 3.4: Leslie Matrix for a Salmon Population, cont'd

In the long term, 72% of the population are in the 1st age class, 24% in the 2nd and 4% in the 3rd. Since the principal eigenvalue \lambda_1 = 1.5 exceeds 1, the population increases. This can be verified by taking any initial age distribution and repeatedly multiplying it by A: it always converges to the proportions above.

The Leslie Matrix (/8)

A side note on matrices similar to the Leslie matrix: any lower-diagonal matrix of the form

\[
\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
\qquad (3.48)
\]

can move a vector of age classes forward by one generation, e.g.

\[
\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} a \\ b \\ c \end{pmatrix}
= \begin{pmatrix} 0 \\ a \\ b \end{pmatrix}
\qquad (3.49)
\]
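The convergence claim above can be tested directly by iterating p_{n+1} = A p_n from an arbitrary start (a sketch; the initial vector is made up):

```python
import numpy as np

A = np.array([[0.0, 4.0, 3.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.25, 0.0]])   # the salmon Leslie matrix, Eqn.(3.45)

p = np.array([10.0, 80.0, 10.0])   # any initial age distribution (arbitrary)
for _ in range(200):
    p = A @ p                      # advance one year at a time

print(np.round(p / p.sum(), 2))    # proportions converge to [0.72 0.24 0.04]
```

The raw counts blow up like 1.5^n, but the *proportions* settle to the stable age distribution regardless of the starting vector.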

Stability in Difference Equations

If a difference-equation system has the form u_n = A u_{n-1}, then its growth as n \to \infty depends on the eigenvalues \lambda_i as follows:

- If all eigenvalues satisfy |\lambda_i| < 1, the system is stable and u_n \to 0 as n \to \infty.
- If all eigenvalues satisfy |\lambda_i| \le 1, the system is neutrally stable and u_n is bounded as n \to \infty.
- If at least one eigenvalue satisfies |\lambda_i| > 1, the system is unstable and u_n is unbounded as n \to \infty.

Markov Processes

Often with difference equations we do not have certainties of events, but probabilities. Compare with the Leslie matrix of Eqn.(3.42):

\[
\mathbf{p}_n = \begin{pmatrix} p_n^1 \\ p_n^2 \\ \vdots \\ p_n^m \end{pmatrix},
\qquad
A = \begin{pmatrix}
\alpha_1 & \alpha_2 & \cdots & \alpha_{m-1} & \alpha_m \\
\sigma_1 & 0 & \cdots & 0 & 0 \\
0 & \sigma_2 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & \sigma_{m-1} & 0
\end{pmatrix}
\qquad (3.50)
\]

where \sigma_i is the probability that i-year-olds survive to i+1 years old.

- The Leslie model resembles a discrete-time Markov chain.
- A Markov chain is a discrete random process with the Markov property.
- Markov property: the state at t_{n+1} depends only on the state at t_n.
- The difference between the Leslie model and a Markov model: in a Markov chain each column must sum to 1 (\alpha_m + \sigma_m = 1 for each m), whereas in the Leslie model these sums may be greater or less than 1.
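The three stability cases can be encoded as a small helper (our own sketch; it ignores borderline subtleties such as repeated eigenvalues on the unit circle):

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify u_n = A u_{n-1} by the moduli of the eigenvalues of A."""
    mods = np.abs(np.linalg.eigvals(A))
    if np.all(mods < 1 - tol):
        return "stable"              # u_n -> 0
    if np.all(mods <= 1 + tol):
        return "neutrally stable"    # u_n stays bounded
    return "unstable"                # u_n is unbounded

print(classify(np.array([[0.5, 0.0], [0.0, -0.9]])))   # stable
print(classify(np.array([[0.9, 0.3], [0.1, 0.7]])))    # neutrally stable
print(classify(np.array([[0.0, 4.0, 3.0],
                         [0.5, 0.0, 0.0],
                         [0.0, 0.25, 0.0]])))          # unstable (lambda = 1.5)
```

Note the middle example is a Markov transition matrix: its dominant eigenvalue is exactly 1, which is why Markov chains sit in the neutrally stable case.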

Markov Processes (/2): Stochastic Processes

A Markov process is a particular case of a stochastic process^8. A Markov process is a stochastic process in which the probability of entering a state depends only on the last state and on the governing transition matrix. If the entries of the transition matrix are constant between subsequent timesteps, the process is stationary.

FIGURE 3.5: General Case of a Markov Process (Max Heimel, TU Berlin)

^8 One where probabilities govern entering a state.

Markov Processes (/3)

The general form of a discrete-time Markov chain is u_{n+1} = M u_n, where u_n and M are given by:

\[
\mathbf{u}_n = \begin{pmatrix} u_n^1 \\ u_n^2 \\ \vdots \\ u_n^p \end{pmatrix},
\qquad
M = \begin{pmatrix}
m_{11} & m_{12} & \cdots & m_{1\,p-1} & m_{1p} \\
m_{21} & m_{22} & \cdots & m_{2\,p-1} & m_{2p} \\
\vdots & & \ddots & & \vdots \\
m_{p1} & m_{p2} & \cdots & m_{p\,p-1} & m_{pp}
\end{pmatrix}
\qquad (3.51)
\]

M is the p \times p transition matrix and its entries m_{ij} are called transition probabilities; each column sums to 1, i.e. \sum_{i=1}^{p} m_{ij} = 1. Here m_{ij} is the probability that an item in state j at t_n moves to state i at t_{n+1}.
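The column-sum condition \sum_i m_{ij} = 1 is easy to check programmatically; a sketch with a made-up 3-state transition matrix:

```python
import numpy as np

# A hypothetical 3-state transition matrix (columns = "from", rows = "to")
M = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

# Each column must sum to 1: the system must go *somewhere* at each step
assert np.allclose(M.sum(axis=0), 1.0)

# Consequently a probability vector stays a probability vector under M:
u = np.array([1.0, 0.0, 0.0])
for _ in range(3):
    u = M @ u
print(u.sum())   # total probability is conserved
```

Conservation of the total (the entries of u always summing to 1) is exactly what distinguishes a Markov transition matrix from a general Leslie matrix.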

Markov Processes (/4): Example 3.5

Example 3.5: Two-Tree Forest Ecosystem

In a forest there are only two kinds of trees: oaks and cedars.

- At any time n the sample space of possible outcomes is (O, C), where O is the percentage of the tree population that is oak in a particular year and C the percentage that is cedar.
- The two species have the same life spans, and on death an oak has the same chance of being replaced by an oak as by a cedar.
- Cedars, however, are more likely to be replaced by an oak (p = 0.74) than by another cedar (p = 0.26).

How can we track changes in the different tree types with time?

Markov Processes (/5): Example 3.5

This is a Markov process, as the oak/cedar fractions at t_{n+1} are defined by those at t_n. The transition matrix (from Eqn.(3.51)) is given in Table 3.1:

TABLE 3.1: Tree Transition Matrix

                From Oak   From Cedar
    To Oak        0.5         0.74
    To Cedar      0.5         0.26

Table 3.1 in matrix form is:

\[
M = \begin{pmatrix} 0.5 & 0.74 \\ 0.5 & 0.26 \end{pmatrix}
\qquad (3.52)
\]

Markov Processes (/6): Example 3.5

To track system changes, let u_n = (o_n, c_n)^T be the probabilities of oak and cedar after n generations. If the forest is initially 50% oak and 50% cedar, then u_0 = (0.5, 0.5)^T. Hence

\[
\mathbf{u}_n = M \mathbf{u}_{n-1} = M^n \mathbf{u}_0
\qquad (3.53)
\]

M can be shown to have one positive eigenvalue, \lambda = 1, with corresponding eigenvector (0.597, 0.403)^T. This is the long-term distribution of oaks and cedars as n grows.

Markov Processes (/7): Example 3.6

Example 3.6: Soft Drink Market Share

In a soft-drinks market there are two brands: Coke and Pepsi.

- At any time n the sample space of possible outcomes is (P, C), where P is the percentage of the market share that is Pepsi's in one year and C the percentage that is Coke's.
- The chance of a customer switching from Coke to Pepsi is 0.1, and the chance of switching from Pepsi to Coke is 0.3.

How can the changes in the different proportions be modelled?
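The eigenvector (0.597, 0.403)^T can be confirmed by simply iterating Eqn.(3.53); a NumPy sketch:

```python
import numpy as np

M = np.array([[0.5, 0.74],
              [0.5, 0.26]])    # the tree matrix, Eqn.(3.52)

u = np.array([0.5, 0.5])       # 50% oak, 50% cedar initially
for _ in range(100):
    u = M @ u                  # u_n = M^n u_0

print(np.round(u, 3))          # converges to [0.597 0.403]
```

Convergence is fast here because the other eigenvalue of M is -0.24, whose powers die away quickly.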

Markov Processes (/8): Example 3.6

This is a Markov process, as the shares of Coke/Pepsi at t_{n+1} are defined by those at t_n. The transition matrix (from Eqn.(3.51)) is given in Table 3.2:

TABLE 3.2: Soft Drink Market Share Matrix

                From Coke   From Pepsi
    To Coke       0.9          0.3
    To Pepsi      0.1          0.7

Table 3.2 in matrix form:

\[
M = \begin{pmatrix} 0.9 & 0.3 \\ 0.1 & 0.7 \end{pmatrix}
\qquad (3.54)
\]

Markov Processes (/9): Example 3.6

Example 3.6 can also be represented using a transition diagram:

FIGURE 3.6: Market Share Transition Diagram

The eigenvalues of the matrix in Eqn.(3.54) are 1 and 3/5. The eigenvector of the dominant eigenvalue \lambda = 1 is (0.75, 0.25)^T: these are the long-term proportions of Coke and Pepsi.
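The same numbers fall out of a direct eigendecomposition (a sketch; NumPy's eigenvalue ordering is not guaranteed, hence the argmax):

```python
import numpy as np

M = np.array([[0.9, 0.3],
              [0.1, 0.7]])                # Eqn.(3.54)

evals, evecs = np.linalg.eig(M)
print(np.round(np.sort(evals.real), 2))   # eigenvalues 0.6 and 1.0

k = int(np.argmax(evals.real))            # pick the dominant eigenvalue, 1
v = evecs[:, k].real
v = v / v.sum()                           # rescale entries to sum to 1
print(np.round(v, 2))                     # long-run shares [0.75 0.25]
```

Rescaling is needed because an eigenvector is only defined up to a scalar; dividing by the sum turns it into a probability vector.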

Markov Processes (/10): Markov States

Absorbing States

A state i of a Markov process is said to be absorbing (or trapping) if M_{ii} = 1; all other entries in column i are then 0, so the state, once entered, is never left.

Absorbing Markov Chain

A Markov chain is absorbing if it has one or more absorbing states. If it has exactly one absorbing state (for instance state i), then the steady state is given by the eigenvector X where X_i = 1 and X_j = 0 for all j \ne i.

Markov Processes (/11): Example 3.6

Example 3.6: Soft Drink Market Share, Revisited

As the soft-drinks market is liquid, KulKola decides to trial the product Brand X. Despite its name, Brand X has the potential^9 to shift the paradigm in cola consumption. KulKola think that, inside 5 years, they can capture nearly all the market. Investigate whether this is true, given that Brand X takes 20% of Coke's share and 30% of Pepsi's per annum.

^9 from KulKola's marketing viewpoint
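Absorbing states can be spotted mechanically from the diagonal (a helper of our own, using the column convention of Eqn.(3.51)):

```python
import numpy as np

def absorbing_states(M, tol=1e-12):
    """Indices i with M[i, i] = 1: column i then has zeros elsewhere,
    so state i, once entered, is never left."""
    return [i for i in range(M.shape[0]) if abs(M[i, i] - 1.0) < tol]

M = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.3, 0.0],
              [0.2, 0.3, 1.0]])   # the Brand X matrix used next, Eqn.(3.55)
print(absorbing_states(M))        # state 2, i.e. Brand X
```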

Markov Processes (/11): Example 3.6

Example 3.6: Soft Drink Market Share

Again, the shares of Coke/Pepsi/Brand X at n + 1 are defined by those at n. The transition matrix (from Eqn.(3.51)) is given in Table 3.3:

TABLE 3.3: Soft Drink Market Share Matrix Revisited

                 From Coke   From Pepsi   From Brand X
    To Coke        0.6          0.4            0
    To Pepsi       0.2          0.3            0
    To Brand X     0.2          0.3            1

Table 3.3 in matrix form:

\[
M = \begin{pmatrix} 0.6 & 0.4 & 0 \\ 0.2 & 0.3 & 0 \\ 0.2 & 0.3 & 1 \end{pmatrix}
\qquad (3.55)
\]

Markov Processes (/12): Example 3.6

The transition diagram corresponding to Example 3.6 Revisited is:

FIGURE 3.7: Market Share Transition Diagram

\lambda_{max} of the matrix in Eqn.(3.55) is 1, and v_{max} = (0, 0, 1)^T, giving the long-run shares of Coke, Pepsi and Brand X, respectively: Brand X eventually absorbs the entire market.
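KulKola's 5-year claim can be tested by iterating Eqn.(3.55). A sketch, assuming the market starts at the old Coke/Pepsi steady state (0.75, 0.25, 0); the slides do not fix the initial shares, so this starting point is an assumption:

```python
import numpy as np

M = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.3, 0.0],
              [0.2, 0.3, 1.0]])   # Eqn.(3.55)

u = np.array([0.75, 0.25, 0.0])  # assumed initial shares: Coke, Pepsi, Brand X
for year in range(5):
    u = M @ u                    # advance one year

print(np.round(u, 3))            # shares after 5 years
```

Under this assumption Brand X ends the fifth year with roughly 73% of the market: large, but not "nearly all". Because Brand X is absorbing, it does take the whole market eventually, just not within 5 years.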

Markov Processes (/13): Hidden Markov Models

- Markov models have a visible state, so the transition probabilities and the transition matrix are observable.
- In hidden Markov models this visibility restriction is relaxed: the transition probabilities are generally not known.
- Possibly the observer sees the underlying variable through a layer of noise.

FIGURE 3.8: Hidden Markov Process (Max Heimel, TU Berlin)

Applications of Non-Linear Models: Logistic Growth

- Linear difference equations are useful because they permit closed-form solutions to be obtained easily. However, their solutions often don't agree with observation.
- In many areas of science, and especially population biology, non-linear models are better (i.e. more realistic).
- Here we look at simple non-linear models for population growth over time.
- The simplest model is the logistic equation. This can have stability problems but is a very useful, basic model.
- We look at the logistic equation in discrete form and, later, in continuous form.
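As a preview of the discrete case, the standard logistic map x_{n+1} = r x_n (1 - x_n) can be iterated in a few lines; the growth rate r = 2 and the starting point are assumed values for illustration only:

```python
def logistic_step(x, r=2.0):
    """One step of the discrete logistic equation x -> r*x*(1 - x)."""
    return r * x * (1 - x)

x = 0.1                  # assumed initial population fraction
for _ in range(50):
    x = logistic_step(x)

print(round(x, 3))       # settles at the fixed point 1 - 1/r = 0.5
```

For this mild value of r the iteration converges to the fixed point; the "stability problems" mentioned above appear for larger r, discussed later in the notes.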