
The Leslie Matrix

The Leslie matrix is a generalization of the above: it is a matrix which describes the year-on-year changes in the numbers in the various age categories of a population. As above we write p_{n+1} = A p_n, where p_n and A are given by:

\[
\mathbf{p}_n = \begin{pmatrix} p_n^1 \\ p_n^2 \\ \vdots \\ p_n^m \end{pmatrix}, \qquad
A = \begin{pmatrix}
\alpha_1 & \alpha_2 & \cdots & \alpha_{m-1} & \alpha_m \\
\sigma_1 & 0 & \cdots & 0 & 0 \\
0 & \sigma_2 & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & \sigma_{m-1} & 0
\end{pmatrix}
\tag{2.42}
\]

where the Leslie matrix A is made up of the α_i, the number of births in age class i in year n, and the σ_i, the fraction (probability) of i-year-olds that survive to be (i+1)-year-olds.

The Leslie Matrix cont'd

Long-term population demographics are found, as with Eqn.(2.21), using the eigenvalues of A in Eqn.(2.42): det(A − λI) = 0 gives the general characteristic Leslie equation

\[
\lambda^{n} - \alpha_1\lambda^{n-1} - \alpha_2\sigma_1\lambda^{n-2} - \alpha_3\sigma_1\sigma_2\lambda^{n-3} - \cdots - \alpha_n\prod_{i=1}^{n-1}\sigma_i = 0
\tag{2.43}
\]

with the α_i and σ_i as defined above.
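The notes contain no code, but the construction in Eqn.(2.42) is easy to sketch numerically. Here is a minimal sketch, assuming NumPy is available; the helper name `leslie_matrix` is ours, not from the notes:

```python
import numpy as np

def leslie_matrix(alpha, sigma):
    """Assemble the m x m Leslie matrix of Eqn.(2.42): birth rates
    alpha (length m) on the top row, survival fractions sigma
    (length m-1) on the subdiagonal, zeros elsewhere."""
    m = len(alpha)
    A = np.zeros((m, m))
    A[0, :] = alpha
    A[np.arange(1, m), np.arange(m - 1)] = sigma
    return A

# Three age classes: no births in class 1, 4 and 3 offspring in
# classes 2 and 3; survival fractions 0.5 and 0.25.
A = leslie_matrix([0.0, 4.0, 3.0], [0.5, 0.25])
eigvals = np.linalg.eigvals(A)
print(np.round(max(eigvals.real), 3))  # dominant eigenvalue: 1.5
```

The α and σ values used here are those of the salmon example that follows, so the dominant eigenvalue 1.5 matches the growth rate found there.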

The Leslie Matrix cont'd

Eqn.(2.43) has exactly one positive eigenvalue λ, with corresponding eigenvector v. A general solution like Eqn.(2.19),

\[
\mathbf{P}_n = c_1\lambda_1^n\mathbf{v}_1 + c_2\lambda_2^n\mathbf{v}_2 + \cdots + c_m\lambda_m^n\mathbf{v}_m,
\]

with dominant eigenvalue λ_1 = λ, gives the long-term solution

\[
\mathbf{P}_n \approx c_1\lambda_1^n\mathbf{v}_1
\tag{2.44}
\]

with stable age distribution v_1 = v. The relative magnitudes of its elements give the proportions of the age classes in the stable state.

The Leslie Matrix cont'd

Example 3: Leslie Matrix for a Salmon Population

A salmon population has three age classes, and females in the second and third age classes produce 4 and 3 offspring, respectively, after each iteration. Suppose that 50% of the females in the first age class live on into the second age class and that 25% of the females in the second age class live on into the third age class. The Leslie matrix (c.f. Eqn.(1.14)) for this population is:

\[
A = \begin{pmatrix} 0 & 4 & 3 \\ 0.5 & 0 & 0 \\ 0 & 0.25 & 0 \end{pmatrix}
\tag{2.45}
\]

Fig. 2.4 shows the growth of the age classes in the population.

The Leslie Matrix cont'd

Example 3: Leslie Matrix for a Salmon Population

[Figure 2.4: Growth of Salmon Age Classes — populations of the first, second and third age classes over time steps 0 to 10.]

The eigenvalues of the Leslie matrix may be shown to be

\[
\Lambda = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}
= \begin{pmatrix} 1.5 & 0 & 0 \\ 0 & -1.309 & 0 \\ 0 & 0 & -0.191 \end{pmatrix}
\tag{2.46}
\]

and the eigenvector matrix S to be given by

\[
S = \begin{pmatrix} 0.9474 & 0.9320 & 0.2259 \\ 0.3158 & -0.3560 & -0.5914 \\ 0.0526 & 0.0680 & 0.7741 \end{pmatrix}
\tag{2.47}
\]

The dominant eigenvector, (0.9474, 0.3158, 0.0526)^T, can be normalized (divide by its sum) to give (0.72, 0.24, 0.04)^T.

The Leslie Matrix cont'd

Example 3: Leslie Matrix for a Salmon Population cont'd

Thus, in the long term, 72% of the population will be in the first age class, 24% in the second and 4% in the third. As the principal eigenvalue λ_1 = 1.5 > 1, the population always increases. The answer can be verified by taking any initial distribution of ages and repeatedly multiplying it by A: it always converges to the proportions above.

The Leslie Matrix cont'd

A side note on matrices similar to the Leslie matrix. Any subdiagonal matrix of the form

\[
\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
\tag{2.48}
\]

can be used to move a vector of age classes forward by one generation, e.g.

\[
\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} a \\ b \\ c \end{pmatrix}
= \begin{pmatrix} 0 \\ a \\ b \end{pmatrix}
\tag{2.49}
\]
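As a quick numerical check of the salmon example (a sketch assuming NumPy), the dominant eigenvector can be extracted and normalized to recover the stable age distribution, and the subdiagonal matrix of Eqn.(2.48) indeed shifts the age classes on by one:

```python
import numpy as np

A = np.array([[0.0, 4.0, 3.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.25, 0.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)          # index of the dominant eigenvalue (1.5)
v = np.abs(vecs[:, k].real)       # dominant eigenvector, made positive
stable = v / v.sum()              # normalize so the entries sum to 1
print(np.round(stable, 2))        # stable age distribution: [0.72 0.24 0.04]

# The subdiagonal matrix of Eqn.(2.48) moves age classes forward one step:
L = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]])
print(L @ np.array([7, 8, 9]))    # (a, b, c) -> (0, a, b): [0 7 8]
```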

Stability in Difference Equations

If a system represented by a difference equation can be written in the form u_n = A u_{n−1}, then the growth of the sequence as n → ∞ depends on the eigenvalues of A in the following way:

- Whenever all eigenvalues satisfy |λ_i| < 1, the system is said to be stable and u_n → 0 as n → ∞.
- Whenever all eigenvalues satisfy |λ_i| ≤ 1, the system is said to be neutrally stable and u_n is bounded as n → ∞.
- Whenever at least one eigenvalue satisfies |λ_i| > 1, the system is said to be unstable and u_n is unbounded as n → ∞.

Markov Processes

Often with difference equations we do not have certainties of events, but probabilities. So with the Leslie matrix of Eqn.(2.42):

\[
\mathbf{p}_n = \begin{pmatrix} p_n^1 \\ p_n^2 \\ \vdots \\ p_n^m \end{pmatrix}, \qquad
A = \begin{pmatrix}
\alpha_1 & \alpha_2 & \cdots & \alpha_{m-1} & \alpha_m \\
\sigma_1 & 0 & \cdots & 0 & 0 \\
0 & \sigma_2 & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & \sigma_{m-1} & 0
\end{pmatrix}
\tag{2.50}
\]

σ_i is the probability that i-year-olds survive to be (i+1)-year-olds. The Leslie model resembles a discrete-time Markov chain:

- Markov chain: a discrete random process with the Markov property.
- Markov property: the state at t_{n+1} depends only on the state at t_n.

The difference between the Leslie model and a Markov model is that in a Markov model each column of the matrix must sum to 1 (α_m + σ_m = 1 for each m), whereas in the Leslie model these sums may differ from 1.
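The three-way classification above can be read straight off the eigenvalue moduli. A minimal sketch, assuming NumPy; the function name `classify` is ours:

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify u_n = A u_{n-1} by the moduli of the eigenvalues of A,
    following the three cases listed above."""
    r = np.abs(np.linalg.eigvals(A))
    if np.all(r < 1 - tol):
        return "stable"            # u_n -> 0
    if np.all(r <= 1 + tol):
        return "neutrally stable"  # u_n stays bounded
    return "unstable"              # u_n unbounded

print(classify(np.array([[0.5, 0.0], [0.0, 0.9]])))  # stable
print(classify(np.array([[0.0, 4.0, 3.0],
                         [0.5, 0.0, 0.0],
                         [0.0, 0.25, 0.0]])))        # unstable (lambda = 1.5)
```

Note that transition matrices, whose largest eigenvalue is 1, fall into the neutrally stable case: probability vectors stay bounded rather than growing or vanishing.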

Stochastic Processes

A Markov Process is a particular case of a Stochastic Process. A Stochastic Process is one in which probabilities are associated with entering a particular state. A Markov Process is a particular kind of Stochastic Process in which the probability of entering a particular state depends only on the last state occupied and on the matrix governing the process. Such a matrix is termed the Transition matrix. If the terms in the Transition matrix remain constant from one timestep to the next, we say that the process is Stationary.

The general form of a discrete-time Markov chain is given by u_{n+1} = M u_n, where u_n and M are given by:

\[
\mathbf{u}_n = \begin{pmatrix} u_n^1 \\ u_n^2 \\ \vdots \\ u_n^p \end{pmatrix}, \qquad
M = \begin{pmatrix}
m_{11} & m_{12} & \cdots & m_{1\,p-1} & m_{1p} \\
m_{21} & m_{22} & \cdots & m_{2\,p-1} & m_{2p} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
m_{p1} & m_{p2} & \cdots & m_{p\,p-1} & m_{pp}
\end{pmatrix}
\tag{2.51}
\]

M is the p × p Transition matrix and its entries m_ij are called Transition probabilities, with the property that ∑_{i=1}^{p} m_ij = 1, i.e. each column sums to 1. m_ij represents the probability that an item in state j at t_n will be in state i at t_{n+1}.
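The column-sum property below Eqn.(2.51) is what distinguishes a genuine Transition matrix from, say, a Leslie matrix. A small sketch assuming NumPy; the helper name `is_transition_matrix` is ours:

```python
import numpy as np

def is_transition_matrix(M, tol=1e-9):
    """True if every entry of M is a probability and every column
    sums to 1, the defining properties of a Transition matrix."""
    M = np.asarray(M, dtype=float)
    in_range = np.all(M >= -tol) and np.all(M <= 1 + tol)
    return bool(in_range and np.allclose(M.sum(axis=0), 1.0))

print(is_transition_matrix([[0.5, 0.74],
                            [0.5, 0.26]]))      # True: a Markov Transition matrix
print(is_transition_matrix([[0.0, 4.0, 3.0],
                            [0.5, 0.0, 0.0],
                            [0.0, 0.25, 0.0]]))  # False: Leslie columns need not sum to 1
```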

Example 4: Two-Tree Forest Ecosystem

In a forest there are only two kinds of trees: oaks and cedars. At any time n the sample space of possible outcomes is (O, C), where O is the percentage of the tree population that is oak in a particular year and C the percentage that is cedar. Life spans are similar, and on death an oak is equally likely to be replaced by an oak or by a cedar, but a cedar is more likely to be replaced by an oak (probability 0.74) than by another cedar (probability 0.26). How can the changes in the different types of trees be tracked over the generations?

Example 4: Two-Tree Forest Ecosystem cont'd

This is a Markov Process, as the proportions of oak/cedar at time n+1 are defined by those at time n. The Transition matrix (from Eqn.(2.51)) is given in Table 2.1:

            From
  To        Oak     Cedar
  Oak       0.5     0.74
  Cedar     0.5     0.26

Table 2.1: Tree Transition Matrix

Table 2.1 in matrix form:

\[
M = \begin{pmatrix} 0.5 & 0.74 \\ 0.5 & 0.26 \end{pmatrix}
\tag{2.52}
\]

Example 4: Two-Tree Forest Ecosystem cont'd

To track changes in the system over time, define u_n = (o_n, c_n)^T, describing the probabilities of oak and cedar in the forest after n generations. If the forest is initially 50% oak and 50% cedar, then u_0 = (0.5, 0.5)^T. Hence

\[
\mathbf{u}_n = M\mathbf{u}_{n-1} = M^n\mathbf{u}_0
\tag{2.53}
\]

M can be shown to have one positive eigenvalue, 1, with corresponding (normalized) eigenvector (0.597, 0.403)^T, which is the long-term distribution of oaks and cedars in the forest.

Example 5: Soft Drink Market Share

In a soft-drinks market there are two brands: Coke and Pepsi. At any time n the sample space of possible outcomes is (P, C), where P is the percentage of the market share that drinks Pepsi in a particular year and C the percentage that drinks Coke. It is found that the chance of someone switching from Coke to Pepsi is 0.1 and the chance of someone switching from Pepsi to Coke is 0.3. How can the changes in the different proportions be modelled?
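The convergence of the forest ecosystem to its long-term distribution can be sketched by iterating u_n = M u_{n−1} directly (assuming NumPy):

```python
import numpy as np

M = np.array([[0.5, 0.74],
              [0.5, 0.26]])
u = np.array([0.5, 0.5])   # u_0: 50% oak, 50% cedar

for _ in range(50):        # u_n = M^n u_0
    u = M @ u

print(np.round(u, 3))      # converges to [0.597 0.403]
```

Starting from any other mix of oak and cedar gives the same limit, since the second eigenvalue of M (−0.24) has modulus less than 1 and its contribution dies away.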

Example 5: Soft Drink Market Share cont'd

Again, this is a Markov Process, as the proportions of Coke/Pepsi at time n+1 are defined by those at time n. The Transition matrix (from Eqn.(2.51)) is given in Table 2.2:

            From
  To        Coke    Pepsi
  Coke      0.9     0.3
  Pepsi     0.1     0.7

Table 2.2: Soft Drink Market Share Matrix

Table 2.2 in matrix form:

\[
M = \begin{pmatrix} 0.9 & 0.3 \\ 0.1 & 0.7 \end{pmatrix}
\tag{2.54}
\]

Example 5 can also be represented using a Transition Diagram:

[Figure 2.5: Market Share Transition Diagram — two states, Coke and Pepsi; P(stay Coke) = 0.9, P(Coke → Pepsi) = 0.1, P(Pepsi → Coke) = 0.3, P(stay Pepsi) = 0.7.]

The eigenvalues of the matrix in Eqn.(2.54) are 1 and 3/5. The eigenvector corresponding to the largest eigenvalue can be found to be (0.75, 0.25)^T: the long-run proportions of Coke and Pepsi drinkers.

Absorbing States

A state i of a Markov Process is said to be absorbing (or Trapping) if M_ii = 1 and M_ji = 0 for all j ≠ i.

Absorbing Markov Chain

A Markov Chain is absorbing if it has one or more absorbing states. If it has exactly one absorbing state (for instance state i), then the steady state is given by the eigenvector X where X_i = 1 and X_j = 0 for all j ≠ i.

Example 5: Soft Drink Market Share, Revisited

Seeing that the soft-drinks market is a highly liquid one, another manufacturer (KulKola) decides to trial an experimental product, Brand X. Despite an unappealing name, Brand X has the potential (so KulKola's Marketing department maintain) to "Shift the Paradigm" in cola consumption. They believe that, inside five years, they can capture nearly all the cola market. Investigate whether this claim can be substantiated, given that they can take 20% of Coke's share and 30% of Pepsi's per annum.

Example 5: Soft Drink Market Share, Revisited cont'd

Again, the proportions of Coke/Pepsi/Brand X at time n+1 are defined by those at time n. The Transition matrix (from Eqn.(2.51)) is given in Table 2.3:

            From
  To          Coke    Pepsi   Brand X
  Coke        0.6     0.4     0
  Pepsi       0.2     0.3     0
  Brand X     0.2     0.3     1

Table 2.3: Soft Drink Market Share Matrix Revisited

Table 2.3 in matrix form:

\[
M = \begin{pmatrix} 0.6 & 0.4 & 0 \\ 0.2 & 0.3 & 0 \\ 0.2 & 0.3 & 1 \end{pmatrix}
\tag{2.55}
\]

The Transition Diagram corresponding to Example 5 Revisited is:

[Figure 2.6: Market Share Transition Diagram — Brand X is absorbing (P = 1.0); per year Coke loses 0.2 and Pepsi loses 0.3 to Brand X, with P(Coke → Pepsi) = 0.2, P(Pepsi → Coke) = 0.4, P(stay Coke) = 0.6, P(stay Pepsi) = 0.3.]

The largest eigenvalue of the matrix in Eqn.(2.55) is 1, and the corresponding eigenvector can be found to be (0, 0, 1)^T: in the long run the proportions of Coke, Pepsi and Brand X tend to 0, 0 and 1, respectively.
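KulKola's five-year claim can be tested numerically by iterating the chain (a sketch assuming NumPy; the 50/50 Coke/Pepsi starting split is an illustrative assumption):

```python
import numpy as np

M = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.3, 0.0],
              [0.2, 0.3, 1.0]])

u = np.array([0.5, 0.5, 0.0])  # start: Coke/Pepsi split evenly, no Brand X
for year in range(5):
    u = M @ u
print(np.round(u, 3))          # after 5 years: [0.184 0.078 0.738]

for _ in range(95):            # keep iterating: Brand X absorbs everything
    u = M @ u
print(np.round(u, 3))          # long-run limit: [0. 0. 1.]
```

So after five years Brand X holds roughly 74% of the market rather than "nearly all" of it; but because Brand X is absorbing, (0, 0, 1)^T is indeed the eventual limit.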

Markov Models have a visible state, and so the state-transition probabilities and the Transition matrix can be noted by an observer. In another type of model, the Hidden Markov Model, this visibility restriction is relaxed and these probabilities are generally not known. These kinds of models are very useful in Computational Biology for gene-sequence analysis, as well as in many other applications.