The Leslie Matrix

The Leslie matrix is a generalization of the above. It is a matrix that describes the year-on-year changes in the numbers in the various age classes of a population. As above we write p_{n+1} = A p_n, where p_n and A are given by:

\[
\mathbf{p}_n = \begin{pmatrix} p_n^1 \\ p_n^2 \\ \vdots \\ p_n^m \end{pmatrix},
\qquad
A = \begin{pmatrix}
\alpha_1 & \alpha_2 & \cdots & \alpha_{m-1} & \alpha_m \\
\sigma_1 & 0 & \cdots & 0 & 0 \\
0 & \sigma_2 & & & \vdots \\
0 & 0 & \cdots & \sigma_{m-1} & 0
\end{pmatrix}
\tag{2.42}
\]

where the Leslie matrix A is made up of the α_i, the number of births in age class i in year n, and the σ_i, the fraction/probability that i year-olds survive to become (i+1) year-olds.

The Leslie Matrix cont'd

Long-term population demographics are found as with Eqn.(2.21), using the eigenvalues of A in Eqn.(2.42) and det(A − λI) = 0, to give the general Leslie characteristic equation:

\[
\lambda^n - \alpha_1\lambda^{n-1} - \alpha_2\sigma_1\lambda^{n-2} - \alpha_3\sigma_1\sigma_2\lambda^{n-3} - \cdots - \alpha_n\prod_{i=1}^{n-1}\sigma_i = 0
\tag{2.43}
\]

where, again, α_i and σ_i are the births in age class i and the survival probabilities, as above.
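The projection step p_{n+1} = A p_n can be sketched in a few lines of plain Python. This is only an illustrative sketch: the helper builds A from given α_i and σ_i, the numeric values at the end are those of the salmon example that follows (Example 3), and the starting vector of 100 first-year individuals is an assumption.

```python
# Sketch of one Leslie-matrix projection step, using plain Python lists.

def leslie_matrix(alphas, sigmas):
    """Build the m x m Leslie matrix from per-class birth rates
    alphas (length m) and survival probabilities sigmas (length m-1)."""
    m = len(alphas)
    A = [[0.0] * m for _ in range(m)]
    A[0] = list(alphas)                # first row: birth rates alpha_i
    for i, s in enumerate(sigmas):     # subdiagonal: survival fractions sigma_i
        A[i + 1][i] = s
    return A

def step(A, p):
    """One year of projection: the matrix-vector product A p."""
    return [sum(a * x for a, x in zip(row, p)) for row in A]

A = leslie_matrix([0.0, 4.0, 3.0], [0.5, 0.25])  # values from Example 3 below
p0 = [100.0, 0.0, 0.0]                           # assumed starting population
p1 = step(A, p0)                                 # -> [0.0, 50.0, 0.0]
```

After one year the 100 first-year individuals produce no offspring (α_1 = 0) and half survive into the second age class, as the survival fraction σ_1 = 0.5 dictates.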
The Leslie Matrix cont'd

Eqn.(2.43) has one positive eigenvalue λ and corresponding eigenvector v. For a general solution like Eqn.(2.19),

\[
P_n = c_1 \lambda_1^n v_1 + c_2 \lambda_2^n v_2 + \cdots + c_m \lambda_m^n v_m,
\]

the dominant eigenvalue λ_1 = λ gives the long-term solution:

\[
P_n \approx c_1 \lambda_1^n v_1
\tag{2.44}
\]

with stable age distribution v_1 = v. The relative magnitudes of its elements give the proportions in the stable state.

The Leslie Matrix cont'd — Example 3: Leslie Matrix for a Salmon Population

A salmon population has three age classes, and females in the second and third age classes produce 4 and 3 offspring, respectively, after each iteration. Suppose that 50% of the females in the first age class survive into the second age class and that 25% of the females in the second age class survive into the third age class. The Leslie matrix (cf. Eqn.(2.42)) for this population is:

\[
A = \begin{pmatrix} 0 & 4 & 3 \\ 0.5 & 0 & 0 \\ 0 & 0.25 & 0 \end{pmatrix}
\tag{2.45}
\]

Fig. 2.4 shows the growth of age classes in the population.
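The iteration behind Fig. 2.4 can be reproduced by repeatedly applying the matrix of Eqn.(2.45). A minimal sketch follows; the initial population (100 fish, all in the first age class) is an assumption for illustration, since the figure's actual starting vector is not stated.

```python
# Sketch: iterate p_{n+1} = A p_n with the salmon Leslie matrix of
# Eqn.(2.45) and record the trajectory of the three age classes.

A = [[0.0, 4.0, 3.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.25, 0.0]]

p = [100.0, 0.0, 0.0]     # assumed initial population
history = [p]
for n in range(10):       # ten years, as in Fig. 2.4's time axis
    p = [sum(a * x for a, x in zip(row, p)) for row in A]
    history.append(p)

# After some transient oscillation, each age class grows by a factor of
# roughly 1.5 per year, the dominant eigenvalue found below.
```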
[Figure 2.4: Growth of Salmon Age Classes — populations of the first, second and third age classes plotted against time.]

The Leslie Matrix cont'd — Example 3: Leslie Matrix for a Salmon Population

The eigenvalues of the Leslie matrix may be shown to be

\[
\Lambda = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}
= \begin{pmatrix} 1.5 & 0 & 0 \\ 0 & -1.309 & 0 \\ 0 & 0 & -0.191 \end{pmatrix}
\tag{2.46}
\]

and the eigenvector matrix S to be given by

\[
S = \begin{pmatrix} 0.9474 & 0.9320 & 0.2259 \\ 0.3158 & -0.356 & -0.591 \\ 0.0526 & 0.0680 & 0.7741 \end{pmatrix}
\tag{2.47}
\]

The dominant eigenvector, (0.9474, 0.3158, 0.0526)^T, can be normalized (divide by its sum) to give (0.72, 0.24, 0.04)^T.
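As a quick sanity check (a sketch, not part of the notes), the quoted dominant eigenpair can be verified directly: A v should match 1.5 v to within the rounding of the quoted four-figure entries.

```python
# Verify that the quoted dominant eigenvector of the salmon Leslie
# matrix satisfies A v = 1.5 v, up to the rounding of the quoted values.

A = [[0.0, 4.0, 3.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.25, 0.0]]
v = [0.9474, 0.3158, 0.0526]   # dominant column of S in Eqn.(2.47)

Av = [sum(a * x for a, x in zip(row, v)) for row in A]
ok = all(abs(ax - 1.5 * x) < 1e-3 for ax, x in zip(Av, v))
print(ok)    # -> True
```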
The Leslie Matrix cont'd — Example 3: Leslie Matrix for a Salmon Population cont'd

Thus, in the long term, 72% of the population will be in the first age class, 24% in the second and 4% in the third. As the principal eigenvalue λ_1 = 1.5 > 1, the population always increases. The answer can be verified by taking any initial distribution of ages and multiplying it repeatedly by A: it always converges to the proportions above.

The Leslie Matrix cont'd

A side note on matrices similar to the Leslie matrix. Any matrix with ones on the subdiagonal and zeros elsewhere, of the form

\[
\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
\tag{2.48}
\]

can be used to move a vector of age classes forward by one generation, e.g.

\[
\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} a \\ b \\ c \end{pmatrix}
= \begin{pmatrix} 0 \\ a \\ b \end{pmatrix}
\tag{2.49}
\]
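The verification suggested above can be sketched as follows. The starting distribution is arbitrary (any positive vector works), and the vector is renormalised each step so the numbers stay finite while the proportions converge; both choices are implementation details, not part of the notes.

```python
# Power iteration on the salmon Leslie matrix: any positive starting
# distribution converges in proportion to the stable age distribution.

A = [[0.0, 4.0, 3.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.25, 0.0]]

p = [10.0, 300.0, 7.0]           # arbitrary positive starting distribution
for _ in range(60):
    p = [sum(a * x for a, x in zip(row, p)) for row in A]
    total = sum(p)
    p = [x / total for x in p]   # renormalise to keep proportions only

print([round(x, 2) for x in p])  # -> [0.72, 0.24, 0.04]
```

Convergence is geometric at rate |λ_2/λ_1| ≈ 1.309/1.5 ≈ 0.87 per step, so 60 iterations are far more than enough for two decimal places.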
Stability in Difference Equations

If a system represented by a difference equation can be written in the form u_n = A u_{n−1}, then the growth of the sequence as n → ∞ will depend on the eigenvalues of A, in the following way:

- Whenever all eigenvalues satisfy |λ_i| < 1, the system is said to be stable and u_n → 0 as n → ∞.
- Whenever all eigenvalues satisfy |λ_i| ≤ 1, the system is said to be neutrally stable and u_n is bounded as n → ∞.
- Whenever at least one eigenvalue satisfies |λ_i| > 1, the system is said to be unstable and u_n is unbounded as n → ∞.

Markov Processes

Often with difference equations we have not certainties of events, but probabilities. So with the Leslie matrix of Eqn.(2.42):

\[
\mathbf{p}_n = \begin{pmatrix} p_n^1 \\ p_n^2 \\ \vdots \\ p_n^m \end{pmatrix},
\qquad
A = \begin{pmatrix}
\alpha_1 & \alpha_2 & \cdots & \alpha_{m-1} & \alpha_m \\
\sigma_1 & 0 & \cdots & 0 & 0 \\
0 & \sigma_2 & & & \vdots \\
0 & 0 & \cdots & \sigma_{m-1} & 0
\end{pmatrix}
\tag{2.50}
\]

σ_i is the probability that i year-olds survive to become (i+1) year-olds.

- The Leslie model resembles a discrete-time Markov chain.
- A Markov chain is a discrete random process with the Markov property.
- Markov property: the state at t_{n+1} depends only on that at t_n.
- The difference between the Leslie model and a Markov model is: in a Markov chain α_m + σ_m must equal 1 for each m; in the Leslie model these sums may differ from 1.
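The three-way classification above can be expressed as a small helper. This is only a sketch: the eigenvalue moduli are supplied by hand (computing them is a separate problem), and the tolerance is an implementation choice to guard against floating-point noise near |λ| = 1.

```python
# Classify the stability of u_n = A u_{n-1} from the moduli |lambda_i|
# of the eigenvalues of A, following the three cases above.

def classify(moduli, tol=1e-12):
    if all(m < 1 - tol for m in moduli):
        return "stable"              # u_n -> 0 as n -> infinity
    if all(m <= 1 + tol for m in moduli):
        return "neutrally stable"    # u_n remains bounded
    return "unstable"                # u_n is unbounded

print(classify([0.5, 0.9]))            # -> stable
print(classify([1.0, 0.3]))            # -> neutrally stable
print(classify([1.5, 1.309, 0.191]))   # -> unstable (salmon Leslie matrix)
```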
Stochastic Processes

A Markov process is a particular case of a stochastic process. A stochastic process is one where probabilities are associated with entering a particular state. A Markov process is a particular kind of stochastic process in which the probability of entering a particular state depends only on the last state occupied, together with the matrix governing the process. Such a matrix is termed the Transition matrix. If the terms in the Transition matrix remain constant from one timestep to the next, we say that the process is Stationary.

The general form of a discrete-time Markov chain is given by:

u_{n+1} = M u_n

where u_n, M are given by:

\[
\mathbf{u}_n = \begin{pmatrix} u_n^1 \\ u_n^2 \\ \vdots \\ u_n^p \end{pmatrix},
\qquad
M = \begin{pmatrix}
m_{11} & m_{12} & \cdots & m_{1\,p-1} & m_{1p} \\
m_{21} & m_{22} & \cdots & m_{2\,p-1} & m_{2p} \\
\vdots & & & & \vdots \\
m_{p1} & m_{p2} & \cdots & m_{p\,p-1} & m_{pp}
\end{pmatrix}
\tag{2.51}
\]

M is the p × p Transition matrix and its entries m_ij are called transition probabilities, with the property that ∑_{i=1}^p m_ij = 1 (each column sums to 1). m_ij represents the probability that an item will go from state j at t_n to state i at t_{n+1}.
Example 4: Two Tree Forest Ecosystem

In a forest there are only two kinds of trees: oaks and cedars. At any time n the sample space of possible outcomes is (O, C), where O is the % of the tree population that is oak in a particular year and C the % that is cedar. Life spans are similar, and on death it is equally likely that an oak is replaced by an oak or by a cedar, but cedars are more likely to be replaced by an oak (probability 0.74) than by another cedar (probability 0.26). How can the changes in the different types of trees be tracked over the generations?

Example 4: Two Tree Forest Ecosystem

This is a Markov process, as the proportions of oak/cedar at time n+1 are defined by those at time n. The Transition matrix (from Eqn.(2.51)) is Table 2.1:

             From
  To       Oak    Cedar
  Oak      0.5    0.74
  Cedar    0.5    0.26

Table 2.1: Tree Transition Matrix

Table 2.1 in matrix form:

\[
M = \begin{pmatrix} 0.5 & 0.74 \\ 0.5 & 0.26 \end{pmatrix}
\tag{2.52}
\]
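The generational changes described by Table 2.1 can be tracked by iterating u_{n+1} = M u_n. A minimal sketch, starting from an even 50/50 mix (the number of iterations is an arbitrary choice, large enough for convergence):

```python
# Iterate the tree-replacement Markov chain of Table 2.1 and watch the
# (oak, cedar) proportions settle to a steady state.

M = [[0.5, 0.74],
     [0.5, 0.26]]

u = [0.5, 0.5]                     # (oak, cedar) proportions
for _ in range(50):
    u = [sum(m * x for m, x in zip(row, u)) for row in M]

print([round(x, 3) for x in u])    # -> [0.597, 0.403]
```

Because each column of M sums to 1, the total proportion is conserved at every step; only the split between oak and cedar changes.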
Example 4: Two Tree Forest Ecosystem

To track changes in the system over time, define u_n = (o_n, c_n)^T describing the proportions of oak and cedar in the forest after n generations. If the forest is initially 50% oak and 50% cedar, then u_0 = (0.5, 0.5)^T. Hence

u_n = M u_{n−1} = M^n u_0    (2.53)

M can be shown to have one positive eigenvalue, equal to 1, with corresponding eigenvector (0.597, 0.403)^T, which is the long-term (steady-state) distribution of oaks and cedars.

Example 5: Soft Drink Market Share

In a soft drinks market there are two brands: Coke and Pepsi. At any time n the sample space of possible outcomes is (P, C), where P is the % of the market share that drinks Pepsi in a particular year and C the % that drinks Coke. It is found that the chance of someone switching from Coke to Pepsi is 0.1 and the chance of someone switching from Pepsi to Coke is 0.3. How can the changes in the different proportions be modelled?
Example 5: Soft Drink Market Share

Again, this is a Markov process, as the proportions of Coke/Pepsi at time n+1 are defined by those at time n. The Transition matrix (from Eqn.(2.51)) is Table 2.2:

             From
  To       Coke   Pepsi
  Coke     0.9    0.3
  Pepsi    0.1    0.7

Table 2.2: Soft Drink Market Share Matrix

Table 2.2 in matrix form:

\[
M = \begin{pmatrix} 0.9 & 0.3 \\ 0.1 & 0.7 \end{pmatrix}
\tag{2.54}
\]

Example 5 can also be represented using a Transition Diagram, thus:

[Figure 2.5: Market Share Transition Diagram — Coke retains its drinkers with probability 0.9 and loses them to Pepsi with probability 0.1; Pepsi retains with probability 0.7 and loses to Coke with probability 0.3.]

The eigenvalues of the matrix in Eqn.(2.54) are 1 and 3/5. The eigenvector corresponding to the largest eigenvalue can be found to be (0.75, 0.25)^T. These are the long-term proportions of Coke and Pepsi drinkers.
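The long-run shares can be confirmed numerically by iterating with the matrix of Eqn.(2.54). The 50/50 starting split below is an assumption for illustration; the steady state is independent of it.

```python
# Iterate the Coke/Pepsi Markov chain of Eqn.(2.54) until the market
# shares converge to the steady state (0.75, 0.25).

M = [[0.9, 0.3],
     [0.1, 0.7]]

u = [0.5, 0.5]                    # assumed initial (Coke, Pepsi) shares
for _ in range(40):
    u = [sum(m * x for m, x in zip(row, u)) for row in M]

print([round(x, 2) for x in u])   # -> [0.75, 0.25]
```

The deviation from the steady state shrinks by the subdominant eigenvalue 3/5 each year, so convergence is rapid.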
Absorbing States

A state i of a Markov process is said to be absorbing (or trapping) if m_ii = 1 and m_ji = 0 for all j ≠ i.

Absorbing Markov Chain

A Markov chain is absorbing if it has one or more absorbing states. If it has exactly one absorbing state (for instance state i), then the steady state is given by the eigenvector X where X_i = 1 and X_j = 0 for all j ≠ i.

Example 5: Soft Drink Market Share, Revisited

Seeing that the soft drinks market is a highly liquid one, another manufacturer (KulKola) decides to trial an experimental product, "Brand X". Despite an unappealing name, Brand X has the potential (so KulKola's marketing department maintain) to "shift the paradigm" in cola consumption. They believe that, inside five years, they can capture nearly all the cola market. Investigate whether this claim can be substantiated, given that they can take 20% of Coke's share and 30% of Pepsi's per annum.
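The definition can be sketched as a small helper, using the column-stochastic convention of Eqn.(2.51): state i is absorbing exactly when column i of M is the unit vector e_i, so nothing ever leaves state i. The test matrix is the Brand X transition matrix from the example that follows.

```python
# Find the absorbing states of a column-stochastic transition matrix:
# state i is absorbing when M[i][i] = 1 and M[j][i] = 0 for all j != i.

def absorbing_states(M):
    n = len(M)
    return [i for i in range(n)
            if M[i][i] == 1.0
            and all(M[j][i] == 0.0 for j in range(n) if j != i)]

M = [[0.6, 0.4, 0.0],     # Brand X matrix from the next slide
     [0.2, 0.3, 0.0],
     [0.2, 0.3, 1.0]]
print(absorbing_states(M))   # -> [2] : Brand X is the absorbing state
```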
Example 5: Soft Drink Market Share, Revisited

Again, the proportions of Coke/Pepsi/Brand X at time n+1 are defined by those at time n. The Transition matrix (from Eqn.(2.51)) is Table 2.3:

                From
  To          Coke   Pepsi   Brand X
  Coke        0.6    0.4     0
  Pepsi       0.2    0.3     0
  Brand X     0.2    0.3     1

Table 2.3: Soft Drink Market Share Matrix Revisited

Table 2.3 in matrix form:

\[
M = \begin{pmatrix} 0.6 & 0.4 & 0 \\ 0.2 & 0.3 & 0 \\ 0.2 & 0.3 & 1 \end{pmatrix}
\tag{2.55}
\]

The Transition Diagram corresponding to Example 5 Revisited is:

[Figure 2.6: Market Share Transition Diagram — Brand X is absorbing (retains with probability 1.0); Coke retains 0.6 and loses 0.2 to Pepsi and 0.2 to Brand X; Pepsi retains 0.3 and loses 0.4 to Coke and 0.3 to Brand X.]

The largest eigenvalue of the matrix in Eqn.(2.55) is 1. The corresponding eigenvector can be found to be (0, 0, 1)^T, giving the long-term proportions of Coke, Pepsi and Brand X, respectively: eventually Brand X takes the whole market.
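KulKola's five-year claim can be tested directly by iterating Eqn.(2.55) for five steps. The starting market split (75% Coke, 25% Pepsi, matching the steady state of the original two-brand market) is an assumption.

```python
# Five years of the Brand X Markov chain of Eqn.(2.55), starting from
# an assumed 75/25 Coke/Pepsi split with Brand X at zero.

M = [[0.6, 0.4, 0.0],
     [0.2, 0.3, 0.0],
     [0.2, 0.3, 1.0]]

u = [0.75, 0.25, 0.0]             # (Coke, Pepsi, Brand X) shares
for year in range(5):
    u = [sum(m * x for m, x in zip(row, u)) for row in M]

print([round(x, 2) for x in u])   # -> [0.19, 0.08, 0.73]
```

So although Brand X is the absorbing state and dominates eventually, it holds only about 73% of the market after five years: short of "nearly all", so the marketing claim is not substantiated on this model.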
Markov models have a visible state, and so the state transition probabilities and the transition matrix can be noted by an observer. In another type of model, the Hidden Markov Model, this visibility restriction is relaxed and these probabilities are generally not known. These kinds of models are very useful in computational biology for gene sequence analysis, as well as in many other applications.