Markov Chains and Pandemics


Caleb Dedmore and Brad Smith

December 8, 2016

Abstract

Markov chain theory is a powerful statistical tool for making predictions about the future of a system based solely on a model of its present state. In this paper we describe how the theory is used to track the growth and possible outcomes of diseases as they spread through human populations. We establish a simple example, refine this hypothetical model by adding elements to it over time, and ultimately arrive at a model that mirrors the prediction models used for diseases in the real world.

1. Introduction

Markov chain theory is one of the most widely used stochastic processes for making predictions about future events from mathematical abstractions and models of the processes being predicted. It underlies the Leslie matrix in conservation biology, the tracking of weather patterns, and much more. The theory was invented by Andrey Markov to make predictions about the future states of a system based solely on its current state. In practice this means that if we have a system with multiple possible conditions (e.g., a coin about to be tossed), we can predict the likelihood of the system being in a given state (heads or tails) after a set number of trials (coin flips).

There are two limiting requirements a system must meet in order to be modeled by a Markov chain. First, each trial must lead to a finite set of outcomes. Without a limited set of outcomes it is impossible to determine the odds of any one outcome relative to all the others, which is essential for Markov chain predictions, since they are based on the likelihood of each outcome occurring. Second, the outcome of any trial may depend at most on the outcome of the immediately preceding trial. This stipulation is required because if a trial five states back could dictate the outcome of the present trial, its prediction could contradict the prediction implied by the present state, and in such a case it would be impossible to know which prediction was correct. From these two simple stipulations we can form powerful predictive models of almost anything in the world. Over the course of this paper we demonstrate the elements that make up Markov chains, how these elements interact to make predictions, and how the theory applies specifically to the spread of diseases in populations.

Figure 1: A basic transition diagram (three states, labeled 1, 2, and 3).

1.1. Transition Diagrams

A common visualization of a Markov chain is the transition diagram shown in Figure 1. This diagram shows a three-state probability chain, where each state has a fixed probability of moving to another state or staying in place. The outgoing probabilities from each state must add up to one hundred percent, since it is a closed system of chance. Another way to look at a Markov process is in matrix form, but first we'll need to define some notation.

1.2. Notation Setup

With the requirements mentioned previously, one can set up what is called a transition matrix to express the trials of a Markov chain. Each row of a transition matrix consists of fractional values that add up to one, since a Markov process deals in probabilities that are ratios of a whole. The transition matrix never changes and is reused for each trial. Writing $T_{ij}$ for the entry in row $i$ and column $j$ of an $n \times n$ transition matrix, one trial looks like this:

$$
(x^{(0)}_1, x^{(0)}_2, \ldots, x^{(0)}_n)
\begin{pmatrix}
T_{11} & T_{12} & \cdots & T_{1n} \\
T_{21} & T_{22} & \cdots & T_{2n} \\
\vdots & \vdots & & \vdots \\
T_{n1} & T_{n2} & \cdots & T_{nn}
\end{pmatrix}
= (x^{(1)}_1, x^{(1)}_2, \ldots, x^{(1)}_n) \tag{1}
$$

The vector $x^{(0)}$ is known as the initial condition vector. One trial, that is, one run of the initial condition vector through the transition matrix, gives the values of $x^{(1)}$. Written out, the multiplication of the vector through the matrix is:

$$
\begin{aligned}
T_{11}x^{(0)}_1 + T_{21}x^{(0)}_2 + \cdots + T_{n1}x^{(0)}_n &= x^{(1)}_1 \\
T_{12}x^{(0)}_1 + T_{22}x^{(0)}_2 + \cdots + T_{n2}x^{(0)}_n &= x^{(1)}_2 \\
&\;\;\vdots \\
T_{1n}x^{(0)}_1 + T_{2n}x^{(0)}_2 + \cdots + T_{nn}x^{(0)}_n &= x^{(1)}_n
\end{aligned}
$$

After one trial through the transition matrix, we get a new condition vector for the next trial: $x^{(1)} = (x^{(1)}_1, x^{(1)}_2, \ldots, x^{(1)}_n)$. To calculate the next trial, $x^{(1)}$ is simply run through the transition matrix again, providing us with $x^{(2)}$.
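Since one trial is just a row vector multiplied by a row-stochastic matrix, the arithmetic above is easy to experiment with numerically. Below is a minimal sketch in Python with NumPy; the three-state chain and its probabilities are hypothetical values chosen purely for illustration:

```python
import numpy as np

# Hypothetical 3-state transition matrix (illustrative values only);
# each row sums to 1, as required of a transition matrix.
T = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

x0 = np.array([1.0, 0.0, 0.0])   # initial condition vector x^(0)

x1 = x0 @ T                      # one trial: x^(1) = x^(0) T
x2 = x1 @ T                      # next trial: x^(2) = x^(1) T

print(np.round(x1, 3))           # [0.5 0.3 0.2]
print(np.round(x2, 3))           # [0.32 0.37 0.31]
```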

Based on this, any $x^{(k)}$ can be represented using the notation:

$$
x^{(k)} = (x^{(k-1)}_1, x^{(k-1)}_2, \ldots, x^{(k-1)}_n)
\begin{pmatrix}
T_{11} & T_{12} & \cdots & T_{1n} \\
T_{21} & T_{22} & \cdots & T_{2n} \\
\vdots & \vdots & & \vdots \\
T_{n1} & T_{n2} & \cdots & T_{nn}
\end{pmatrix}
$$

which expands into the expressions:

$$
\begin{aligned}
x^{(k)}_1 &= T_{11}x^{(k-1)}_1 + T_{21}x^{(k-1)}_2 + \cdots + T_{n1}x^{(k-1)}_n \\
x^{(k)}_2 &= T_{12}x^{(k-1)}_1 + T_{22}x^{(k-1)}_2 + \cdots + T_{n2}x^{(k-1)}_n \\
&\;\;\vdots \\
x^{(k)}_n &= T_{1n}x^{(k-1)}_1 + T_{2n}x^{(k-1)}_2 + \cdots + T_{nn}x^{(k-1)}_n
\end{aligned}
$$

Every condition vector in a Markov chain is calculated from the condition vector immediately preceding it. Now that we understand the process of calculating Markov chains via matrices, we can continue to our scenario below.

2. Requirements for Modeling

To model a disease scenario with Markov chains, we must ensure that the requirements of Markov chains are satisfied: each trial must lead to a finite set of outcomes, and the outcome of any trial is influenced at most by the immediately preceding trial. The latter stipulation is not something we design into the model so much as a principle we rely on to construct the model successfully. In modeling a disease scenario in any population we are fortunate, because the rates of susceptibility, infection, and recovery do not change as the population shifts among the various states.

This ensures both that each trial is influenced at most by the immediately preceding trial and that each trial has a finite set of outcomes.

2.1. Model Setup

A common model for tracking the progression of epidemics is known as the SIR model. The SIR model consists of three columns, which we call susceptible, infected, and recovered, and we label each state by its letter: S, I, and R. The model also consists of three rows, labeled with the same letters as the columns. As stated in the notation section, each row of this matrix represents the ratio of odds for the entire population and must add up to one (i.e., one hundred percent). Together the rows and columns of the matrix describe the likelihood of the population transitioning among the various states. The transition diagram in Figure 2 below can help us visualize what occurs in the matrix. As can be seen, not every state connects to all of the others; this is a common situation in Markov chains, especially in a disease scenario. Under the SIR model for some diseases, an individual cannot recover without first getting sick, and once recovered is no longer susceptible. Once inside the recovered state, an individual will never leave it; such a state is called absorptive, so R is referred to as absorptive. This information is critical, as it tells us what values the R row of the matrix must contain:

$$
\begin{array}{c|ccc}
 & S & I & R \\
\hline
S & T_{11} & T_{12} & T_{13} \\
I & T_{21} & T_{22} & T_{23} \\
R & 0 & 0 & 1
\end{array} \tag{2}
$$

While the values needed for the susceptible and infected rows have yet to be determined, the fact that the recovered state is absorptive tells us that one hundred percent of recovered people remain recovered, so the recovered column holds the only nonzero value in the recovered row. This is shown in matrix (2) above.

Figure 2: A basic SIR transition diagram showing the states S, I, and R.

All that remains to complete the transition matrix is to place the determined rates of transition from state to state into the appropriate positions. That is, we need the odds of an uninfected person becoming infected and the odds of an infected person recovering. With these values we can then determine the likelihood of an uninfected individual remaining uninfected, or of an infected person remaining infected, because the percentages always total one hundred percent: given the odds of an uninfected person becoming infected, subtracting that percentage from the total yields the percentage of uninfected individuals who remain in that state.

With the structure of our SIR transition matrix understood, and only actual values needed to complete it, we can now devise our initial condition vector, $x^{(0)}$. The same guiding principle applies to this row vector as to the rows of the transition matrix: all of its terms must add up to one, because they are ratios of the total population. Using this principle, and knowing the vector must have compatible dimensions to be multiplied into the matrix, $x^{(0)}$ must look like this:

$$
x^{(0)} = (x^{(0)}_S,\, x^{(0)}_I,\, 1 - x^{(0)}_S - x^{(0)}_I) = (x^{(0)}_S,\, x^{(0)}_I,\, x^{(0)}_R)
$$

As can be seen, the initial condition vector has three terms representing the percentages of the population in the different states of our model, and the three terms together account for the entire population.

2.2. Assigning Values

Let's say, for example, that before disease X became a concern to monitor, 90 percent of the population was susceptible, 7 percent was infected, and 3 percent was recovered. These numbers become our initial condition vector:

$$
x^{(0)} = (.90,\, .07,\, .03) \tag{3}
$$

Now we will set up our transition matrix for disease X using the SIR model from Figure 2 above. We will take the values for this SIR model to be $T_{11} = .85$, $T_{12} = .15$, $T_{13} = 0$, $T_{21} = 0$, $T_{22} = .12$, $T_{23} = .88$, $T_{31} = 0$, $T_{32} = 0$, and $T_{33} = 1.0$. We can place these values onto our SIR transition diagram:

[SIR transition diagram for disease X, with the probabilities above marked on the arrows between S, I, and R.]

Setting up our scenario in matrix form, however, makes for easier manipulation and calculation:

$$
\begin{array}{c|ccc}
 & S & I & R \\
\hline
S & .85 & .15 & 0 \\
I & 0 & .12 & .88 \\
R & 0 & 0 & 1.0
\end{array} \tag{4}
$$

One trial of this SIR transition matrix represents the span of one week. As can be seen in the matrix above, each row adds up to one, as is typical of any probability calculation over a whole. In row S, 85 percent of susceptible people remain in that state after one week; the remaining 15 percent become infected. In row I, 12 percent of those infected remain infected, while the remaining 88 percent recover. Once recovered, people have a 100 percent probability of remaining recovered. Now that the values are in place, we can calculate one trial as the disease spreads through the population:

$$
x^{(1)} = (.90,\, .07,\, .03)
\begin{pmatrix}
.85 & .15 & 0 \\
0 & .12 & .88 \\
0 & 0 & 1.0
\end{pmatrix}
$$

which works out component by component to

$$
\begin{aligned}
x^{(1)}_S &= .85(.90) + 0(.07) + 0(.03) = 0.765 \\
x^{(1)}_I &= .15(.90) + .12(.07) + 0(.03) \approx 0.143 \\
x^{(1)}_R &= 0(.90) + .88(.07) + 1.0(.03) \approx 0.092
\end{aligned}
$$

so our new condition vector is $x^{(1)} = (0.765,\, 0.143,\, 0.092)$. This shows that after one week, 76.5 percent of the population is predicted to be susceptible, 14.3 percent to be infected, and 9.2 percent to be recovered.
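The one-week trial above is easy to reproduce numerically. Here is a minimal sketch in Python with NumPy, using matrix (4) and the initial condition vector (3):

```python
import numpy as np

# SIR transition matrix (4); each row sums to 1.
T = np.array([
    [0.85, 0.15, 0.00],
    [0.00, 0.12, 0.88],
    [0.00, 0.00, 1.00],
])

x0 = np.array([0.90, 0.07, 0.03])  # initial condition vector (3)

x1 = x0 @ T                        # one week: x^(1) = x^(0) T
print(np.round(x1, 3))             # [0.765 0.143 0.092]
```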

To calculate week two, we plug $x^{(1)}$ into the SIR matrix using the same method of calculation:

$$
(0.765,\, 0.143,\, 0.092)
\begin{pmatrix}
.85 & .15 & 0 \\
0 & .12 & .88 \\
0 & 0 & 1.0
\end{pmatrix}
= (0.650,\, 0.132,\, 0.218)
$$

This is the predicted outcome two weeks out from the current initial condition. Since a Markov chain depends only on the immediately preceding condition, the transition matrix remains the same every week. Raising the matrix to a power therefore provides a way to calculate several weeks at once. The formula is

$$
x^{(t)} = (\text{initial condition vector}) \cdot (\text{transition matrix})^{t}
$$

where $t$ is the number of trials. With this formula we can calculate more distant probabilities without repeatedly multiplying by the SIR matrix. For this to be possible, the rows of the matrix must be probability vectors (adding up to one), the matrix must be square, and all entries of the matrix must be nonnegative. Thus, to find the predicted probabilities five weeks from now, we can simply plug in $t = 5$:

$$
(.90,\, .07,\, .03)
\begin{pmatrix}
.85 & .15 & 0 \\
0 & .12 & .88 \\
0 & 0 & 1.0
\end{pmatrix}^{5}
\approx (.399,\, .082,\, .519)
$$

Next, plugging in $t = 10$ shows what direction the disease is heading over time:

$$
(.90,\, .07,\, .03)
\begin{pmatrix}
.85 & .15 & 0 \\
0 & .12 & .88 \\
0 & 0 & 1.0
\end{pmatrix}^{10}
\approx (.178,\, .036,\, .786)
$$

As can be seen, the further out in time we predict, the more the population ratio tilts toward recovered. This is because the recovered state is absorptive.
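The same power calculation can be sketched in Python with NumPy; `matrix_power` computes integer powers of a square matrix, so the printed values should match the hand calculations above after rounding:

```python
import numpy as np
from numpy.linalg import matrix_power

T = np.array([
    [0.85, 0.15, 0.00],
    [0.00, 0.12, 0.88],
    [0.00, 0.00, 1.00],
])
x0 = np.array([0.90, 0.07, 0.03])

for t in (2, 5, 10):
    xt = x0 @ matrix_power(T, t)   # x^(t) = x^(0) T^t
    print(t, np.round(xt, 3))
# 2  -> [0.65  0.132 0.218]
# 5  -> [0.399 0.082 0.519]
# 10 -> [0.178 0.036 0.786]
```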

3.1. Steady State Vector

Another common goal in using a Markov transition matrix is to find when a scenario becomes fixed in place. One can view this as the equation

$$
(\text{initial condition vector}) \cdot (\text{transition matrix}) = (\text{initial condition vector})
$$

This is simpler to work with when the condition vector $p$ is in column form, so we transpose the transition matrix, writing $M = T^{\mathsf{T}}$, and obtain the equation

$$
Mp = p
$$

We can now solve for $p$ to see whether the scenario will ever remain fixed in a condition; such a $p$ is referred to as a steady state vector. Essentially, we are finding the eigenvalues and eigenvectors of $M$:

$$
Mp = p \;\Longrightarrow\; Mp - p = 0 \;\Longrightarrow\; (M - I)p = 0
$$

After row reducing $(M - I)$, we get the eigenvector

$$
p = (0,\, 0,\, 1)
$$

Thus, the only steady state for this Markov process is reached once the entire population has been absorbed into the recovered state. Since the transposed SIR matrix $M$ is lower triangular, its diagonal reveals two other eigenvalues, 0.85 and 0.12, and thus two other potential candidates for the steady state vector. However, both of these eigenvalues produce eigenvectors with one or more negative entries, as shown below:

$$
\lambda = 0.85: \quad p \approx (0.633,\, 0.130,\, -0.763)
$$
$$
\lambda = 0.12: \quad p \approx (0,\, 0.707,\, -0.707)
$$

It stands to reason that we cannot have a negative portion of the population in any state, because the ratios must add up to the whole population. Therefore the absorptive state R provides the only usable eigenvector for the steady state vector, $p = (0, 0, 1)$, which makes sense, seeing that any of the population that becomes recovered stays recovered; eventually, the entire population will be within that state.

Another way to check this is by taking the limit as the number of trials goes to infinity. This should give the end result of the Markov process, which should match our only steady state vector:

$$
\lim_{t \to \infty}\, (.90,\, .07,\, .03)
\begin{pmatrix}
.85 & .15 & 0 \\
0 & .12 & .88 \\
0 & 0 & 1.0
\end{pmatrix}^{t}
= (0,\, 0,\, 1)
$$

As $t$ approaches infinity, the condition vector matches our steady state vector, $(0, 0, 1)$, staying consistent with our concept of an absorptive state.
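The eigenvalues and eigenvectors above can be verified numerically with a minimal NumPy sketch (note that `np.linalg.eig` may return the eigenvectors in a different order or with flipped signs, since any scalar multiple of an eigenvector is still an eigenvector):

```python
import numpy as np

T = np.array([
    [0.85, 0.15, 0.00],
    [0.00, 0.12, 0.88],
    [0.00, 0.00, 1.00],
])

M = T.T                          # transpose so the steady state solves M p = p
vals, vecs = np.linalg.eig(M)    # eigenvectors are the columns of vecs

for lam, v in zip(vals, vecs.T):
    print(np.round(lam, 2), np.round(v, 3))
# Expected, up to ordering and an overall sign on each eigenvector:
# 0.85 [ 0.633  0.13  -0.763]
# 0.12 [ 0.     0.707 -0.707]
# 1.0  [ 0.     0.     1.   ]
```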

3.2. Additional Absorptive States

For our last example of Markov matrix manipulation, we add an additional row and column to represent a new state. Let's say that in our SIR model there is now also a state D, for deceased. This will be a second absorptive row, and we shift some of the numbers for our example:

$$
\begin{array}{c|cccc}
 & S & I & R & D \\
\hline
S & .85 & .15 & 0 & 0 \\
I & 0 & .20 & .70 & .10 \\
R & 0 & 0 & 1 & 0 \\
D & 0 & 0 & 0 & 1
\end{array}
$$

For this SIR-D model, the disease is a bit more aggressive, and 10 percent of the people who get infected die. Now, rather than calculate trials by matrix powers as before, we introduce what is known as the IODA form. By manipulating the matrix to take advantage of the absorptive states, we can rewrite the SIR-D matrix in a more useful shape. An IODA-form matrix consists of four categorized blocks:

$$
\begin{pmatrix} I & O \\ D & A \end{pmatrix}
$$

The block I is an identity matrix over the absorptive states, O is a null matrix, D consists of the probabilities of switching from non-absorptive states to absorptive ones, and A consists of the transitions among the non-absorptive states. With a few simple row and column operations we can transform the SIR-D matrix into IODA form: we move rows three and four (R and D) into positions one and two, respectively, and do the same with the columns, giving the state order R, D, S, I:

$$
\begin{array}{c|cccc}
 & R & D & S & I \\
\hline
R & 1 & 0 & 0 & 0 \\
D & 0 & 1 & 0 & 0 \\
S & 0 & 0 & .85 & .15 \\
I & .70 & .10 & 0 & .20
\end{array}
$$

Partitioned into its IODA blocks, the matrix looks like this:

$$
\left( \begin{array}{cc|cc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
\hline
0 & 0 & .85 & .15 \\
.70 & .10 & 0 & .20
\end{array} \right)
$$

We can now consider our IODA model as four separate matrices: I, O, D, and A. To manipulate our findings further, we solve for the matrix $N = (I - A)^{-1}$:

$$
N = (I - A)^{-1}
= \begin{pmatrix} .15 & -.15 \\ 0 & .80 \end{pmatrix}^{-1}
= \frac{1}{.12}\begin{pmatrix} .80 & .15 \\ 0 & .15 \end{pmatrix}
= \begin{pmatrix} 6.67 & 1.25 \\ 0 & 1.25 \end{pmatrix}
$$

The significance of N is that its entries give the expected number of trials spent in each non-absorptive state before absorption. Reading across the rows: a susceptible member of the population is expected to spend about 6.67 weeks susceptible and 1.25 weeks infected, a total of about 7.92 trials before being absorbed into either the recovered or the deceased state, while a member who is already infected is expected to go through about 1.25 trials.
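As a check on the arithmetic, here is a minimal NumPy sketch of the fundamental matrix calculation, using the block A as reconstructed above; the row sums of N give the expected number of trials before absorption:

```python
import numpy as np

# Transient-to-transient block A of the SIR-D matrix in IODA form,
# with the states ordered (S, I).
A = np.array([
    [0.85, 0.15],
    [0.00, 0.20],
])

N = np.linalg.inv(np.eye(2) - A)   # fundamental matrix N = (I - A)^(-1)
print(np.round(N, 2))              # [[6.67 1.25]
                                   #  [0.   1.25]]

print(np.round(N.sum(axis=1), 2))  # expected trials to absorption: [7.92 1.25]
```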

4. Conclusion

Through our disease scenario we have shown a few of the many manipulations and calculations that can be made with Markov chain processes. Markov chains can be used for an endless variety of purposes and have proven useful in any field that calls for predictions about the future. By simply plugging in probabilities, one can apply this discrete stochastic process to almost any topic. Steady states, long- and short-term predictions, and expected absorption times are just a few of the many manipulations available when dealing with Markov chain theory.

