Chapter 29 out of 37 from Discrete Mathematics for Neophytes: Number Theory, Probability, Algorithms, and Other Stuff by J. M. Cargal.


29 Markov Chains

Definition of a Markov Chain

Markov chains are one of the most fun tools of probability; they give a lot of power for very little effort. We will restrict ourselves to finite Markov chains. This is not much of a restriction: we keep most of the interesting cases and avoid much of the theory.¹ This chapter defines Markov chains and gives you one of the most basic tools for dealing with them: matrices.

A Markov chain can be thought of in terms of probability graphs. A Markov chain has a non-empty collection of states, and each state is represented by a vertex of the graph. Leaving each vertex are arcs (or an arc). These arcs are labeled with non-negative numbers representing probabilities; the labels on the arcs leaving any one vertex add up to one. We restrict ourselves to graphs that are connected: it is possible to travel from any vertex to any other vertex along the arcs, although not necessarily along the directions of the arcs. Using the terminology of Chapter 1, we are interested in graphs that are at least weakly connected (if we can go from any node to any other node always following the directions of the arcs, the graph is strongly connected). We also restrict ourselves to graphs with finite numbers of vertices and arcs.

¹ Infinite Markov chains require calculus. However, finite Markov chains are powerful in their own right.

[Figure 1: A Markov Chain with Five States.]

Figure 1 gives the graph of a five-state Markov chain. The property of a chain that the name Markov refers to is independence: for example, if one is in state b, the probability of going to state a is .5 regardless of which states were occupied before state b (and of course the probability of going to state c is .2, and the probability of going to state d is .3). Once entering a state, one always leaves it; this is the reason that there are always arcs leaving each node, and the labels of these arcs must add up to one since the probability of going somewhere is 1. If we in fact want to stay at a state, we can have an arc from the state to itself, as at state a. In this case, once we enter state a we remain there, because the only arc leaving a returns to a (again, such an arc is called a loop).

State Vectors and Transition Matrices

A state vector is a row matrix that represents the probability of being in each state. For example, still using the Markov chain of Figure 1, consider the following state vectors:

S_a = (.2, .2, .2, .2, .2)
S_b = (0, 1, 0, 0, 0)
S_c = (0, .3, .4, .3, 0)

S_a describes the case when there is a .2 chance of the chain being in each state. S_b is the case where the chain is in state b. S_c is the case where the chain has .3 probability of being in state b, .4 probability of being in state c, and .3 probability of being in state d.

A transition matrix is the matrix representation of a Markov chain. For example, the matrix corresponding to the Markov chain of Figure 1 is:

         a    b    c    d    e
    a (  1    0    0    0    0 )
    b ( .5    0   .2   .3    0 )
M = c (  0    0    0    0    1 )
    d (  0    0    1    0    0 )
    e (  0   .5   .5    0    0 )

Each row corresponds to a particular vertex. Ordinarily we will order the rows in the natural order, a, b, c, and so on, and the columns are given the same order. In this case the rows correspond in order to a, b, c, d, e, and the columns are ordered in the same way. The rows and columns should enumerate the vertices in the same order as does a state vector. Therefore each element of the matrix represents a pair of vertices. For example, instead of speaking of the {2,3} element as we did in Chapter 6, we can speak of the {b,c} element. The interpretation of the second row is that on leaving vertex b, the chain goes to vertex a with probability .5, to vertex b with probability 0, to vertex c with probability .2, to vertex d with probability .3, and to vertex e with probability 0. Entries on the main diagonal of the matrix correspond to the probabilities of a vertex going to itself. In this example, the {1,1} element corresponds to the fact that vertex a goes to itself with probability 1. A matrix is a transition matrix if and only if each row has non-negative entries that add up to 1. Notice that the columns do not have to add up to anything. If you notice a strong similarity between transition matrices and the incidence matrices of Chapter 6, you are correct; they can be operated on in the same manner.
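None of this bookkeeping requires special software, but it is easy to hand to a computer. Here is a minimal sketch in Python with numpy (an assumption of this transcription; the text itself uses no code) that encodes the matrix M above and checks the row-sum criterion:

```python
import numpy as np

# Transition matrix of Figure 1, rows and columns ordered a, b, c, d, e.
M = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],   # a: the only arc is a loop back to a
    [0.5, 0.0, 0.2, 0.3, 0.0],   # b: to a (.5), c (.2), d (.3)
    [0.0, 0.0, 0.0, 0.0, 1.0],   # c: to e with probability 1
    [0.0, 0.0, 1.0, 0.0, 0.0],   # d: to c with probability 1
    [0.0, 0.5, 0.5, 0.0, 0.0],   # e: to b (.5), c (.5)
])

# A matrix is a transition matrix iff its entries are non-negative and
# every row sums to 1; the columns need not sum to anything.
assert (M >= 0).all() and np.allclose(M.sum(axis=1), 1.0)

states = "abcde"
print(M[states.index("b"), states.index("c")])   # the {b,c} element: 0.2
```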

Let us consider the Markov chain in Figure 1. We will denote its transition matrix by M (as in the example above). Let S_i be the state vector after transition i. That is, the chain starts with state vector S_0; the state vector representing the chain after one transition is S_1; after two transitions we have state vector S_2. Let us define S_0 = (0, 0, 0, 1, 0). That is, we will start the chain in state d. After 1 transition we must be in state c; hence S_1 = (0, 0, 1, 0, 0). After one more transition we are in state e, so S_2 = (0, 0, 0, 0, 1). From e we have a .5 probability of going to state b and a .5 probability of going to state c; hence S_3 = (0, .5, .5, 0, 0). In general, if we start with state vector S_0 and transition matrix M, then after n transitions we have state vector

S_n = S_0 M^n.

In the example just given, where S_0 = (0, 0, 0, 1, 0) represents the case where we start in state d,

S_0 M^128 = (.99999925, .00000016, .00000026, .00000005, .00000029).

That is, we end up in state a with probability .99999925 and in state e with probability .00000029. In fact, you should be able to convince yourself that in this example, wherever you start, you will eventually wind up in state a. Once in state a, you can't leave. Such a state (one that returns to itself immediately with probability 1) is called an absorbing state. The proof that the matrix multiplication of state vectors and transition matrices has the above property is not difficult; it can be done by careful examination of the meaning of matrix multiplication.²

² The proof is in much the same spirit as the proof of the properties of incidence matrices, which was briefly discussed in Section 18.
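As a check on this arithmetic, the same computation can be scripted. A short sketch, again assuming Python with numpy: it walks the chain of Figure 1 through the first three transitions and then jumps straight to S_0 M^128 (the printed vectors should match those quoted above, up to rounding):

```python
import numpy as np

# Transition matrix of Figure 1 (states a, b, c, d, e), as encoded earlier.
M = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.2, 0.3, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0, 0.0]])

S0 = np.array([0.0, 0.0, 0.0, 1.0, 0.0])   # start in state d

S = S0
for n in range(1, 4):
    S = S @ M                               # S_n = S_{n-1} M
    print(f"S_{n} = {S}")

# S_n = S_0 M^n directly: after 128 transitions nearly all of the
# probability has been absorbed into state a.
print(S0 @ np.linalg.matrix_power(M, 128))
```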

Example

[Figure 2: A Markov Chain with Five States.]

If in the Markov chain of Figure 2 we start in state A, then the initial state vector is S_0 = (1, 0, 0, 0, 0). If we denote the transition matrix by M, then the state vector after 1 transition is given by S_1 = S_0 M. The state vector after 2 transitions is S_2 = (S_0 M)M = S_0 M^2; after 3 transitions it is S_3 = ((S_0 M)M)M = S_0 M^3; and after 5 transitions we have S_5 = S_0 M^5. Using this approach, S_5, S_10, and so on through S_30 are computed in Figure 3. After 5 transitions the state vector is S_5 = (.09, .023, .023, .77, .096). That is, starting in state A, after 5 transitions the probability that we are in state A is .09, the probability that we are in state B is .023, and so on.

[Figure 3: State Probabilities at 5, 10, 15, 20, 25, 30 Transitions.]

After 15 transitions the probabilities are fixed at (.073, .037, .073, .735, .082).³ If we start in state C, that is, if we start with initial vector S_0 = (0, 0, 1, 0, 0) as in Figure 4, the probabilities become fixed at the same values.

³ The alert reader may notice that we have not actually shown that the probabilities are fixed, only that they repeat every 5 transitions. This worry is suggested by the fact that there are 5 states and we are looking at intervals of 5 transitions. However, that could only happen if the graph were periodic, and we will see later that this is not the case.

[Figure 4: State Probabilities at 5, 10, 15, 20, 25, 30 Transitions.]

For this particular Markov chain, the long-term probabilities are independent of where we start. We will explore this in Section 34.
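The transition matrix of Figure 2 is not reproduced in this transcription, but the claim that the probabilities settle down can be tested numerically for any chain by iterating one transition at a time and stopping when consecutive state vectors agree; checking every step, rather than every fifth step, answers the worry raised in the footnote above. A minimal sketch, assuming Python with numpy (the function name and tolerance are illustrative choices, not anything from the text):

```python
import numpy as np

def long_run_vector(M, S0, tol=1e-9, max_steps=10_000):
    """Iterate S <- S M until consecutive state vectors agree to within tol.

    Comparing *consecutive* steps (not just every fifth step) rules out the
    possibility that the probabilities merely repeat with period 5.
    """
    S = np.asarray(S0, dtype=float)
    for _ in range(max_steps):
        S_next = S @ M
        if np.allclose(S_next, S, atol=tol):
            return S_next
        S = S_next
    raise RuntimeError("no convergence; the chain may be periodic")
```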

For the following exercises use Figure 5. If you need to take powers and products of matrices, an excellent tool is a spreadsheet. When working by hand, if you need a matrix to a high power, the quickest approach is to use powers of 2. For example, if you want the 20th power of M, square M successively to get M^2, M^4, M^8, and M^16; then M^20 = M^16 M^4. (This repeated-squaring idea is sketched in code after the exercises.)

[Figure 5: A Markov Chain with Four States.]

Exercise 1  Give the transition matrix for the Markov chain in Figure 5.

Exercise 2  If we start in state B, what is the probability of being in state A after 8 transitions? (If you can't do the matrix multiplications, at least write out the matrix expression for the answer and indicate precisely where in the result the answer appears.)

Exercise 3  If we start in state B, what is the probability of being in state D after 8 transitions?

Exercise 4  If we start in state C, what is the probability of being in state A after 16 transitions? What is the probability of being in state D?
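Here is that repeated-squaring idea as code, for readers who would rather script it than run it by hand or in a spreadsheet. This is a sketch in Python with numpy (an assumption of this transcription; numpy's built-in np.linalg.matrix_power does essentially the same job):

```python
import numpy as np

def matrix_power_by_squaring(M, n):
    """Compute M^n using the powers-of-2 trick described above."""
    result = np.eye(M.shape[0])    # M^0 is the identity
    square = np.asarray(M, dtype=float)
    while n > 0:
        if n & 1:                  # this power of 2 appears in n's binary expansion
            result = result @ square
        square = square @ square   # M^2, M^4, M^8, M^16, ...
        n >>= 1
    return result

# For example, M^20 is assembled as M^16 times M^4, just as in the text:
# matrix_power_by_squaring(M, 20)
```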

Answers

1. [The transition matrix is read directly off Figure 5, one row per state.]

2. The answer is .989.

3. The answer is .011 (as shown in the previous problem).

4. The probability of ending in state A (after 16 transitions) is .89, and the probability of ending in state D is .11. (Note that both answers are rounded, since the probability of being in state B or C is greater than 0, but just barely.)