Lectures on Probability and Statistical Models
Phil Pollett, Professor of Mathematics, The University of Queensland. © These materials can be used for any educational purpose provided they are not altered.
Imprecise (intuitive) definition. A Markov process is a random process that forgets its past, in the following sense:

    Pr(Future = y | Present = x and Past = z) = Pr(Future = y | Present = x).

Thus, given the past and the present state of the process, only the present state is of use in predicting the future.
Equivalently,

    Pr(Future = y and Past = z | Present = x) = Pr(Future = y | Present = x) Pr(Past = z | Present = x),

so that, given the present state of the process, its past and its future are independent. If the set of states S is discrete, then the process is called a Markov chain. Remark. At first sight this definition might appear to cover only trivial examples, but note that the current state could be complicated and could include a record of the recent past.
Andrei Andreyevich Markov (Born: 14/06/1856, Ryazan, Russia; Died: 20/07/1922, St Petersburg, Russia). Markov is famous for his pioneering work on chains of dependent random variables (now called Markov chains), which launched the theory of stochastic processes. His early work was in number theory, analysis, continued fractions, limits of integrals, approximation theory and convergence of series.
Example. There are two rooms, labelled A and B. There is a spider, initially in Room A, hunting a fly that is initially in Room B. They move from room to room independently: every minute each changes rooms (with probability p for the spider and q for the fly) or stays put, with the complementary probabilities. Once in the same room, the spider eats the fly and the hunt ceases. The hunt can be represented as a Markov chain with three states: (0) the spider and the fly are in the same room (the hunt has ended), (1) the spider is in Room A and the fly is in Room B, and (2) the spider is in Room B and the fly is in Room A.
Eventually we will be able to answer questions like: What is the probability that the hunt lasts more than two minutes? Let X_n be the state of the process at time n (that is, after n minutes). Then X_n ∈ S = {0, 1, 2}. The set S is called the state space. The initial state is X_0 = 1. State 0 is called an absorbing state, because the process remains there once it is reached.
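The hunt can also be simulated directly. A minimal sketch (assuming the numerical values p = 1/4 and q = 1/2 that are used in the worked example later in these notes):

```python
import random

def hunt_duration(p=0.25, q=0.5):
    """Simulate one hunt from state 1; return its length in minutes."""
    spider, fly = "A", "B"
    minutes = 0
    while spider != fly:
        # each minute, each creature independently changes rooms or stays put
        if random.random() < p:
            spider = "B" if spider == "A" else "A"
        if random.random() < q:
            fly = "B" if fly == "A" else "A"
        minutes += 1
    return minutes

random.seed(1)
trials = 100_000
est = sum(hunt_duration() > 2 for _ in range(trials)) / trials
print(est)  # close to the exact answer 1/4 obtained later in these notes
```

The estimate agrees (up to Monte Carlo error) with the exact probability 1/4 computed below from the transition matrix.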
Definition. A sequence {X_n, n = 0, 1, ...} of random variables is called a discrete-time stochastic process; X_n usually represents the state of the process at time n. If {X_n} takes values in a discrete state space S, then it is called a Markov chain if

    Pr(X_{m+1} = j | X_m = i, X_{m-1} = i_{m-1}, ..., X_0 = i_0) = Pr(X_{m+1} = j | X_m = i)    (1)

for all time points m and all states i_0, ..., i_{m-1}, i, j ∈ S. If the right-hand side of (1) is the same for all m, then the Markov chain is said to be time homogeneous.
We will consider only time-homogeneous chains, and we shall write

    p^(n)_ij = Pr(X_{m+n} = j | X_m = i) = Pr(X_n = j | X_0 = i)

for the n-step transition probabilities and

    p_ij := p^(1)_ij = Pr(X_{m+1} = j | X_m = i) = Pr(X_1 = j | X_0 = i)

for the 1-step transition probabilities (or simply transition probabilities).
By the law of total probability, we have that

    Σ_{j∈S} p^(n)_ij = Σ_{j∈S} Pr(X_n = j | X_0 = i) = 1,

and in particular that Σ_{j∈S} p_ij = 1. The matrix P^(n) = (p^(n)_ij, i, j ∈ S) is called the n-step transition matrix and P = (p_ij, i, j ∈ S) is called the 1-step transition matrix (or simply transition matrix).
Remarks. (1) Matrices like this (with non-negative entries and all row sums equal to 1) are called stochastic matrices. Writing 1 = (1, 1, ...)^T (where T denotes transpose), we see that P1 = 1. Hence P (and indeed any stochastic matrix) has an eigenvector 1 corresponding to the eigenvalue λ = 1. (2) We may usefully set P^(0) = I, where, as usual, I denotes the identity matrix:

    p^(0)_ij = δ_ij := 1 if i = j, and 0 if i ≠ j.
Example. Returning to the hunt, the three states were: (0) the spider and the fly are in the same room, (1) the spider is in Room A and the fly is in Room B, and (2) the spider is in Room B and the fly is in Room A. Since the spider changes rooms with probability p and the fly changes rooms with probability q,

    P = (  1         0            0       )
        (  r    (1-p)(1-q)       pq       )
        (  r        pq       (1-p)(1-q)   ),

where r = p(1-q) + q(1-p) = p + q - 2pq = 1 - [(1-p)(1-q) + pq].
For example, if p = 1/4 and q = 1/2, then

    P = (  1     0     0  )
        ( 1/2   3/8   1/8 )
        ( 1/2   1/8   3/8 ).

What is the chance that the hunt is over by n minutes? Can we calculate the chance of being in each of the various states after n minutes?
By the law of total probability, we have

    p^(n+m)_ij = Pr(X_{n+m} = j | X_0 = i)
               = Σ_{k∈S} Pr(X_{n+m} = j | X_n = k, X_0 = i) Pr(X_n = k | X_0 = i).

But

    Pr(X_{n+m} = j | X_n = k, X_0 = i) = Pr(X_{n+m} = j | X_n = k)    (Markov property)
                                       = Pr(X_m = j | X_0 = k)        (time homogeneity)
and so, for all m, n ≥ 1,

    p^(n+m)_ij = Σ_{k∈S} p^(n)_ik p^(m)_kj,    i, j ∈ S,

or, equivalently, in terms of transition matrices, P^(n+m) = P^(n) P^(m). Thus, in particular, we have P^(n) = P^(n-1) P (remembering that P := P^(1)). Therefore, P^(n) = P^n for n ≥ 1. Note that since P^(0) = I = P^0, this expression is valid for all n ≥ 0.
Example. Returning to the hunt, if the spider and the fly change rooms with probability p = 1/4 and q = 1/2, respectively, then

    P = (  1     0     0  )
        ( 1/2   3/8   1/8 )
        ( 1/2   1/8   3/8 ).

A simple calculation gives

    P^2 = (  1     0      0   )
          ( 3/4   5/32   3/32 )
          ( 3/4   3/32   5/32 ),
    P^3 = (  1     0       0    )
          ( 7/8   9/128   7/128 )
          ( 7/8   7/128   9/128 ),

et cetera, and, to four decimal places,

    P^15 = ( 1.0000   0.0000   0.0000 )
           ( 1.0000   0.0000   0.0000 )
           ( 1.0000   0.0000   0.0000 ).

Recall that X_0 = 1, so p^(n)_10 is the probability that the hunt ends by n minutes. What, then, is the probability that the hunt lasts more than two minutes? Answer: 1 - 3/4 = 1/4.
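The matrix powers above can be checked exactly with rational arithmetic. A small sketch using Python's `fractions` module (the `matmul`/`matpow` helpers are ad hoc names, not from any library):

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matpow(A, n):
    """Compute A**n by repeated multiplication (n >= 0)."""
    result = [[F(1) if i == j else F(0) for j in range(len(A))]
              for i in range(len(A))]
    for _ in range(n):
        result = matmul(result, A)
    return result

P = [[F(1),    F(0),    F(0)],
     [F(1, 2), F(3, 8), F(1, 8)],
     [F(1, 2), F(1, 8), F(3, 8)]]

print(matpow(P, 2)[1])   # row 1 of P^2: 3/4, 5/32, 3/32
print(matpow(P, 3)[1])   # row 1 of P^3: 7/8, 9/128, 7/128
```

Raising P to the 15th power the same way confirms that p^(15)_10 rounds to 1.0000.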
Arbitrary initial conditions. What if we are unsure about where the process starts? Let π^(n)_j = Pr(X_n = j) and define a row vector π^(n) = (π^(n)_j, j ∈ S), being the distribution of the chain at time n. Suppose that we know the initial distribution π^(0), that is, the distribution of X_0 (in the previous example we had π^(0) = (0 1 0)).
By the law of total probability, we have

    π^(n)_j = Pr(X_n = j) = Σ_{i∈S} Pr(X_n = j | X_0 = i) Pr(X_0 = i) = Σ_{i∈S} π^(0)_i p^(n)_ij,

and so π^(n) = π^(0) P^n for n ≥ 0. Definition. If π^(n) = π is the same for all n, then π is called a stationary distribution. If lim_{n→∞} π^(n) exists and equals π, then π is called a limiting distribution.
Example. Returning to the hunt with p = 1/4 and q = 1/2, suppose that, at the beginning of the hunt, each creature is equally likely to be in either room, so that π^(0) = (1/2 1/4 1/4). Then π^(n) = π^(0) P^n, where

    P = (  1     0     0  )
        ( 1/2   3/8   1/8 )
        ( 1/2   1/8   3/8 ).
For example,

    π^(3) = (1/2 1/4 1/4) (  1     0       0    )
                          ( 7/8   9/128   7/128 )
                          ( 7/8   7/128   9/128 )  =  (15/16  1/32  1/32).

So, if, initially, each creature is equally likely to be in either room, then the probability that the hunt ends within 3 minutes is 15/16.
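The evolution π^(n) = π^(0) P^n can be checked exactly the same way, iterating π^(n) = π^(n-1) P (a short sketch, again with exact rationals):

```python
from fractions import Fraction as F

P = [[F(1),    F(0),    F(0)],
     [F(1, 2), F(3, 8), F(1, 8)],
     [F(1, 2), F(1, 8), F(3, 8)]]

pi = [F(1, 2), F(1, 4), F(1, 4)]   # initial distribution pi^(0)
for _ in range(3):                 # pi^(n) = pi^(n-1) P
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

print(pi)  # pi^(3): 15/16, 1/32, 1/32
```

This reproduces π^(3) = (15/16 1/32 1/32), so the hunt ends within 3 minutes with probability 15/16.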
The two-state chain. Let S = {0, 1} and let

    P = ( 1-p    p  )
        (  q    1-q ),

where p, q ∈ (0, 1). It can be shown that P = V D V^(-1), where

    V = ( 1    p ),    D = ( 1   0 ),    V^(-1) = 1/(p+q) ( q    p )
        ( 1   -q )         ( 0   r )                      ( 1   -1 ),

and r = 1 - p - q. Check it! (The procedure is called diagonalization.)
This is good news, because P^2 = (V D V^(-1))(V D V^(-1)) = V D (V^(-1) V) D V^(-1) = V (D I D) V^(-1) = V D^2 V^(-1). Similarly, P^n = V D^n V^(-1) for all n ≥ 1. Hence,

    P^(n) = V ( 1    0  ) V^(-1)  =  1/(p+q) ( q + p r^n    p - p r^n )
              ( 0   r^n )                    ( q - q r^n    p + q r^n ).
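The diagonalization is easy to verify numerically. A sketch with hypothetical values p = 0.25 and q = 0.4 (any values in (0, 1) would do):

```python
p, q = 0.25, 0.4                  # hypothetical values in (0, 1)
r = 1 - p - q

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

V    = [[1.0,  p], [1.0, -q]]
D    = [[1.0, 0.0], [0.0, r]]
Vinv = [[q / (p + q),  p / (p + q)],
        [1 / (p + q), -1 / (p + q)]]

P = matmul(matmul(V, D), Vinv)
print(P)  # entries agree with 1-p, p, q, 1-q up to rounding
```

Replacing D by D^n in the product gives P^n, which is exactly the closed form displayed above.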
Thus we have an explicit expression for the n-step transition probabilities. Remark. The above procedure generalizes to any Markov chain with a finite state space.
If the initial distribution is π^(0) = (a b), then, since π^(n) = π^(0) P^n,

    Pr(X_n = 0) = [q + (ap - bq) r^n] / (p + q),
    Pr(X_n = 1) = [p - (ap - bq) r^n] / (p + q).

(You should check this for n = 0 and n = 1.) Notice that when ap = bq, we have Pr(X_n = 0) = 1 - Pr(X_n = 1) = q/(p+q) for all n ≥ 0, so that π = (q/(p+q)  p/(p+q)) is a stationary distribution.
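The closed form can be checked against brute-force iteration of π^(n) = π^(n-1) P. A sketch with hypothetical values for p, q and the initial distribution (a, b):

```python
p, q = 0.3, 0.5          # hypothetical transition probabilities
a, b = 0.2, 0.8          # hypothetical initial distribution (a, b)
r = 1 - p - q

def closed_form(n):
    """Closed-form (Pr(X_n = 0), Pr(X_n = 1)) from the formula above."""
    c = (a * p - b * q) * r ** n / (p + q)
    return (q / (p + q) + c, p / (p + q) - c)

# brute-force iteration of pi^(n) = pi^(n-1) P for comparison
pi = (a, b)
for n in range(10):
    assert abs(pi[0] - closed_form(n)[0]) < 1e-12
    assert abs(pi[1] - closed_form(n)[1]) < 1e-12
    pi = (pi[0] * (1 - p) + pi[1] * q, pi[0] * p + pi[1] * (1 - q))
print("closed form matches iteration")
```

In particular `closed_form(0)` returns (a, b), as the "check this for n = 0" exercise asks.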
Notice also that |r| < 1, since p, q ∈ (0, 1). Therefore, π is also a limiting distribution, because

    lim_{n→∞} Pr(X_n = 0) = q/(p+q),    lim_{n→∞} Pr(X_n = 1) = p/(p+q).

Remark. If, for a general Markov chain, a limiting distribution π exists, then it is a stationary distribution, that is, πP = π (π is a left eigenvector corresponding to the eigenvalue 1). For details (and the converse), you will need a more advanced course on stochastic processes.
Example. Max (a dog) is subjected to a series of trials, in each of which he is given a choice of going to a dish to his left, containing tasty food, or a dish to his right, containing food with an unpleasant taste. Suppose that if, on any given occasion, Max goes to the left, then he will return there on the next occasion with probability 0.99, while if he goes to the right, he will do so on the next occasion with probability 0.1 (Max is smart, but he is not infallible).
(Photo: Poppy and Max)
Let X_n be 0 or 1 according as Max chooses the dish to the left or the dish to the right on trial n. Then {X_n} is a two-state Markov chain with p = 0.01 and q = 0.9, and hence r = 1 - p - q = 0.09. Therefore, if the first dish is chosen at random (at time n = 1), then Max chooses the tasty food on the n-th trial with probability 90/91 - (89/182)(0.09)^(n-1), the long-term probability being 90/91.
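Max's chain can be iterated numerically to watch the convergence to 90/91 (a small sketch; the transition probabilities are those of the example):

```python
p, q = 0.01, 0.9          # left -> right and right -> left probabilities
r = 1 - p - q             # = 0.09
pi = (0.5, 0.5)           # first dish chosen at random (trial n = 1)
probs = [pi[0]]           # Pr(Max chooses the tasty food) at trials 1, 2, ...
for _ in range(30):
    pi = (pi[0] * (1 - p) + pi[1] * q, pi[0] * p + pi[1] * (1 - q))
    probs.append(pi[0])

print(probs[0], probs[-1])  # 0.5 at trial 1, about 90/91 = 0.989 in the long run
```

Since r = 0.09 is tiny, the chain is already very close to its limit after only a handful of trials.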
Birth-death chains. Their state space S is either the integers, the non-negative integers, or {0, 1, ..., N}, and jumps of size greater than 1 are not permitted; their transition probabilities are therefore of the form p_{i,i+1} = a_i, p_{i,i-1} = b_i and p_ii = 1 - a_i - b_i, with p_ij = 0 otherwise. The birth probabilities (a_i) and the death probabilities (b_i) are strictly positive and satisfy a_i + b_i ≤ 1, except perhaps at the boundaries of S, where they could be 0. If a_i = a and b_i = b, the chain is called a random walk.
Gambler's ruin. A gambler successively wagers a single unit in an even-money game. X_n is his capital after n bets and S = {0, 1, ..., N}. If his capital reaches N he stops and leaves happy, while state 0 corresponds to going bust. Here a_i = b_i = 1/2, except at the boundaries (0 and N are absorbing states). It is easy to show that the player goes bust with probability 1 - i/N if his initial capital is i.
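The ruin probability 1 - i/N is easy to check by simulation. A minimal sketch (the values N = 10 and i = 3 are illustrative):

```python
import random
random.seed(0)

def goes_bust(i, N):
    """Fair game: capital moves +1 or -1 with probability 1/2 until 0 or N."""
    while 0 < i < N:
        i += random.choice((-1, 1))
    return i == 0

N, i, trials = 10, 3, 50_000
est = sum(goes_bust(i, N) for _ in range(trials)) / trials
print(est)  # close to 1 - i/N = 0.7
```

With initial capital 3 out of a target of 10, roughly 70% of simulated games end in ruin, matching 1 - 3/10.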
The Ehrenfest diffusion model. N particles are allowed to pass through a small aperture between two chambers A and B. We assume that at each time epoch n, a single particle, chosen uniformly and at random from the N, passes through the aperture. Let X_n be the number in chamber A at time n. Then S = {0, 1, ..., N} and, for i ∈ S, a_i = 1 - i/N and b_i = i/N. In this model, 0 and N are reflecting barriers. It is easy to show that the stationary distribution is binomial B(N, 1/2).
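That B(N, 1/2) is stationary can be verified exactly: applying one step of the Ehrenfest transition probabilities to the binomial distribution returns it unchanged. A sketch for the illustrative value N = 6:

```python
from fractions import Fraction as F
from math import comb

N = 6
pi = [F(comb(N, j), 2 ** N) for j in range(N + 1)]   # binomial B(N, 1/2)

def one_step(dist):
    """Apply the transition probabilities a_i = 1 - i/N, b_i = i/N."""
    new = [F(0)] * (N + 1)
    for i in range(N + 1):
        if i < N:
            new[i + 1] += dist[i] * F(N - i, N)   # a particle enters chamber A
        if i > 0:
            new[i - 1] += dist[i] * F(i, N)       # a particle leaves chamber A
    return new

print(one_step(pi) == pi)  # True: B(N, 1/2) is stationary
```

The exact rational arithmetic means the equality is literal, not merely approximate.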
Population models. Here X_n is the size of the population at time n (for example, at the end of the n-th breeding cycle, or at the time of the n-th census). S = {0, 1, ...}, or S = {0, 1, ..., N} when there is an upper limit N on the population size (frequently interpreted as the carrying capacity). Usually 0 is an absorbing state, corresponding to population extinction, and N is reflecting.
Example. Take S = {0, 1, ...} with a_0 = 0 and, for i ≥ 1, a_i = a > 0 and b_i = b > 0, where a + b = 1. It can be shown that extinction occurs with probability 1 when a ≤ b, and with probability (b/a)^i when a > b, where i is the initial population size. This is a good simple model for a population of cells: a = λ/(λ + µ) and b = µ/(λ + µ), where µ and λ are, respectively, the death and the cell division rates.
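The extinction probability (b/a)^i can be estimated by simulation. Since the state space is unbounded, this sketch uses a large ceiling as a proxy for "escaped to infinity" (an assumption: once the population is that large, later extinction is negligibly likely); the values a = 0.6, b = 0.4, i = 1 are illustrative:

```python
import random
random.seed(2)

a, b = 0.6, 0.4      # birth and death probabilities, a > b
ceiling = 200        # proxy for "escaped to infinity"

def goes_extinct(i):
    while 0 < i < ceiling:
        i += 1 if random.random() < a else -1
    return i == 0

trials = 20_000
est = sum(goes_extinct(1) for _ in range(trials)) / trials
print(est)  # close to (b/a)**1 = 2/3
```

Starting from a single cell, about two thirds of simulated populations die out, matching b/a = 2/3.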
The logistic model. This has S = {0, ..., N}, with 0 absorbing and N reflecting, and, for i = 1, ..., N-1,

    a_i = λ(1 - i/N) / [µ + λ(1 - i/N)],    b_i = µ / [µ + λ(1 - i/N)].

Here λ and µ are birth and death rates. Notice that the birth and the death probabilities depend on i only through i/N, a quantity which is proportional to the population density: i/N = (i/area)/(N/area). Models with this property are called density dependent.
Telecommunications. (1) A communications link in a telephone network has N circuits. One circuit is held by each call for its duration. Calls arrive at rate λ > 0 and are completed at rate µ > 0. Let X_n be the number of calls in progress at the n-th time epoch (when an arrival or a departure occurs). Then S = {0, ..., N}, with 0 and N both reflecting barriers, and, for i = 1, ..., N-1,

    a_i = λ/(λ + iµ),    b_i = iµ/(λ + iµ).
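For a birth-death chain like this one, a stationary distribution can be computed from the balance relation π_{i+1} b_{i+1} = π_i a_i (a standard birth-death fact, not derived in these notes) and then checked against the transition probabilities. A sketch with illustrative parameter values:

```python
N, lam, mu = 5, 2.0, 1.0                 # illustrative parameter values

def a(i):                                 # arrival (birth) probabilities
    return 1.0 if i == 0 else lam / (lam + i * mu)

def b(i):                                 # departure (death) probabilities
    return 1.0 if i == N else i * mu / (lam + i * mu)

# build pi from the balance relation pi_{i+1} b(i+1) = pi_i a(i), then normalize
w = [1.0]
for i in range(N):
    w.append(w[-1] * a(i) / b(i + 1))
total = sum(w)
pi = [x / total for x in w]

# check stationarity: (pi P)_j = pi_{j-1} a(j-1) + pi_{j+1} b(j+1) = pi_j
# (here p_jj = 0 for every j, since a_j + b_j = 1 at each state)
for j in range(N + 1):
    mass = 0.0
    if j > 0:
        mass += pi[j - 1] * a(j - 1)
    if j < N:
        mass += pi[j + 1] * b(j + 1)
    assert abs(mass - pi[j]) < 1e-12
print(pi)
```

Note that since every jump is of size 1, this chain is periodic, so π is stationary but not a limiting distribution.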
(2) At a node in a packet-switching network, data packets are stored in a buffer of size N. They arrive at rate λ > 0 and are transmitted one at a time (in the order in which they arrive) at rate µ > 0. Let X_n be the number of packets yet to be transmitted just after the n-th time epoch (an arrival or a departure). Then S = {0, ..., N}, with 0 and N both reflecting barriers, and, for i = 1, ..., N-1,

    a_i = λ/(λ + µ),    b_i = µ/(λ + µ).
Genetic models. The simplest of these is the Wright-Fisher model. There are N individuals, each of one of two genetic types, A-type and a-type. Mutation (if any) occurs at birth. We assume that A-types are selectively superior in that the relative survival rate of A-type over a-type individuals in successive generations is γ > 1. Let X_n be the number of A-type individuals, so that N - X_n is the number of a-type.
Wright and Fisher postulated that the composition of the next generation is determined by N Bernoulli trials, where the probability p_i of producing an A-type offspring is given by

    p_i = γ[i(1-α) + (N-i)β] / { γ[i(1-α) + (N-i)β] + [iα + (N-i)(1-β)] },

where α and β are the respective mutation probabilities. We have S = {0, ..., N} and

    p_ij = C(N, j) p_i^j (1 - p_i)^(N-j),    i, j ∈ S,

where C(N, j) denotes the binomial coefficient.
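The Wright-Fisher transition matrix is easy to build row by row from these formulas; each row is a Binomial(N, p_i) distribution and so sums to 1. A sketch with hypothetical parameter values:

```python
from math import comb

# hypothetical (illustrative) parameter values
N, gamma, alpha, beta = 20, 1.5, 0.01, 0.02

def p_succ(i):
    """Probability that a single offspring is A-type, given i A-type parents."""
    num = gamma * (i * (1 - alpha) + (N - i) * beta)
    return num / (num + i * alpha + (N - i) * (1 - beta))

def transition_row(i):
    """Row i of the transition matrix: Binomial(N, p_i) probabilities."""
    s = p_succ(i)
    return [comb(N, j) * s ** j * (1 - s) ** (N - j) for j in range(N + 1)]

# each row is a probability distribution over {0, ..., N}
assert all(abs(sum(transition_row(i)) - 1.0) < 1e-9 for i in range(N + 1))
print(transition_row(10)[:3])
```

With positive mutation probabilities α and β, every p_i lies strictly between 0 and 1, so neither boundary state is absorbing here.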
More information12 Markov chains The Markov property
12 Markov chains Summary. The chapter begins with an introduction to discrete-time Markov chains, and to the use of matrix products and linear algebra in their study. The concepts of recurrence and transience
More informationPowerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature.
Markov Chains Markov chains: 2SAT: Powerful tool for sampling from complicated distributions rely only on local moves to explore state space. Many use Markov chains to model events that arise in nature.
More informationBasic Concepts in Linear Algebra
Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University February 2, 2015 Grady B Wright Linear Algebra Basics February 2, 2015 1 / 39 Numerical Linear Algebra Linear
More informationEigenvalues in Applications
Eigenvalues in Applications Abstract We look at the role of eigenvalues and eigenvectors in various applications. Specifically, we consider differential equations, Markov chains, population growth, and
More informationProbability Distributions
Lecture : Background in Probability Theory Probability Distributions The probability mass function (pmf) or probability density functions (pdf), mean, µ, variance, σ 2, and moment generating function (mgf)
More information1 Random Walks and Electrical Networks
CME 305: Discrete Mathematics and Algorithms Random Walks and Electrical Networks Random walks are widely used tools in algorithm design and probabilistic analysis and they have numerous applications.
More informationMath 166: Topics in Contemporary Mathematics II
Math 166: Topics in Contemporary Mathematics II Xin Ma Texas A&M University November 26, 2017 Xin Ma (TAMU) Math 166 November 26, 2017 1 / 14 A Review A Markov process is a finite sequence of experiments
More informationNo class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1.
Stationary Distributions Monday, September 28, 2015 2:02 PM No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1. Homework 1 due Friday, October 2 at 5 PM strongly
More informationOn asymptotic behavior of a finite Markov chain
1 On asymptotic behavior of a finite Markov chain Alina Nicolae Department of Mathematical Analysis Probability. University Transilvania of Braşov. Romania. Keywords: convergence, weak ergodicity, strong
More informationMarkov Chains and Stochastic Sampling
Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,
More informationKevin James. MTHSC 3110 Section 2.1 Matrix Operations
MTHSC 3110 Section 2.1 Matrix Operations Notation Let A be an m n matrix, that is, m rows and n columns. We ll refer to the entries of A by their row and column indices. The entry in the i th row and j
More informationEXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS
(February 25, 2004) EXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS BEN CAIRNS, University of Queensland PHIL POLLETT, University of Queensland Abstract The birth, death and catastrophe
More informationElementary Linear Algebra Review for Exam 2 Exam is Monday, November 16th.
Elementary Linear Algebra Review for Exam Exam is Monday, November 6th. The exam will cover sections:.4,..4, 5. 5., 7., the class notes on Markov Models. You must be able to do each of the following. Section.4
More informationMath 3191 Applied Linear Algebra
Math 9 Applied Linear Algebra Lecture 9: Diagonalization Stephen Billups University of Colorado at Denver Math 9Applied Linear Algebra p./9 Section. Diagonalization The goal here is to develop a useful
More informationFinite-Horizon Statistics for Markov chains
Analyzing FSDT Markov chains Friday, September 30, 2011 2:03 PM Simulating FSDT Markov chains, as we have said is very straightforward, either by using probability transition matrix or stochastic update
More informationMarkov Chain Model for ALOHA protocol
Markov Chain Model for ALOHA protocol Laila Daniel and Krishnan Narayanan April 22, 2012 Outline of the talk A Markov chain (MC) model for Slotted ALOHA Basic properties of Discrete-time Markov Chain Stability
More informationn α 1 α 2... α m 1 α m σ , A =
The Leslie Matrix The Leslie matrix is a generalization of the above. It is a matrix which describes the increases in numbers in various age categories of a population year-on-year. As above we write p
More informationMarkov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006.
Markov Chains As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006 1 Introduction A (finite) Markov chain is a process with a finite number of states (or outcomes, or
More informationLecturer: Olga Galinina
Lecturer: Olga Galinina E-mail: olga.galinina@tut.fi Outline Motivation Modulated models; Continuous Markov models Markov modulated models; Batch Markovian arrival process; Markovian arrival process; Markov
More informationINTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING
INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing
More informationElementary Linear Algebra Review for Exam 3 Exam is Friday, December 11th from 1:15-3:15
Elementary Linear Algebra Review for Exam 3 Exam is Friday, December th from :5-3:5 The exam will cover sections: 6., 6.2, 7. 7.4, and the class notes on dynamical systems. You absolutely must be able
More informationReview of Basic Concepts in Linear Algebra
Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra
More informationMarkov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015
Markov chains c A. J. Ganesh, University of Bristol, 2015 1 Discrete time Markov chains Example: A drunkard is walking home from the pub. There are n lampposts between the pub and his home, at each of
More informationChapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations
Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2
More information0.1 Naive formulation of PageRank
PageRank is a ranking system designed to find the best pages on the web. A webpage is considered good if it is endorsed (i.e. linked to) by other good webpages. The more webpages link to it, and the more
More informationLTCC. Exercises. (1) Two possible weather conditions on any day: {rainy, sunny} (2) Tomorrow s weather depends only on today s weather
1. Markov chain LTCC. Exercises Let X 0, X 1, X 2,... be a Markov chain with state space {1, 2, 3, 4} and transition matrix 1/2 1/2 0 0 P = 0 1/2 1/3 1/6. 0 0 0 1 (a) What happens if the chain starts in
More informationStochastic modelling of epidemic spread
Stochastic modelling of epidemic spread Julien Arino Department of Mathematics University of Manitoba Winnipeg Julien Arino@umanitoba.ca 19 May 2012 1 Introduction 2 Stochastic processes 3 The SIS model
More information