Introduction to Probability Example Sheet 3 - Michaelmas 2006 Michael Tehranchi

Problem 1. Let (X_n)_{n≥0} be a homogeneous Markov chain on S with transition matrix P. Given a k ∈ N, let Z_n = X_{kn}. Prove that (Z_n)_{n≥0} is a Markov chain with transition matrix P^k.

Solution 1. First we prove a general result: if (X_n)_{n≥0} is a Markov chain then for any collection of states i_1, ..., i_k and any collection of indices 0 ≤ n_1 < ... < n_k we have

  P(X_{n_k} = i_k | X_{n_{k-1}} = i_{k-1}, ..., X_{n_1} = i_1) = P(X_{n_k} = i_k | X_{n_{k-1}} = i_{k-1}).

Equivalently, we will show that

  P(X_{n_1} = i_1, ..., X_{n_k} = i_k) = P(X_{n_k} = i_k | X_{n_{k-1}} = i_{k-1}) ··· P(X_{n_2} = i_2 | X_{n_1} = i_1) P(X_{n_1} = i_1).

Note that

  P(X_{n_1} = i_1, ..., X_{n_k} = i_k) = Σ_{j ∈ A^{n_1,...,n_k}_{i_1,...,i_k}} P(X_0 = j_0, ..., X_{n_k} = j_{n_k}),

where A^{n_1,...,n_k}_{i_1,...,i_k} = {(j_0, ..., j_{n_k}) ∈ S^{n_k+1} : j_{n_l} = i_l for l = 1, ..., k}. But by induction we have

  P(X_0 = j_0, ..., X_n = j_n) = P(X_n = j_n | X_{n-1} = j_{n-1}) ··· P(X_1 = j_1 | X_0 = j_0) P(X_0 = j_0).

Hence for n ≥ m and i_m, i_n ∈ S we have

  P(X_n = i_n | X_m = i_m)
    = P(X_n = i_n, X_m = i_m) / P(X_m = i_m)
    = [Σ_{j ∈ A^{m,n}_{i_m,i_n}} P(X_n = j_n | X_{n-1} = j_{n-1}) ··· P(X_1 = j_1 | X_0 = j_0) P(X_0 = j_0)]
      / [Σ_{j ∈ A^{m}_{i_m}} P(X_m = j_m | X_{m-1} = j_{m-1}) ··· P(X_1 = j_1 | X_0 = j_0) P(X_0 = j_0)]
    = Σ_{j ∈ B^{m,n}_{i_m,i_n}} P(X_n = j_n | X_{n-1} = j_{n-1}) ··· P(X_{m+1} = j_{m+1} | X_m = j_m),

where B^{m,n}_{i_m,i_n} = {(j_m, ..., j_n) ∈ S^{n-m+1} : j_m = i_m, j_n = i_n}. The claim now follows.

Returning to the specific problem, we have

  P(Z_n = i_n | Z_{n-1} = i_{n-1}, ..., Z_0 = i_0)
    = P(X_{kn} = i_n | X_{k(n-1)} = i_{n-1}, ..., X_0 = i_0)
    = P(X_{kn} = i_n | X_{k(n-1)} = i_{n-1})
    = (P^k)_{i_{n-1}, i_n}
    = P(Z_n = i_n | Z_{n-1} = i_{n-1}),

and (Z_n)_{n≥0} is a Markov chain with transition matrix P^k.
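As an aside (not part of the original sheet), the claim of Problem 1 is easy to check numerically: simulate a chain with an arbitrary illustrative transition matrix, subsample every k-th step, and compare the empirical transition frequencies of (Z_n) with the matrix power P^k. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative 3-state stochastic matrix (not from the problem sheet).
P = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.4, 0.4, 0.2]])
k = 3

# Simulate X_0, X_1, ..., and record the subsampled chain Z_n = X_{kn}.
n_steps = 90_000
x = 0
path = [x]
for _ in range(n_steps):
    x = rng.choice(3, p=P[x])
    path.append(x)
Z = path[::k]

# Empirical one-step transition frequencies of (Z_n).
counts = np.zeros((3, 3))
for a, b in zip(Z, Z[1:]):
    counts[a, b] += 1
empirical = counts / counts.sum(axis=1, keepdims=True)

# These should be close to the matrix power P^k.
print(np.abs(empirical - np.linalg.matrix_power(P, k)).max())
```

The printed maximum deviation shrinks at the usual Monte Carlo rate as n_steps grows.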
Problem 2. Let U_1, U_2, ... be a sequence of independent random variables uniformly distributed on [0, 1]. Given a function G : S × [0, 1] → S, let (X_n)_{n≥0} be defined recursively by X_{n+1} = G(X_n, U_{n+1}). Show that (X_n)_{n≥0} is a Markov chain on S. Prove that all Markov chains can be realized in this fashion with a suitable choice of the function G.

Solution 2. Let i_0, ..., i_{n+1} be any collection of points in S. Since U_{n+1} is independent of X_0, ..., X_n we have

  P(X_{n+1} = i_{n+1} | X_0 = i_0, ..., X_n = i_n)
    = P(G(X_n, U_{n+1}) = i_{n+1} | X_0 = i_0, ..., X_n = i_n)
    = P(G(i_n, U_{n+1}) = i_{n+1})
    = P(G(X_n, U_{n+1}) = i_{n+1} | X_n = i_n)
    = P(X_{n+1} = i_{n+1} | X_n = i_n),

and hence (X_n)_{n≥0} is a Markov chain. The transition probabilities are then given by p_ij = P(G(i, U_1) = j).

Conversely, suppose P = (p_ij)_{i,j} is an arbitrary stochastic matrix. Let G : S × [0, 1] → S be defined by

  G(i, u) = j if Σ_{k=1}^{j-1} p_ik ≤ u < Σ_{k=1}^{j} p_ik.

If U is a random variable uniformly distributed on [0, 1] then P(G(i, U) = j) = p_ij by construction. Hence the Markov chain with transition matrix P can be realized by the recurrence X_{n+1} = G(X_n, U_{n+1}), where U_1, U_2, ... is a sequence of independent uniform [0, 1] random variables.

Problem 3. Let X_1, X_2, ... be a sequence of independent random variables with

  P(X_n = 1) = P(X_n = -1) = 1/2

for all n. Let S_n = X_1 + ··· + X_n. Prove that (S_n)_{n≥0} is a recurrent Markov chain on the set of integers Z = {0, ±1, ±2, ...}. [You might find Stirling's formula useful: n! ~ √(2π) n^{n+1/2} e^{-n}.]

Solution 3. The stochastic process (S_n)_{n≥0} is a Markov chain since for all integers i_1, ..., i_{n+1} we have

  P(S_{n+1} = i_{n+1} | S_1 = i_1, ..., S_n = i_n)
    = P(S_n + X_{n+1} = i_{n+1} | S_1 = i_1, ..., S_n = i_n)
    = P(X_{n+1} = i_{n+1} - i_n)
    = P(S_{n+1} = i_{n+1} | S_n = i_n),
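The construction of G in Solution 2 is exactly inverse-transform sampling, and it translates directly into code. A sketch, with an arbitrary illustrative matrix P that is not from the sheet:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative stochastic matrix.
P = np.array([[0.2, 0.5, 0.3],
              [0.6, 0.1, 0.3],
              [0.3, 0.3, 0.4]])

def G(i, u):
    """G(i, u) = j iff sum_{k<j} p_ik <= u < sum_{k<=j} p_ik (0-indexed states)."""
    return int(np.searchsorted(np.cumsum(P[i]), u, side="right"))

# Monte Carlo check that P(G(i, U) = j) = p_ij, here for i = 0.
samples = [G(0, rng.random()) for _ in range(200_000)]
freq = np.bincount(samples, minlength=3) / len(samples)
print(freq)  # close to P[0] = [0.2, 0.5, 0.3]

# The whole chain is then the recursion X_{n+1} = G(X_n, U_{n+1}).
x = 0
trajectory = [x]
for _ in range(10):
    x = G(x, rng.random())
    trajectory.append(x)
print(trajectory)
```

Here searchsorted with side="right" finds the first index whose cumulative sum strictly exceeds u, which is precisely the j of the definition.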
where we have used the fact that X_{n+1} is independent of S_1, ..., S_n. We can prove recurrence in two ways. First, by counting the left and right turns of the random walk we have P(S_n = 0) = 0 if n is odd, and

  P(S_n = 0) = 2^{-n} C(n, n/2)

if n is even. By Stirling's formula we have the approximation

  C(2k, k) ~ 2^{2k} / √(πk),

which implies

  Σ_{n≥1} p_00(n) = Σ_{n≥1} P(S_n = 0) = ∞.

The state zero is thus a recurrent state. Since the Markov chain is irreducible, it is recurrent.

Alternatively, let X_n = S_n + X_0 and define the stopping time T = inf{n ≥ 0 : X_n = 0}. Let h_i = P(T < ∞ | X_0 = i). By conditioning on the first step of the walk we have

  h_i = (1/2) h_{i-1} + (1/2) h_{i+1}.

All solutions to this linear difference equation are of the form h_i = a + bi for constants a, b ∈ R. Since we have the bound 0 ≤ h_i ≤ 1, we must have b = 0. Also, we have the boundary condition h_0 = 1, so a = 1. In particular,

  P(S_n = 0 for some n ≥ 1) = (1/2) h_{-1} + (1/2) h_1 = 1,

and the random walk is recurrent.

Problem 4. Fix a natural number d, and let X_1, X_2, ... be a sequence of independent vector-valued random variables with

  P(X_n = e_i) = P(X_n = -e_i) = 1/(2d)

for all 1 ≤ i ≤ d, where e_i = (0, ..., 0, 1, 0, ..., 0) with the 1 in the i-th coordinate. Let S_n = X_1 + ··· + X_n. Prove that (S_n)_{n≥0} is a transient Markov chain on Z^d if and only if d ≥ 3. [You may find the following inequality useful: if i_1 + ··· + i_d = dn then i_1! ··· i_d! ≥ (n!)^d.]

Solution 4. As before, by counting the left, right, up, down, etc. turns of the random walk we have P(S_n = 0) = 0 if n is odd, and

  P(S_n = 0) = (2d)^{-n} Σ_{i_1+···+i_d = n/2} n! / ((i_1!)^2 ··· (i_d!)^2)

if n is even. For d = 2, a miracle occurs and

  P(S_{2k} = 0) = [2^{-2k} C(2k, k)]^2

exactly. By Stirling's formula we have P(S_{2k} = 0) ~ (πk)^{-1}, and since Σ_n p_00(n) = ∞ the walk is recurrent.
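The first recurrence argument rests on the Stirling estimate C(2k, k) 2^{-2k} ~ (πk)^{-1/2}, which is easy to check numerically (this snippet is an illustration, not part of the original sheet):

```python
from math import comb, pi, sqrt

# Exact return probability P(S_{2k} = 0) = C(2k, k) / 2^(2k) versus the
# Stirling approximation 1/sqrt(pi k).
for k in (1, 10, 100, 1000):
    exact = comb(2 * k, k) / 4**k
    approx = 1 / sqrt(pi * k)
    print(k, exact, approx, exact / approx)
# The ratio tends to 1; since sum_k 1/sqrt(pi k) diverges, so does
# sum_n P(S_n = 0), and state 0 is recurrent.
```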
We can bound the probability of {S_{2k} = 0} by

  P(S_{2k} = 0) = (2d)^{-2k} Σ_{i_1+···+i_d=k} (2k)! / ((i_1!)^2 ··· (i_d!)^2)
    ≤ (2d)^{-2k} · ((2k)!/k!) · [min_{i_1+···+i_d=k} (i_1!) ··· (i_d!)]^{-1} · Σ_{i_1+···+i_d=k} k! / ((i_1!) ··· (i_d!))
    = (4d)^{-k} (2k)! / (k! · min_{i_1+···+i_d=k} (i_1!) ··· (i_d!))
    = (4d)^{-k} (2k)! / (k! [(k/d)!]^d)
    ≤ C k^{-d/2}

for some constant C > 0, at least when k is divisible by d, where we have used the fact that

  Σ_{i_1+···+i_d=k} k! / ((i_1!) ··· (i_d!)) = d^k.

(If k is not divisible by d we have

  min_{i_1+···+i_d=k} (i_1!) ··· (i_d!) = (⌊k/d⌋!)^{d(1 - {k/d})} ((⌊k/d⌋ + 1)!)^{d{k/d}},

where ⌊x⌋ is the greatest integer less than or equal to x and {x} = x - ⌊x⌋ is the fractional part of x, in which case the asymptotics still hold.) If d ≥ 3 the sum Σ_n p_00(n) converges and 0 is a transient state, and hence the chain is transient.

Problem 5. Let T_1 and T_2 be stopping times for a Markov chain (X_n)_{n≥0} on S. Prove that each of the following is also a stopping time:

1. T = min{n : X_n = i} for some fixed i ∈ S.
2. T(ω) = N for all ω ∈ Ω, for a fixed N ∈ N.
3. T = min{T_1, T_2}.
4. T = max{T_1, T_2}.
5. T = T_1 + T_2.

Solution 5. By definition, a random variable T : Ω → N ∪ {∞} is a stopping time if and only if for every n ∈ N there is a function f_n : S^{n+1} → {0, 1} such that 1_{{T=n}} = f_n(X_0, ..., X_n). Notice that if T is a stopping time then there exist functions g_n such that 1_{{T ≥ n+1}} = g_n(X_0, ..., X_n), since

  1_{{T ≥ n+1}} = 1 - 1_{{T ≤ n}} = 1 - Σ_{k=0}^{n} f_k(X_0, ..., X_k).

Conversely, if a random variable T has the property that for every n ∈ N there is a function g_n such that 1_{{T ≥ n+1}} = g_n(X_0, ..., X_n), then T is a stopping time, since

  1_{{T=n}} = 1_{{T ≥ n}} - 1_{{T ≥ n+1}} = g_{n-1}(X_0, ..., X_{n-1}) - g_n(X_0, ..., X_n).

Similarly, T is a stopping time if and only if there exist functions h_n such that 1_{{T ≤ n}} = h_n(X_0, ..., X_n).
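The multinomial formula for P(S_{2k} = 0) can be evaluated exactly for small k, which makes the k^{-d/2} decay and the d = 2 "miracle" identity visible. An illustrative sketch, not part of the original sheet:

```python
from math import comb, factorial

def compositions(total, parts):
    """All tuples of `parts` nonnegative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def p_return(d, k):
    """P(S_{2k} = 0) = (2d)^(-2k) * sum_{i_1+...+i_d=k} (2k)!/((i_1!)^2...(i_d!)^2)."""
    s = 0
    for idx in compositions(k, d):
        denom = 1
        for i in idx:
            denom *= factorial(i) ** 2
        s += factorial(2 * k) // denom  # each term is an integer
    return s / (2 * d) ** (2 * k)

# d = 2 miracle: P(S_{2k} = 0) = (C(2k, k) / 4^k)^2 exactly.
print(p_return(2, 10), (comb(20, 10) / 4**10) ** 2)

# Decay rate: p_return(d, k) * k^(d/2) is roughly constant in k.
for d in (1, 2, 3):
    print(d, [round(p_return(d, k) * k ** (d / 2), 4) for k in (5, 10, 20)])
```

For d ≥ 3 the near-constancy of p_return(d, k) · k^{d/2} is exactly the bound C k^{-d/2} used above, whose sum over k converges.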
1. 1_{{T=n}} = 1_{{X_0 ≠ i, ..., X_{n-1} ≠ i, X_n = i}}.
2. 1_{{T=n}} = 1 if n = N and 0 if n ≠ N, a constant function of (X_0, ..., X_n).
3. 1_{{T ≥ n+1}} = 1_{{T_1 ≥ n+1, T_2 ≥ n+1}} = 1_{{T_1 ≥ n+1}} 1_{{T_2 ≥ n+1}}.
4. 1_{{T ≤ n}} = 1_{{T_1 ≤ n, T_2 ≤ n}} = 1_{{T_1 ≤ n}} 1_{{T_2 ≤ n}}.
5. 1_{{T=n}} = Σ_{k=0}^{n} 1_{{T_1=k}} 1_{{T_2=n-k}}.

Problem 6. A flea hops randomly on the vertices of a triangle with vertices labelled 1, 2, and 3, hopping to each of the other vertices with equal probability. If the flea starts at vertex 1, find the probability that after n hops the flea is back to vertex 1. A second flea also starts at vertex 1 and hops about on the vertices of a triangle, but this flea is twice as likely to jump clockwise as anticlockwise. What is the probability that after n hops this second flea is back to vertex 1?

Solution 6. At any time, the flea is either on vertex 1 (state A) or on one of the two other vertices (state B). If it is in state A, it hops to state B with probability 1. On the other hand, if it is in state B, it hops to state A with probability 1/2 and stays in state B with probability 1/2. Hence the transition matrix is

  P = ( 0    1   )
      ( 1/2  1/2 )

The eigenvalues of P are 1 and -1/2. We know from general principles that (P^n)_{11} = a + b(-1/2)^n for some constants a, b. But (P^0)_{11} = 1 and (P^1)_{11} = 0, so the desired probability is

  (P^n)_{11} = 1/3 + (2/3)(-1/2)^n.

Now label the vertices of the triangle states 1, 2, and 3. The transition matrix is

  P = ( 0    2/3  1/3 )
      ( 1/3  0    2/3 )
      ( 2/3  1/3  0   )

with eigenvalues 1 and (-1 ± i/√3)/2 = -3^{-1/2} e^{∓iπ/6}, where i = √(-1). We know that the entries of the matrix P^n are of the form

  (P^n)_{ij} = a + (-3^{-1/2})^n (b cos(πn/6) + c sin(πn/6)).
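As a quick numerical check (illustrative, not part of the original sheet), matrix powers of the lumped two-state chain can be compared against the closed form 1/3 + (2/3)(-1/2)^n:

```python
import numpy as np

# Lumped two-state chain for the first flea: A = {vertex 1}, B = {vertices 2, 3}.
P = np.array([[0.0, 1.0],
              [0.5, 0.5]])

# Compare matrix powers with the closed form 1/3 + (2/3)(-1/2)^n.
for n in range(8):
    power = np.linalg.matrix_power(P, n)[0, 0]
    formula = 1 / 3 + (2 / 3) * (-0.5) ** n
    print(n, round(power, 6), round(formula, 6))
```

The two columns agree to machine precision for every n.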
Since (P^0)_{11} = 1, (P^1)_{11} = 0, and (P^2)_{11} = 4/9, we have

  (P^n)_{11} = 1/3 + (2/3)(-3^{-1/2})^n cos(nπ/6).

In both cases, it saves time to notice that the invariant distribution puts probability 1/3 on vertex 1, so that the constant term is a = 1/3.

Problem 7. Consider the second flea of Problem 6. What is the expected number of hops the flea makes before it is first back to vertex 1? What is the expected number of times the flea visits vertex 3 before first reaching vertex 2? (Assume that the vertices are labelled so that 1 → 2 → 3 → 1 is clockwise.)

Solution 7. Consider the Markov chain (X_n)_{n≥0} on {1, 2, 3} with transition matrix

  P = ( 0    2/3  1/3 )
      ( 1/3  0    2/3 )
      ( 2/3  1/3  0   )

For each vertex i let T_i = inf{n ≥ 1 : X_n = i}. Since the chain is irreducible and positive recurrent, with unique invariant distribution π = (1/3, 1/3, 1/3), we have

  E_1(T_1) = 1/π_1 = 3.

Now we are asked to compute E_1(Σ_{n=1}^{T_2} 1_{{X_n=3}}). Here are two approaches. First, let k_i = E_i(Σ_{n=1}^{T_2} 1_{{X_n=3}}). Then by conditioning on X_1 we have

  k_1 = E(Σ_{n=1}^{T_2} 1_{{X_n=3}} | X_0 = 1)
      = E(Σ_{n=1}^{T_2} 1_{{X_n=3}} | X_0 = 1, X_1 = 2) P(X_1 = 2 | X_0 = 1)
        + E(Σ_{n=1}^{T_2} 1_{{X_n=3}} | X_0 = 1, X_1 = 3) P(X_1 = 3 | X_0 = 1)
      = (0)(2/3) + (1 + k_3)(1/3),

where we have used the time homogeneity of the chain. Similarly k_3 = (0)(1/3) + (k_1)(2/3), so that k_1 = 3/7.

Here is another approach. Let N = Σ_{n=1}^{T_2} 1_{{X_n=3}}. Note that

  P_1(N = 0) = P(X_1 = 2 | X_0 = 1) = 2/3
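Both numerical claims here can be checked in a few lines (an illustrative sketch, not part of the original sheet): the closed form for (P^n)_{11}, and the value k_1 = 3/7 obtained from the two first-step equations k_1 = (1 + k_3)/3 and k_3 = (2/3) k_1.

```python
import math
import numpy as np

# Transition matrix of the second flea (1 -> 2 -> 3 -> 1 clockwise, prob 2/3).
P = np.array([[0.0, 2 / 3, 1 / 3],
              [1 / 3, 0.0, 2 / 3],
              [2 / 3, 1 / 3, 0.0]])

# Check the closed form (P^n)_11 = 1/3 + (2/3)(-3^(-1/2))^n cos(n pi / 6).
for n in range(8):
    power = np.linalg.matrix_power(P, n)[0, 0]
    formula = 1 / 3 + (2 / 3) * (-(3 ** -0.5)) ** n * math.cos(n * math.pi / 6)
    print(n, round(power, 6), round(formula, 6))

# First-step equations k_1 = (1 + k_3)/3 and k_3 = (2/3) k_1 as a 2x2 system.
A = np.array([[1.0, -1 / 3],
              [-2 / 3, 1.0]])
b = np.array([1 / 3, 0.0])
k1, k3 = np.linalg.solve(A, b)
print(k1)  # = 3/7 up to rounding

# The invariant distribution is uniform, so E_1(T_1) = 1/pi_1 = 3.
pi_vec = np.full(3, 1 / 3)
print(pi_vec @ P)  # equals pi_vec again
```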
and for k ≥ 1,

  P_1(N = k) = P_1(X_1 = 3, X_2 = 1, ..., X_{2k-2} = 1, X_{2k-1} = 3, X_{2k} = 2)
             + P_1(X_1 = 3, X_2 = 1, ..., X_{2k-2} = 1, X_{2k-1} = 3, X_{2k} = 1, X_{2k+1} = 2)
             = (1/3)^{k+1} (2/3)^{k-1} + (1/3)^k (2/3)^{k+1}
             = (7/6)(2/9)^k,

so that

  E_1(N) = Σ_{k=1}^∞ k (7/6)(2/9)^k = 3/7

as before.

[Thanks to those who contributed to these solutions during the examples class.]

Although irrelevant to the question, note that k_2 = E_2(N) = γ_3^{(2)} = π_3/π_2 = 1 is the expected number of times the flea visits vertex 3 before returning to vertex 2, assuming the flea began at vertex 2.
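The series at the end can be summed numerically as a sanity check (illustrative, not part of the original sheet):

```python
# E_1(N) = sum_{k>=1} k * (7/6) * (2/9)^k; with x = 2/9 the standard identity
# sum_k k x^k = x/(1-x)^2 gives (7/6) * (2/9) / (7/9)^2 = 3/7.
x = 2 / 9
partial = sum(k * (7 / 6) * x**k for k in range(1, 200))
closed = (7 / 6) * x / (1 - x) ** 2
print(partial, closed)  # both equal 3/7 up to rounding

# Sanity check: P(N = 0) = 2/3 plus the geometric tail carries total mass 1.
mass = 2 / 3 + sum((7 / 6) * x**k for k in range(1, 200))
print(mass)
```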
More informationLecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is
MARKOV CHAINS What I will talk about in class is pretty close to Durrett Chapter 5 sections 1-5. We stick to the countable state case, except where otherwise mentioned. Lecture 7. We can regard (p(i, j))
More informationNon-homogeneous random walks on a semi-infinite strip
Non-homogeneous random walks on a semi-infinite strip Chak Hei Lo Joint work with Andrew R. Wade World Congress in Probability and Statistics 11th July, 2016 Outline Motivation: Lamperti s problem Our
More informationChapter 2: Markov Chains and Queues in Discrete Time
Chapter 2: Markov Chains and Queues in Discrete Time L. Breuer University of Kent 1 Definition Let X n with n N 0 denote random variables on a discrete space E. The sequence X = (X n : n N 0 ) is called
More informationChapter 1 Divide and Conquer Algorithm Theory WS 2016/17 Fabian Kuhn
Chapter 1 Divide and Conquer Algorithm Theory WS 2016/17 Fabian Kuhn Formulation of the D&C principle Divide-and-conquer method for solving a problem instance of size n: 1. Divide n c: Solve the problem
More informationConvex Optimization CMU-10725
Convex Optimization CMU-10725 Simulated Annealing Barnabás Póczos & Ryan Tibshirani Andrey Markov Markov Chains 2 Markov Chains Markov chain: Homogen Markov chain: 3 Markov Chains Assume that the state
More informationk-protected VERTICES IN BINARY SEARCH TREES
k-protected VERTICES IN BINARY SEARCH TREES MIKLÓS BÓNA Abstract. We show that for every k, the probability that a randomly selected vertex of a random binary search tree on n nodes is at distance k from
More informationWhat you learned in Math 28. Rosa C. Orellana
What you learned in Math 28 Rosa C. Orellana Chapter 1 - Basic Counting Techniques Sum Principle If we have a partition of a finite set S, then the size of S is the sum of the sizes of the blocks of the
More informationMAS275 Probability Modelling Exercises
MAS75 Probability Modelling Exercises Note: these questions are intended to be of variable difficulty. In particular: Questions or part questions labelled (*) are intended to be a bit more challenging.
More informationMarkov and Gibbs Random Fields
Markov and Gibbs Random Fields Bruno Galerne bruno.galerne@parisdescartes.fr MAP5, Université Paris Descartes Master MVA Cours Méthodes stochastiques pour l analyse d images Lundi 6 mars 2017 Outline The
More informationGenealogy of Pythagorean triangles
Chapter 0 Genealogy of Pythagorean triangles 0. Two ternary trees of rational numbers Consider the rational numbers in the open interval (0, ). Each of these is uniquely in the form q, for relatively prime
More informationThe cost/reward formula has two specific widely used applications:
Applications of Absorption Probability and Accumulated Cost/Reward Formulas for FDMC Friday, October 21, 2011 2:28 PM No class next week. No office hours either. Next class will be 11/01. The cost/reward
More informationWinter 2019 Math 106 Topics in Applied Mathematics. Lecture 9: Markov Chain Monte Carlo
Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 9: Markov Chain Monte Carlo 9.1 Markov Chain A Markov Chain Monte
More information