Lecture 5
1 Markov chain: definition

Definition 1.1 [Markov chain] A sequence of random variables $(X_n)_{n\ge 0}$ taking values in a measurable state space $(S, \mathcal{S})$ is called a (discrete time) Markov chain if, for $\mathcal{F}_n := \sigma(X_0, \dots, X_n)$,
\[
P(X_{n+1} \in A \mid \mathcal{F}_n) = P(X_{n+1} \in A \mid X_n) \qquad \forall\, n \ge 0 \text{ and } A \in \mathcal{S}. \tag{1.1}
\]
If we interpret the index $n \ge 0$ as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Examples: 1. Random walks; 2. Branching processes; 3. Polya's urn.

Remark. Note that any stochastic process $(X_n)_{n\ge 0}$ taking values in $S$ can be turned into a Markov chain if we enlarge the state space from $S$ to $\bigcup_{n\in\mathbb{N}} S^n$ and change the process from $(X_n)_{n\ge 0}$ to $(\tilde X_n)_{n\ge 0}$ with $\tilde X_n = (X_0, X_1, \dots, X_n) \in S^{n+1}$; namely, the process becomes Markov if we take its entire past to be its present state.

A more concrete way of characterizing Markov chains is by transition probabilities.

Definition 1.2 [Markov chain transition probabilities] A function $p : S \times \mathcal{S} \to [0,1]$ is called a transition probability if:
(i) for each $x \in S$, $A \mapsto p(x, A)$ is a probability measure on $(S, \mathcal{S})$;
(ii) for each $A \in \mathcal{S}$, $x \mapsto p(x, A)$ is a measurable function on $(S, \mathcal{S})$.
We say a Markov chain $(X_n)_{n\ge 0}$ has transition probabilities $p_n$ if
\[
P(X_n \in A \mid \mathcal{F}_{n-1}) = p_n(X_{n-1}, A) \tag{1.2}
\]
almost surely for all $n \in \mathbb{N}$ and $A \in \mathcal{S}$. If $p_n \equiv p$ for all $n \in \mathbb{N}$, then we call $(X_n)_{n\ge 0}$ a time-homogeneous Markov chain, or a Markov chain with stationary transition probabilities.

If the underlying state space $(S, \mathcal{S})$ is nice, then the distribution of a Markov chain $X$ satisfying (1.1) can be characterized by the initial distribution $\mu$ of $X_0$ and the transition probabilities $(p_n)_{n\in\mathbb{N}}$. In particular, if $S$ is a complete separable metric space with Borel $\sigma$-algebra $\mathcal{S}$, then regular conditional probability distributions always exist, which guarantees the existence of the transition probabilities $p_n$.
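For a countable state space, Definition 1.2 reduces to a stochastic matrix, and a chain can be simulated by repeatedly sampling from the row of the current state. The following is a minimal sketch (the two-state kernel at the end is a hypothetical example, not from the notes):

```python
import random

def sample_chain(mu, p, n_steps, rng=random):
    """Sample X_0, ..., X_{n_steps} of a time-homogeneous Markov chain.

    mu: dict state -> probability (initial distribution of X_0)
    p:  dict state -> dict state -> probability (one-step kernel p(x, .))
    """
    def draw(dist):
        # Inverse-CDF sampling from a finite distribution.
        u, acc = rng.random(), 0.0
        for state, prob in dist.items():
            acc += prob
            if u < acc:
                return state
        return state  # guard against floating-point round-off

    x = draw(mu)
    path = [x]
    for _ in range(n_steps):
        x = draw(p[x])  # the next step depends only on the current state
        path.append(x)
    return path

# A hypothetical two-state chain: 0 -> 1 w.p. 0.3, 1 -> 0 w.p. 0.5.
p = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.5, 1: 0.5}}
path = sample_chain({0: 1.0}, p, 10)
```

Note that the sampler only ever looks at the current state when drawing the next one, which is exactly the Markov property (1.1) built into the simulation.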
Conversely, a given family of transition probabilities $p_n$ and an initial law $\mu$ for $X_0$ uniquely determine a consistent family of finite-dimensional distributions:
\[
P_\mu(X_i \in A_i,\ 0 \le i \le n) = \int_{A_0} \mu(dx_0) \int_{A_1} p_1(x_0, dx_1) \cdots \int_{A_n} p_n(x_{n-1}, dx_n), \tag{1.3}
\]
which are the finite-dimensional distributions of $(X_n)_{n\ge 0}$. When $(S, \mathcal{S})$ is a Polish space with Borel $\sigma$-algebra $\mathcal{S}$, by Kolmogorov's extension theorem (see [1, Section A.7]), the law of $(X_n)_{n\ge 0}$, regarded as a random variable taking values in $(S^{\mathbb{N}_0}, \mathcal{S}^{\mathbb{N}_0})$, is uniquely determined. Here $\mathbb{N}_0 := \{0\} \cup \mathbb{N}$ and $\mathcal{S}^{\mathbb{N}_0}$ is the Borel $\sigma$-algebra generated by the product topology on $S^{\mathbb{N}_0}$.
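For a finite state space, the integrals in (1.3) become sums, and marginalizing over the intermediate coordinates shows that the law of $X_n$ is the row vector $\mu P^n$. A small sketch under that specialization (the kernel is a hypothetical example):

```python
def mat_vec(mu, p):
    """One step of (1.3) after marginalizing: (mu P)(y) = sum_x mu(x) p(x, y)."""
    n = len(p)
    return [sum(mu[x] * p[x][y] for x in range(n)) for y in range(n)]

def law_at_time_n(mu, p, n):
    """Distribution of X_n under P_mu, i.e. mu P^n, for a finite chain."""
    for _ in range(n):
        mu = mat_vec(mu, p)
    return mu

p = [[0.7, 0.3],
     [0.5, 0.5]]
mu = [1.0, 0.0]  # start at state 0

law2 = law_at_time_n(mu, p, 2)
# Direct enumeration of (1.3): P(X_2 = y) = sum_{x1} p(0, x1) p(x1, y).
direct = [sum(p[0][x1] * p[x1][y] for x1 in range(2)) for y in range(2)]
```

Both computations agree, because summing (1.3) over $A_0 = A_1 = S$ is exactly the iterated matrix-vector product.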
Theorem 1.3 [Characterization of Markov chains via transition probabilities] Suppose that $(S, \mathcal{S})$ is a Polish space equipped with the Borel $\sigma$-algebra. Then to any collection of transition probabilities $p_n : S \times \mathcal{S} \to [0,1]$ and any probability measure $\mu$ on $(S, \mathcal{S})$, there corresponds a Markov chain $(X_n)_{n\ge 0}$ with state space $(S, \mathcal{S})$, initial distribution $\mu$, and finite-dimensional distributions given as in (1.3). Conversely, if $(X_n)_{n\ge 0}$ is a Markov chain with initial distribution $\mu$, then we can construct a family of transition probabilities $(p_n)_{n\in\mathbb{N}}$ such that the finite-dimensional distributions of $X$ satisfy (1.3).

From (1.3), it is also easily seen that $P_\mu(\cdot) = \int P_x(\cdot)\,\mu(dx)$, where $P_x$ denotes the law of the Markov chain starting at $X_0 = x$.

Remark. When there is no other randomness involved besides the Markov chain $(X_n)_{n\ge 0}$, it is customary to let $(S^{\mathbb{N}_0}, \mathcal{S}^{\mathbb{N}_0}, P_\mu)$ be the canonical probability space for $X$ with initial distribution $\mu$.

From the one-step transition probabilities $(p_n)_{n\in\mathbb{N}}$, we can easily construct the transition probabilities between times $k < l$, i.e., $P(X_l \in A \mid \mathcal{F}_k)$. Define
\[
p_{k,l}(x, A) = \int_S p_{k+1}(x, dy_{k+1}) \int_S p_{k+2}(y_{k+1}, dy_{k+2}) \cdots p_l(y_{l-1}, A).
\]
It is an easy exercise to show that:

Theorem 1.4 [Chapman-Kolmogorov equations] The transition probabilities $(p_{k,m})_{0\le k<m}$ satisfy the relations
\[
p_{k,n}(x, A) = \int_S p_{k,m}(x, dy)\, p_{m,n}(y, A) \tag{1.4}
\]
for all $k < m < n$, $x \in S$ and $A \in \mathcal{S}$. In convolution notation, this reads $p_{k,n} = p_{k,m} * p_{m,n}$. In particular, for any $0 \le m < n$,
\[
P(X_n \in A \mid \mathcal{F}_m) = p_{m,n}(X_m, A) \quad \text{a.s.}
\]

Time-homogeneous Markov chains are determined by their one-step transition probability $p = p_{n-1,n}$ for all $n \in \mathbb{N}$. We call $p^{(k)} = p_{n,n+k}$ the $k$-step transition probabilities. The Chapman-Kolmogorov equation (1.4) then reads $p^{(m+n)} = p^{(m)} * p^{(n)}$.

2 The Markov and strong Markov property

We now restrict ourselves to time-homogeneous Markov chains.
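In the countable (here: finite) setting, the convolution in (1.4) is matrix multiplication, and $p^{(m+n)} = p^{(m)} * p^{(n)}$ says that matrix powers multiply. A quick numerical sanity check on a hypothetical three-state chain:

```python
def mat_mul(a, b):
    """Matrix product; for finite kernels this is the convolution p * q in (1.4)."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(p, n):
    """n-step kernel p^(n) for a time-homogeneous finite chain."""
    result = [[float(i == j) for j in range(len(p))] for i in range(len(p))]
    for _ in range(n):
        result = mat_mul(result, p)
    return result

p = [[0.9, 0.1, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.4, 0.6]]

lhs = mat_pow(p, 5)                          # p^(5)
rhs = mat_mul(mat_pow(p, 2), mat_pow(p, 3))  # p^(2) * p^(3)
```

The two sides agree entrywise, and each row of $p^{(5)}$ still sums to one, as it must for a stochastic matrix.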
The Markov property asserts that given the value of $X_n$, the law of $(X_n, X_{n+1}, \dots)$ is the same as that of a Markov chain starting from $X_n$, while the strong Markov property asserts that the same is true if we replace $n$ by a stopping time $\tau$. When the stopping time is a hitting time of a particular point $x_0 \in S$, the strong Markov property tells us that the process renews itself and has no memory of the past. Such renewal structures are particularly useful in the study of Markov chains. We will formulate the Markov property as an equality in law, in terms of conditional expectations of bounded measurable functions.
Theorem 2.1 [The Markov property] Let $(S^{\mathbb{N}_0}, \mathcal{S}^{\mathbb{N}_0}, P_\mu)$ be the canonical probability space of a homogeneous Markov chain $X$ with initial distribution $\mu$, and let $\mathcal{F}_n = \sigma(X_0, \dots, X_n)$. Let $\theta_n : S^{\mathbb{N}_0} \to S^{\mathbb{N}_0}$ denote the shift map with $(\theta_n X)_m = X_{m+n}$ for $m \ge 0$. Then for any bounded measurable function $f : S^{\mathbb{N}_0} \to \mathbb{R}$,
\[
E_\mu[f(\theta_n X) \mid \mathcal{F}_n] = E_{X_n}[f], \tag{2.5}
\]
where $E_\mu$ (resp. $E_{X_n}$) denotes expectation w.r.t. the Markov chain with initial law $\mu$ (resp. $\delta_{X_n}$).

Proof. It suffices to show that for all $A \in \mathcal{F}_n$ and all bounded measurable $f$,
\[
E_\mu[f(\theta_n X) 1_A] = E_\mu\big[E_{X_n}[f]\, 1_A\big]. \tag{2.6}
\]
We can use the $\pi$-$\lambda$ theorem to restrict our attention to sets of the form $A = \{\omega \in S^{\mathbb{N}_0} : \omega_0 \in A_0, \omega_1 \in A_1, \dots, \omega_n \in A_n\}$, and use the monotone class theorem to restrict our attention to functions of the form $f(\omega) = \prod_{i=0}^k g_i(\omega_i)$ for some $k \in \mathbb{N}$ and bounded measurable $g_i : S \to \mathbb{R}$.

For $A$ and $f$ of the forms specified above, by successive conditioning and the fact that the transition probabilities $p$ of the Markov chain are regular conditional probabilities,
\[
\begin{aligned}
E_\mu[f(\theta_n X) 1_A]
&= E_\mu[g_k(X_{n+k}) \cdots g_0(X_n) 1_{A_n}(X_n) \cdots 1_{A_0}(X_0)] \\
&= \int_{A_0} \mu(dx_0) \int_{A_1} p(x_0, dx_1) \cdots \int_{A_n} p(x_{n-1}, dx_n)\, g_0(x_n) \int_S p(x_n, dx_{n+1})\, g_1(x_{n+1}) \cdots \int_S p(x_{n+k-1}, dx_{n+k})\, g_k(x_{n+k}) \\
&= E_\mu\big[E_{X_n}[g_0 \cdots g_k]\, 1_A\big] = E_\mu\big[E_{X_n}[f]\, 1_A\big]. 
\end{aligned} \tag{2.7}
\]
Given $f = \prod_{i=0}^k g_i(\omega_i)$, the collection of sets $A \in \mathcal{F}_n$ which satisfy (2.7) is a $\lambda$-system, while the sets of the form $A = \{\omega \in S^{\mathbb{N}_0} : \omega_0 \in A_0, \dots, \omega_n \in A_n\}$ form a $\pi$-system. Therefore by the $\pi$-$\lambda$ theorem, (2.7) holds for all $A \in \mathcal{F}_n$.

Now fix $A \in \mathcal{F}_n$. Let $\mathcal{H}$ denote the set of bounded measurable functions for which (2.7) holds. We have shown that $\mathcal{H}$ contains all functions of the form $f(\omega) = \prod_{i=0}^k g_i(\omega_i)$. In particular, $\mathcal{H}$ contains the indicator functions of sets of the form $A = \{\omega \in S^{\mathbb{N}_0} : \omega_0 \in A_0, \dots, \omega_k \in A_k\}$, which form a $\pi$-system that generates the $\sigma$-algebra $\mathcal{S}^{\mathbb{N}_0}$. Clearly $\mathcal{H}$ is closed under addition, scalar multiplication, and increasing limits. Therefore by the monotone class theorem, $\mathcal{H}$ contains all bounded measurable functions.
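The content of (2.5) can be checked by hand on a finite chain: conditioning on the whole past $\mathcal{F}_n$ gives the same answer as conditioning on $X_n$ alone. The sketch below computes a conditional law directly from the finite-dimensional distributions (1.3), for a hypothetical two-state kernel:

```python
p = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.2, 1: 0.8}}
mu = {0: 0.5, 1: 0.5}

def path_prob(path):
    """P_mu(X_0 = path[0], ..., X_n = path[n]) from (1.3)."""
    pr = mu[path[0]]
    for a, b in zip(path, path[1:]):
        pr *= p[a][b]
    return pr

# P_mu(X_3 = j | X_0 = 1, X_1 = 0, X_2 = 0) should equal p(0, j),
# regardless of the earlier coordinates (1, 0).
past = (1, 0, 0)
denom = path_prob(past)
cond = {j: path_prob(past + (j,)) / denom for j in (0, 1)}
```

The factors of the path probability before time $n$ cancel in the ratio, which is exactly why the conditional law depends on the past only through the current state.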
Theorem 2.2 [Monotone class theorem] Let $\Pi$ be a $\pi$-system which contains the full set $\Omega$, and let $\mathcal{H}$ be a collection of real-valued functions satisfying:
(i) if $A \in \Pi$, then $1_A \in \mathcal{H}$;
(ii) if $f, g \in \mathcal{H}$, then $f + g \in \mathcal{H}$, and $cf \in \mathcal{H}$ for any $c \in \mathbb{R}$;
(iii) if $f_n \in \mathcal{H}$ are non-negative and $f_n \uparrow f$ where $f$ is bounded, then $f \in \mathcal{H}$.
Then $\mathcal{H}$ contains all bounded measurable functions w.r.t. the $\sigma$-algebra generated by $\Pi$.

The monotone class theorem is a simple consequence of the $\pi$-$\lambda$ theorem. See e.g. Durrett [1] for a proof.
Theorem 2.3 [The strong Markov property] Following the setup of Theorem 2.1, let $\tau$ be an $(\mathcal{F}_n)_{n\ge 0}$ stopping time, and let $(f_n)_{n\ge 0}$ be a sequence of uniformly bounded measurable functions from $S^{\mathbb{N}_0}$ to $\mathbb{R}$. Then
\[
E_\mu[f_\tau(\theta_\tau X) \mid \mathcal{F}_\tau]\, 1_{\{\tau<\infty\}} = E_{X_\tau}[f_\tau]\, 1_{\{\tau<\infty\}} \quad \text{a.s.} \tag{2.8}
\]

Proof. Let $A \in \mathcal{F}_\tau$. Then
\[
E_\mu[f_\tau(\theta_\tau X) 1_{A \cap \{\tau<\infty\}}] = \sum_{n=0}^\infty E_\mu[f_n(\theta_n X) 1_{A \cap \{\tau=n\}}].
\]
Since $A \cap \{\tau = n\} \in \mathcal{F}_n$, by the Markov property (2.5), the right-hand side equals
\[
\sum_{n=0}^\infty E_\mu\big[E_{X_n}[f_n]\, 1_{A \cap \{\tau=n\}}\big] = E_\mu\big[E_{X_\tau}[f_\tau]\, 1_{A \cap \{\tau<\infty\}}\big],
\]
which proves (2.8).

To illustrate the use of the strong Markov property, and the reason for introducing the dependence of the functions $f_n$ on $n$, we prove the following.

Example 2.4 [Reflection principle for simple symmetric random walks] Let $X_n = \sum_{i=1}^n \xi_i$, where the $\xi_i$ are i.i.d. with $P(\xi_i = \pm 1) = \frac{1}{2}$. Then for any $a \in \mathbb{N}$,
\[
P\Big(\max_{1\le i\le n} X_i \ge a\Big) = 2 P(X_n \ge a+1) + P(X_n = a). \tag{2.9}
\]

Proof. Let $\tau_a = \inf\{0 \le k \le n : X_k = a\}$, with $\tau_a = \infty$ if the set is empty. Then $\max_{1\le i\le n} X_i \ge a$ if and only if $\tau_a \le n$. Therefore
\[
P\Big(\max_{1\le i\le n} X_i \ge a\Big) = P(\tau_a \le n) = P(\tau_a \le n, X_n < a) + P(\tau_a \le n, X_n > a) + P(\tau_a \le n, X_n = a).
\]
Note that $P(\tau_a \le n, X_n > a) = P(X_n > a)$ because $X$ is a nearest-neighbor random walk, and similarly $P(\tau_a \le n, X_n = a) = P(X_n = a)$, while
\[
P(\tau_a \le n, X_n < a) = E\big[1_{\{\tau_a \le n\}} P(X_n < a \mid \mathcal{F}_{\tau_a})\big] = E\big[1_{\{\tau_a \le n\}} P_a(X_{n-\tau_a} < a)\big],
\]
where we have applied (2.8) with $f_k = 1_{\{X_{n-k} < a\}}$ if $0 \le k \le n$ and $f_k = 0$ otherwise. By symmetry, conditional on $\tau_a$, we have $P_a(X_{n-\tau_a} < a) = P_a(X_{n-\tau_a} > a)$. Therefore
\[
P(\tau_a \le n, X_n < a) = P(\tau_a \le n, X_n > a) = P(X_n > a),
\]
which then implies (2.9).

Remark. The proof of Theorem 2.3 shows that a discrete time Markov chain always has the strong Markov property. However, this conclusion is false for continuous time Markov processes. The reason is that there are uncountably many times, which may conspire together to make the strong Markov property fail, even though the Markov property holds almost surely at each deterministic time.
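Identity (2.9) can be verified exactly for small $n$ by enumerating all $2^n$ sign sequences; a sketch using exact rational arithmetic so that both sides can be compared without rounding:

```python
from itertools import product
from fractions import Fraction

def reflection_check(n, a):
    """Verify (2.9) for the simple symmetric walk by exact enumeration."""
    total = 2 ** n
    lhs_count = 0  # paths with max_{1 <= i <= n} X_i >= a
    hit_count = 0  # paths with X_n >= a + 1
    eq_count = 0   # paths with X_n = a
    for signs in product((1, -1), repeat=n):
        x, running_max = 0, -n
        for s in signs:
            x += s
            running_max = max(running_max, x)
        if running_max >= a:
            lhs_count += 1
        if x >= a + 1:
            hit_count += 1
        if x == a:
            eq_count += 1
    lhs = Fraction(lhs_count, total)
    rhs = 2 * Fraction(hit_count, total) + Fraction(eq_count, total)
    return lhs, rhs

lhs, rhs = reflection_check(n=8, a=2)
```

The two sides agree exactly, as (2.9) predicts, for every choice of $n$ and $a$ one cares to try.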
One way to guarantee the strong Markov property is to require the transition probabilities $p_t(x, \cdot)$ to be continuous in $t$ and $x$; this is called the Feller property.
3 Markov chains with a countable state space

We now focus on time-homogeneous Markov chains with a countable state space $S$. Let $(p(x,y))_{x,y\in S}$ denote the one-step transition probability kernel of the Markov chain $(X_n)_{n\ge 0}$, which is a matrix with non-negative entries and $\sum_{y\in S} p(x,y) = 1$ for all $x \in S$. Such matrices are called stochastic matrices. The $n$-step transition probability kernel of the Markov chain is then given by the $n$-th power of $p$, i.e., $p^{(n)}(x,y) = \sum_{z\in S} p^{(n-1)}(x,z)\, p(z,y)$.

We first consider the following subclass of Markov chains.

Definition 3.1 [Irreducible Markov chains] A Markov chain with a countable state space $S$ is called irreducible if for all $x, y \in S$, $p^{(n)}(x,y) > 0$ for some $n \ge 0$. In other words, every state communicates with every other state.

A Markov chain fails to be irreducible either because the state space can be partitioned into non-communicating disjoint subsets, or because there are subsets of the state space acting as sinks: once the Markov chain enters such a subset, it can never leave it.

Definition 3.2 [Transience, null recurrence, and positive recurrence] Let $\tau_y := \inf\{n > 0 : X_n = y\}$ be the first hitting time (after time 0) of the state $y \in S$ by the Markov chain $X$. Any state $x \in S$ can then be classified into one of the following three types:
(i) transient, if $P_x(\tau_x < \infty) < 1$;
(ii) null recurrent, if $P_x(\tau_x < \infty) = 1$ and $E_x[\tau_x] = \infty$;
(iii) positive recurrent, if $P_x(\tau_x < \infty) = 1$ and $E_x[\tau_x] < \infty$.

It turns out that for an irreducible Markov chain, all states are of the same type. Therefore transience, null recurrence and positive recurrence will also be used to classify irreducible Markov chains. Before proving this claim, we first prove some preliminary results.

Lemma 3.3 Let $\rho_{xy} = P_x(\tau_y < \infty)$ for $x, y \in S$, and let $G(x,y) = \sum_{n=0}^\infty P_x(X_n = y) = \sum_{n=0}^\infty p^{(n)}(x,y)$. If $y$ is transient, then
\[
G(x,y) = \begin{cases} \dfrac{\rho_{xy}}{1-\rho_{yy}} & \text{if } x \ne y, \\[1ex] \dfrac{1}{1-\rho_{yy}} & \text{if } x = y. \end{cases} \tag{3.10}
\]
If $y$ is recurrent, then $G(x,y) = \infty$ for all $x \in S$ with $\rho_{xy} > 0$.

Proof.
Assuming $X_0 = y$, let $T_y^0 = 0$, and define inductively $T_y^k = \inf\{i > T_y^{k-1} : X_i = y\}$; namely, the $T_y^k$ are the successive return times to $y$. By the strong Markov property, $P_y(T_y^k < \infty \mid T_y^{k-1} < \infty) = P_y(T_y^1 < \infty) = \rho_{yy}$. By successive conditioning, we thus have $P_y(T_y^k < \infty) = \rho_{yy}^k$. Therefore
\[
G(y,y) = \sum_{k=0}^\infty P_y(T_y^k < \infty) = \sum_{k=0}^\infty \rho_{yy}^k = \frac{1}{1-\rho_{yy}}. \tag{3.11}
\]
In particular, $G(y,y) = \infty$ if and only if $\rho_{yy} = 1$, i.e., $y$ is recurrent.
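Before treating the case $x \ne y$, the geometric-series identity (3.11) can be sanity-checked numerically. The chain below is a hypothetical example, not from the notes: states 0 and 1 are transient, state 2 is absorbing, and the only way back to 0 is the path $0 \to 1 \to 0$, so $\rho_{00} = \frac{1}{2}\cdot\frac{1}{2} = \frac{1}{4}$.

```python
from fractions import Fraction

half = Fraction(1, 2)
# States 0, 1 are transient; state 2 is absorbing ("the chain has left").
p = [[0, half, half],
     [half, 0, half],
     [0, 0, 1]]

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Return probability rho_00: the only route back to 0 is 0 -> 1 -> 0.
rho_00 = p[0][1] * p[1][0]

# Partial sums of G(0,0) = sum_n p^(n)(0,0); the tail is geometric, so
# truncating at n = 60 leaves an error far below the comparison tolerance.
power = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]
G_00 = Fraction(0)
for _ in range(61):
    G_00 += power[0][0]
    power = mat_mul(power, p)

predicted = 1 / (1 - rho_00)  # = 4/3 by (3.11)
```

The truncated series for $G(0,0)$ matches $1/(1-\rho_{00}) = 4/3$ up to the (tiny) geometric tail, as (3.11) predicts.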
For $x \ne y$, we first have to wait until $X$ visits $y$, and
\[
G(x,y) = \sum_{k=1}^\infty P_x(T_y^k < \infty) = \sum_{k=1}^\infty \rho_{xy}\,\rho_{yy}^{k-1} = \frac{\rho_{xy}}{1-\rho_{yy}}, \tag{3.12}
\]
where we used the fact that $P_x(T_y^1 < \infty) = \rho_{xy}$. This completes the proof of the lemma.

Lemma 3.4 If $x \in S$ is recurrent, $y \ne x$, and $\rho_{xy} := P_x(\tau_y < \infty) > 0$, then $P_x(\tau_y < \tau_x) > 0$, $\rho_{yx} := P_y(\tau_x < \infty) = 1 = \rho_{xy}$, and $y$ is also recurrent.

Proof. If $P_x(\tau_y < \tau_x) = 0$, so that the Markov chain starting from $x$ returns to $x$ before visiting $y$ almost surely, then when it returns to $x$, it starts afresh and will not visit $y$ before a second return to $x$. Iterating this reasoning, the Markov chain will visit $x$ infinitely often before visiting $y$, which means it will never visit $y$, contradicting the assumption $\rho_{xy} > 0$.

Suppose that $\rho_{yx} < 1$. Since $P_x(\tau_y < \tau_x) > 0$, there exist $k \ge 1$ and $y_1, \dots, y_{k-1} \in S$, all distinct from $x$ and $y$, such that $p(x,y_1)\, p(y_1,y_2) \cdots p(y_{k-1},y) > 0$. Then
\[
P_x(\tau_x = \infty) \ge p(x,y_1) \cdots p(y_{k-1},y)\,(1-\rho_{yx}) > 0,
\]
which contradicts the recurrence of $x$. Hence $\rho_{yx} = 1$.

Since upon each return to $x$, with probability $P_x(\tau_y < \tau_x) > 0$ the Markov chain will visit $y$ before returning to $x$, it follows that $\rho_{xy} = 1$: the Markov chain returns to $x$ infinitely often by recurrence, and the events that $y$ is visited between different consecutive returns to $x$ are independent by the strong Markov property. Since $\rho_{yx} = \rho_{xy} = 1$, almost surely the Markov chain starting from $y$ will visit $x$ and then return to $y$. Therefore $y$ is also recurrent.

We are now ready to prove:

Theorem 3.5 For an irreducible Markov chain, all states are of the same type.

Proof. Lemma 3.4 has shown that if $x$ is recurrent, then so is any other $y \in S$, by the irreducibility assumption. It remains to show that if $x$ is positive recurrent, then so is any $y \in S$. Let $p = P_x(\tau_y < \tau_x)$, which is positive by Lemma 3.4. Then $E_x[\tau_x] \ge P_x(\tau_y < \tau_x)\, E_y[\tau_x]$. Therefore
\[
E_y[\tau_x] \le \frac{1}{p}\, E_x[\tau_x] < \infty.
\]
On the other hand,
\[
\begin{aligned}
E_x[\tau_y] &\le E_x[1_{\{\tau_y<\tau_x\}} \tau_x] + E_x[1_{\{\tau_x<\tau_y\}} \tau_y] \\
&= E_x[1_{\{\tau_y<\tau_x\}} \tau_x] + E_x\big[1_{\{\tau_x<\tau_y\}} E[\tau_y \mid \mathcal{F}_{\tau_x}]\big] \\
&= E_x[1_{\{\tau_y<\tau_x\}} \tau_x] + E_x\big[1_{\{\tau_x<\tau_y\}} (\tau_x + E_x[\tau_y])\big] \\
&= E_x[\tau_x] + (1-p)\, E_x[\tau_y].
\end{aligned}
\]
Therefore $E_x[\tau_y] \le \frac{1}{p} E_x[\tau_x]$, and
\[
E_y[\tau_y] \le E_y[\tau_x] + E_x[\tau_y] \le \frac{2}{p}\, E_x[\tau_x] < \infty,
\]
which proves the positive recurrence of $y$.

Remark. Theorem 3.5 allows us to classify an irreducible countable state space Markov chain as either transient, null recurrent, or positive recurrent, depending on the type of its states.

References

[1] R. Durrett, Probability: Theory and Examples, 2nd edition, Duxbury Press, Belmont, California.
Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona FINITE MARKOV CHAINS Lidia Pinilla Peralta Director: Realitzat a: David Márquez-Carreras Departament de Probabilitat,
More informationStochastic Processes
Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False
More informationRecap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks
Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution
More informationStochastic modelling of epidemic spread
Stochastic modelling of epidemic spread Julien Arino Centre for Research on Inner City Health St Michael s Hospital Toronto On leave from Department of Mathematics University of Manitoba Julien Arino@umanitoba.ca
More informationNecessary and sufficient conditions for strong R-positivity
Necessary and sufficient conditions for strong R-positivity Wednesday, November 29th, 2017 The Perron-Frobenius theorem Let A = (A(x, y)) x,y S be a nonnegative matrix indexed by a countable set S. We
More informationP (A G) dp G P (A G)
First homework assignment. Due at 12:15 on 22 September 2016. Homework 1. We roll two dices. X is the result of one of them and Z the sum of the results. Find E [X Z. Homework 2. Let X be a r.v.. Assume
More informationMATH 6605: SUMMARY LECTURE NOTES
MATH 6605: SUMMARY LECTURE NOTES These notes summarize the lectures on weak convergence of stochastic processes. If you see any typos, please let me know. 1. Construction of Stochastic rocesses A stochastic
More informationWeak quenched limiting distributions of a one-dimensional random walk in a random environment
Weak quenched limiting distributions of a one-dimensional random walk in a random environment Jonathon Peterson Cornell University Department of Mathematics Joint work with Gennady Samorodnitsky September
More informationMATH 56A: STOCHASTIC PROCESSES CHAPTER 1
MATH 56A: STOCHASTIC PROCESSES CHAPTER. Finite Markov chains For the sake of completeness of these notes I decided to write a summary of the basic concepts of finite Markov chains. The topics in this chapter
More informationUniversal examples. Chapter The Bernoulli process
Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial
More informationPart III Advanced Probability
Part III Advanced Probability Based on lectures by M. Lis Notes taken by Dexter Chua Michaelmas 2017 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after
More informationTopology. Xiaolong Han. Department of Mathematics, California State University, Northridge, CA 91330, USA address:
Topology Xiaolong Han Department of Mathematics, California State University, Northridge, CA 91330, USA E-mail address: Xiaolong.Han@csun.edu Remark. You are entitled to a reward of 1 point toward a homework
More informationAn Introduction to Stochastic Processes in Continuous Time
An Introduction to Stochastic Processes in Continuous Time Flora Spieksma adaptation of the text by Harry van Zanten to be used at your own expense May 22, 212 Contents 1 Stochastic Processes 1 1.1 Introduction......................................
More informationStochastic Realization of Binary Exchangeable Processes
Stochastic Realization of Binary Exchangeable Processes Lorenzo Finesso and Cecilia Prosdocimi Abstract A discrete time stochastic process is called exchangeable if its n-dimensional distributions are,
More informationExercises: sheet 1. k=1 Y k is called compound Poisson process (X t := 0 if N t = 0).
Exercises: sheet 1 1. Prove: Let X be Poisson(s) and Y be Poisson(t) distributed. If X and Y are independent, then X + Y is Poisson(t + s) distributed (t, s > 0). This means that the property of a convolution
More informationThe Essential Equivalence of Pairwise and Mutual Conditional Independence
The Essential Equivalence of Pairwise and Mutual Conditional Independence Peter J. Hammond and Yeneng Sun Probability Theory and Related Fields, forthcoming Abstract For a large collection of random variables,
More informationChapter 2. Markov Chains. Introduction
Chapter 2 Markov Chains Introduction A Markov chain is a sequence of random variables {X n ; n = 0, 1, 2,...}, defined on some probability space (Ω, F, IP), taking its values in a set E which could be
More informationMarkov Chains (Part 3)
Markov Chains (Part 3) State Classification Markov Chains - State Classification Accessibility State j is accessible from state i if p ij (n) > for some n>=, meaning that starting at state i, there is
More informationTheory and Applications of Stochastic Systems Lecture Exponential Martingale for Random Walk
Instructor: Victor F. Araman December 4, 2003 Theory and Applications of Stochastic Systems Lecture 0 B60.432.0 Exponential Martingale for Random Walk Let (S n : n 0) be a random walk with i.i.d. increments
More information4th Preparation Sheet - Solutions
Prof. Dr. Rainer Dahlhaus Probability Theory Summer term 017 4th Preparation Sheet - Solutions Remark: Throughout the exercise sheet we use the two equivalent definitions of separability of a metric space
More informationAdvanced Computer Networks Lecture 2. Markov Processes
Advanced Computer Networks Lecture 2. Markov Processes Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/28 Outline 2/28 1 Definition
More informationA review of Continuous Time MC STA 624, Spring 2015
A review of Continuous Time MC STA 624, Spring 2015 Ruriko Yoshida Dept. of Statistics University of Kentucky polytopes.net STA 624 1 Continuous Time Markov chains Definition A continuous time stochastic
More information1 Random walks: an introduction
Random Walks: WEEK Random walks: an introduction. Simple random walks on Z.. Definitions Let (ξ n, n ) be i.i.d. (independent and identically distributed) random variables such that P(ξ n = +) = p and
More informationMarkov Chains, Stochastic Processes, and Matrix Decompositions
Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral
More informationLecture 7. Sums of random variables
18.175: Lecture 7 Sums of random variables Scott Sheffield MIT 18.175 Lecture 7 1 Outline Definitions Sums of random variables 18.175 Lecture 7 2 Outline Definitions Sums of random variables 18.175 Lecture
More informationPROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS
PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please
More informationSimultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms
Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms Yan Bai Feb 2009; Revised Nov 2009 Abstract In the paper, we mainly study ergodicity of adaptive MCMC algorithms. Assume that
More informationProbability and Measure
Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability
More informationBrownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539
Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory
More informationWinter 2019 Math 106 Topics in Applied Mathematics. Lecture 9: Markov Chain Monte Carlo
Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 9: Markov Chain Monte Carlo 9.1 Markov Chain A Markov Chain Monte
More information